* [PATCH bpf-next 0/3] tracing: Expose tracepoint BTF ids via tracefs
@ 2026-05-15 16:41 Mykyta Yatsenko
2026-05-15 16:41 ` [PATCH bpf-next 1/3] bpf: Export btf_get_module_btf() and btf_relocate_id() Mykyta Yatsenko
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: Mykyta Yatsenko @ 2026-05-15 16:41 UTC (permalink / raw)
To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87, memxor,
peterz, rostedt
Cc: Mykyta Yatsenko
BPF and other consumers that want to attach to or decode a generic
tracepoint need three pieces of BTF information for it:
- the BTF of the object that owns the tracepoint's types
- the FUNC_PROTO describing the tracepoint arguments (with names),
consumed by raw_tp / tp_btf BPF programs
- the STRUCT id of trace_event_raw_<call>, the ring-buffer record
consumed by classic BPF_PROG_TYPE_TRACEPOINT programs
Today none of this is easily discoverable from userspace. The kernel
knows the ids - resolve_btfids fills them in at link time - but
consumers have to search for them by the naming convention
("__bpf_trace_<name>", "trace_event_raw_<name>"), walking BTF for
every tracepoint.
This series stores those ids in trace_event_class and exposes them
via events/<sys>/<event>/btf_ids, e.g.
# cat /sys/kernel/tracing/events/sched/sched_switch/btf_ids
btf_obj_id: 1
raw_btf_id: 28882
tp_btf_id: 106335
# bpftool btf dump id 1 root_id 28882 format raw
[28882] FUNC_PROTO '(anon)' ret_type_id=0 vlen=5
'__data' type_id=9
'preempt' type_id=60674
'prev' type_id=219
'next' type_id=219
'prev_state' type_id=108689
# bpftool btf dump id 1 root_id 106335 format raw
[106335] STRUCT 'trace_event_raw_sched_switch' size=64 vlen=9
'ent' type_id=104654 bits_offset=0
'prev_comm' type_id=580 bits_offset=64
'prev_pid' type_id=92875 bits_offset=192
'prev_prio' type_id=79365 bits_offset=224
'prev_state' type_id=83958 bits_offset=256
'next_comm' type_id=580 bits_offset=320
'next_pid' type_id=92875 bits_offset=448
'next_prio' type_id=79365 bits_offset=480
'__data' type_id=407 bits_offset=512
For per-syscall events (all sharing the same dispatcher), raw_btf_id
is 0 — raw_tp / tp_btf programs attach to raw_syscalls/sys_{enter,exit},
not per-syscall events:
# cat /sys/kernel/tracing/events/syscalls/sys_enter_write/btf_ids
btf_obj_id: 1
raw_btf_id: 0
tp_btf_id: 106540
This unlocks a few use cases for consumers:
- Resolving tp_btf attach targets and argument types directly,
instead of constructing "__bpf_trace_*" names and
re-discovering them in vmlinux BTF.
- Getting a stable, machine-readable contract for tracepoint payloads,
with field names preserved.
Patch 1 exports the two BTF helpers the tracing core needs.
Patch 2 wires DECLARE_EVENT_CLASS to publish the ids, adds the tracefs
reader, and wires the syscall classes so per-syscall events
carry tp_btf_id (raw_btf_id is 0 there — see above).
Patch 3 adds a selftest covering the sched_switch tracepoint.
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
Mykyta Yatsenko (3):
bpf: Export btf_get_module_btf() and btf_relocate_id()
tracing: Expose tracepoint BTF ids via tracefs
selftests/bpf: Add test for tracepoint btf_ids tracefs file
include/linux/btf.h | 2 +
include/linux/trace_events.h | 9 ++
include/trace/trace_events.h | 24 ++++
kernel/bpf/btf.c | 6 +-
kernel/trace/trace_events.c | 81 +++++++++++++-
kernel/trace/trace_syscalls.c | 17 +++
.../testing/selftests/bpf/prog_tests/tp_btf_ids.c | 124 +++++++++++++++++++++
7 files changed, 260 insertions(+), 3 deletions(-)
---
base-commit: 3fe213c040b3a175f6051ed3aaf823a430ac0b08
change-id: 20260508-generic_tracepoint-d488a5a7ab18
Best regards,
--
Mykyta Yatsenko <yatsenko@meta.com>
^ permalink raw reply [flat|nested] 8+ messages in thread
* [PATCH bpf-next 1/3] bpf: Export btf_get_module_btf() and btf_relocate_id()
2026-05-15 16:41 [PATCH bpf-next 0/3] tracing: Expose tracepoint BTF ids via tracefs Mykyta Yatsenko
@ 2026-05-15 16:41 ` Mykyta Yatsenko
2026-05-15 17:56 ` sashiko-bot
2026-05-15 16:41 ` [PATCH bpf-next 2/3] tracing: Expose tracepoint BTF ids via tracefs Mykyta Yatsenko
2026-05-15 16:41 ` [PATCH bpf-next 3/3] selftests/bpf: Add test for tracepoint btf_ids tracefs file Mykyta Yatsenko
2 siblings, 1 reply; 8+ messages in thread
From: Mykyta Yatsenko @ 2026-05-15 16:41 UTC (permalink / raw)
To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87, memxor,
peterz, rostedt
Cc: Mykyta Yatsenko
From: Mykyta Yatsenko <yatsenko@meta.com>
Make btf_get_module_btf() and btf_relocate_id() available outside
kernel/bpf/btf.c so the tracing core can look up module BTF and
translate ids stored by resolve_btfids (which are local to a
module's split BTF) into the runtime ids used by the kernel.
Used by the upcoming events/<sys>/<event>/btf_ids tracefs interface.
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
include/linux/btf.h | 2 ++
kernel/bpf/btf.c | 6 ++++--
2 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/include/linux/btf.h b/include/linux/btf.h
index 240401d9b25b..273a93a3b2bd 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -235,6 +235,8 @@ int btf_check_and_fixup_fields(const struct btf *btf, struct btf_record *rec);
bool btf_type_is_void(const struct btf_type *t);
s32 btf_find_by_name_kind(const struct btf *btf, const char *name, u8 kind);
s32 bpf_find_btf_id(const char *name, u32 kind, struct btf **btf_p);
+struct btf *btf_get_module_btf(const struct module *module);
+__u32 btf_relocate_id(const struct btf *btf, __u32 id);
const struct btf_type *btf_type_skip_modifiers(const struct btf *btf,
u32 id, u32 *res_id);
const struct btf_type *btf_type_resolve_ptr(const struct btf *btf,
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 17d4ab0a8206..b92914dd224c 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6429,12 +6429,13 @@ struct btf *btf_parse_vmlinux(void)
* split BTF ids will need to be mapped to actual base/split ids for
* BTF now that it has been relocated.
*/
-static __u32 btf_relocate_id(const struct btf *btf, __u32 id)
+__u32 btf_relocate_id(const struct btf *btf, __u32 id)
{
if (!btf->base_btf || !btf->base_id_map)
return id;
return btf->base_id_map[id];
}
+EXPORT_SYMBOL_GPL(btf_relocate_id);
#ifdef CONFIG_DEBUG_INFO_BTF_MODULES
@@ -8496,7 +8497,7 @@ struct module *btf_try_get_module(const struct btf *btf)
/* Returns struct btf corresponding to the struct module.
* This function can return NULL or ERR_PTR.
*/
-static struct btf *btf_get_module_btf(const struct module *module)
+struct btf *btf_get_module_btf(const struct module *module)
{
#ifdef CONFIG_DEBUG_INFO_BTF_MODULES
struct btf_module *btf_mod, *tmp;
@@ -8525,6 +8526,7 @@ static struct btf *btf_get_module_btf(const struct module *module)
return btf;
}
+EXPORT_SYMBOL_GPL(btf_get_module_btf);
static int check_btf_kconfigs(const struct module *module, const char *feature)
{
--
2.53.0-Meta
* [PATCH bpf-next 2/3] tracing: Expose tracepoint BTF ids via tracefs
2026-05-15 16:41 [PATCH bpf-next 0/3] tracing: Expose tracepoint BTF ids via tracefs Mykyta Yatsenko
2026-05-15 16:41 ` [PATCH bpf-next 1/3] bpf: Export btf_get_module_btf() and btf_relocate_id() Mykyta Yatsenko
@ 2026-05-15 16:41 ` Mykyta Yatsenko
2026-05-15 18:25 ` sashiko-bot
2026-05-15 16:41 ` [PATCH bpf-next 3/3] selftests/bpf: Add test for tracepoint btf_ids tracefs file Mykyta Yatsenko
2 siblings, 1 reply; 8+ messages in thread
From: Mykyta Yatsenko @ 2026-05-15 16:41 UTC (permalink / raw)
To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87, memxor,
peterz, rostedt
Cc: Mykyta Yatsenko
From: Mykyta Yatsenko <yatsenko@meta.com>
Add events/<sys>/<event>/btf_ids, a per-template file that exposes
the BTF ids resolve_btfids fills in for each tracepoint:
btf_obj_id BTF object owning the ids below
raw_btf_id FUNC_PROTO of __bpf_trace_<call> (named args), consumed
by raw_tp / tp_btf BPF programs
tp_btf_id trace_event_raw_<call> ring-buffer record, consumed by
classic BPF_PROG_TYPE_TRACEPOINT programs
DECLARE_EVENT_CLASS now emits a 2-entry BTF_ID_LIST (FUNC __bpf_trace_*
and STRUCT trace_event_raw_*) and stores the pointer in
trace_event_class.
Per-syscall events under syscalls/ share the handcrafted classes
event_class_syscall_{enter,exit} instead of going through
DECLARE_EVENT_CLASS. Wire those classes to the BTF id lists
generated for sys_enter / sys_exit so all ~700 per-syscall
events expose the shared dispatcher prototype and record.
The per-syscall events do not own their own tracepoint
(they share sys_enter/sys_exit), so raw_btf_id is reported as 0
on those events; the meaningful raw_btf_id is exposed on
raw_syscalls/sys_{enter,exit}/btf_ids where raw_tp / tp_btf
programs can actually attach.
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
include/linux/trace_events.h | 9 +++++
include/trace/trace_events.h | 24 +++++++++++++
kernel/trace/trace_events.c | 81 ++++++++++++++++++++++++++++++++++++++++++-
kernel/trace/trace_syscalls.c | 17 +++++++++
4 files changed, 130 insertions(+), 1 deletion(-)
diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index d49338c44014..3d55b3cc014a 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -298,6 +298,15 @@ struct trace_event_class {
struct list_head *(*get_fields)(struct trace_event_call *);
struct list_head fields;
int (*raw_init)(struct trace_event_call *);
+#ifdef CONFIG_BPF_EVENTS
+ /*
+ * Per-template BTF ids set by DECLARE_EVENT_CLASS via BTF_ID() and
+ * patched by resolve_btfids at link time. NULL for handcrafted classes.
+ * [0] FUNC __bpf_trace_<template>
+ * [1] STRUCT trace_event_raw_<template>
+ */
+ const u32 *btf_ids;
+#endif
};
extern int trace_event_reg(struct trace_event_call *event,
diff --git a/include/trace/trace_events.h b/include/trace/trace_events.h
index fbc07d353be6..09ad57ac4b73 100644
--- a/include/trace/trace_events.h
+++ b/include/trace/trace_events.h
@@ -19,6 +19,7 @@
*/
#include <linux/trace_events.h>
+#include <linux/btf_ids.h>
#ifndef TRACE_SYSTEM_VAR
#define TRACE_SYSTEM_VAR TRACE_SYSTEM
@@ -397,6 +398,27 @@ static inline notrace int trace_event_get_offsets_##call( \
#define _TRACE_PERF_INIT(call)
#endif /* CONFIG_PERF_EVENTS */
+#ifdef CONFIG_BPF_EVENTS
+/*
+ * Per-template BTF id list, populated at link time by resolve_btfids:
+ * [0] FUNC __bpf_trace_<call> (the BPF dispatcher)
+ * [1] STRUCT trace_event_raw_<call> (the ring-buffer record)
+ * Exposed via the events/<sys>/<name>/btf_ids tracefs file.
+ */
+#define _TRACE_BTF_IDS_DECLARE(call) \
+ extern u32 __bpf_trace_btf_ids_##call[]; \
+ BTF_ID_LIST_GLOBAL(__bpf_trace_btf_ids_##call, 2) \
+ BTF_ID(func, __bpf_trace_##call) \
+ BTF_ID(struct, trace_event_raw_##call)
+
+#define _TRACE_BTF_IDS_INIT(call) \
+ .btf_ids = __bpf_trace_btf_ids_##call,
+
+#else
+#define _TRACE_BTF_IDS_DECLARE(call)
+#define _TRACE_BTF_IDS_INIT(call)
+#endif /* CONFIG_BPF_EVENTS */
+
#include "stages/stage6_event_callback.h"
@@ -474,6 +496,7 @@ static inline void ftrace_test_probe_##call(void) \
#undef DECLARE_EVENT_CLASS
#define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print) \
_TRACE_PERF_PROTO(call, PARAMS(proto)); \
+_TRACE_BTF_IDS_DECLARE(call) \
static char print_fmt_##call[] = print; \
static struct trace_event_class __used __refdata event_class_##call = { \
.system = TRACE_SYSTEM_STRING, \
@@ -483,6 +506,7 @@ static struct trace_event_class __used __refdata event_class_##call = { \
.probe = trace_event_raw_event_##call, \
.reg = trace_event_reg, \
_TRACE_PERF_INIT(call) \
+ _TRACE_BTF_IDS_INIT(call) \
};
#undef DECLARE_EVENT_SYSCALL_CLASS
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index c46e623e7e0d..d463fb44c790 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -22,6 +22,7 @@
#include <linux/sort.h>
#include <linux/slab.h>
#include <linux/delay.h>
+#include <linux/btf.h>
#include <trace/events/sched.h>
#include <trace/syscall.h>
@@ -2200,6 +2201,62 @@ event_id_read(struct file *filp, char __user *ubuf, size_t cnt, loff_t *ppos)
}
#endif
+#ifdef CONFIG_BPF_EVENTS
+static ssize_t
+event_btf_ids_read(struct file *filp, char __user *ubuf, size_t cnt, loff_t *ppos)
+{
+ struct trace_event_file *file;
+ struct trace_event_call *call;
+ const struct btf_type *t;
+ struct module *mod = NULL;
+ u32 raw_id, tp_id;
+ const u32 *ids;
+ struct btf *btf;
+ char buf[128];
+ int len;
+
+ /* Module unload could free call->class and ids[] mid-read. */
+ guard(mutex)(&event_mutex);
+
+ file = event_file_file(filp);
+ if (!file)
+ return -ENODEV;
+
+ call = file->event_call;
+ ids = call->class->btf_ids;
+ if (!ids)
+ return -ENOENT;
+ if (!(call->flags & TRACE_EVENT_FL_DYNAMIC))
+ mod = (struct module *)call->module;
+
+ btf = btf_get_module_btf(mod);
+ if (IS_ERR_OR_NULL(btf))
+ return -ENOENT;
+
+ /* Module-local ids in ids[] need base+local relocation. */
+ tp_id = btf_relocate_id(btf, ids[1]);
+
+ /*
+ * Without FL_TRACEPOINT the dispatcher is shared (e.g. all
+ * per-syscall events fan out from __bpf_trace_sys_enter), so
+ * raw_btf_id has no per-event attach point — report 0.
+ */
+ if (call->flags & TRACE_EVENT_FL_TRACEPOINT) {
+ t = btf_type_by_id(btf, btf_relocate_id(btf, ids[0]));
+ raw_id = t ? t->type : 0;
+ } else {
+ raw_id = 0;
+ }
+
+ len = scnprintf(buf, sizeof(buf),
+ "btf_obj_id: %u\nraw_btf_id: %u\ntp_btf_id: %u\n",
+ btf_obj_id(btf), raw_id, tp_id);
+ btf_put(btf);
+
+ return simple_read_from_buffer(ubuf, cnt, ppos, buf, len);
+}
+#endif
+
static ssize_t
event_filter_read(struct file *filp, char __user *ubuf, size_t cnt,
loff_t *ppos)
@@ -2700,6 +2757,13 @@ static const struct file_operations ftrace_event_id_fops = {
};
#endif
+#ifdef CONFIG_BPF_EVENTS
+static const struct file_operations ftrace_event_btf_ids_fops = {
+ .read = event_btf_ids_read,
+ .llseek = default_llseek,
+};
+#endif
+
static const struct file_operations ftrace_event_filter_fops = {
.open = tracing_open_file_tr,
.read = event_filter_read,
@@ -3093,6 +3157,14 @@ static int event_callback(const char *name, umode_t *mode, void **data,
}
#endif
+#ifdef CONFIG_BPF_EVENTS
+ if (call->class->btf_ids && strcmp(name, "btf_ids") == 0) {
+ *mode = TRACE_MODE_READ;
+ *fops = &ftrace_event_btf_ids_fops;
+ return 1;
+ }
+#endif
+
#ifdef CONFIG_HIST_TRIGGERS
if (strcmp(name, "hist") == 0) {
*mode = TRACE_MODE_READ;
@@ -3147,7 +3219,14 @@ event_create_dir(struct eventfs_inode *parent, struct trace_event_file *file)
.callback = event_callback,
},
#endif
-#define NR_RO_EVENT_ENTRIES (1 + IS_ENABLED(CONFIG_PERF_EVENTS))
+#ifdef CONFIG_BPF_EVENTS
+ {
+ .name = "btf_ids",
+ .callback = event_callback,
+ },
+#endif
+#define NR_RO_EVENT_ENTRIES (1 + IS_ENABLED(CONFIG_PERF_EVENTS) + \
+ IS_ENABLED(CONFIG_BPF_EVENTS))
/* Readonly files must be above this line and counted by NR_RO_EVENT_ENTRIES. */
{
.name = "enable",
diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
index e98ee7e1e66f..9134461a8def 100644
--- a/kernel/trace/trace_syscalls.c
+++ b/kernel/trace/trace_syscalls.c
@@ -1303,12 +1303,26 @@ struct trace_event_functions exit_syscall_print_funcs = {
.trace = print_syscall_exit,
};
+#ifdef CONFIG_BPF_EVENTS
+/*
+ * BTF id lists generated by DECLARE_EVENT_CLASS for the sys_enter and
+ * sys_exit tracepoints. The auto-generated event_class_sys_{enter,exit}
+ * is unused (per-syscall events share the handcrafted classes below),
+ * but the id lists themselves are global and reusable.
+ */
+extern u32 __bpf_trace_btf_ids_sys_enter[];
+extern u32 __bpf_trace_btf_ids_sys_exit[];
+#endif
+
struct trace_event_class __refdata event_class_syscall_enter = {
.system = "syscalls",
.reg = syscall_enter_register,
.fields_array = syscall_enter_fields_array,
.get_fields = syscall_get_enter_fields,
.raw_init = init_syscall_trace,
+#ifdef CONFIG_BPF_EVENTS
+ .btf_ids = __bpf_trace_btf_ids_sys_enter,
+#endif
};
struct trace_event_class __refdata event_class_syscall_exit = {
@@ -1321,6 +1335,9 @@ struct trace_event_class __refdata event_class_syscall_exit = {
},
.fields = LIST_HEAD_INIT(event_class_syscall_exit.fields),
.raw_init = init_syscall_trace,
+#ifdef CONFIG_BPF_EVENTS
+ .btf_ids = __bpf_trace_btf_ids_sys_exit,
+#endif
};
unsigned long __init __weak arch_syscall_addr(int nr)
--
2.53.0-Meta
* [PATCH bpf-next 3/3] selftests/bpf: Add test for tracepoint btf_ids tracefs file
2026-05-15 16:41 [PATCH bpf-next 0/3] tracing: Expose tracepoint BTF ids via tracefs Mykyta Yatsenko
2026-05-15 16:41 ` [PATCH bpf-next 1/3] bpf: Export btf_get_module_btf() and btf_relocate_id() Mykyta Yatsenko
2026-05-15 16:41 ` [PATCH bpf-next 2/3] tracing: Expose tracepoint BTF ids via tracefs Mykyta Yatsenko
@ 2026-05-15 16:41 ` Mykyta Yatsenko
2026-05-15 18:36 ` sashiko-bot
2 siblings, 1 reply; 8+ messages in thread
From: Mykyta Yatsenko @ 2026-05-15 16:41 UTC (permalink / raw)
To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87, memxor,
peterz, rostedt
Cc: Mykyta Yatsenko
From: Mykyta Yatsenko <yatsenko@meta.com>
Read events/bpf_testmod/bpf_testmod_test_read/btf_ids and verify the
exported FUNC_PROTO matches the testmod tracepoint signature
(__data, struct task_struct *task, struct bpf_testmod_test_read_ctx
*ctx) and the record struct trace_event_raw_bpf_testmod_test_read
carries the fields declared by TP_STRUCT__entry.
Use the testmod tracepoint so the test exercises the module/split-BTF
path (btf_relocate_id) rather than vmlinux only, and falls back from
/sys/kernel/tracing to /sys/kernel/debug/tracing when tracefs is not
mounted at the new location.
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
.../testing/selftests/bpf/prog_tests/tp_btf_ids.c | 124 +++++++++++++++++++++
1 file changed, 124 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/tp_btf_ids.c b/tools/testing/selftests/bpf/prog_tests/tp_btf_ids.c
new file mode 100644
index 000000000000..7a893ae523e0
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/tp_btf_ids.c
@@ -0,0 +1,124 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include <bpf/btf.h>
+
+#define TRACEFS "/sys/kernel/tracing"
+#define DEBUGFS_TRACING "/sys/kernel/debug/tracing"
+#define EVENT_SUBPATH "events/bpf_testmod/bpf_testmod_test_read/btf_ids"
+
+struct btf_ids_info {
+ __u32 obj_id;
+ __u32 raw_id;
+ __u32 tp_id;
+};
+
+static const char *btf_ids_path(char *buf, size_t sz)
+{
+ if (access(TRACEFS "/trace", F_OK) == 0)
+ snprintf(buf, sz, "%s/%s", TRACEFS, EVENT_SUBPATH);
+ else
+ snprintf(buf, sz, "%s/%s", DEBUGFS_TRACING, EVENT_SUBPATH);
+ return buf;
+}
+
+static int read_btf_ids(struct btf_ids_info *info)
+{
+ char path[256], buf[256];
+ int fd, n;
+
+ fd = open(btf_ids_path(path, sizeof(path)), O_RDONLY);
+ if (fd < 0)
+ return -errno;
+
+ n = read(fd, buf, sizeof(buf) - 1);
+ close(fd);
+ if (n <= 0)
+ return -EIO;
+ buf[n] = '\0';
+
+ if (sscanf(buf,
+ "btf_obj_id: %u\nraw_btf_id: %u\ntp_btf_id: %u\n",
+ &info->obj_id, &info->raw_id, &info->tp_id) != 3)
+ return -EINVAL;
+ return 0;
+}
+
+static const char *param_name(struct btf *btf, const struct btf_param *p)
+{
+ return btf__name_by_offset(btf, p->name_off);
+}
+
+static const char *member_name(struct btf *btf, const struct btf_member *m)
+{
+ return btf__name_by_offset(btf, m->name_off);
+}
+
+void test_tp_btf_ids(void)
+{
+ const struct btf_type *proto_t, *rec_t;
+ const struct btf_param *params;
+ const struct btf_member *members;
+ struct btf_ids_info info;
+ struct btf *vmlinux_btf, *btf;
+ const char *name;
+ int err;
+
+ err = read_btf_ids(&info);
+ if (!ASSERT_OK(err, "read btf_ids"))
+ return;
+
+ ASSERT_GT(info.obj_id, 0, "obj_id non-zero");
+ ASSERT_GT(info.raw_id, 0, "raw_id non-zero");
+ ASSERT_GT(info.tp_id, 0, "tp_id non-zero");
+
+ vmlinux_btf = btf__load_vmlinux_btf();
+ if (!ASSERT_OK_PTR(vmlinux_btf, "load vmlinux BTF"))
+ return;
+
+ /* Module BTF is split BTF; load with vmlinux as base. */
+ btf = btf__load_from_kernel_by_id_split(info.obj_id, vmlinux_btf);
+ if (!ASSERT_OK_PTR(btf, "load module BTF")) {
+ btf__free(vmlinux_btf);
+ return;
+ }
+
+ /*
+ * raw_btf_id should be the FUNC_PROTO of __bpf_trace_<call>:
+ * void *__data, struct task_struct *task,
+ * struct bpf_testmod_test_read_ctx *ctx
+ */
+ proto_t = btf__type_by_id(btf, info.raw_id);
+ if (!ASSERT_OK_PTR(proto_t, "raw type_by_id"))
+ goto out;
+ if (!ASSERT_TRUE(btf_is_func_proto(proto_t), "raw is FUNC_PROTO"))
+ goto out;
+ ASSERT_EQ(btf_vlen(proto_t), 3, "func_proto arg count");
+
+ params = btf_params(proto_t);
+ ASSERT_STREQ(param_name(btf, &params[0]), "__data", "arg0 name");
+ ASSERT_STREQ(param_name(btf, &params[1]), "task", "arg1 name");
+ ASSERT_STREQ(param_name(btf, &params[2]), "ctx", "arg2 name");
+
+ /*
+ * tp_btf_id should be STRUCT trace_event_raw_<call> with the
+ * fields declared by TP_STRUCT__entry plus the common header.
+ */
+ rec_t = btf__type_by_id(btf, info.tp_id);
+ if (!ASSERT_OK_PTR(rec_t, "tp type_by_id"))
+ goto out;
+ if (!ASSERT_TRUE(btf_is_struct(rec_t), "tp is STRUCT"))
+ goto out;
+ name = btf__name_by_offset(btf, rec_t->name_off);
+ ASSERT_STREQ(name, "trace_event_raw_bpf_testmod_test_read",
+ "tp struct name");
+
+ members = btf_members(rec_t);
+ ASSERT_STREQ(member_name(btf, &members[0]), "ent", "field0 name");
+ ASSERT_STREQ(member_name(btf, &members[1]), "pid", "field1 name");
+ ASSERT_STREQ(member_name(btf, &members[2]), "comm", "field2 name");
+ ASSERT_STREQ(member_name(btf, &members[3]), "off", "field3 name");
+ ASSERT_STREQ(member_name(btf, &members[4]), "len", "field4 name");
+out:
+ btf__free(btf);
+ btf__free(vmlinux_btf);
+}
--
2.53.0-Meta
* Re: [PATCH bpf-next 1/3] bpf: Export btf_get_module_btf() and btf_relocate_id()
2026-05-15 16:41 ` [PATCH bpf-next 1/3] bpf: Export btf_get_module_btf() and btf_relocate_id() Mykyta Yatsenko
@ 2026-05-15 17:56 ` sashiko-bot
2026-05-15 19:54 ` Mykyta Yatsenko
0 siblings, 1 reply; 8+ messages in thread
From: sashiko-bot @ 2026-05-15 17:56 UTC (permalink / raw)
To: Mykyta Yatsenko; +Cc: bpf
Thank you for your contribution! Sashiko AI review found 1 potential issue(s) to consider:
- [Medium] API Asymmetry and Resource Leak due to unexported btf_put()
--
commit 51050bef777abcb3d2969804b1e3693feb10a170
Author: Mykyta Yatsenko <yatsenko@meta.com>
bpf: Export btf_get_module_btf() and btf_relocate_id()
This commit makes btf_get_module_btf() and btf_relocate_id() available outside
kernel/bpf/btf.c by exporting them. This allows the tracing core to look up
module BTF and translate split BTF ids.
> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> index 77af44d8a3ad3..aa88d18247bda 100644
> --- a/kernel/bpf/btf.c
> +++ b/kernel/bpf/btf.c
[ ... ]
> @@ -8500,7 +8501,7 @@ struct module *btf_try_get_module(const struct btf *btf)
> /* Returns struct btf corresponding to the struct module.
> * This function can return NULL or ERR_PTR.
> */
> -static struct btf *btf_get_module_btf(const struct module *module)
> +struct btf *btf_get_module_btf(const struct module *module)
> {
[ ... ]
> @@ -8529,6 +8530,7 @@ static struct btf *btf_get_module_btf(const struct module *module)
>
> return btf;
> }
> +EXPORT_SYMBOL_GPL(btf_get_module_btf);
Because btf_get_module_btf() increments the reference count on the returned
struct btf object using btf_get(), callers are required to release this
reference by calling btf_put().
Since btf_put() is not exported, won't loadable modules using this new API
fail to link or be forced to leak the struct btf reference?
Alternatively, if these functions are only meant for the tracing core, which
is always built-in, are these EXPORT_SYMBOL_GPL declarations necessary?
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260515-generic_tracepoint-v1-0-aa619fa94132@meta.com?part=1
* Re: [PATCH bpf-next 2/3] tracing: Expose tracepoint BTF ids via tracefs
2026-05-15 16:41 ` [PATCH bpf-next 2/3] tracing: Expose tracepoint BTF ids via tracefs Mykyta Yatsenko
@ 2026-05-15 18:25 ` sashiko-bot
0 siblings, 0 replies; 8+ messages in thread
From: sashiko-bot @ 2026-05-15 18:25 UTC (permalink / raw)
To: Mykyta Yatsenko; +Cc: bpf
Thank you for your contribution! Sashiko AI review found 1 potential issue(s) to consider:
- [High] The global `event_mutex` is held across a `copy_to_user()` operation, creating a denial-of-service vulnerability and latency spikes.
--
commit 5bcf593e0ef00b79b0e87af32e5fdb4e5481d768
Author: Mykyta Yatsenko <yatsenko@meta.com>
tracing: Expose tracepoint BTF ids via tracefs
This commit introduces a new tracefs file btf_ids for each event. It exposes
the BTF object ID and tracepoint/raw BTF IDs populated by resolve_btfids at
link time to allow raw_tp and tp_btf BPF programs to properly attach.
> diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
> index c46e623e7e0de..d463fb44c790a 100644
> --- a/kernel/trace/trace_events.c
> +++ b/kernel/trace/trace_events.c
[ ... ]
> @@ -2200,6 +2201,62 @@ event_id_read(struct file *filp, char __user *ubuf, size_t cnt, loff_t *ppos)
> }
> #endif
>
> +#ifdef CONFIG_BPF_EVENTS
> +static ssize_t
> +event_btf_ids_read(struct file *filp, char __user *ubuf, size_t cnt, loff_t *ppos)
> +{
> + struct trace_event_file *file;
> + struct trace_event_call *call;
> + const struct btf_type *t;
> + struct module *mod = NULL;
> + u32 raw_id, tp_id;
> + const u32 *ids;
> + struct btf *btf;
> + char buf[128];
> + int len;
> +
> + /* Module unload could free call->class and ids[] mid-read. */
> + guard(mutex)(&event_mutex);
> +
> + file = event_file_file(filp);
> + if (!file)
> + return -ENODEV;
[ ... ]
> + len = scnprintf(buf, sizeof(buf),
> + "btf_obj_id: %u\nraw_btf_id: %u\ntp_btf_id: %u\n",
> + btf_obj_id(btf), raw_id, tp_id);
> + btf_put(btf);
> +
> + return simple_read_from_buffer(ubuf, cnt, ppos, buf, len);
Does this hold the global event_mutex during copy_to_user()?
Because guard(mutex)(&event_mutex) holds the lock until the function returns,
the mutex remains locked during simple_read_from_buffer(), which internally
calls copy_to_user().
Could this allow a user to intentionally stall the global tracing subsystem?
If the user-space buffer is backed by userfaultfd or a slow FUSE mount, the
page fault could block indefinitely while holding event_mutex, which protects
system-wide tracing operations (including module loading and unloading).
Looking at event_filter_read() nearby, it explicitly releases event_mutex
before calling simple_read_from_buffer(). Would it be safer to use a similar
pattern here by dropping the lock once the buffer string is populated?
> +}
> +#endif
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260515-generic_tracepoint-v1-0-aa619fa94132@meta.com?part=2
* Re: [PATCH bpf-next 3/3] selftests/bpf: Add test for tracepoint btf_ids tracefs file
2026-05-15 16:41 ` [PATCH bpf-next 3/3] selftests/bpf: Add test for tracepoint btf_ids tracefs file Mykyta Yatsenko
@ 2026-05-15 18:36 ` sashiko-bot
0 siblings, 0 replies; 8+ messages in thread
From: sashiko-bot @ 2026-05-15 18:36 UTC (permalink / raw)
To: Mykyta Yatsenko; +Cc: bpf
Thank you for your contribution! Sashiko AI review found 2 potential issue(s) to consider:
- [Medium] Missing test environment check for `bpf_testmod` module causes spurious test failures.
- [Medium] Out-of-bounds array reads and cascading failures due to non-aborting assertions and missing length checks.
--
commit 1c2fbe8e9b6c6c1082b4ae630c67ceb3b5db79b5
Author: Mykyta Yatsenko <yatsenko@meta.com>
selftests/bpf: Add test for tracepoint btf_ids tracefs file
This commit adds a selftest to verify that the BTF IDs exposed in the
tracefs btf_ids file for a module tracepoint match the expected FUNC_PROTO
and STRUCT representations. It uses the bpf_testmod_test_read tracepoint
to exercise the module/split-BTF path.
> diff --git a/tools/testing/selftests/bpf/prog_tests/tp_btf_ids.c b/tools/testing/selftests/bpf/prog_tests/tp_btf_ids.c
> new file mode 100644
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/prog_tests/tp_btf_ids.c
[ ... ]
> +void test_tp_btf_ids(void)
> +{
> + const struct btf_type *proto_t, *rec_t;
> + const struct btf_param *params;
> + const struct btf_member *members;
> + struct btf_ids_info info;
> + struct btf *vmlinux_btf, *btf;
> + const char *name;
> + int err;
> +
Since this test relies on bpf_testmod_test_read from the bpf_testmod.ko
kernel module, should there be a check for env.has_testmod here?
Without it, if the module is not loaded, the test might fail with a missing
tracefs directory instead of gracefully skipping via test__skip().
> + err = read_btf_ids(&info);
> + if (!ASSERT_OK(err, "read btf_ids"))
> + return;
[ ... ]
> + /*
> + * raw_btf_id should be the FUNC_PROTO of __bpf_trace_<call>:
> + * void *__data, struct task_struct *task,
> + * struct bpf_testmod_test_read_ctx *ctx
> + */
> + proto_t = btf__type_by_id(btf, info.raw_id);
> + if (!ASSERT_OK_PTR(proto_t, "raw type_by_id"))
> + goto out;
> + if (!ASSERT_TRUE(btf_is_func_proto(proto_t), "raw is FUNC_PROTO"))
> + goto out;
> + ASSERT_EQ(btf_vlen(proto_t), 3, "func_proto arg count");
> +
> + params = btf_params(proto_t);
> + ASSERT_STREQ(param_name(btf, &params[0]), "__data", "arg0 name");
> + ASSERT_STREQ(param_name(btf, &params[1]), "task", "arg1 name");
> + ASSERT_STREQ(param_name(btf, &params[2]), "ctx", "arg2 name");
Does this code risk reading out of bounds if btf_vlen(proto_t) is less than 3?
The ASSERT_EQ macro logs a failure but does not inherently abort the
function execution. Should this be wrapped in a conditional check, such as
if (!ASSERT_EQ(...)) goto out, to prevent accessing params[2] if the length
is too small?
> +
> + /*
> + * tp_btf_id should be STRUCT trace_event_raw_<call> with the
> + * fields declared by TP_STRUCT__entry plus the common header.
> + */
> + rec_t = btf__type_by_id(btf, info.tp_id);
> + if (!ASSERT_OK_PTR(rec_t, "tp type_by_id"))
> + goto out;
> + if (!ASSERT_TRUE(btf_is_struct(rec_t), "tp is STRUCT"))
> + goto out;
> + name = btf__name_by_offset(btf, rec_t->name_off);
> + ASSERT_STREQ(name, "trace_event_raw_bpf_testmod_test_read",
> + "tp struct name");
> +
> + members = btf_members(rec_t);
> + ASSERT_STREQ(member_name(btf, &members[0]), "ent", "field0 name");
> + ASSERT_STREQ(member_name(btf, &members[1]), "pid", "field1 name");
> + ASSERT_STREQ(member_name(btf, &members[2]), "comm", "field2 name");
> + ASSERT_STREQ(member_name(btf, &members[3]), "off", "field3 name");
> + ASSERT_STREQ(member_name(btf, &members[4]), "len", "field4 name");
Similarly, is it necessary to assert btf_vlen(rec_t) and conditionally jump
to out if it is too small before accessing members[0] through members[4]?
If the structure layout changes or is malformed, this might read adjacent
BTF data.
> +out:
> + btf__free(btf);
> + btf__free(vmlinux_btf);
> +}
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260515-generic_tracepoint-v1-0-aa619fa94132@meta.com?part=3
* Re: [PATCH bpf-next 1/3] bpf: Export btf_get_module_btf() and btf_relocate_id()
2026-05-15 17:56 ` sashiko-bot
@ 2026-05-15 19:54 ` Mykyta Yatsenko
0 siblings, 0 replies; 8+ messages in thread
From: Mykyta Yatsenko @ 2026-05-15 19:54 UTC (permalink / raw)
To: sashiko-reviews; +Cc: bpf
On 5/15/26 6:56 PM, sashiko-bot@kernel.org wrote:
> Thank you for your contribution! Sashiko AI review found 1 potential issue(s) to consider:
> - [Medium] API Asymmetry and Resource Leak due to unexported btf_put()
> --
>
> commit 51050bef777abcb3d2969804b1e3693feb10a170
> Author: Mykyta Yatsenko <yatsenko@meta.com>
>
> bpf: Export btf_get_module_btf() and btf_relocate_id()
>
> This commit makes btf_get_module_btf() and btf_relocate_id() available outside
> kernel/bpf/btf.c by exporting them. This allows the tracing core to look up
> module BTF and translate split BTF ids.
>
>> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
>> index 77af44d8a3ad3..aa88d18247bda 100644
>> --- a/kernel/bpf/btf.c
>> +++ b/kernel/bpf/btf.c
> [ ... ]
>> @@ -8500,7 +8501,7 @@ struct module *btf_try_get_module(const struct btf *btf)
>> /* Returns struct btf corresponding to the struct module.
>> * This function can return NULL or ERR_PTR.
>> */
>> -static struct btf *btf_get_module_btf(const struct module *module)
>> +struct btf *btf_get_module_btf(const struct module *module)
>> {
> [ ... ]
>> @@ -8529,6 +8530,7 @@ static struct btf *btf_get_module_btf(const struct module *module)
>>
>> return btf;
>> }
>> +EXPORT_SYMBOL_GPL(btf_get_module_btf);
>
> Because btf_get_module_btf() increments the reference count on the returned
> struct btf object using btf_get(), callers are required to release this
> reference by calling btf_put().
>
> Since btf_put() is not exported, won't loadable modules using this new API
> fail to link or be forced to leak the struct btf reference?
>
> Alternatively, if these functions are only meant for the tracing core, which
> is always built-in, are these EXPORT_SYMBOL_GPL declarations necessary?
>
Yes, I should drop EXPORT_SYMBOL_GPL and move these function prototypes
to the header.