* [PATCH bpf-next 00/17] bpf: tracing_multi link
@ 2026-02-20 10:06 Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 01/17] ftrace: Add ftrace_hash_count function Jiri Olsa
` (16 more replies)
0 siblings, 17 replies; 38+ messages in thread
From: Jiri Olsa @ 2026-02-20 10:06 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
hi,
adding tracing_multi link support that allows fast attachment
of a tracing program to many functions.
RFC version: https://lore.kernel.org/bpf/20260203093819.2105105-1-jolsa@kernel.org/
Changes to RFC:
- added ftrace_hash_count as wrapper for hash_count [Steven]
- added trampoline mutex pool [Andrii]
- reworked 'struct bpf_tramp_node' separation [Andrii]
- the 'struct bpf_tramp_node' now holds pointer to bpf_link,
which is similar to what we do for uprobe_multi;
I understand it's not a fundamental change compared to the previous
version, which used a bpf_prog pointer instead, but I don't see a
better way of doing this... I'm happy to discuss this further if
there's a better idea
- reworked 'struct bpf_fsession_link' based on bpf_tramp_node
- made btf__find_by_glob_kind function internal helper [Andrii]
- many small assorted fixes [Andrii,CI]
- added session support [Leon Hwang]
- added cookies support
- added more tests
Note I plan to send linkinfo/fdinfo support separately.
TODO: add rollback tests, add f*.multi.s tests, add trigger bench
---
Jiri Olsa (17):
ftrace: Add ftrace_hash_count function
bpf: Use mutex lock pool for bpf trampolines
bpf: Add struct bpf_trampoline_ops object
bpf: Add struct bpf_tramp_node object
bpf: Factor fsession link to use struct bpf_tramp_node
bpf: Add multi tracing attach types
bpf: Add bpf_trampoline_multi_attach/detach functions
bpf: Add support for tracing multi link
bpf: Add support for tracing_multi link cookies
bpf: Add support for tracing_multi link session
libbpf: Add support to create tracing multi link
selftests/bpf: Add tracing multi skel/pattern/ids attach tests
selftests/bpf: Add tracing multi intersect tests
selftests/bpf: Add tracing multi cookies test
selftests/bpf: Add tracing multi session test
selftests/bpf: Add tracing multi attach fails test
selftests/bpf: Add tracing multi attach benchmark test
arch/arm64/net/bpf_jit_comp.c | 58 ++++-----
arch/s390/net/bpf_jit_comp.c | 42 +++---
arch/x86/net/bpf_jit_comp.c | 54 ++++----
include/linux/bpf.h | 87 +++++++++----
include/linux/bpf_types.h | 1 +
include/linux/ftrace.h | 1 +
include/linux/trace_events.h | 6 +
include/uapi/linux/bpf.h | 9 ++
kernel/bpf/bpf_struct_ops.c | 27 ++--
kernel/bpf/btf.c | 4 +
kernel/bpf/syscall.c | 91 ++++++++-----
kernel/bpf/trampoline.c | 457 ++++++++++++++++++++++++++++++++++++++++++++++++++++------------
kernel/bpf/verifier.c | 26 +++-
kernel/trace/bpf_trace.c | 120 ++++++++++++++++-
kernel/trace/ftrace.c | 7 +-
net/bpf/bpf_dummy_struct_ops.c | 14 +-
net/bpf/test_run.c | 3 +
tools/include/uapi/linux/bpf.h | 10 ++
tools/lib/bpf/bpf.c | 9 ++
tools/lib/bpf/bpf.h | 5 +
tools/lib/bpf/libbpf.c | 200 ++++++++++++++++++++++++++++
tools/lib/bpf/libbpf.h | 15 +++
tools/lib/bpf/libbpf.map | 1 +
tools/testing/selftests/bpf/Makefile | 7 +-
tools/testing/selftests/bpf/prog_tests/tracing_multi.c | 595 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
tools/testing/selftests/bpf/progs/tracing_multi_attach.c | 26 ++++
tools/testing/selftests/bpf/progs/tracing_multi_bench.c | 13 ++
tools/testing/selftests/bpf/progs/tracing_multi_check.c | 165 ++++++++++++++++++++++++
tools/testing/selftests/bpf/progs/tracing_multi_intersect_attach.c | 42 ++++++
tools/testing/selftests/bpf/progs/tracing_multi_session_attach.c | 27 ++++
30 files changed, 1874 insertions(+), 248 deletions(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/tracing_multi.c
create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_attach.c
create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_bench.c
create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_check.c
create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_intersect_attach.c
create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_session_attach.c
^ permalink raw reply [flat|nested] 38+ messages in thread
* [PATCH bpf-next 01/17] ftrace: Add ftrace_hash_count function
2026-02-20 10:06 [PATCH bpf-next 00/17] bpf: tracing_multi link Jiri Olsa
@ 2026-02-20 10:06 ` Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 02/17] bpf: Use mutex lock pool for bpf trampolines Jiri Olsa
` (15 subsequent siblings)
16 siblings, 0 replies; 38+ messages in thread
From: Jiri Olsa @ 2026-02-20 10:06 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding external ftrace_hash_count function so we can get the hash
count outside of the ftrace code.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/ftrace.h | 1 +
kernel/trace/ftrace.c | 7 ++++++-
2 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 1a4d36fc9085..a1ea6ab29407 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -415,6 +415,7 @@ struct ftrace_hash *alloc_ftrace_hash(int size_bits);
void free_ftrace_hash(struct ftrace_hash *hash);
struct ftrace_func_entry *add_ftrace_hash_entry_direct(struct ftrace_hash *hash,
unsigned long ip, unsigned long direct);
+unsigned long ftrace_hash_count(struct ftrace_hash *hash);
/* The hash used to know what functions callbacks trace */
struct ftrace_ops_hash {
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 1ce17c8af409..dd1844f882cd 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -6288,11 +6288,16 @@ int modify_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
}
EXPORT_SYMBOL_GPL(modify_ftrace_direct);
-static unsigned long hash_count(struct ftrace_hash *hash)
+static inline unsigned long hash_count(struct ftrace_hash *hash)
{
return hash ? hash->count : 0;
}
+unsigned long ftrace_hash_count(struct ftrace_hash *hash)
+{
+ return hash_count(hash);
+}
+
/**
* hash_add - adds two struct ftrace_hash and returns the result
* @a: struct ftrace_hash object
--
2.52.0
* [PATCH bpf-next 02/17] bpf: Use mutex lock pool for bpf trampolines
2026-02-20 10:06 [PATCH bpf-next 00/17] bpf: tracing_multi link Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 01/17] ftrace: Add ftrace_hash_count function Jiri Olsa
@ 2026-02-20 10:06 ` Jiri Olsa
2026-02-20 10:57 ` bot+bpf-ci
2026-02-20 19:58 ` Alexei Starovoitov
2026-02-20 10:06 ` [PATCH bpf-next 03/17] bpf: Add struct bpf_trampoline_ops object Jiri Olsa
` (14 subsequent siblings)
16 siblings, 2 replies; 38+ messages in thread
From: Jiri Olsa @ 2026-02-20 10:06 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding a mutex lock pool that replaces the bpf trampoline mutex.
For the tracing_multi link coming in the following changes we need to
lock all the involved trampolines during the attachment. This could
mean taking thousands of mutex locks, which is not practical.
As suggested by Andrii, we can replace the bpf trampoline mutex with a
mutex pool, where each trampoline is hashed to one of the locks from
the pool. Locking all the pool mutexes (64 at the moment) is better
than locking thousands of them.
Removing the mutex_is_locked check in bpf_trampoline_put, because the
mutex is gone from bpf_trampoline.
Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/bpf.h | 2 --
kernel/bpf/trampoline.c | 74 +++++++++++++++++++++++++++++++----------
2 files changed, 56 insertions(+), 20 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index cd9b96434904..46bf3d86bdb2 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1335,8 +1335,6 @@ struct bpf_trampoline {
/* hlist for trampoline_ip_table */
struct hlist_node hlist_ip;
struct ftrace_ops *fops;
- /* serializes access to fields of this trampoline */
- struct mutex mutex;
refcount_t refcnt;
u32 flags;
u64 key;
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 952cd7932461..05dc0358654d 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -30,6 +30,45 @@ static struct hlist_head trampoline_ip_table[TRAMPOLINE_TABLE_SIZE];
/* serializes access to trampoline tables */
static DEFINE_MUTEX(trampoline_mutex);
+#define TRAMPOLINE_LOCKS_BITS 6
+#define TRAMPOLINE_LOCKS_TABLE_SIZE (1 << TRAMPOLINE_LOCKS_BITS)
+
+static struct {
+ struct mutex mutex;
+ struct lock_class_key key;
+} *trampoline_locks;
+
+static struct mutex *trampoline_locks_lookup(struct bpf_trampoline *tr)
+{
+ return &trampoline_locks[hash_64((u64) tr, TRAMPOLINE_LOCKS_BITS)].mutex;
+}
+
+static void trampoline_lock(struct bpf_trampoline *tr)
+{
+ mutex_lock(trampoline_locks_lookup(tr));
+}
+
+static void trampoline_unlock(struct bpf_trampoline *tr)
+{
+ mutex_unlock(trampoline_locks_lookup(tr));
+}
+
+static int __init trampoline_locks_init(void)
+{
+ int i;
+
+ trampoline_locks = kmalloc_array(TRAMPOLINE_LOCKS_TABLE_SIZE,
+ sizeof(trampoline_locks[0]), GFP_KERNEL);
+ if (!trampoline_locks)
+ return -ENOMEM;
+
+ for (i = 0; i < TRAMPOLINE_LOCKS_TABLE_SIZE; i++) {
+ lockdep_register_key(&trampoline_locks[i].key);
+ mutex_init_with_key(&trampoline_locks[i].mutex, &trampoline_locks[i].key);
+ }
+ return 0;
+}
+
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mutex);
@@ -71,7 +110,7 @@ static int bpf_tramp_ftrace_ops_func(struct ftrace_ops *ops, unsigned long ip,
/* This is called inside register_ftrace_direct_multi(), so
* tr->mutex is already locked.
*/
- lockdep_assert_held_once(&tr->mutex);
+ lockdep_assert_held_once(trampoline_locks_lookup(tr));
/* Instead of updating the trampoline here, we propagate
* -EAGAIN to register_ftrace_direct(). Then we can
@@ -102,7 +141,7 @@ static int bpf_tramp_ftrace_ops_func(struct ftrace_ops *ops, unsigned long ip,
* mutex_trylock(&tr->mutex) to avoid deadlock in race condition
* (something else is making changes to this same trampoline).
*/
- if (!mutex_trylock(&tr->mutex)) {
+ if (!mutex_trylock(trampoline_locks_lookup(tr))) {
/* sleep 1 ms to make sure whatever holding tr->mutex makes
* some progress.
*/
@@ -129,7 +168,7 @@ static int bpf_tramp_ftrace_ops_func(struct ftrace_ops *ops, unsigned long ip,
break;
}
- mutex_unlock(&tr->mutex);
+ trampoline_unlock(tr);
return ret;
}
#endif
@@ -359,7 +398,6 @@ static struct bpf_trampoline *bpf_trampoline_lookup(u64 key, unsigned long ip)
head = &trampoline_ip_table[hash_64(tr->ip, TRAMPOLINE_HASH_BITS)];
hlist_add_head(&tr->hlist_ip, head);
refcount_set(&tr->refcnt, 1);
- mutex_init(&tr->mutex);
for (i = 0; i < BPF_TRAMP_MAX; i++)
INIT_HLIST_HEAD(&tr->progs_hlist[i]);
out:
@@ -844,9 +882,9 @@ int bpf_trampoline_link_prog(struct bpf_tramp_link *link,
{
int err;
- mutex_lock(&tr->mutex);
+ trampoline_lock(tr);
err = __bpf_trampoline_link_prog(link, tr, tgt_prog);
- mutex_unlock(&tr->mutex);
+ trampoline_unlock(tr);
return err;
}
@@ -887,9 +925,9 @@ int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
{
int err;
- mutex_lock(&tr->mutex);
+ trampoline_lock(tr);
err = __bpf_trampoline_unlink_prog(link, tr, tgt_prog);
- mutex_unlock(&tr->mutex);
+ trampoline_unlock(tr);
return err;
}
@@ -999,14 +1037,15 @@ int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog,
if (!tr)
return -ENOMEM;
- mutex_lock(&tr->mutex);
+ trampoline_lock(tr);
shim_link = cgroup_shim_find(tr, bpf_func);
if (shim_link) {
/* Reusing existing shim attached by the other program. */
bpf_link_inc(&shim_link->link.link);
- mutex_unlock(&tr->mutex);
+ trampoline_unlock(tr);
+
bpf_trampoline_put(tr); /* bpf_trampoline_get above */
return 0;
}
@@ -1026,11 +1065,11 @@ int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog,
shim_link->trampoline = tr;
/* note, we're still holding tr refcnt from above */
- mutex_unlock(&tr->mutex);
+ trampoline_unlock(tr);
return 0;
err:
- mutex_unlock(&tr->mutex);
+ trampoline_unlock(tr);
if (shim_link)
bpf_link_put(&shim_link->link.link);
@@ -1056,9 +1095,9 @@ void bpf_trampoline_unlink_cgroup_shim(struct bpf_prog *prog)
if (WARN_ON_ONCE(!tr))
return;
- mutex_lock(&tr->mutex);
+ trampoline_lock(tr);
shim_link = cgroup_shim_find(tr, bpf_func);
- mutex_unlock(&tr->mutex);
+ trampoline_unlock(tr);
if (shim_link)
bpf_link_put(&shim_link->link.link);
@@ -1076,14 +1115,14 @@ struct bpf_trampoline *bpf_trampoline_get(u64 key,
if (!tr)
return NULL;
- mutex_lock(&tr->mutex);
+ trampoline_lock(tr);
if (tr->func.addr)
goto out;
memcpy(&tr->func.model, &tgt_info->fmodel, sizeof(tgt_info->fmodel));
tr->func.addr = (void *)tgt_info->tgt_addr;
out:
- mutex_unlock(&tr->mutex);
+ trampoline_unlock(tr);
return tr;
}
@@ -1096,7 +1135,6 @@ void bpf_trampoline_put(struct bpf_trampoline *tr)
mutex_lock(&trampoline_mutex);
if (!refcount_dec_and_test(&tr->refcnt))
goto out;
- WARN_ON_ONCE(mutex_is_locked(&tr->mutex));
for (i = 0; i < BPF_TRAMP_MAX; i++)
if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[i])))
@@ -1382,6 +1420,6 @@ static int __init init_trampolines(void)
INIT_HLIST_HEAD(&trampoline_key_table[i]);
for (i = 0; i < TRAMPOLINE_TABLE_SIZE; i++)
INIT_HLIST_HEAD(&trampoline_ip_table[i]);
- return 0;
+ return trampoline_locks_init();
}
late_initcall(init_trampolines);
--
2.52.0
* [PATCH bpf-next 03/17] bpf: Add struct bpf_trampoline_ops object
2026-02-20 10:06 [PATCH bpf-next 00/17] bpf: tracing_multi link Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 01/17] ftrace: Add ftrace_hash_count function Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 02/17] bpf: Use mutex lock pool for bpf trampolines Jiri Olsa
@ 2026-02-20 10:06 ` Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 04/17] bpf: Add struct bpf_tramp_node object Jiri Olsa
` (13 subsequent siblings)
16 siblings, 0 replies; 38+ messages in thread
From: Jiri Olsa @ 2026-02-20 10:06 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
In the following changes we will need to override the ftrace direct
attachment behaviour. In order to do that we are adding a struct
bpf_trampoline_ops object that defines callbacks for ftrace direct
attachment:
register_fentry
unregister_fentry
modify_fentry
The new struct bpf_trampoline_ops object is passed as an argument to
the __bpf_trampoline_link/unlink_prog functions.
At the moment the default trampoline_ops points to the current ftrace
direct attachment functions, so there's no functional change for the
current code.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
kernel/bpf/trampoline.c | 54 +++++++++++++++++++++++++++++------------
1 file changed, 39 insertions(+), 15 deletions(-)
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 05dc0358654d..e9f0152289a4 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -69,6 +69,14 @@ static int __init trampoline_locks_init(void)
return 0;
}
+struct bpf_trampoline_ops {
+ int (*register_fentry)(struct bpf_trampoline *tr, void *new_addr, void *data);
+ int (*unregister_fentry)(struct bpf_trampoline *tr, u32 orig_flags, void *old_addr,
+ void *data);
+ int (*modify_fentry)(struct bpf_trampoline *tr, u32 orig_flags, void *old_addr,
+ void *new_addr, bool lock_direct_mutex, void *data);
+};
+
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mutex);
@@ -425,7 +433,7 @@ static int bpf_trampoline_update_fentry(struct bpf_trampoline *tr, u32 orig_flag
}
static int unregister_fentry(struct bpf_trampoline *tr, u32 orig_flags,
- void *old_addr)
+ void *old_addr, void *data)
{
int ret;
@@ -439,7 +447,7 @@ static int unregister_fentry(struct bpf_trampoline *tr, u32 orig_flags,
static int modify_fentry(struct bpf_trampoline *tr, u32 orig_flags,
void *old_addr, void *new_addr,
- bool lock_direct_mutex)
+ bool lock_direct_mutex, void *data __maybe_unused)
{
int ret;
@@ -453,7 +461,7 @@ static int modify_fentry(struct bpf_trampoline *tr, u32 orig_flags,
}
/* first time registering */
-static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
+static int register_fentry(struct bpf_trampoline *tr, void *new_addr, void *data __maybe_unused)
{
void *ip = tr->func.addr;
unsigned long faddr;
@@ -475,6 +483,12 @@ static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
return ret;
}
+static struct bpf_trampoline_ops trampoline_ops = {
+ .register_fentry = register_fentry,
+ .unregister_fentry = unregister_fentry,
+ .modify_fentry = modify_fentry,
+};
+
static struct bpf_tramp_links *
bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total, bool *ip_arg)
{
@@ -642,7 +656,8 @@ static struct bpf_tramp_image *bpf_tramp_image_alloc(u64 key, int size)
return ERR_PTR(err);
}
-static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mutex)
+static int bpf_trampoline_update_ops(struct bpf_trampoline *tr, bool lock_direct_mutex,
+ struct bpf_trampoline_ops *ops, void *data)
{
struct bpf_tramp_image *im;
struct bpf_tramp_links *tlinks;
@@ -655,7 +670,7 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
return PTR_ERR(tlinks);
if (total == 0) {
- err = unregister_fentry(tr, orig_flags, tr->cur_image->image);
+ err = ops->unregister_fentry(tr, orig_flags, tr->cur_image->image, data);
bpf_tramp_image_put(tr->cur_image);
tr->cur_image = NULL;
goto out;
@@ -726,11 +741,11 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
WARN_ON(tr->cur_image && total == 0);
if (tr->cur_image)
/* progs already running at this address */
- err = modify_fentry(tr, orig_flags, tr->cur_image->image,
- im->image, lock_direct_mutex);
+ err = ops->modify_fentry(tr, orig_flags, tr->cur_image->image,
+ im->image, lock_direct_mutex, data);
else
/* first time registering */
- err = register_fentry(tr, im->image);
+ err = ops->register_fentry(tr, im->image, data);
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
if (err == -EAGAIN) {
@@ -760,6 +775,11 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
goto out;
}
+static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mutex)
+{
+ return bpf_trampoline_update_ops(tr, lock_direct_mutex, &trampoline_ops, NULL);
+}
+
static enum bpf_tramp_prog_type bpf_attach_type_to_tramp(struct bpf_prog *prog)
{
switch (prog->expected_attach_type) {
@@ -804,7 +824,9 @@ static int bpf_freplace_check_tgt_prog(struct bpf_prog *tgt_prog)
static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link,
struct bpf_trampoline *tr,
- struct bpf_prog *tgt_prog)
+ struct bpf_prog *tgt_prog,
+ struct bpf_trampoline_ops *ops,
+ void *data)
{
struct bpf_fsession_link *fslink = NULL;
enum bpf_tramp_prog_type kind;
@@ -862,7 +884,7 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link,
} else {
tr->progs_cnt[kind]++;
}
- err = bpf_trampoline_update(tr, true /* lock_direct_mutex */);
+ err = bpf_trampoline_update_ops(tr, true /* lock_direct_mutex */, ops, data);
if (err) {
hlist_del_init(&link->tramp_hlist);
if (kind == BPF_TRAMP_FSESSION) {
@@ -883,14 +905,16 @@ int bpf_trampoline_link_prog(struct bpf_tramp_link *link,
int err;
trampoline_lock(tr);
- err = __bpf_trampoline_link_prog(link, tr, tgt_prog);
+ err = __bpf_trampoline_link_prog(link, tr, tgt_prog, &trampoline_ops, NULL);
trampoline_unlock(tr);
return err;
}
static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
struct bpf_trampoline *tr,
- struct bpf_prog *tgt_prog)
+ struct bpf_prog *tgt_prog,
+ struct bpf_trampoline_ops *ops,
+ void *data)
{
enum bpf_tramp_prog_type kind;
int err;
@@ -915,7 +939,7 @@ static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
}
hlist_del_init(&link->tramp_hlist);
tr->progs_cnt[kind]--;
- return bpf_trampoline_update(tr, true /* lock_direct_mutex */);
+ return bpf_trampoline_update_ops(tr, true /* lock_direct_mutex */, ops, data);
}
/* bpf_trampoline_unlink_prog() should never fail. */
@@ -926,7 +950,7 @@ int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
int err;
trampoline_lock(tr);
- err = __bpf_trampoline_unlink_prog(link, tr, tgt_prog);
+ err = __bpf_trampoline_unlink_prog(link, tr, tgt_prog, &trampoline_ops, NULL);
trampoline_unlock(tr);
return err;
}
@@ -1058,7 +1082,7 @@ int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog,
goto err;
}
- err = __bpf_trampoline_link_prog(&shim_link->link, tr, NULL);
+ err = __bpf_trampoline_link_prog(&shim_link->link, tr, NULL, &trampoline_ops, NULL);
if (err)
goto err;
--
2.52.0
* [PATCH bpf-next 04/17] bpf: Add struct bpf_tramp_node object
2026-02-20 10:06 [PATCH bpf-next 00/17] bpf: tracing_multi link Jiri Olsa
` (2 preceding siblings ...)
2026-02-20 10:06 ` [PATCH bpf-next 03/17] bpf: Add struct bpf_trampoline_ops object Jiri Olsa
@ 2026-02-20 10:06 ` Jiri Olsa
2026-02-20 10:58 ` bot+bpf-ci
` (3 more replies)
2026-02-20 10:06 ` [PATCH bpf-next 05/17] bpf: Factor fsession link to use struct bpf_tramp_node Jiri Olsa
` (12 subsequent siblings)
16 siblings, 4 replies; 38+ messages in thread
From: Jiri Olsa @ 2026-02-20 10:06 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding struct bpf_tramp_node to decouple the link from the trampoline
attachment data.
At the moment the object for attaching a bpf program to the trampoline
is 'struct bpf_tramp_link':
struct bpf_tramp_link {
struct bpf_link link;
struct hlist_node tramp_hlist;
u64 cookie;
}
The link holds the bpf_prog pointer and forces a one-link/one-program
binding. In the following changes we want to attach a program to
multiple trampolines while keeping just one bpf_link object.
Splitting struct bpf_tramp_link into:
struct bpf_tramp_link {
struct bpf_link link;
struct bpf_tramp_node node;
};
struct bpf_tramp_node {
struct bpf_link *link;
struct hlist_node tramp_hlist;
u64 cookie;
};
The 'struct bpf_tramp_link' defines the standard single-trampoline
link and 'struct bpf_tramp_node' is the per-trampoline attachment
object with a pointer to the bpf_link object.
This will allow us to define a link for multiple trampolines, like:
struct bpf_tracing_multi_link {
struct bpf_link link;
...
int nodes_cnt;
struct bpf_tracing_multi_node nodes[] __counted_by(nodes_cnt);
};
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
arch/arm64/net/bpf_jit_comp.c | 58 +++++++++---------
arch/s390/net/bpf_jit_comp.c | 42 ++++++-------
arch/x86/net/bpf_jit_comp.c | 54 ++++++++---------
include/linux/bpf.h | 56 +++++++++++-------
kernel/bpf/bpf_struct_ops.c | 27 +++++----
kernel/bpf/syscall.c | 39 ++++++------
kernel/bpf/trampoline.c | 105 ++++++++++++++++-----------------
net/bpf/bpf_dummy_struct_ops.c | 14 ++---
8 files changed, 207 insertions(+), 188 deletions(-)
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 7a530ea4f5ae..bca0e9df7767 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -2288,24 +2288,24 @@ bool bpf_jit_supports_subprog_tailcalls(void)
return true;
}
-static void invoke_bpf_prog(struct jit_ctx *ctx, struct bpf_tramp_link *l,
+static void invoke_bpf_prog(struct jit_ctx *ctx, struct bpf_tramp_node *node,
int bargs_off, int retval_off, int run_ctx_off,
bool save_ret)
{
__le32 *branch;
u64 enter_prog;
u64 exit_prog;
- struct bpf_prog *p = l->link.prog;
+ struct bpf_prog *p = node->link->prog;
int cookie_off = offsetof(struct bpf_tramp_run_ctx, bpf_cookie);
enter_prog = (u64)bpf_trampoline_enter(p);
exit_prog = (u64)bpf_trampoline_exit(p);
- if (l->cookie == 0) {
+ if (node->cookie == 0) {
/* if cookie is zero, one instruction is enough to store it */
emit(A64_STR64I(A64_ZR, A64_SP, run_ctx_off + cookie_off), ctx);
} else {
- emit_a64_mov_i64(A64_R(10), l->cookie, ctx);
+ emit_a64_mov_i64(A64_R(10), node->cookie, ctx);
emit(A64_STR64I(A64_R(10), A64_SP, run_ctx_off + cookie_off),
ctx);
}
@@ -2355,7 +2355,7 @@ static void invoke_bpf_prog(struct jit_ctx *ctx, struct bpf_tramp_link *l,
emit_call(exit_prog, ctx);
}
-static void invoke_bpf_mod_ret(struct jit_ctx *ctx, struct bpf_tramp_links *tl,
+static void invoke_bpf_mod_ret(struct jit_ctx *ctx, struct bpf_tramp_nodes *tn,
int bargs_off, int retval_off, int run_ctx_off,
__le32 **branches)
{
@@ -2365,8 +2365,8 @@ static void invoke_bpf_mod_ret(struct jit_ctx *ctx, struct bpf_tramp_links *tl,
* Set this to 0 to avoid confusing the program.
*/
emit(A64_STR64I(A64_ZR, A64_SP, retval_off), ctx);
- for (i = 0; i < tl->nr_links; i++) {
- invoke_bpf_prog(ctx, tl->links[i], bargs_off, retval_off,
+ for (i = 0; i < tn->nr_nodes; i++) {
+ invoke_bpf_prog(ctx, tn->nodes[i], bargs_off, retval_off,
run_ctx_off, true);
/* if (*(u64 *)(sp + retval_off) != 0)
* goto do_fexit;
@@ -2497,10 +2497,10 @@ static void restore_args(struct jit_ctx *ctx, int bargs_off, int nregs)
}
}
-static bool is_struct_ops_tramp(const struct bpf_tramp_links *fentry_links)
+static bool is_struct_ops_tramp(const struct bpf_tramp_nodes *fentry_nodes)
{
- return fentry_links->nr_links == 1 &&
- fentry_links->links[0]->link.type == BPF_LINK_TYPE_STRUCT_OPS;
+ return fentry_nodes->nr_nodes == 1 &&
+ fentry_nodes->nodes[0]->link->type == BPF_LINK_TYPE_STRUCT_OPS;
}
static void store_func_meta(struct jit_ctx *ctx, u64 func_meta, int func_meta_off)
@@ -2521,7 +2521,7 @@ static void store_func_meta(struct jit_ctx *ctx, u64 func_meta, int func_meta_of
*
*/
static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
- struct bpf_tramp_links *tlinks, void *func_addr,
+ struct bpf_tramp_nodes *tnodes, void *func_addr,
const struct btf_func_model *m,
const struct arg_aux *a,
u32 flags)
@@ -2537,14 +2537,14 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
int run_ctx_off;
int oargs_off;
int nfuncargs;
- struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
- struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
- struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
+ struct bpf_tramp_nodes *fentry = &tnodes[BPF_TRAMP_FENTRY];
+ struct bpf_tramp_nodes *fexit = &tnodes[BPF_TRAMP_FEXIT];
+ struct bpf_tramp_nodes *fmod_ret = &tnodes[BPF_TRAMP_MODIFY_RETURN];
bool save_ret;
__le32 **branches = NULL;
bool is_struct_ops = is_struct_ops_tramp(fentry);
int cookie_off, cookie_cnt, cookie_bargs_off;
- int fsession_cnt = bpf_fsession_cnt(tlinks);
+ int fsession_cnt = bpf_fsession_cnt(tnodes);
u64 func_meta;
/* trampoline stack layout:
@@ -2590,7 +2590,7 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
cookie_off = stack_size;
/* room for session cookies */
- cookie_cnt = bpf_fsession_cookie_cnt(tlinks);
+ cookie_cnt = bpf_fsession_cookie_cnt(tnodes);
stack_size += cookie_cnt * 8;
ip_off = stack_size;
@@ -2687,20 +2687,20 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
}
cookie_bargs_off = (bargs_off - cookie_off) / 8;
- for (i = 0; i < fentry->nr_links; i++) {
- if (bpf_prog_calls_session_cookie(fentry->links[i])) {
+ for (i = 0; i < fentry->nr_nodes; i++) {
+ if (bpf_prog_calls_session_cookie(fentry->nodes[i])) {
u64 meta = func_meta | (cookie_bargs_off << BPF_TRAMP_COOKIE_INDEX_SHIFT);
store_func_meta(ctx, meta, func_meta_off);
cookie_bargs_off--;
}
- invoke_bpf_prog(ctx, fentry->links[i], bargs_off,
+ invoke_bpf_prog(ctx, fentry->nodes[i], bargs_off,
retval_off, run_ctx_off,
flags & BPF_TRAMP_F_RET_FENTRY_RET);
}
- if (fmod_ret->nr_links) {
- branches = kcalloc(fmod_ret->nr_links, sizeof(__le32 *),
+ if (fmod_ret->nr_nodes) {
+ branches = kcalloc(fmod_ret->nr_nodes, sizeof(__le32 *),
GFP_KERNEL);
if (!branches)
return -ENOMEM;
@@ -2724,7 +2724,7 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
}
/* update the branches saved in invoke_bpf_mod_ret with cbnz */
- for (i = 0; i < fmod_ret->nr_links && ctx->image != NULL; i++) {
+ for (i = 0; i < fmod_ret->nr_nodes && ctx->image != NULL; i++) {
int offset = &ctx->image[ctx->idx] - branches[i];
*branches[i] = cpu_to_le32(A64_CBNZ(1, A64_R(10), offset));
}
@@ -2735,14 +2735,14 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
store_func_meta(ctx, func_meta, func_meta_off);
cookie_bargs_off = (bargs_off - cookie_off) / 8;
- for (i = 0; i < fexit->nr_links; i++) {
- if (bpf_prog_calls_session_cookie(fexit->links[i])) {
+ for (i = 0; i < fexit->nr_nodes; i++) {
+ if (bpf_prog_calls_session_cookie(fexit->nodes[i])) {
u64 meta = func_meta | (cookie_bargs_off << BPF_TRAMP_COOKIE_INDEX_SHIFT);
store_func_meta(ctx, meta, func_meta_off);
cookie_bargs_off--;
}
- invoke_bpf_prog(ctx, fexit->links[i], bargs_off, retval_off,
+ invoke_bpf_prog(ctx, fexit->nodes[i], bargs_off, retval_off,
run_ctx_off, false);
}
@@ -2800,7 +2800,7 @@ bool bpf_jit_supports_fsession(void)
}
int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
- struct bpf_tramp_links *tlinks, void *func_addr)
+ struct bpf_tramp_nodes *tnodes, void *func_addr)
{
struct jit_ctx ctx = {
.image = NULL,
@@ -2814,7 +2814,7 @@ int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
if (ret < 0)
return ret;
- ret = prepare_trampoline(&ctx, &im, tlinks, func_addr, m, &aaux, flags);
+ ret = prepare_trampoline(&ctx, &im, tnodes, func_addr, m, &aaux, flags);
if (ret < 0)
return ret;
@@ -2838,7 +2838,7 @@ int arch_protect_bpf_trampoline(void *image, unsigned int size)
int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *ro_image,
void *ro_image_end, const struct btf_func_model *m,
- u32 flags, struct bpf_tramp_links *tlinks,
+ u32 flags, struct bpf_tramp_nodes *tnodes,
void *func_addr)
{
u32 size = ro_image_end - ro_image;
@@ -2865,7 +2865,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *ro_image,
ret = calc_arg_aux(m, &aaux);
if (ret)
goto out;
- ret = prepare_trampoline(&ctx, im, tlinks, func_addr, m, &aaux, flags);
+ ret = prepare_trampoline(&ctx, im, tnodes, func_addr, m, &aaux, flags);
if (ret > 0 && validate_code(&ctx) < 0) {
ret = -EINVAL;
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index 579461d471bb..1cc8a642297a 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -2508,20 +2508,20 @@ static void load_imm64(struct bpf_jit *jit, int dst_reg, u64 val)
static int invoke_bpf_prog(struct bpf_tramp_jit *tjit,
const struct btf_func_model *m,
- struct bpf_tramp_link *tlink, bool save_ret)
+ struct bpf_tramp_node *node, bool save_ret)
{
struct bpf_jit *jit = &tjit->common;
int cookie_off = tjit->run_ctx_off +
offsetof(struct bpf_tramp_run_ctx, bpf_cookie);
- struct bpf_prog *p = tlink->link.prog;
+ struct bpf_prog *p = node->link->prog;
int patch;
/*
- * run_ctx.cookie = tlink->cookie;
+ * run_ctx.cookie = node->cookie;
*/
- /* %r0 = tlink->cookie */
- load_imm64(jit, REG_W0, tlink->cookie);
+ /* %r0 = node->cookie */
+ load_imm64(jit, REG_W0, node->cookie);
/* stg %r0,cookie_off(%r15) */
EMIT6_DISP_LH(0xe3000000, 0x0024, REG_W0, REG_0, REG_15, cookie_off);
@@ -2603,12 +2603,12 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
struct bpf_tramp_jit *tjit,
const struct btf_func_model *m,
u32 flags,
- struct bpf_tramp_links *tlinks,
+ struct bpf_tramp_nodes *nodes,
void *func_addr)
{
- struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
- struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
- struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
+ struct bpf_tramp_nodes *fmod_ret = &nodes[BPF_TRAMP_MODIFY_RETURN];
+ struct bpf_tramp_nodes *fentry = &nodes[BPF_TRAMP_FENTRY];
+ struct bpf_tramp_nodes *fexit = &nodes[BPF_TRAMP_FEXIT];
int nr_bpf_args, nr_reg_args, nr_stack_args;
struct bpf_jit *jit = &tjit->common;
int arg, bpf_arg_off;
@@ -2767,12 +2767,12 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
EMIT6_PCREL_RILB_PTR(0xc0050000, REG_14, __bpf_tramp_enter);
}
- for (i = 0; i < fentry->nr_links; i++)
- if (invoke_bpf_prog(tjit, m, fentry->links[i],
+ for (i = 0; i < fentry->nr_nodes; i++)
+ if (invoke_bpf_prog(tjit, m, fentry->nodes[i],
flags & BPF_TRAMP_F_RET_FENTRY_RET))
return -EINVAL;
- if (fmod_ret->nr_links) {
+ if (fmod_ret->nr_nodes) {
/*
* retval = 0;
*/
@@ -2781,8 +2781,8 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
_EMIT6(0xd707f000 | tjit->retval_off,
0xf000 | tjit->retval_off);
- for (i = 0; i < fmod_ret->nr_links; i++) {
- if (invoke_bpf_prog(tjit, m, fmod_ret->links[i], true))
+ for (i = 0; i < fmod_ret->nr_nodes; i++) {
+ if (invoke_bpf_prog(tjit, m, fmod_ret->nodes[i], true))
return -EINVAL;
/*
@@ -2849,8 +2849,8 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
/* do_fexit: */
tjit->do_fexit = jit->prg;
- for (i = 0; i < fexit->nr_links; i++)
- if (invoke_bpf_prog(tjit, m, fexit->links[i], false))
+ for (i = 0; i < fexit->nr_nodes; i++)
+ if (invoke_bpf_prog(tjit, m, fexit->nodes[i], false))
return -EINVAL;
if (flags & BPF_TRAMP_F_CALL_ORIG) {
@@ -2902,7 +2902,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
}
int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
- struct bpf_tramp_links *tlinks, void *orig_call)
+ struct bpf_tramp_nodes *tnodes, void *orig_call)
{
struct bpf_tramp_image im;
struct bpf_tramp_jit tjit;
@@ -2911,14 +2911,14 @@ int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
memset(&tjit, 0, sizeof(tjit));
ret = __arch_prepare_bpf_trampoline(&im, &tjit, m, flags,
- tlinks, orig_call);
+ tnodes, orig_call);
return ret < 0 ? ret : tjit.common.prg;
}
int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image,
void *image_end, const struct btf_func_model *m,
- u32 flags, struct bpf_tramp_links *tlinks,
+ u32 flags, struct bpf_tramp_nodes *tnodes,
void *func_addr)
{
struct bpf_tramp_jit tjit;
@@ -2927,7 +2927,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image,
/* Compute offsets, check whether the code fits. */
memset(&tjit, 0, sizeof(tjit));
ret = __arch_prepare_bpf_trampoline(im, &tjit, m, flags,
- tlinks, func_addr);
+ tnodes, func_addr);
if (ret < 0)
return ret;
@@ -2941,7 +2941,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image,
tjit.common.prg = 0;
tjit.common.prg_buf = image;
ret = __arch_prepare_bpf_trampoline(im, &tjit, m, flags,
- tlinks, func_addr);
+ tnodes, func_addr);
return ret < 0 ? ret : tjit.common.prg;
}
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 070ba80e39d7..c5eab786780e 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -2978,15 +2978,15 @@ static void restore_regs(const struct btf_func_model *m, u8 **prog,
}
static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
- struct bpf_tramp_link *l, int stack_size,
+ struct bpf_tramp_node *node, int stack_size,
int run_ctx_off, bool save_ret,
void *image, void *rw_image)
{
u8 *prog = *pprog;
u8 *jmp_insn;
int ctx_cookie_off = offsetof(struct bpf_tramp_run_ctx, bpf_cookie);
- struct bpf_prog *p = l->link.prog;
- u64 cookie = l->cookie;
+ struct bpf_prog *p = node->link->prog;
+ u64 cookie = node->cookie;
/* mov rdi, cookie */
emit_mov_imm64(&prog, BPF_REG_1, (long) cookie >> 32, (u32) (long) cookie);
@@ -3093,7 +3093,7 @@ static int emit_cond_near_jump(u8 **pprog, void *func, void *ip, u8 jmp_cond)
}
static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
- struct bpf_tramp_links *tl, int stack_size,
+ struct bpf_tramp_nodes *tl, int stack_size,
int run_ctx_off, int func_meta_off, bool save_ret,
void *image, void *rw_image, u64 func_meta,
int cookie_off)
@@ -3101,13 +3101,13 @@ static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
int i, cur_cookie = (cookie_off - stack_size) / 8;
u8 *prog = *pprog;
- for (i = 0; i < tl->nr_links; i++) {
- if (tl->links[i]->link.prog->call_session_cookie) {
+ for (i = 0; i < tl->nr_nodes; i++) {
+ if (tl->nodes[i]->link->prog->call_session_cookie) {
emit_store_stack_imm64(&prog, BPF_REG_0, -func_meta_off,
func_meta | (cur_cookie << BPF_TRAMP_COOKIE_INDEX_SHIFT));
cur_cookie--;
}
- if (invoke_bpf_prog(m, &prog, tl->links[i], stack_size,
+ if (invoke_bpf_prog(m, &prog, tl->nodes[i], stack_size,
run_ctx_off, save_ret, image, rw_image))
return -EINVAL;
}
@@ -3116,7 +3116,7 @@ static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
}
static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
- struct bpf_tramp_links *tl, int stack_size,
+ struct bpf_tramp_nodes *tl, int stack_size,
int run_ctx_off, u8 **branches,
void *image, void *rw_image)
{
@@ -3128,8 +3128,8 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
*/
emit_mov_imm32(&prog, false, BPF_REG_0, 0);
emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
- for (i = 0; i < tl->nr_links; i++) {
- if (invoke_bpf_prog(m, &prog, tl->links[i], stack_size, run_ctx_off, true,
+ for (i = 0; i < tl->nr_nodes; i++) {
+ if (invoke_bpf_prog(m, &prog, tl->nodes[i], stack_size, run_ctx_off, true,
image, rw_image))
return -EINVAL;
@@ -3220,14 +3220,14 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_image,
void *rw_image_end, void *image,
const struct btf_func_model *m, u32 flags,
- struct bpf_tramp_links *tlinks,
+ struct bpf_tramp_nodes *tnodes,
void *func_addr)
{
int i, ret, nr_regs = m->nr_args, stack_size = 0;
int regs_off, func_meta_off, ip_off, run_ctx_off, arg_stack_off, rbx_off;
- struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
- struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
- struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
+ struct bpf_tramp_nodes *fentry = &tnodes[BPF_TRAMP_FENTRY];
+ struct bpf_tramp_nodes *fexit = &tnodes[BPF_TRAMP_FEXIT];
+ struct bpf_tramp_nodes *fmod_ret = &tnodes[BPF_TRAMP_MODIFY_RETURN];
void *orig_call = func_addr;
int cookie_off, cookie_cnt;
u8 **branches = NULL;
@@ -3299,7 +3299,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
ip_off = stack_size;
- cookie_cnt = bpf_fsession_cookie_cnt(tlinks);
+ cookie_cnt = bpf_fsession_cookie_cnt(tnodes);
/* room for session cookies */
stack_size += cookie_cnt * 8;
cookie_off = stack_size;
@@ -3392,7 +3392,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
}
}
- if (bpf_fsession_cnt(tlinks)) {
+ if (bpf_fsession_cnt(tnodes)) {
/* clear all the session cookies' value */
for (int i = 0; i < cookie_cnt; i++)
emit_store_stack_imm64(&prog, BPF_REG_0, -cookie_off + 8 * i, 0);
@@ -3400,15 +3400,15 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
emit_store_stack_imm64(&prog, BPF_REG_0, -8, 0);
}
- if (fentry->nr_links) {
+ if (fentry->nr_nodes) {
if (invoke_bpf(m, &prog, fentry, regs_off, run_ctx_off, func_meta_off,
flags & BPF_TRAMP_F_RET_FENTRY_RET, image, rw_image,
func_meta, cookie_off))
return -EINVAL;
}
- if (fmod_ret->nr_links) {
- branches = kcalloc(fmod_ret->nr_links, sizeof(u8 *),
+ if (fmod_ret->nr_nodes) {
+ branches = kcalloc(fmod_ret->nr_nodes, sizeof(u8 *),
GFP_KERNEL);
if (!branches)
return -ENOMEM;
@@ -3447,7 +3447,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
emit_nops(&prog, X86_PATCH_SIZE);
}
- if (fmod_ret->nr_links) {
+ if (fmod_ret->nr_nodes) {
/* From Intel 64 and IA-32 Architectures Optimization
* Reference Manual, 3.4.1.4 Code Alignment, Assembly/Compiler
* Coding Rule 11: All branch targets should be 16-byte
@@ -3457,7 +3457,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
/* Update the branches saved in invoke_bpf_mod_ret with the
* aligned address of do_fexit.
*/
- for (i = 0; i < fmod_ret->nr_links; i++) {
+ for (i = 0; i < fmod_ret->nr_nodes; i++) {
emit_cond_near_jump(&branches[i], image + (prog - (u8 *)rw_image),
image + (branches[i] - (u8 *)rw_image), X86_JNE);
}
@@ -3465,10 +3465,10 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
/* set the "is_return" flag for fsession */
func_meta |= (1ULL << BPF_TRAMP_IS_RETURN_SHIFT);
- if (bpf_fsession_cnt(tlinks))
+ if (bpf_fsession_cnt(tnodes))
emit_store_stack_imm64(&prog, BPF_REG_0, -func_meta_off, func_meta);
- if (fexit->nr_links) {
+ if (fexit->nr_nodes) {
if (invoke_bpf(m, &prog, fexit, regs_off, run_ctx_off, func_meta_off,
false, image, rw_image, func_meta, cookie_off)) {
ret = -EINVAL;
@@ -3542,7 +3542,7 @@ int arch_protect_bpf_trampoline(void *image, unsigned int size)
int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *image_end,
const struct btf_func_model *m, u32 flags,
- struct bpf_tramp_links *tlinks,
+ struct bpf_tramp_nodes *tnodes,
void *func_addr)
{
void *rw_image, *tmp;
@@ -3557,7 +3557,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
return -ENOMEM;
ret = __arch_prepare_bpf_trampoline(im, rw_image, rw_image + size, image, m,
- flags, tlinks, func_addr);
+ flags, tnodes, func_addr);
if (ret < 0)
goto out;
@@ -3570,7 +3570,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
}
int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
- struct bpf_tramp_links *tlinks, void *func_addr)
+ struct bpf_tramp_nodes *tnodes, void *func_addr)
{
struct bpf_tramp_image im;
void *image;
@@ -3588,7 +3588,7 @@ int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
return -ENOMEM;
ret = __arch_prepare_bpf_trampoline(&im, image, image + PAGE_SIZE, image,
- m, flags, tlinks, func_addr);
+ m, flags, tnodes, func_addr);
bpf_jit_free_exec(image);
return ret;
}
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 46bf3d86bdb2..9c7f5ab3c7ce 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1233,9 +1233,9 @@ enum {
#define BPF_TRAMP_COOKIE_INDEX_SHIFT 8
#define BPF_TRAMP_IS_RETURN_SHIFT 63
-struct bpf_tramp_links {
- struct bpf_tramp_link *links[BPF_MAX_TRAMP_LINKS];
- int nr_links;
+struct bpf_tramp_nodes {
+ struct bpf_tramp_node *nodes[BPF_MAX_TRAMP_LINKS];
+ int nr_nodes;
};
struct bpf_tramp_run_ctx;
@@ -1263,13 +1263,13 @@ struct bpf_tramp_run_ctx;
struct bpf_tramp_image;
int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *image_end,
const struct btf_func_model *m, u32 flags,
- struct bpf_tramp_links *tlinks,
+ struct bpf_tramp_nodes *tnodes,
void *func_addr);
void *arch_alloc_bpf_trampoline(unsigned int size);
void arch_free_bpf_trampoline(void *image, unsigned int size);
int __must_check arch_protect_bpf_trampoline(void *image, unsigned int size);
int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
- struct bpf_tramp_links *tlinks, void *func_addr);
+ struct bpf_tramp_nodes *tnodes, void *func_addr);
u64 notrace __bpf_prog_enter_sleepable_recur(struct bpf_prog *prog,
struct bpf_tramp_run_ctx *run_ctx);
@@ -1453,10 +1453,10 @@ static inline int bpf_dynptr_check_off_len(const struct bpf_dynptr_kern *ptr, u6
}
#ifdef CONFIG_BPF_JIT
-int bpf_trampoline_link_prog(struct bpf_tramp_link *link,
+int bpf_trampoline_link_prog(struct bpf_tramp_node *node,
struct bpf_trampoline *tr,
struct bpf_prog *tgt_prog);
-int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
+int bpf_trampoline_unlink_prog(struct bpf_tramp_node *node,
struct bpf_trampoline *tr,
struct bpf_prog *tgt_prog);
struct bpf_trampoline *bpf_trampoline_get(u64 key,
@@ -1865,12 +1865,17 @@ struct bpf_link_ops {
__poll_t (*poll)(struct file *file, struct poll_table_struct *pts);
};
-struct bpf_tramp_link {
- struct bpf_link link;
+struct bpf_tramp_node {
+ struct bpf_link *link;
struct hlist_node tramp_hlist;
u64 cookie;
};
+struct bpf_tramp_link {
+ struct bpf_link link;
+ struct bpf_tramp_node node;
+};
+
struct bpf_shim_tramp_link {
struct bpf_tramp_link link;
struct bpf_trampoline *trampoline;
@@ -2088,8 +2093,8 @@ void bpf_struct_ops_put(const void *kdata);
int bpf_struct_ops_supported(const struct bpf_struct_ops *st_ops, u32 moff);
int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, void *key,
void *value);
-int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_links *tlinks,
- struct bpf_tramp_link *link,
+int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_nodes *tnodes,
+ struct bpf_tramp_node *node,
const struct btf_func_model *model,
void *stub_func,
void **image, u32 *image_off,
@@ -2181,31 +2186,31 @@ static inline void bpf_struct_ops_desc_release(struct bpf_struct_ops_desc *st_op
#endif
-static inline int bpf_fsession_cnt(struct bpf_tramp_links *links)
+static inline int bpf_fsession_cnt(struct bpf_tramp_nodes *nodes)
{
- struct bpf_tramp_links fentries = links[BPF_TRAMP_FENTRY];
+ struct bpf_tramp_nodes fentries = nodes[BPF_TRAMP_FENTRY];
int cnt = 0;
- for (int i = 0; i < links[BPF_TRAMP_FENTRY].nr_links; i++) {
- if (fentries.links[i]->link.prog->expected_attach_type == BPF_TRACE_FSESSION)
+ for (int i = 0; i < nodes[BPF_TRAMP_FENTRY].nr_nodes; i++) {
+ if (fentries.nodes[i]->link->prog->expected_attach_type == BPF_TRACE_FSESSION)
cnt++;
}
return cnt;
}
-static inline bool bpf_prog_calls_session_cookie(struct bpf_tramp_link *link)
+static inline bool bpf_prog_calls_session_cookie(struct bpf_tramp_node *node)
{
- return link->link.prog->call_session_cookie;
+ return node->link->prog->call_session_cookie;
}
-static inline int bpf_fsession_cookie_cnt(struct bpf_tramp_links *links)
+static inline int bpf_fsession_cookie_cnt(struct bpf_tramp_nodes *nodes)
{
- struct bpf_tramp_links fentries = links[BPF_TRAMP_FENTRY];
+ struct bpf_tramp_nodes fentries = nodes[BPF_TRAMP_FENTRY];
int cnt = 0;
- for (int i = 0; i < links[BPF_TRAMP_FENTRY].nr_links; i++) {
- if (bpf_prog_calls_session_cookie(fentries.links[i]))
+ for (int i = 0; i < nodes[BPF_TRAMP_FENTRY].nr_nodes; i++) {
+ if (bpf_prog_calls_session_cookie(fentries.nodes[i]))
cnt++;
}
@@ -2758,6 +2763,9 @@ void bpf_link_init(struct bpf_link *link, enum bpf_link_type type,
void bpf_link_init_sleepable(struct bpf_link *link, enum bpf_link_type type,
const struct bpf_link_ops *ops, struct bpf_prog *prog,
enum bpf_attach_type attach_type, bool sleepable);
+void bpf_tramp_link_init(struct bpf_tramp_link *link, enum bpf_link_type type,
+ const struct bpf_link_ops *ops, struct bpf_prog *prog,
+ enum bpf_attach_type attach_type, u64 cookie);
int bpf_link_prime(struct bpf_link *link, struct bpf_link_primer *primer);
int bpf_link_settle(struct bpf_link_primer *primer);
void bpf_link_cleanup(struct bpf_link_primer *primer);
@@ -3123,6 +3131,12 @@ static inline void bpf_link_init_sleepable(struct bpf_link *link, enum bpf_link_
{
}
+static inline void bpf_tramp_link_init(struct bpf_tramp_link *link, enum bpf_link_type type,
+ const struct bpf_link_ops *ops, struct bpf_prog *prog,
+ enum bpf_attach_type attach_type, u64 cookie)
+{
+}
+
static inline int bpf_link_prime(struct bpf_link *link,
struct bpf_link_primer *primer)
{
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index c43346cb3d76..73522559dc05 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -595,8 +595,8 @@ const struct bpf_link_ops bpf_struct_ops_link_lops = {
.dealloc = bpf_struct_ops_link_dealloc,
};
-int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_links *tlinks,
- struct bpf_tramp_link *link,
+int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_nodes *tnodes,
+ struct bpf_tramp_node *node,
const struct btf_func_model *model,
void *stub_func,
void **_image, u32 *_image_off,
@@ -606,13 +606,13 @@ int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_links *tlinks,
void *image = *_image;
int size;
- tlinks[BPF_TRAMP_FENTRY].links[0] = link;
- tlinks[BPF_TRAMP_FENTRY].nr_links = 1;
+ tnodes[BPF_TRAMP_FENTRY].nodes[0] = node;
+ tnodes[BPF_TRAMP_FENTRY].nr_nodes = 1;
if (model->ret_size > 0)
flags |= BPF_TRAMP_F_RET_FENTRY_RET;
- size = arch_bpf_trampoline_size(model, flags, tlinks, stub_func);
+ size = arch_bpf_trampoline_size(model, flags, tnodes, stub_func);
if (size <= 0)
return size ? : -EFAULT;
@@ -629,7 +629,7 @@ int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_links *tlinks,
size = arch_prepare_bpf_trampoline(NULL, image + image_off,
image + image_off + size,
- model, flags, tlinks, stub_func);
+ model, flags, tnodes, stub_func);
if (size <= 0) {
if (image != *_image)
bpf_struct_ops_image_free(image);
@@ -694,7 +694,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
const struct btf_type *module_type;
const struct btf_member *member;
const struct btf_type *t = st_ops_desc->type;
- struct bpf_tramp_links *tlinks;
+ struct bpf_tramp_nodes *tnodes;
void *udata, *kdata;
int prog_fd, err;
u32 i, trampoline_start, image_off = 0;
@@ -721,8 +721,8 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
if (uvalue->common.state || refcount_read(&uvalue->common.refcnt))
return -EINVAL;
- tlinks = kcalloc(BPF_TRAMP_MAX, sizeof(*tlinks), GFP_KERNEL);
- if (!tlinks)
+ tnodes = kcalloc(BPF_TRAMP_MAX, sizeof(*tnodes), GFP_KERNEL);
+ if (!tnodes)
return -ENOMEM;
uvalue = (struct bpf_struct_ops_value *)st_map->uvalue;
@@ -821,8 +821,9 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
err = -ENOMEM;
goto reset_unlock;
}
- bpf_link_init(&link->link, BPF_LINK_TYPE_STRUCT_OPS,
- &bpf_struct_ops_link_lops, prog, prog->expected_attach_type);
+ bpf_tramp_link_init(link, BPF_LINK_TYPE_STRUCT_OPS,
+ &bpf_struct_ops_link_lops, prog, prog->expected_attach_type, 0);
+
*plink++ = &link->link;
ksym = kzalloc(sizeof(*ksym), GFP_USER);
@@ -833,7 +834,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
*pksym++ = ksym;
trampoline_start = image_off;
- err = bpf_struct_ops_prepare_trampoline(tlinks, link,
+ err = bpf_struct_ops_prepare_trampoline(tnodes, &link->node,
&st_ops->func_models[i],
*(void **)(st_ops->cfi_stubs + moff),
&image, &image_off,
@@ -911,7 +912,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
memset(uvalue, 0, map->value_size);
memset(kvalue, 0, map->value_size);
unlock:
- kfree(tlinks);
+ kfree(tnodes);
mutex_unlock(&st_map->lock);
if (!err)
bpf_struct_ops_map_add_ksyms(st_map);
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index dd89bf809772..e9d482c59977 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -3209,6 +3209,15 @@ void bpf_link_init(struct bpf_link *link, enum bpf_link_type type,
bpf_link_init_sleepable(link, type, ops, prog, attach_type, false);
}
+void bpf_tramp_link_init(struct bpf_tramp_link *link, enum bpf_link_type type,
+ const struct bpf_link_ops *ops, struct bpf_prog *prog,
+ enum bpf_attach_type attach_type, u64 cookie)
+{
+ bpf_link_init(&link->link, type, ops, prog, attach_type);
+ link->node.link = &link->link;
+ link->node.cookie = cookie;
+}
+
static void bpf_link_free_id(int id)
{
if (!id)
@@ -3502,7 +3511,7 @@ static void bpf_tracing_link_release(struct bpf_link *link)
struct bpf_tracing_link *tr_link =
container_of(link, struct bpf_tracing_link, link.link);
- WARN_ON_ONCE(bpf_trampoline_unlink_prog(&tr_link->link,
+ WARN_ON_ONCE(bpf_trampoline_unlink_prog(&tr_link->link.node,
tr_link->trampoline,
tr_link->tgt_prog));
@@ -3515,8 +3524,7 @@ static void bpf_tracing_link_release(struct bpf_link *link)
static void bpf_tracing_link_dealloc(struct bpf_link *link)
{
- struct bpf_tracing_link *tr_link =
- container_of(link, struct bpf_tracing_link, link.link);
+ struct bpf_tracing_link *tr_link = container_of(link, struct bpf_tracing_link, link.link);
kfree(tr_link);
}
@@ -3524,8 +3532,8 @@ static void bpf_tracing_link_dealloc(struct bpf_link *link)
static void bpf_tracing_link_show_fdinfo(const struct bpf_link *link,
struct seq_file *seq)
{
- struct bpf_tracing_link *tr_link =
- container_of(link, struct bpf_tracing_link, link.link);
+ struct bpf_tracing_link *tr_link = container_of(link, struct bpf_tracing_link, link.link);
+
u32 target_btf_id, target_obj_id;
bpf_trampoline_unpack_key(tr_link->trampoline->key,
@@ -3538,17 +3546,16 @@ static void bpf_tracing_link_show_fdinfo(const struct bpf_link *link,
link->attach_type,
target_obj_id,
target_btf_id,
- tr_link->link.cookie);
+ tr_link->link.node.cookie);
}
static int bpf_tracing_link_fill_link_info(const struct bpf_link *link,
struct bpf_link_info *info)
{
- struct bpf_tracing_link *tr_link =
- container_of(link, struct bpf_tracing_link, link.link);
+ struct bpf_tracing_link *tr_link = container_of(link, struct bpf_tracing_link, link.link);
info->tracing.attach_type = link->attach_type;
- info->tracing.cookie = tr_link->link.cookie;
+ info->tracing.cookie = tr_link->link.node.cookie;
bpf_trampoline_unpack_key(tr_link->trampoline->key,
&info->tracing.target_obj_id,
&info->tracing.target_btf_id);
@@ -3635,9 +3642,9 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
fslink = kzalloc(sizeof(*fslink), GFP_USER);
if (fslink) {
- bpf_link_init(&fslink->fexit.link, BPF_LINK_TYPE_TRACING,
- &bpf_tracing_link_lops, prog, attach_type);
- fslink->fexit.cookie = bpf_cookie;
+ bpf_tramp_link_init(&fslink->fexit, BPF_LINK_TYPE_TRACING,
+ &bpf_tracing_link_lops, prog, attach_type,
+ bpf_cookie);
link = &fslink->link;
} else {
link = NULL;
@@ -3649,10 +3656,8 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
err = -ENOMEM;
goto out_put_prog;
}
- bpf_link_init(&link->link.link, BPF_LINK_TYPE_TRACING,
- &bpf_tracing_link_lops, prog, attach_type);
-
- link->link.cookie = bpf_cookie;
+ bpf_tramp_link_init(&link->link, BPF_LINK_TYPE_TRACING,
+ &bpf_tracing_link_lops, prog, attach_type, bpf_cookie);
mutex_lock(&prog->aux->dst_mutex);
@@ -3738,7 +3743,7 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
if (err)
goto out_unlock;
- err = bpf_trampoline_link_prog(&link->link, tr, tgt_prog);
+ err = bpf_trampoline_link_prog(&link->link.node, tr, tgt_prog);
if (err) {
bpf_link_cleanup(&link_primer);
link = NULL;
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index e9f0152289a4..f4acf3771600 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -489,30 +489,29 @@ static struct bpf_trampoline_ops trampoline_ops = {
.modify_fentry = modify_fentry,
};
-static struct bpf_tramp_links *
+static struct bpf_tramp_nodes *
bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total, bool *ip_arg)
{
- struct bpf_tramp_link *link;
- struct bpf_tramp_links *tlinks;
- struct bpf_tramp_link **links;
+ struct bpf_tramp_node *node, **nodes;
+ struct bpf_tramp_nodes *tnodes;
int kind;
*total = 0;
- tlinks = kcalloc(BPF_TRAMP_MAX, sizeof(*tlinks), GFP_KERNEL);
- if (!tlinks)
+ tnodes = kcalloc(BPF_TRAMP_MAX, sizeof(*tnodes), GFP_KERNEL);
+ if (!tnodes)
return ERR_PTR(-ENOMEM);
for (kind = 0; kind < BPF_TRAMP_MAX; kind++) {
- tlinks[kind].nr_links = tr->progs_cnt[kind];
+ tnodes[kind].nr_nodes = tr->progs_cnt[kind];
*total += tr->progs_cnt[kind];
- links = tlinks[kind].links;
+ nodes = tnodes[kind].nodes;
- hlist_for_each_entry(link, &tr->progs_hlist[kind], tramp_hlist) {
- *ip_arg |= link->link.prog->call_get_func_ip;
- *links++ = link;
+ hlist_for_each_entry(node, &tr->progs_hlist[kind], tramp_hlist) {
+ *ip_arg |= node->link->prog->call_get_func_ip;
+ *nodes++ = node;
}
}
- return tlinks;
+ return tnodes;
}
static void bpf_tramp_image_free(struct bpf_tramp_image *im)
@@ -660,14 +659,14 @@ static int bpf_trampoline_update_ops(struct bpf_trampoline *tr, bool lock_direct
struct bpf_trampoline_ops *ops, void *data)
{
struct bpf_tramp_image *im;
- struct bpf_tramp_links *tlinks;
+ struct bpf_tramp_nodes *tnodes;
u32 orig_flags = tr->flags;
bool ip_arg = false;
int err, total, size;
- tlinks = bpf_trampoline_get_progs(tr, &total, &ip_arg);
- if (IS_ERR(tlinks))
- return PTR_ERR(tlinks);
+ tnodes = bpf_trampoline_get_progs(tr, &total, &ip_arg);
+ if (IS_ERR(tnodes))
+ return PTR_ERR(tnodes);
if (total == 0) {
err = ops->unregister_fentry(tr, orig_flags, tr->cur_image->image, data);
@@ -679,8 +678,8 @@ static int bpf_trampoline_update_ops(struct bpf_trampoline *tr, bool lock_direct
/* clear all bits except SHARE_IPMODIFY and TAIL_CALL_CTX */
tr->flags &= (BPF_TRAMP_F_SHARE_IPMODIFY | BPF_TRAMP_F_TAIL_CALL_CTX);
- if (tlinks[BPF_TRAMP_FEXIT].nr_links ||
- tlinks[BPF_TRAMP_MODIFY_RETURN].nr_links) {
+ if (tnodes[BPF_TRAMP_FEXIT].nr_nodes ||
+ tnodes[BPF_TRAMP_MODIFY_RETURN].nr_nodes) {
/* NOTE: BPF_TRAMP_F_RESTORE_REGS and BPF_TRAMP_F_SKIP_FRAME
* should not be set together.
*/
@@ -711,7 +710,7 @@ static int bpf_trampoline_update_ops(struct bpf_trampoline *tr, bool lock_direct
#endif
size = arch_bpf_trampoline_size(&tr->func.model, tr->flags,
- tlinks, tr->func.addr);
+ tnodes, tr->func.addr);
if (size < 0) {
err = size;
goto out;
@@ -729,7 +728,7 @@ static int bpf_trampoline_update_ops(struct bpf_trampoline *tr, bool lock_direct
}
err = arch_prepare_bpf_trampoline(im, im->image, im->image + size,
- &tr->func.model, tr->flags, tlinks,
+ &tr->func.model, tr->flags, tnodes,
tr->func.addr);
if (err < 0)
goto out_free;
@@ -767,7 +766,7 @@ static int bpf_trampoline_update_ops(struct bpf_trampoline *tr, bool lock_direct
/* If any error happens, restore previous flags */
if (err)
tr->flags = orig_flags;
- kfree(tlinks);
+ kfree(tnodes);
return err;
out_free:
@@ -822,7 +821,7 @@ static int bpf_freplace_check_tgt_prog(struct bpf_prog *tgt_prog)
return 0;
}
-static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link,
+static int __bpf_trampoline_link_prog(struct bpf_tramp_node *node,
struct bpf_trampoline *tr,
struct bpf_prog *tgt_prog,
struct bpf_trampoline_ops *ops,
@@ -830,12 +829,12 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link,
{
struct bpf_fsession_link *fslink = NULL;
enum bpf_tramp_prog_type kind;
- struct bpf_tramp_link *link_exiting;
+ struct bpf_tramp_node *node_existing;
struct hlist_head *prog_list;
int err = 0;
int cnt = 0, i;
- kind = bpf_attach_type_to_tramp(link->link.prog);
+ kind = bpf_attach_type_to_tramp(node->link->prog);
if (tr->extension_prog)
/* cannot attach fentry/fexit if extension prog is attached.
* cannot overwrite extension prog either.
@@ -852,10 +851,10 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link,
err = bpf_freplace_check_tgt_prog(tgt_prog);
if (err)
return err;
- tr->extension_prog = link->link.prog;
+ tr->extension_prog = node->link->prog;
return bpf_arch_text_poke(tr->func.addr, BPF_MOD_NOP,
BPF_MOD_JUMP, NULL,
- link->link.prog->bpf_func);
+ node->link->prog->bpf_func);
}
if (kind == BPF_TRAMP_FSESSION) {
prog_list = &tr->progs_hlist[BPF_TRAMP_FENTRY];
@@ -865,31 +864,31 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link,
}
if (cnt >= BPF_MAX_TRAMP_LINKS)
return -E2BIG;
- if (!hlist_unhashed(&link->tramp_hlist))
+ if (!hlist_unhashed(&node->tramp_hlist))
/* prog already linked */
return -EBUSY;
- hlist_for_each_entry(link_exiting, prog_list, tramp_hlist) {
- if (link_exiting->link.prog != link->link.prog)
+ hlist_for_each_entry(node_existing, prog_list, tramp_hlist) {
+ if (node_existing->link->prog != node->link->prog)
continue;
/* prog already linked */
return -EBUSY;
}
- hlist_add_head(&link->tramp_hlist, prog_list);
+ hlist_add_head(&node->tramp_hlist, prog_list);
if (kind == BPF_TRAMP_FSESSION) {
tr->progs_cnt[BPF_TRAMP_FENTRY]++;
- fslink = container_of(link, struct bpf_fsession_link, link.link);
- hlist_add_head(&fslink->fexit.tramp_hlist, &tr->progs_hlist[BPF_TRAMP_FEXIT]);
+ fslink = container_of(node, struct bpf_fsession_link, link.link.node);
+ hlist_add_head(&fslink->fexit.node.tramp_hlist, &tr->progs_hlist[BPF_TRAMP_FEXIT]);
tr->progs_cnt[BPF_TRAMP_FEXIT]++;
} else {
tr->progs_cnt[kind]++;
}
err = bpf_trampoline_update_ops(tr, true /* lock_direct_mutex */, ops, data);
if (err) {
- hlist_del_init(&link->tramp_hlist);
+ hlist_del_init(&node->tramp_hlist);
if (kind == BPF_TRAMP_FSESSION) {
tr->progs_cnt[BPF_TRAMP_FENTRY]--;
- hlist_del_init(&fslink->fexit.tramp_hlist);
+ hlist_del_init(&fslink->fexit.node.tramp_hlist);
tr->progs_cnt[BPF_TRAMP_FEXIT]--;
} else {
tr->progs_cnt[kind]--;
@@ -898,19 +897,19 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link,
return err;
}
-int bpf_trampoline_link_prog(struct bpf_tramp_link *link,
+int bpf_trampoline_link_prog(struct bpf_tramp_node *node,
struct bpf_trampoline *tr,
struct bpf_prog *tgt_prog)
{
int err;
trampoline_lock(tr);
- err = __bpf_trampoline_link_prog(link, tr, tgt_prog, &trampoline_ops, NULL);
+ err = __bpf_trampoline_link_prog(node, tr, tgt_prog, &trampoline_ops, NULL);
trampoline_unlock(tr);
return err;
}
-static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
+static int __bpf_trampoline_unlink_prog(struct bpf_tramp_node *node,
struct bpf_trampoline *tr,
struct bpf_prog *tgt_prog,
struct bpf_trampoline_ops *ops,
@@ -919,7 +918,7 @@ static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
enum bpf_tramp_prog_type kind;
int err;
- kind = bpf_attach_type_to_tramp(link->link.prog);
+ kind = bpf_attach_type_to_tramp(node->link->prog);
if (kind == BPF_TRAMP_REPLACE) {
WARN_ON_ONCE(!tr->extension_prog);
err = bpf_arch_text_poke(tr->func.addr, BPF_MOD_JUMP,
@@ -931,26 +930,26 @@ static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
return err;
} else if (kind == BPF_TRAMP_FSESSION) {
struct bpf_fsession_link *fslink =
- container_of(link, struct bpf_fsession_link, link.link);
+ container_of(node, struct bpf_fsession_link, link.link.node);
- hlist_del_init(&fslink->fexit.tramp_hlist);
+ hlist_del_init(&fslink->fexit.node.tramp_hlist);
tr->progs_cnt[BPF_TRAMP_FEXIT]--;
kind = BPF_TRAMP_FENTRY;
}
- hlist_del_init(&link->tramp_hlist);
+ hlist_del_init(&node->tramp_hlist);
tr->progs_cnt[kind]--;
return bpf_trampoline_update_ops(tr, true /* lock_direct_mutex */, ops, data);
}
/* bpf_trampoline_unlink_prog() should never fail. */
-int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
+int bpf_trampoline_unlink_prog(struct bpf_tramp_node *node,
struct bpf_trampoline *tr,
struct bpf_prog *tgt_prog)
{
int err;
trampoline_lock(tr);
- err = __bpf_trampoline_unlink_prog(link, tr, tgt_prog, &trampoline_ops, NULL);
+ err = __bpf_trampoline_unlink_prog(node, tr, tgt_prog, &trampoline_ops, NULL);
trampoline_unlock(tr);
return err;
}
@@ -965,7 +964,7 @@ static void bpf_shim_tramp_link_release(struct bpf_link *link)
if (!shim_link->trampoline)
return;
- WARN_ON_ONCE(bpf_trampoline_unlink_prog(&shim_link->link, shim_link->trampoline, NULL));
+ WARN_ON_ONCE(bpf_trampoline_unlink_prog(&shim_link->link.node, shim_link->trampoline, NULL));
bpf_trampoline_put(shim_link->trampoline);
}
@@ -1011,8 +1010,8 @@ static struct bpf_shim_tramp_link *cgroup_shim_alloc(const struct bpf_prog *prog
p->type = BPF_PROG_TYPE_LSM;
p->expected_attach_type = BPF_LSM_MAC;
bpf_prog_inc(p);
- bpf_link_init(&shim_link->link.link, BPF_LINK_TYPE_UNSPEC,
- &bpf_shim_tramp_link_lops, p, attach_type);
+ bpf_tramp_link_init(&shim_link->link, BPF_LINK_TYPE_UNSPEC,
+ &bpf_shim_tramp_link_lops, p, attach_type, 0);
bpf_cgroup_atype_get(p->aux->attach_btf_id, cgroup_atype);
return shim_link;
@@ -1021,15 +1020,15 @@ static struct bpf_shim_tramp_link *cgroup_shim_alloc(const struct bpf_prog *prog
static struct bpf_shim_tramp_link *cgroup_shim_find(struct bpf_trampoline *tr,
bpf_func_t bpf_func)
{
- struct bpf_tramp_link *link;
+ struct bpf_tramp_node *node;
int kind;
for (kind = 0; kind < BPF_TRAMP_MAX; kind++) {
- hlist_for_each_entry(link, &tr->progs_hlist[kind], tramp_hlist) {
- struct bpf_prog *p = link->link.prog;
+ hlist_for_each_entry(node, &tr->progs_hlist[kind], tramp_hlist) {
+ struct bpf_prog *p = node->link->prog;
if (p->bpf_func == bpf_func)
- return container_of(link, struct bpf_shim_tramp_link, link);
+ return container_of(node, struct bpf_shim_tramp_link, link.node);
}
}
@@ -1082,7 +1081,7 @@ int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog,
goto err;
}
- err = __bpf_trampoline_link_prog(&shim_link->link, tr, NULL, &trampoline_ops, NULL);
+ err = __bpf_trampoline_link_prog(&shim_link->link.node, tr, NULL, &trampoline_ops, NULL);
if (err)
goto err;
@@ -1397,7 +1396,7 @@ bpf_trampoline_exit_t bpf_trampoline_exit(const struct bpf_prog *prog)
int __weak
arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *image_end,
const struct btf_func_model *m, u32 flags,
- struct bpf_tramp_links *tlinks,
+ struct bpf_tramp_nodes *tnodes,
void *func_addr)
{
return -ENOTSUPP;
@@ -1431,7 +1430,7 @@ int __weak arch_protect_bpf_trampoline(void *image, unsigned int size)
}
int __weak arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
- struct bpf_tramp_links *tlinks, void *func_addr)
+ struct bpf_tramp_nodes *tnodes, void *func_addr)
{
return -ENOTSUPP;
}
diff --git a/net/bpf/bpf_dummy_struct_ops.c b/net/bpf/bpf_dummy_struct_ops.c
index 812457819b5a..8f58c1f5a039 100644
--- a/net/bpf/bpf_dummy_struct_ops.c
+++ b/net/bpf/bpf_dummy_struct_ops.c
@@ -132,7 +132,7 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
const struct bpf_struct_ops *st_ops = &bpf_bpf_dummy_ops;
const struct btf_type *func_proto;
struct bpf_dummy_ops_test_args *args;
- struct bpf_tramp_links *tlinks = NULL;
+ struct bpf_tramp_nodes *tnodes = NULL;
struct bpf_tramp_link *link = NULL;
void *image = NULL;
unsigned int op_idx;
@@ -158,8 +158,8 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
if (err)
goto out;
- tlinks = kcalloc(BPF_TRAMP_MAX, sizeof(*tlinks), GFP_KERNEL);
- if (!tlinks) {
+ tnodes = kcalloc(BPF_TRAMP_MAX, sizeof(*tnodes), GFP_KERNEL);
+ if (!tnodes) {
err = -ENOMEM;
goto out;
}
@@ -171,11 +171,11 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
}
/* prog doesn't take the ownership of the reference from caller */
bpf_prog_inc(prog);
- bpf_link_init(&link->link, BPF_LINK_TYPE_STRUCT_OPS, &bpf_struct_ops_link_lops, prog,
- prog->expected_attach_type);
+ bpf_tramp_link_init(link, BPF_LINK_TYPE_STRUCT_OPS, &bpf_struct_ops_link_lops,
+ prog, prog->expected_attach_type, 0);
op_idx = prog->expected_attach_type;
- err = bpf_struct_ops_prepare_trampoline(tlinks, link,
+ err = bpf_struct_ops_prepare_trampoline(tnodes, &link->node,
&st_ops->func_models[op_idx],
&dummy_ops_test_ret_function,
&image, &image_off,
@@ -198,7 +198,7 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
bpf_struct_ops_image_free(image);
if (link)
bpf_link_put(&link->link);
- kfree(tlinks);
+ kfree(tnodes);
return err;
}
--
2.52.0
* [PATCH bpf-next 05/17] bpf: Factor fsession link to use struct bpf_tramp_node
2026-02-20 10:06 [PATCH bpf-next 00/17] bpf: tracing_multi link Jiri Olsa
` (3 preceding siblings ...)
2026-02-20 10:06 ` [PATCH bpf-next 04/17] bpf: Add struct bpf_tramp_node object Jiri Olsa
@ 2026-02-20 10:06 ` Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 06/17] bpf: Add multi tracing attach types Jiri Olsa
` (11 subsequent siblings)
16 siblings, 0 replies; 38+ messages in thread
From: Jiri Olsa @ 2026-02-20 10:06 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Now that we have split the trampoline attachment object (bpf_tramp_node) from
the link object (bpf_tramp_link), we can use bpf_tramp_node as fsession's
fexit attachment object and get rid of the bpf_fsession_link object.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/bpf.h | 6 +-----
kernel/bpf/syscall.c | 21 ++++++---------------
kernel/bpf/trampoline.c | 14 +++++++-------
3 files changed, 14 insertions(+), 27 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 9c7f5ab3c7ce..d79951c0ab79 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1883,15 +1883,11 @@ struct bpf_shim_tramp_link {
struct bpf_tracing_link {
struct bpf_tramp_link link;
+ struct bpf_tramp_node fexit;
struct bpf_trampoline *trampoline;
struct bpf_prog *tgt_prog;
};
-struct bpf_fsession_link {
- struct bpf_tracing_link link;
- struct bpf_tramp_link fexit;
-};
-
struct bpf_raw_tp_link {
struct bpf_link link;
struct bpf_raw_event_map *btp;
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index e9d482c59977..95a4bfbeab62 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -3637,21 +3637,7 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
key = bpf_trampoline_compute_key(tgt_prog, NULL, btf_id);
}
- if (prog->expected_attach_type == BPF_TRACE_FSESSION) {
- struct bpf_fsession_link *fslink;
-
- fslink = kzalloc(sizeof(*fslink), GFP_USER);
- if (fslink) {
- bpf_tramp_link_init(&fslink->fexit, BPF_LINK_TYPE_TRACING,
- &bpf_tracing_link_lops, prog, attach_type,
- bpf_cookie);
- link = &fslink->link;
- } else {
- link = NULL;
- }
- } else {
- link = kzalloc(sizeof(*link), GFP_USER);
- }
+ link = kzalloc(sizeof(*link), GFP_USER);
if (!link) {
err = -ENOMEM;
goto out_put_prog;
@@ -3659,6 +3645,11 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
bpf_tramp_link_init(&link->link, BPF_LINK_TYPE_TRACING,
&bpf_tracing_link_lops, prog, attach_type, bpf_cookie);
+ if (prog->expected_attach_type == BPF_TRACE_FSESSION) {
+ link->fexit.link = &link->link.link;
+ link->fexit.cookie = bpf_cookie;
+ }
+
mutex_lock(&prog->aux->dst_mutex);
/* There are a few possible cases here:
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index f4acf3771600..14fa7012738a 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -827,7 +827,7 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_node *node,
struct bpf_trampoline_ops *ops,
void *data)
{
- struct bpf_fsession_link *fslink = NULL;
+ struct bpf_tracing_link *tr_link = NULL;
enum bpf_tramp_prog_type kind;
struct bpf_tramp_node *node_existing;
struct hlist_head *prog_list;
@@ -877,8 +877,8 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_node *node,
hlist_add_head(&node->tramp_hlist, prog_list);
if (kind == BPF_TRAMP_FSESSION) {
tr->progs_cnt[BPF_TRAMP_FENTRY]++;
- fslink = container_of(node, struct bpf_fsession_link, link.link.node);
- hlist_add_head(&fslink->fexit.node.tramp_hlist, &tr->progs_hlist[BPF_TRAMP_FEXIT]);
+ tr_link = container_of(node, struct bpf_tracing_link, link.node);
+ hlist_add_head(&tr_link->fexit.tramp_hlist, &tr->progs_hlist[BPF_TRAMP_FEXIT]);
tr->progs_cnt[BPF_TRAMP_FEXIT]++;
} else {
tr->progs_cnt[kind]++;
@@ -888,7 +888,7 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_node *node,
hlist_del_init(&node->tramp_hlist);
if (kind == BPF_TRAMP_FSESSION) {
tr->progs_cnt[BPF_TRAMP_FENTRY]--;
- hlist_del_init(&fslink->fexit.node.tramp_hlist);
+ hlist_del_init(&tr_link->fexit.tramp_hlist);
tr->progs_cnt[BPF_TRAMP_FEXIT]--;
} else {
tr->progs_cnt[kind]--;
@@ -929,10 +929,10 @@ static int __bpf_trampoline_unlink_prog(struct bpf_tramp_node *node,
tgt_prog->aux->is_extended = false;
return err;
} else if (kind == BPF_TRAMP_FSESSION) {
- struct bpf_fsession_link *fslink =
- container_of(node, struct bpf_fsession_link, link.link.node);
+ struct bpf_tracing_link *tr_link =
+ container_of(node, struct bpf_tracing_link, link.node);
- hlist_del_init(&fslink->fexit.node.tramp_hlist);
+ hlist_del_init(&tr_link->fexit.tramp_hlist);
tr->progs_cnt[BPF_TRAMP_FEXIT]--;
kind = BPF_TRAMP_FENTRY;
}
--
2.52.0
* [PATCH bpf-next 06/17] bpf: Add multi tracing attach types
2026-02-20 10:06 [PATCH bpf-next 00/17] bpf: tracing_multi link Jiri Olsa
` (4 preceding siblings ...)
2026-02-20 10:06 ` [PATCH bpf-next 05/17] bpf: Factor fsession link to use struct bpf_tramp_node Jiri Olsa
@ 2026-02-20 10:06 ` Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 07/17] bpf: Add bpf_trampoline_multi_attach/detach functions Jiri Olsa
` (10 subsequent siblings)
16 siblings, 0 replies; 38+ messages in thread
From: Jiri Olsa @ 2026-02-20 10:06 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding new program attach types for multi tracing attachment:
BPF_TRACE_FENTRY_MULTI
BPF_TRACE_FEXIT_MULTI
and their base support in the verifier code.
Programs with these attach types will use a dedicated link attachment
interface, coming in the following changes.
This was suggested by Andrii some (long) time ago and turned out
to be easier than having a special program flag for that.
BPF programs with these types have the 'bpf_multi_func' function set
as their attach_btf_id.
Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/bpf.h | 5 +++++
include/uapi/linux/bpf.h | 2 ++
kernel/bpf/btf.c | 2 ++
kernel/bpf/syscall.c | 35 ++++++++++++++++++++++++++++++----
kernel/bpf/trampoline.c | 5 ++++-
kernel/bpf/verifier.c | 9 +++++++++
net/bpf/test_run.c | 2 ++
tools/include/uapi/linux/bpf.h | 2 ++
tools/lib/bpf/libbpf.c | 2 ++
9 files changed, 59 insertions(+), 5 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index d79951c0ab79..3d13ec5a66eb 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2116,6 +2116,11 @@ void bpf_prog_disassoc_struct_ops(struct bpf_prog *prog);
void *bpf_prog_get_assoc_struct_ops(const struct bpf_prog_aux *aux);
u32 bpf_struct_ops_id(const void *kdata);
+static inline bool is_tracing_multi(enum bpf_attach_type type)
+{
+ return type == BPF_TRACE_FENTRY_MULTI || type == BPF_TRACE_FEXIT_MULTI;
+}
+
#ifdef CONFIG_NET
/* Define it here to avoid the use of forward declaration */
struct bpf_dummy_ops_state {
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index c8d400b7680a..68600972a778 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1154,6 +1154,8 @@ enum bpf_attach_type {
BPF_TRACE_KPROBE_SESSION,
BPF_TRACE_UPROBE_SESSION,
BPF_TRACE_FSESSION,
+ BPF_TRACE_FENTRY_MULTI,
+ BPF_TRACE_FEXIT_MULTI,
__MAX_BPF_ATTACH_TYPE
};
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 7708958e3fb8..07d1e88e3524 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6221,6 +6221,8 @@ static int btf_validate_prog_ctx_type(struct bpf_verifier_log *log, const struct
case BPF_TRACE_FEXIT:
case BPF_MODIFY_RETURN:
case BPF_TRACE_FSESSION:
+ case BPF_TRACE_FENTRY_MULTI:
+ case BPF_TRACE_FEXIT_MULTI:
/* allow u64* as ctx */
if (btf_is_int(t) && t->size == 8)
return 0;
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 95a4bfbeab62..ff85a9fa080e 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -41,6 +41,7 @@
#include <linux/overflow.h>
#include <linux/cookie.h>
#include <linux/verification.h>
+#include <linux/btf_ids.h>
#include <net/netfilter/nf_bpf_link.h>
#include <net/netkit.h>
@@ -2653,7 +2654,8 @@ static int
bpf_prog_load_check_attach(enum bpf_prog_type prog_type,
enum bpf_attach_type expected_attach_type,
struct btf *attach_btf, u32 btf_id,
- struct bpf_prog *dst_prog)
+ struct bpf_prog *dst_prog,
+ bool multi_func)
{
if (btf_id) {
if (btf_id > BTF_MAX_TYPE)
@@ -2673,6 +2675,14 @@ bpf_prog_load_check_attach(enum bpf_prog_type prog_type,
}
}
+ if (multi_func) {
+ if (prog_type != BPF_PROG_TYPE_TRACING)
+ return -EINVAL;
+ if (!attach_btf || btf_id)
+ return -EINVAL;
+ return 0;
+ }
+
if (attach_btf && (!btf_id || dst_prog))
return -EINVAL;
@@ -2865,6 +2875,16 @@ static int bpf_prog_mark_insn_arrays_ready(struct bpf_prog *prog)
return 0;
}
+#define DEFINE_BPF_MULTI_FUNC(args...) \
+ extern int bpf_multi_func(args); \
+ int __init bpf_multi_func(args) { return 0; }
+
+DEFINE_BPF_MULTI_FUNC(unsigned long a1, unsigned long a2,
+ unsigned long a3, unsigned long a4,
+ unsigned long a5, unsigned long a6)
+
+BTF_ID_LIST_SINGLE(bpf_multi_func_btf_id, func, bpf_multi_func)
+
/* last field in 'union bpf_attr' used by this command */
#define BPF_PROG_LOAD_LAST_FIELD keyring_id
@@ -2877,6 +2897,7 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
bool bpf_cap;
int err;
char license[128];
+ bool multi_func;
if (CHECK_ATTR(BPF_PROG_LOAD))
return -EINVAL;
@@ -2943,6 +2964,8 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
if (is_perfmon_prog_type(type) && !bpf_token_capable(token, CAP_PERFMON))
goto put_token;
+ multi_func = is_tracing_multi(attr->expected_attach_type);
+
/* attach_prog_fd/attach_btf_obj_fd can specify fd of either bpf_prog
* or btf, we need to check which one it is
*/
@@ -2964,7 +2987,7 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
goto put_token;
}
}
- } else if (attr->attach_btf_id) {
+ } else if (attr->attach_btf_id || multi_func) {
/* fall back to vmlinux BTF, if BTF type ID is specified */
attach_btf = bpf_get_btf_vmlinux();
if (IS_ERR(attach_btf)) {
@@ -2980,7 +3003,7 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
if (bpf_prog_load_check_attach(type, attr->expected_attach_type,
attach_btf, attr->attach_btf_id,
- dst_prog)) {
+ dst_prog, multi_func)) {
if (dst_prog)
bpf_prog_put(dst_prog);
if (attach_btf)
@@ -3003,7 +3026,7 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
prog->expected_attach_type = attr->expected_attach_type;
prog->sleepable = !!(attr->prog_flags & BPF_F_SLEEPABLE);
prog->aux->attach_btf = attach_btf;
- prog->aux->attach_btf_id = attr->attach_btf_id;
+ prog->aux->attach_btf_id = multi_func ? bpf_multi_func_btf_id[0] : attr->attach_btf_id;
prog->aux->dst_prog = dst_prog;
prog->aux->dev_bound = !!attr->prog_ifindex;
prog->aux->xdp_has_frags = attr->prog_flags & BPF_F_XDP_HAS_FRAGS;
@@ -3588,6 +3611,8 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
if (prog->expected_attach_type != BPF_TRACE_FENTRY &&
prog->expected_attach_type != BPF_TRACE_FEXIT &&
prog->expected_attach_type != BPF_TRACE_FSESSION &&
+ prog->expected_attach_type != BPF_TRACE_FENTRY_MULTI &&
+ prog->expected_attach_type != BPF_TRACE_FEXIT_MULTI &&
prog->expected_attach_type != BPF_MODIFY_RETURN) {
err = -EINVAL;
goto out_put_prog;
@@ -4365,6 +4390,8 @@ attach_type_to_prog_type(enum bpf_attach_type attach_type)
case BPF_TRACE_FENTRY:
case BPF_TRACE_FEXIT:
case BPF_TRACE_FSESSION:
+ case BPF_TRACE_FENTRY_MULTI:
+ case BPF_TRACE_FEXIT_MULTI:
case BPF_MODIFY_RETURN:
return BPF_PROG_TYPE_TRACING;
case BPF_LSM_MAC:
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 14fa7012738a..2d701bc6e1a5 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -189,7 +189,8 @@ bool bpf_prog_has_trampoline(const struct bpf_prog *prog)
switch (ptype) {
case BPF_PROG_TYPE_TRACING:
if (eatype == BPF_TRACE_FENTRY || eatype == BPF_TRACE_FEXIT ||
- eatype == BPF_MODIFY_RETURN || eatype == BPF_TRACE_FSESSION)
+ eatype == BPF_MODIFY_RETURN || eatype == BPF_TRACE_FSESSION ||
+ eatype == BPF_TRACE_FENTRY_MULTI || eatype == BPF_TRACE_FEXIT_MULTI)
return true;
return false;
case BPF_PROG_TYPE_LSM:
@@ -783,10 +784,12 @@ static enum bpf_tramp_prog_type bpf_attach_type_to_tramp(struct bpf_prog *prog)
{
switch (prog->expected_attach_type) {
case BPF_TRACE_FENTRY:
+ case BPF_TRACE_FENTRY_MULTI:
return BPF_TRAMP_FENTRY;
case BPF_MODIFY_RETURN:
return BPF_TRAMP_MODIFY_RETURN;
case BPF_TRACE_FEXIT:
+ case BPF_TRACE_FEXIT_MULTI:
return BPF_TRAMP_FEXIT;
case BPF_TRACE_FSESSION:
return BPF_TRAMP_FSESSION;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 2b93cd3f8625..9c9303103a9c 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -17911,6 +17911,8 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char
case BPF_TRACE_FENTRY:
case BPF_TRACE_FEXIT:
case BPF_TRACE_FSESSION:
+ case BPF_TRACE_FENTRY_MULTI:
+ case BPF_TRACE_FEXIT_MULTI:
range = retval_range(0, 0);
break;
case BPF_TRACE_RAW_TP:
@@ -23961,6 +23963,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
insn->imm == BPF_FUNC_get_func_ret) {
if (eatype == BPF_TRACE_FEXIT ||
eatype == BPF_TRACE_FSESSION ||
+ eatype == BPF_TRACE_FEXIT_MULTI ||
eatype == BPF_MODIFY_RETURN) {
/* Load nr_args from ctx - 8 */
insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
@@ -25018,6 +25021,8 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
case BPF_TRACE_FENTRY:
case BPF_TRACE_FEXIT:
case BPF_TRACE_FSESSION:
+ case BPF_TRACE_FENTRY_MULTI:
+ case BPF_TRACE_FEXIT_MULTI:
if (prog->expected_attach_type == BPF_TRACE_FSESSION &&
!bpf_jit_supports_fsession()) {
bpf_log(log, "JIT does not support fsession\n");
@@ -25190,6 +25195,8 @@ static bool can_be_sleepable(struct bpf_prog *prog)
case BPF_MODIFY_RETURN:
case BPF_TRACE_ITER:
case BPF_TRACE_FSESSION:
+ case BPF_TRACE_FENTRY_MULTI:
+ case BPF_TRACE_FEXIT_MULTI:
return true;
default:
return false;
@@ -25259,6 +25266,8 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
return 0;
} else if (prog->expected_attach_type == BPF_TRACE_ITER) {
return bpf_iter_prog_supported(prog);
+ } else if (is_tracing_multi(prog->expected_attach_type)) {
+ return prog->type == BPF_PROG_TYPE_TRACING ? 0 : -EINVAL;
}
if (prog->type == BPF_PROG_TYPE_LSM) {
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 178c4738e63b..3373450132f0 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -686,6 +686,8 @@ int bpf_prog_test_run_tracing(struct bpf_prog *prog,
case BPF_TRACE_FENTRY:
case BPF_TRACE_FEXIT:
case BPF_TRACE_FSESSION:
+ case BPF_TRACE_FENTRY_MULTI:
+ case BPF_TRACE_FEXIT_MULTI:
if (bpf_fentry_test1(1) != 2 ||
bpf_fentry_test2(2, 3) != 5 ||
bpf_fentry_test3(4, 5, 6) != 15 ||
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 5e38b4887de6..61f0fe5bc0aa 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1154,6 +1154,8 @@ enum bpf_attach_type {
BPF_TRACE_KPROBE_SESSION,
BPF_TRACE_UPROBE_SESSION,
BPF_TRACE_FSESSION,
+ BPF_TRACE_FENTRY_MULTI,
+ BPF_TRACE_FEXIT_MULTI,
__MAX_BPF_ATTACH_TYPE
};
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 0be7017800fe..1e19c7b861ec 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -136,6 +136,8 @@ static const char * const attach_type_name[] = {
[BPF_NETKIT_PEER] = "netkit_peer",
[BPF_TRACE_KPROBE_SESSION] = "trace_kprobe_session",
[BPF_TRACE_UPROBE_SESSION] = "trace_uprobe_session",
+ [BPF_TRACE_FENTRY_MULTI] = "trace_fentry_multi",
+ [BPF_TRACE_FEXIT_MULTI] = "trace_fexit_multi",
};
static const char * const link_type_name[] = {
--
2.52.0
* [PATCH bpf-next 07/17] bpf: Add bpf_trampoline_multi_attach/detach functions
2026-02-20 10:06 [PATCH bpf-next 00/17] bpf: tracing_multi link Jiri Olsa
` (5 preceding siblings ...)
2026-02-20 10:06 ` [PATCH bpf-next 06/17] bpf: Add multi tracing attach types Jiri Olsa
@ 2026-02-20 10:06 ` Jiri Olsa
2026-02-20 10:57 ` bot+bpf-ci
2026-02-20 10:06 ` [PATCH bpf-next 08/17] bpf: Add support for tracing multi link Jiri Olsa
` (9 subsequent siblings)
16 siblings, 1 reply; 38+ messages in thread
From: Jiri Olsa @ 2026-02-20 10:06 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding bpf_trampoline_multi_attach/detach functions that allow
attaching/detaching a multi tracing trampoline.
The attachment is defined with a bpf_program and an array of BTF IDs
of the functions to attach the bpf program to.
Adding a bpf_tracing_multi_link object that holds all the attached
trampolines; it is initialized in attach and used in detach.
The attachment allocates a new trampoline or reuses a currently
existing one for each function to attach, and links it with the bpf
program.
The attach works as follows:
- we get all the needed trampolines
- lock them and add the bpf program to each (__bpf_trampoline_link_prog)
- the trampoline_multi_ops passed in __bpf_trampoline_link_prog gathers
needed ftrace_hash (ip -> trampoline) data
- we call update_ftrace_direct_add/mod to update needed locations
- we unlock all the trampolines
The detach works as follows:
- we lock all the needed trampolines
- remove the program from each (__bpf_trampoline_unlink_prog)
- the trampoline_multi_ops passed in __bpf_trampoline_unlink_prog gathers
needed ftrace_hash (ip -> trampoline) data
- we call update_ftrace_direct_del/mod to update needed locations
- we unlock and put all the trampolines
Adding trampoline_(un)lock_all functions to (un)lock all trampolines
to gate the tracing_multi attachment.
Note this is supported only on archs (x86_64) with ftrace direct call
and single ops support:
CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS &&
CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/bpf.h | 17 ++++
kernel/bpf/trampoline.c | 195 ++++++++++++++++++++++++++++++++++++++++
2 files changed, 212 insertions(+)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 3d13ec5a66eb..00585693d31a 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1464,6 +1464,12 @@ struct bpf_trampoline *bpf_trampoline_get(u64 key,
void bpf_trampoline_put(struct bpf_trampoline *tr);
int arch_prepare_bpf_dispatcher(void *image, void *buf, s64 *funcs, int num_funcs);
+struct bpf_tracing_multi_link;
+int bpf_trampoline_multi_attach(struct bpf_prog *prog, u32 *ids,
+ struct bpf_tracing_multi_link *link);
+int bpf_trampoline_multi_detach(struct bpf_prog *prog,
+ struct bpf_tracing_multi_link *link);
+
/*
* When the architecture supports STATIC_CALL replace the bpf_dispatcher_fn
* indirection with a direct call to the bpf program. If the architecture does
@@ -1888,6 +1894,17 @@ struct bpf_tracing_link {
struct bpf_prog *tgt_prog;
};
+struct bpf_tracing_multi_node {
+ struct bpf_tramp_node node;
+ struct bpf_trampoline *trampoline;
+};
+
+struct bpf_tracing_multi_link {
+ struct bpf_link link;
+ int nodes_cnt;
+ struct bpf_tracing_multi_node nodes[] __counted_by(nodes_cnt);
+};
+
struct bpf_raw_tp_link {
struct bpf_link link;
struct bpf_raw_event_map *btp;
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 2d701bc6e1a5..c32205adfebe 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -53,6 +53,22 @@ static void trampoline_unlock(struct bpf_trampoline *tr)
mutex_unlock(trampoline_locks_lookup(tr));
}
+static void trampoline_lock_all(void)
+{
+ int i;
+
+ for (i = 0; i < TRAMPOLINE_LOCKS_TABLE_SIZE; i++)
+ mutex_lock(&trampoline_locks[i].mutex);
+}
+
+static void trampoline_unlock_all(void)
+{
+ int i;
+
+ for (i = 0; i < TRAMPOLINE_LOCKS_TABLE_SIZE; i++)
+ mutex_unlock(&trampoline_locks[i].mutex);
+}
+
static int __init trampoline_locks_init(void)
{
int i;
@@ -1438,6 +1454,185 @@ int __weak arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
return -ENOTSUPP;
}
+#if defined(CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS) && defined(CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS)
+
+struct fentry_multi_data {
+ struct ftrace_hash *unreg;
+ struct ftrace_hash *modify;
+ struct ftrace_hash *reg;
+};
+
+static void free_fentry_multi_data(struct fentry_multi_data *data)
+{
+ free_ftrace_hash(data->reg);
+ free_ftrace_hash(data->unreg);
+ free_ftrace_hash(data->modify);
+}
+
+static int register_fentry_multi(struct bpf_trampoline *tr, void *new_addr, void *ptr)
+{
+ unsigned long addr = (unsigned long) new_addr;
+ unsigned long ip = ftrace_location(tr->ip);
+ struct fentry_multi_data *data = ptr;
+
+ if (bpf_trampoline_use_jmp(tr->flags))
+ addr = ftrace_jmp_set(addr);
+ return add_ftrace_hash_entry_direct(data->reg, ip, addr) ? 0 : -ENOMEM;
+}
+
+static int unregister_fentry_multi(struct bpf_trampoline *tr, u32 orig_flags, void *old_addr,
+ void *ptr)
+{
+ unsigned long addr = (unsigned long) old_addr;
+ unsigned long ip = ftrace_location(tr->ip);
+ struct fentry_multi_data *data = ptr;
+
+ if (bpf_trampoline_use_jmp(tr->flags))
+ addr = ftrace_jmp_set(addr);
+ return add_ftrace_hash_entry_direct(data->unreg, ip, addr) ? 0 : -ENOMEM;
+}
+
+static int modify_fentry_multi(struct bpf_trampoline *tr, u32 orig_flags, void *old_addr,
+ void *new_addr, bool lock_direct_mutex, void *ptr)
+{
+ unsigned long addr = (unsigned long) new_addr;
+ unsigned long ip = ftrace_location(tr->ip);
+ struct fentry_multi_data *data = ptr;
+
+ if (bpf_trampoline_use_jmp(tr->flags))
+ addr = ftrace_jmp_set(addr);
+ return add_ftrace_hash_entry_direct(data->modify, ip, addr) ? 0 : -ENOMEM;
+}
+
+static struct bpf_trampoline_ops trampoline_multi_ops = {
+ .register_fentry = register_fentry_multi,
+ .unregister_fentry = unregister_fentry_multi,
+ .modify_fentry = modify_fentry_multi,
+};
+
+int bpf_trampoline_multi_attach(struct bpf_prog *prog, u32 *ids,
+ struct bpf_tracing_multi_link *link)
+{
+ struct bpf_attach_target_info tgt_info = {};
+ struct bpf_tracing_multi_node *mnode;
+ int j, i, err, cnt = link->nodes_cnt;
+ struct fentry_multi_data data = {};
+ struct bpf_trampoline *tr;
+ u64 key;
+
+ data.reg = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
+ if (!data.reg)
+ return -ENOMEM;
+
+ data.modify = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
+ if (!data.modify) {
+ free_ftrace_hash(data.reg);
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < cnt; i++) {
+ mnode = &link->nodes[i];
+ err = bpf_check_attach_target(NULL, prog, NULL, ids[i], &tgt_info);
+ if (err)
+ goto rollback_put;
+
+ key = bpf_trampoline_compute_key(NULL, prog->aux->attach_btf, ids[i]);
+
+ tr = bpf_trampoline_get(key, &tgt_info);
+ if (!tr) {
+ err = -ENOMEM;
+ goto rollback_put;
+ }
+
+ mnode->trampoline = tr;
+ mnode->node.link = &link->link;
+ }
+
+ trampoline_lock_all();
+
+ for (i = 0; i < cnt; i++) {
+ mnode = &link->nodes[i];
+ err = __bpf_trampoline_link_prog(&mnode->node, mnode->trampoline, NULL,
+ &trampoline_multi_ops, &data);
+ if (err)
+ goto rollback_unlink;
+ }
+
+ if (ftrace_hash_count(data.reg)) {
+ err = update_ftrace_direct_add(&direct_ops, data.reg);
+ if (err)
+ goto rollback_unlink;
+ }
+
+ if (ftrace_hash_count(data.modify)) {
+ err = update_ftrace_direct_mod(&direct_ops, data.modify, true);
+ if (err) {
+ WARN_ON_ONCE(update_ftrace_direct_del(&direct_ops, data.reg));
+ goto rollback_unlink;
+ }
+ }
+
+ trampoline_unlock_all();
+
+ free_fentry_multi_data(&data);
+ return 0;
+
+rollback_unlink:
+ for (j = 0; j < i; j++) {
+ mnode = &link->nodes[j];
+ WARN_ON_ONCE(__bpf_trampoline_unlink_prog(&mnode->node, mnode->trampoline,
+ NULL, &trampoline_multi_ops, &data));
+ }
+ trampoline_unlock_all();
+
+rollback_put:
+ for (j = 0; j < i; j++)
+ bpf_trampoline_put(link->nodes[j].trampoline);
+
+ free_fentry_multi_data(&data);
+ return err;
+}
+
+int bpf_trampoline_multi_detach(struct bpf_prog *prog, struct bpf_tracing_multi_link *link)
+{
+ struct bpf_tracing_multi_node *mnode;
+ struct fentry_multi_data data = {};
+ int i, cnt = link->nodes_cnt;
+
+ data.unreg = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
+ if (!data.unreg)
+ return -ENOMEM;
+
+ data.modify = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
+ if (!data.modify) {
+ free_ftrace_hash(data.unreg);
+ return -ENOMEM;
+ }
+
+ trampoline_lock_all();
+
+ for (i = 0; i < cnt; i++) {
+ mnode = &link->nodes[i];
+ WARN_ON_ONCE(__bpf_trampoline_unlink_prog(&mnode->node, mnode->trampoline,
+ NULL, &trampoline_multi_ops, &data));
+ }
+
+ if (ftrace_hash_count(data.unreg))
+ WARN_ON_ONCE(update_ftrace_direct_del(&direct_ops, data.unreg));
+ if (ftrace_hash_count(data.modify))
+ WARN_ON_ONCE(update_ftrace_direct_mod(&direct_ops, data.modify, true));
+
+ trampoline_unlock_all();
+
+ for (i = 0; i < cnt; i++)
+ bpf_trampoline_put(link->nodes[i].trampoline);
+
+ free_fentry_multi_data(&data);
+ return 0;
+}
+
+#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS) && CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS */
+
static int __init init_trampolines(void)
{
int i;
--
2.52.0
* [PATCH bpf-next 08/17] bpf: Add support for tracing multi link
2026-02-20 10:06 [PATCH bpf-next 00/17] bpf: tracing_multi link Jiri Olsa
` (6 preceding siblings ...)
2026-02-20 10:06 ` [PATCH bpf-next 07/17] bpf: Add bpf_trampoline_multi_attach/detach functions Jiri Olsa
@ 2026-02-20 10:06 ` Jiri Olsa
2026-02-20 10:57 ` bot+bpf-ci
2026-02-20 10:06 ` [PATCH bpf-next 09/17] bpf: Add support for tracing_multi link cookies Jiri Olsa
` (8 subsequent siblings)
16 siblings, 1 reply; 38+ messages in thread
From: Jiri Olsa @ 2026-02-20 10:06 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding a new link type that allows attaching a program to multiple
function BTF IDs. The link is represented by struct bpf_tracing_multi_link.
To configure the link, new fields are added to bpf_attr::link_create
to pass an array of BTF IDs:
struct {
__aligned_u64 ids;
__u32 cnt;
} tracing_multi;
Each BTF ID represents a function (BTF_KIND_FUNC) that the link will
attach the bpf program to.
We use the previously added bpf_trampoline_multi_attach/detach functions
to attach/detach the link.
The linkinfo/fdinfo callbacks will be implemented in following changes.
Note this is supported only on archs (x86_64) with ftrace direct call
and single ops support:
CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS &&
CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/bpf_types.h | 1 +
include/linux/trace_events.h | 6 +++
include/uapi/linux/bpf.h | 5 ++
kernel/bpf/syscall.c | 2 +
kernel/trace/bpf_trace.c | 87 ++++++++++++++++++++++++++++++++++
tools/include/uapi/linux/bpf.h | 6 +++
tools/lib/bpf/libbpf.c | 1 +
7 files changed, 108 insertions(+)
diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
index b13de31e163f..c1656f026790 100644
--- a/include/linux/bpf_types.h
+++ b/include/linux/bpf_types.h
@@ -155,3 +155,4 @@ BPF_LINK_TYPE(BPF_LINK_TYPE_PERF_EVENT, perf)
BPF_LINK_TYPE(BPF_LINK_TYPE_KPROBE_MULTI, kprobe_multi)
BPF_LINK_TYPE(BPF_LINK_TYPE_STRUCT_OPS, struct_ops)
BPF_LINK_TYPE(BPF_LINK_TYPE_UPROBE_MULTI, uprobe_multi)
+BPF_LINK_TYPE(BPF_LINK_TYPE_TRACING_MULTI, tracing)
diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index 0a2b8229b999..7a28cc824fca 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -778,6 +778,7 @@ int bpf_get_perf_event_info(const struct perf_event *event, u32 *prog_id,
unsigned long *missed);
int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog);
int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog);
+int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr);
#else
static inline unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx)
{
@@ -830,6 +831,11 @@ bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
{
return -EOPNOTSUPP;
}
+static inline int
+bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
+{
+ return -EOPNOTSUPP;
+}
#endif
enum {
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 68600972a778..7f5c51f27a36 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1180,6 +1180,7 @@ enum bpf_link_type {
BPF_LINK_TYPE_UPROBE_MULTI = 12,
BPF_LINK_TYPE_NETKIT = 13,
BPF_LINK_TYPE_SOCKMAP = 14,
+ BPF_LINK_TYPE_TRACING_MULTI = 15,
__MAX_BPF_LINK_TYPE,
};
@@ -1863,6 +1864,10 @@ union bpf_attr {
};
__u64 expected_revision;
} cgroup;
+ struct {
+ __aligned_u64 ids;
+ __u32 cnt;
+ } tracing_multi;
};
} link_create;
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index ff85a9fa080e..5892dca20b7e 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -5751,6 +5751,8 @@ static int link_create(union bpf_attr *attr, bpfptr_t uattr)
ret = bpf_iter_link_attach(attr, uattr, prog);
else if (prog->expected_attach_type == BPF_LSM_CGROUP)
ret = cgroup_bpf_link_attach(attr, prog);
+ else if (is_tracing_multi(prog->expected_attach_type))
+ ret = bpf_tracing_multi_attach(prog, attr);
else
ret = bpf_tracing_prog_attach(prog,
attr->link_create.target_fd,
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index eadaef8592a3..bfae9ec5d1b1 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -42,6 +42,7 @@
#define MAX_UPROBE_MULTI_CNT (1U << 20)
#define MAX_KPROBE_MULTI_CNT (1U << 20)
+#define MAX_TRACING_MULTI_CNT (1U << 20)
#ifdef CONFIG_MODULES
struct bpf_trace_module {
@@ -3592,3 +3593,89 @@ __bpf_kfunc int bpf_copy_from_user_task_str_dynptr(struct bpf_dynptr *dptr, u64
}
__bpf_kfunc_end_defs();
+
+#if defined(CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS) && defined(CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS)
+
+static void bpf_tracing_multi_link_release(struct bpf_link *link)
+{
+ struct bpf_tracing_multi_link *tr_link =
+ container_of(link, struct bpf_tracing_multi_link, link);
+
+ WARN_ON_ONCE(bpf_trampoline_multi_detach(link->prog, tr_link));
+}
+
+static void bpf_tracing_multi_link_dealloc(struct bpf_link *link)
+{
+ struct bpf_tracing_multi_link *tr_link =
+ container_of(link, struct bpf_tracing_multi_link, link);
+
+ kfree(tr_link);
+}
+
+static const struct bpf_link_ops bpf_tracing_multi_link_lops = {
+ .release = bpf_tracing_multi_link_release,
+ .dealloc = bpf_tracing_multi_link_dealloc,
+};
+
+int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
+{
+ struct bpf_tracing_multi_link *link = NULL;
+ struct bpf_link_primer link_primer;
+ u32 cnt, *ids = NULL;
+ u32 __user *uids;
+ int err;
+
+ uids = u64_to_user_ptr(attr->link_create.tracing_multi.ids);
+ cnt = attr->link_create.tracing_multi.cnt;
+
+ if (!cnt || !uids)
+ return -EINVAL;
+ if (cnt > MAX_TRACING_MULTI_CNT)
+ return -E2BIG;
+
+ ids = kvmalloc_array(cnt, sizeof(*ids), GFP_KERNEL);
+ if (!ids)
+ return -ENOMEM;
+
+ if (copy_from_user(ids, uids, cnt * sizeof(*ids))) {
+ err = -EFAULT;
+ goto error;
+ }
+
+ link = kzalloc(struct_size(link, nodes, cnt), GFP_KERNEL);
+ if (!link) {
+ err = -ENOMEM;
+ goto error;
+ }
+
+ bpf_link_init(&link->link, BPF_LINK_TYPE_TRACING_MULTI,
+ &bpf_tracing_multi_link_lops, prog, prog->expected_attach_type);
+
+ err = bpf_link_prime(&link->link, &link_primer);
+ if (err)
+ goto error;
+
+ link->nodes_cnt = cnt;
+
+ err = bpf_trampoline_multi_attach(prog, ids, link);
+ kvfree(ids);
+ if (err) {
+ bpf_link_cleanup(&link_primer);
+ return err;
+ }
+ return bpf_link_settle(&link_primer);
+
+error:
+ kvfree(ids);
+ kfree(link);
+ return err;
+}
+
+#else
+
+int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
+{
+ return -EOPNOTSUPP;
+}
+
+#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS && CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS */
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 61f0fe5bc0aa..7f5c51f27a36 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1180,6 +1180,7 @@ enum bpf_link_type {
BPF_LINK_TYPE_UPROBE_MULTI = 12,
BPF_LINK_TYPE_NETKIT = 13,
BPF_LINK_TYPE_SOCKMAP = 14,
+ BPF_LINK_TYPE_TRACING_MULTI = 15,
__MAX_BPF_LINK_TYPE,
};
@@ -1863,6 +1864,10 @@ union bpf_attr {
};
__u64 expected_revision;
} cgroup;
+ struct {
+ __aligned_u64 ids;
+ __u32 cnt;
+ } tracing_multi;
};
} link_create;
@@ -7236,6 +7241,7 @@ enum {
TCP_BPF_SOCK_OPS_CB_FLAGS = 1008, /* Get or Set TCP sock ops flags */
SK_BPF_CB_FLAGS = 1009, /* Get or set sock ops flags in socket */
SK_BPF_BYPASS_PROT_MEM = 1010, /* Get or Set sk->sk_bypass_prot_mem */
+
};
enum {
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 1e19c7b861ec..74e579d7f310 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -156,6 +156,7 @@ static const char * const link_type_name[] = {
[BPF_LINK_TYPE_UPROBE_MULTI] = "uprobe_multi",
[BPF_LINK_TYPE_NETKIT] = "netkit",
[BPF_LINK_TYPE_SOCKMAP] = "sockmap",
+ [BPF_LINK_TYPE_TRACING_MULTI] = "tracing_multi",
};
static const char * const map_type_name[] = {
--
2.52.0
* [PATCH bpf-next 09/17] bpf: Add support for tracing_multi link cookies
From: Jiri Olsa @ 2026-02-20 10:06 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Add support to specify cookies for the tracing_multi link.
Cookies are provided in an array where each value is paired with the
BTF ID at the same array index.
Such a cookie can be retrieved by the bpf program with the
bpf_get_attach_cookie helper call.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/bpf.h | 1 +
include/uapi/linux/bpf.h | 1 +
kernel/bpf/trampoline.c | 1 +
kernel/trace/bpf_trace.c | 18 ++++++++++++++++++
tools/include/uapi/linux/bpf.h | 1 +
5 files changed, 22 insertions(+)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 00585693d31a..63a06c85103b 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1901,6 +1901,7 @@ struct bpf_tracing_multi_node {
struct bpf_tracing_multi_link {
struct bpf_link link;
+ u64 *cookies;
int nodes_cnt;
struct bpf_tracing_multi_node nodes[] __counted_by(nodes_cnt);
};
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 7f5c51f27a36..e28722ddeb5b 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1866,6 +1866,7 @@ union bpf_attr {
} cgroup;
struct {
__aligned_u64 ids;
+ __aligned_u64 cookies;
__u32 cnt;
} tracing_multi;
};
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index c32205adfebe..516c27b89701 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -1546,6 +1546,7 @@ int bpf_trampoline_multi_attach(struct bpf_prog *prog, u32 *ids,
mnode->trampoline = tr;
mnode->node.link = &link->link;
+ mnode->node.cookie = link->cookies ? link->cookies[i] : 0;
}
trampoline_lock_all();
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index bfae9ec5d1b1..927fa622c5ea 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -3609,6 +3609,7 @@ static void bpf_tracing_multi_link_dealloc(struct bpf_link *link)
struct bpf_tracing_multi_link *tr_link =
container_of(link, struct bpf_tracing_multi_link, link);
+ kvfree(tr_link->cookies);
kfree(tr_link);
}
@@ -3622,6 +3623,8 @@ int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
struct bpf_tracing_multi_link *link = NULL;
struct bpf_link_primer link_primer;
u32 cnt, *ids = NULL;
+ u64 *cookies = NULL;
+ void __user *ucookies;
u32 __user *uids;
int err;
@@ -3642,6 +3645,19 @@ int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
goto error;
}
+ ucookies = u64_to_user_ptr(attr->link_create.tracing_multi.cookies);
+ if (ucookies) {
+ cookies = kvmalloc_array(cnt, sizeof(*cookies), GFP_KERNEL);
+ if (!cookies) {
+ err = -ENOMEM;
+ goto error;
+ }
+ if (copy_from_user(cookies, ucookies, cnt * sizeof(*cookies))) {
+ err = -EFAULT;
+ goto error;
+ }
+ }
+
link = kzalloc(struct_size(link, nodes, cnt), GFP_KERNEL);
if (!link) {
err = -ENOMEM;
@@ -3656,6 +3672,7 @@ int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
goto error;
link->nodes_cnt = cnt;
+ link->cookies = cookies;
err = bpf_trampoline_multi_attach(prog, ids, link);
kvfree(ids);
@@ -3666,6 +3683,7 @@ int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
return bpf_link_settle(&link_primer);
error:
+ kvfree(cookies);
kvfree(ids);
kfree(link);
return err;
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 7f5c51f27a36..e28722ddeb5b 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1866,6 +1866,7 @@ union bpf_attr {
} cgroup;
struct {
__aligned_u64 ids;
+ __aligned_u64 cookies;
__u32 cnt;
} tracing_multi;
};
--
2.52.0
* [PATCH bpf-next 10/17] bpf: Add support for tracing_multi link session
From: Jiri Olsa @ 2026-02-20 10:06 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding support to use session attachment with the tracing_multi link.
Adding a new BPF_TRACE_FSESSION_MULTI program attach type that follows
the BPF_TRACE_FSESSION behaviour, but on the tracing_multi link.
Such a program is called on both entry and exit of the attached function
and allows passing a cookie value from the entry to the exit execution.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/bpf.h | 6 ++++-
include/uapi/linux/bpf.h | 1 +
kernel/bpf/btf.c | 2 ++
kernel/bpf/syscall.c | 2 ++
kernel/bpf/trampoline.c | 43 +++++++++++++++++++++++++++-------
kernel/bpf/verifier.c | 17 ++++++++++----
kernel/trace/bpf_trace.c | 15 +++++++++++-
net/bpf/test_run.c | 1 +
tools/include/uapi/linux/bpf.h | 1 +
tools/lib/bpf/libbpf.c | 1 +
10 files changed, 74 insertions(+), 15 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 63a06c85103b..570c5b8c9cc2 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1902,6 +1902,7 @@ struct bpf_tracing_multi_node {
struct bpf_tracing_multi_link {
struct bpf_link link;
u64 *cookies;
+ struct bpf_tramp_node *fexits;
int nodes_cnt;
struct bpf_tracing_multi_node nodes[] __counted_by(nodes_cnt);
};
@@ -2136,7 +2137,8 @@ u32 bpf_struct_ops_id(const void *kdata);
static inline bool is_tracing_multi(enum bpf_attach_type type)
{
- return type == BPF_TRACE_FENTRY_MULTI || type == BPF_TRACE_FEXIT_MULTI;
+ return type == BPF_TRACE_FENTRY_MULTI || type == BPF_TRACE_FEXIT_MULTI ||
+ type == BPF_TRACE_FSESSION_MULTI;
}
#ifdef CONFIG_NET
@@ -2213,6 +2215,8 @@ static inline int bpf_fsession_cnt(struct bpf_tramp_nodes *nodes)
for (int i = 0; i < nodes[BPF_TRAMP_FENTRY].nr_nodes; i++) {
if (fentries.nodes[i]->link->prog->expected_attach_type == BPF_TRACE_FSESSION)
cnt++;
+ if (fentries.nodes[i]->link->prog->expected_attach_type == BPF_TRACE_FSESSION_MULTI)
+ cnt++;
}
return cnt;
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index e28722ddeb5b..4520830fda06 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1156,6 +1156,7 @@ enum bpf_attach_type {
BPF_TRACE_FSESSION,
BPF_TRACE_FENTRY_MULTI,
BPF_TRACE_FEXIT_MULTI,
+ BPF_TRACE_FSESSION_MULTI,
__MAX_BPF_ATTACH_TYPE
};
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 07d1e88e3524..f8e245cec369 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6221,6 +6221,7 @@ static int btf_validate_prog_ctx_type(struct bpf_verifier_log *log, const struct
case BPF_TRACE_FEXIT:
case BPF_MODIFY_RETURN:
case BPF_TRACE_FSESSION:
+ case BPF_TRACE_FSESSION_MULTI:
case BPF_TRACE_FENTRY_MULTI:
case BPF_TRACE_FEXIT_MULTI:
/* allow u64* as ctx */
@@ -6825,6 +6826,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
case BPF_LSM_CGROUP:
case BPF_TRACE_FEXIT:
case BPF_TRACE_FSESSION:
+ case BPF_TRACE_FSESSION_MULTI:
/* When LSM programs are attached to void LSM hooks
* they use FEXIT trampolines and when attached to
* int LSM hooks, they use MODIFY_RETURN trampolines.
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 5892dca20b7e..1cd6c1457bd3 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -3611,6 +3611,7 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
if (prog->expected_attach_type != BPF_TRACE_FENTRY &&
prog->expected_attach_type != BPF_TRACE_FEXIT &&
prog->expected_attach_type != BPF_TRACE_FSESSION &&
+ prog->expected_attach_type != BPF_TRACE_FSESSION_MULTI &&
prog->expected_attach_type != BPF_TRACE_FENTRY_MULTI &&
prog->expected_attach_type != BPF_TRACE_FEXIT_MULTI &&
prog->expected_attach_type != BPF_MODIFY_RETURN) {
@@ -4390,6 +4391,7 @@ attach_type_to_prog_type(enum bpf_attach_type attach_type)
case BPF_TRACE_FENTRY:
case BPF_TRACE_FEXIT:
case BPF_TRACE_FSESSION:
+ case BPF_TRACE_FSESSION_MULTI:
case BPF_TRACE_FENTRY_MULTI:
case BPF_TRACE_FEXIT_MULTI:
case BPF_MODIFY_RETURN:
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 516c27b89701..fe0cb5048f39 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -206,7 +206,8 @@ bool bpf_prog_has_trampoline(const struct bpf_prog *prog)
case BPF_PROG_TYPE_TRACING:
if (eatype == BPF_TRACE_FENTRY || eatype == BPF_TRACE_FEXIT ||
eatype == BPF_MODIFY_RETURN || eatype == BPF_TRACE_FSESSION ||
- eatype == BPF_TRACE_FENTRY_MULTI || eatype == BPF_TRACE_FEXIT_MULTI)
+ eatype == BPF_TRACE_FENTRY_MULTI || eatype == BPF_TRACE_FEXIT_MULTI ||
+ eatype == BPF_TRACE_FSESSION_MULTI)
return true;
return false;
case BPF_PROG_TYPE_LSM:
@@ -808,6 +809,7 @@ static enum bpf_tramp_prog_type bpf_attach_type_to_tramp(struct bpf_prog *prog)
case BPF_TRACE_FEXIT_MULTI:
return BPF_TRAMP_FEXIT;
case BPF_TRACE_FSESSION:
+ case BPF_TRACE_FSESSION_MULTI:
return BPF_TRAMP_FSESSION;
case BPF_LSM_MAC:
if (!prog->aux->attach_func_proto->type)
@@ -840,15 +842,34 @@ static int bpf_freplace_check_tgt_prog(struct bpf_prog *tgt_prog)
return 0;
}
+static struct bpf_tramp_node *fsession_exit(struct bpf_tramp_node *node)
+{
+ if (node->link->type == BPF_LINK_TYPE_TRACING) {
+ struct bpf_tracing_link *link;
+
+ link = container_of(node->link, struct bpf_tracing_link, link.link);
+ return &link->fexit;
+ } else if (node->link->type == BPF_LINK_TYPE_TRACING_MULTI) {
+ struct bpf_tracing_multi_link *link;
+ struct bpf_tracing_multi_node *mnode;
+
+ link = container_of(node->link, struct bpf_tracing_multi_link, link);
+ mnode = container_of(node, struct bpf_tracing_multi_node, node);
+ return &link->fexits[mnode - link->nodes];
+ }
+
+ WARN_ON_ONCE(1);
+ return NULL;
+}
+
static int __bpf_trampoline_link_prog(struct bpf_tramp_node *node,
struct bpf_trampoline *tr,
struct bpf_prog *tgt_prog,
struct bpf_trampoline_ops *ops,
void *data)
{
- struct bpf_tracing_link *tr_link = NULL;
enum bpf_tramp_prog_type kind;
- struct bpf_tramp_node *node_existing;
+ struct bpf_tramp_node *node_existing, *fexit;
struct hlist_head *prog_list;
int err = 0;
int cnt = 0, i;
@@ -896,8 +917,8 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_node *node,
hlist_add_head(&node->tramp_hlist, prog_list);
if (kind == BPF_TRAMP_FSESSION) {
tr->progs_cnt[BPF_TRAMP_FENTRY]++;
- tr_link = container_of(node, struct bpf_tracing_link, link.node);
- hlist_add_head(&tr_link->fexit.tramp_hlist, &tr->progs_hlist[BPF_TRAMP_FEXIT]);
+ fexit = fsession_exit(node);
+ hlist_add_head(&fexit->tramp_hlist, &tr->progs_hlist[BPF_TRAMP_FEXIT]);
tr->progs_cnt[BPF_TRAMP_FEXIT]++;
} else {
tr->progs_cnt[kind]++;
@@ -907,7 +928,7 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_node *node,
hlist_del_init(&node->tramp_hlist);
if (kind == BPF_TRAMP_FSESSION) {
tr->progs_cnt[BPF_TRAMP_FENTRY]--;
- hlist_del_init(&tr_link->fexit.tramp_hlist);
+ hlist_del_init(&fexit->tramp_hlist);
tr->progs_cnt[BPF_TRAMP_FEXIT]--;
} else {
tr->progs_cnt[kind]--;
@@ -948,10 +969,9 @@ static int __bpf_trampoline_unlink_prog(struct bpf_tramp_node *node,
tgt_prog->aux->is_extended = false;
return err;
} else if (kind == BPF_TRAMP_FSESSION) {
- struct bpf_tracing_link *tr_link =
- container_of(node, struct bpf_tracing_link, link.node);
+ struct bpf_tramp_node *fexit = fsession_exit(node);
- hlist_del_init(&tr_link->fexit.tramp_hlist);
+ hlist_del_init(&fexit->tramp_hlist);
tr->progs_cnt[BPF_TRAMP_FEXIT]--;
kind = BPF_TRAMP_FENTRY;
}
@@ -1547,6 +1567,11 @@ int bpf_trampoline_multi_attach(struct bpf_prog *prog, u32 *ids,
mnode->trampoline = tr;
mnode->node.link = &link->link;
mnode->node.cookie = link->cookies ? link->cookies[i] : 0;
+
+ if (prog->expected_attach_type == BPF_TRACE_FSESSION_MULTI) {
+ link->fexits[i].link = &link->link;
+ link->fexits[i].cookie = link->cookies ? link->cookies[i] : 0;
+ }
}
trampoline_lock_all();
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9c9303103a9c..1f5c675be51b 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -17913,6 +17913,7 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char
case BPF_TRACE_FSESSION:
case BPF_TRACE_FENTRY_MULTI:
case BPF_TRACE_FEXIT_MULTI:
+ case BPF_TRACE_FSESSION_MULTI:
range = retval_range(0, 0);
break;
case BPF_TRACE_RAW_TP:
@@ -23163,7 +23164,8 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
insn_buf[0] = BPF_MOV64_REG(BPF_REG_0, BPF_REG_1);
*cnt = 1;
} else if (desc->func_id == special_kfunc_list[KF_bpf_session_is_return] &&
- env->prog->expected_attach_type == BPF_TRACE_FSESSION) {
+ (env->prog->expected_attach_type == BPF_TRACE_FSESSION ||
+ env->prog->expected_attach_type == BPF_TRACE_FSESSION_MULTI)) {
/*
* inline the bpf_session_is_return() for fsession:
* bool bpf_session_is_return(void *ctx)
@@ -23176,7 +23178,8 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
insn_buf[2] = BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1);
*cnt = 3;
} else if (desc->func_id == special_kfunc_list[KF_bpf_session_cookie] &&
- env->prog->expected_attach_type == BPF_TRACE_FSESSION) {
+ (env->prog->expected_attach_type == BPF_TRACE_FSESSION ||
+ env->prog->expected_attach_type == BPF_TRACE_FSESSION_MULTI)) {
/*
* inline bpf_session_cookie() for fsession:
* __u64 *bpf_session_cookie(void *ctx)
@@ -23964,6 +23967,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
if (eatype == BPF_TRACE_FEXIT ||
eatype == BPF_TRACE_FSESSION ||
eatype == BPF_TRACE_FEXIT_MULTI ||
+ eatype == BPF_TRACE_FSESSION_MULTI ||
eatype == BPF_MODIFY_RETURN) {
/* Load nr_args from ctx - 8 */
insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
@@ -24921,7 +24925,8 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
prog_extension &&
(tgt_prog->expected_attach_type == BPF_TRACE_FENTRY ||
tgt_prog->expected_attach_type == BPF_TRACE_FEXIT ||
- tgt_prog->expected_attach_type == BPF_TRACE_FSESSION)) {
+ tgt_prog->expected_attach_type == BPF_TRACE_FSESSION ||
+ tgt_prog->expected_attach_type == BPF_TRACE_FSESSION_MULTI)) {
/* Program extensions can extend all program types
* except fentry/fexit. The reason is the following.
* The fentry/fexit programs are used for performance
@@ -25021,9 +25026,11 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
case BPF_TRACE_FENTRY:
case BPF_TRACE_FEXIT:
case BPF_TRACE_FSESSION:
+ case BPF_TRACE_FSESSION_MULTI:
case BPF_TRACE_FENTRY_MULTI:
case BPF_TRACE_FEXIT_MULTI:
- if (prog->expected_attach_type == BPF_TRACE_FSESSION &&
+ if ((prog->expected_attach_type == BPF_TRACE_FSESSION ||
+ prog->expected_attach_type == BPF_TRACE_FSESSION_MULTI) &&
!bpf_jit_supports_fsession()) {
bpf_log(log, "JIT does not support fsession\n");
return -EOPNOTSUPP;
@@ -25195,6 +25202,7 @@ static bool can_be_sleepable(struct bpf_prog *prog)
case BPF_MODIFY_RETURN:
case BPF_TRACE_ITER:
case BPF_TRACE_FSESSION:
+ case BPF_TRACE_FSESSION_MULTI:
case BPF_TRACE_FENTRY_MULTI:
case BPF_TRACE_FEXIT_MULTI:
return true;
@@ -25281,6 +25289,7 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
return -EINVAL;
} else if ((prog->expected_attach_type == BPF_TRACE_FEXIT ||
prog->expected_attach_type == BPF_TRACE_FSESSION ||
+ prog->expected_attach_type == BPF_TRACE_FSESSION_MULTI ||
prog->expected_attach_type == BPF_MODIFY_RETURN) &&
btf_id_set_contains(&noreturn_deny, btf_id)) {
verbose(env, "Attaching fexit/fsession/fmod_ret to __noreturn function '%s' is rejected.\n",
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 927fa622c5ea..76ce756f6210 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1306,7 +1306,8 @@ static inline bool is_uprobe_session(const struct bpf_prog *prog)
static inline bool is_trace_fsession(const struct bpf_prog *prog)
{
return prog->type == BPF_PROG_TYPE_TRACING &&
- prog->expected_attach_type == BPF_TRACE_FSESSION;
+ (prog->expected_attach_type == BPF_TRACE_FSESSION ||
+ prog->expected_attach_type == BPF_TRACE_FSESSION_MULTI);
}
static const struct bpf_func_proto *
@@ -3609,6 +3610,7 @@ static void bpf_tracing_multi_link_dealloc(struct bpf_link *link)
struct bpf_tracing_multi_link *tr_link =
container_of(link, struct bpf_tracing_multi_link, link);
+ kvfree(tr_link->fexits);
kvfree(tr_link->cookies);
kfree(tr_link);
}
@@ -3621,6 +3623,7 @@ static const struct bpf_link_ops bpf_tracing_multi_link_lops = {
int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
{
struct bpf_tracing_multi_link *link = NULL;
+ struct bpf_tramp_node *fexits = NULL;
struct bpf_link_primer link_primer;
u32 cnt, *ids = NULL;
u64 *cookies = NULL;
@@ -3658,6 +3661,14 @@ int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
}
}
+ if (prog->expected_attach_type == BPF_TRACE_FSESSION_MULTI) {
+ fexits = kvmalloc_array(cnt, sizeof(*fexits), GFP_KERNEL);
+ if (!fexits) {
+ err = -ENOMEM;
+ goto error;
+ }
+ }
+
link = kzalloc(struct_size(link, nodes, cnt), GFP_KERNEL);
if (!link) {
err = -ENOMEM;
@@ -3673,6 +3684,7 @@ int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
link->nodes_cnt = cnt;
link->cookies = cookies;
+ link->fexits = fexits;
err = bpf_trampoline_multi_attach(prog, ids, link);
kvfree(ids);
@@ -3683,6 +3695,7 @@ int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
return bpf_link_settle(&link_primer);
error:
+ kvfree(fexits);
kvfree(cookies);
kvfree(ids);
kfree(link);
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 3373450132f0..1aa07d40c80c 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -688,6 +688,7 @@ int bpf_prog_test_run_tracing(struct bpf_prog *prog,
case BPF_TRACE_FSESSION:
case BPF_TRACE_FENTRY_MULTI:
case BPF_TRACE_FEXIT_MULTI:
+ case BPF_TRACE_FSESSION_MULTI:
if (bpf_fentry_test1(1) != 2 ||
bpf_fentry_test2(2, 3) != 5 ||
bpf_fentry_test3(4, 5, 6) != 15 ||
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index e28722ddeb5b..4520830fda06 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1156,6 +1156,7 @@ enum bpf_attach_type {
BPF_TRACE_FSESSION,
BPF_TRACE_FENTRY_MULTI,
BPF_TRACE_FEXIT_MULTI,
+ BPF_TRACE_FSESSION_MULTI,
__MAX_BPF_ATTACH_TYPE
};
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 74e579d7f310..1eb3869e3444 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -138,6 +138,7 @@ static const char * const attach_type_name[] = {
[BPF_TRACE_UPROBE_SESSION] = "trace_uprobe_session",
[BPF_TRACE_FENTRY_MULTI] = "trace_fentry_multi",
[BPF_TRACE_FEXIT_MULTI] = "trace_fexit_multi",
+ [BPF_TRACE_FSESSION_MULTI] = "trace_fsession_multi",
};
static const char * const link_type_name[] = {
--
2.52.0
* [PATCH bpf-next 11/17] libbpf: Add support to create tracing multi link
From: Jiri Olsa @ 2026-02-20 10:06 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding the bpf_program__attach_tracing_multi function for attaching
a tracing program to multiple functions.
struct bpf_link *
bpf_program__attach_tracing_multi(const struct bpf_program *prog,
const char *pattern,
const struct bpf_tracing_multi_opts *opts);
The user can specify the functions to attach either with the 'pattern'
argument, which allows wildcards ('*' and '?' are supported), or by
providing BTF ids of functions directly in an array via the opts
argument. These options are mutually exclusive.
When using BTF ids, the user can also provide a cookie value for each
provided id/function, which can be retrieved later in the bpf program
with the bpf_get_attach_cookie helper. Each cookie value is paired with
the BTF id at the same array index.
Adding support to auto-attach programs with the following sections:
fsession.multi/<pattern>
fsession.multi.s/<pattern>
fentry.multi/<pattern>
fexit.multi/<pattern>
fentry.multi.s/<pattern>
fexit.multi.s/<pattern>
The provided <pattern> is used as the 'pattern' argument in the
bpf_program__attach_tracing_multi function.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
tools/lib/bpf/bpf.c | 9 ++
tools/lib/bpf/bpf.h | 5 +
tools/lib/bpf/libbpf.c | 196 +++++++++++++++++++++++++++++++++++++++
tools/lib/bpf/libbpf.h | 15 +++
tools/lib/bpf/libbpf.map | 1 +
5 files changed, 226 insertions(+)
diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index 5846de364209..6c741df4c311 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -790,6 +790,15 @@ int bpf_link_create(int prog_fd, int target_fd,
if (!OPTS_ZEROED(opts, uprobe_multi))
return libbpf_err(-EINVAL);
break;
+ case BPF_TRACE_FENTRY_MULTI:
+ case BPF_TRACE_FEXIT_MULTI:
+ case BPF_TRACE_FSESSION_MULTI:
+ attr.link_create.tracing_multi.ids = (__u64) OPTS_GET(opts, tracing_multi.ids, 0);
+ attr.link_create.tracing_multi.cookies = (__u64) OPTS_GET(opts, tracing_multi.cookies, 0);
+ attr.link_create.tracing_multi.cnt = OPTS_GET(opts, tracing_multi.cnt, 0);
+ if (!OPTS_ZEROED(opts, tracing_multi))
+ return libbpf_err(-EINVAL);
+ break;
case BPF_TRACE_RAW_TP:
case BPF_TRACE_FENTRY:
case BPF_TRACE_FEXIT:
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index 2c8e88ddb674..726a6fa585b3 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -454,6 +454,11 @@ struct bpf_link_create_opts {
__u32 relative_id;
__u64 expected_revision;
} cgroup;
+ struct {
+ __u32 *ids;
+ __u64 *cookies;
+ __u32 cnt;
+ } tracing_multi;
};
size_t :0;
};
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 1eb3869e3444..82eca31a8cc2 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -9827,6 +9827,7 @@ static int attach_kprobe_session(const struct bpf_program *prog, long cookie, st
static int attach_uprobe_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link);
static int attach_lsm(const struct bpf_program *prog, long cookie, struct bpf_link **link);
static int attach_iter(const struct bpf_program *prog, long cookie, struct bpf_link **link);
+static int attach_tracing_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link);
static const struct bpf_sec_def section_defs[] = {
SEC_DEF("socket", SOCKET_FILTER, 0, SEC_NONE),
@@ -9875,6 +9876,12 @@ static const struct bpf_sec_def section_defs[] = {
SEC_DEF("fexit.s+", TRACING, BPF_TRACE_FEXIT, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_trace),
SEC_DEF("fsession+", TRACING, BPF_TRACE_FSESSION, SEC_ATTACH_BTF, attach_trace),
SEC_DEF("fsession.s+", TRACING, BPF_TRACE_FSESSION, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_trace),
+ SEC_DEF("fsession.multi+", TRACING, BPF_TRACE_FSESSION_MULTI, 0, attach_tracing_multi),
+ SEC_DEF("fsession.multi.s+", TRACING, BPF_TRACE_FSESSION_MULTI, SEC_SLEEPABLE, attach_tracing_multi),
+ SEC_DEF("fentry.multi+", TRACING, BPF_TRACE_FENTRY_MULTI, 0, attach_tracing_multi),
+ SEC_DEF("fexit.multi+", TRACING, BPF_TRACE_FEXIT_MULTI, 0, attach_tracing_multi),
+ SEC_DEF("fentry.multi.s+", TRACING, BPF_TRACE_FENTRY_MULTI, SEC_SLEEPABLE, attach_tracing_multi),
+ SEC_DEF("fexit.multi.s+", TRACING, BPF_TRACE_FEXIT_MULTI, SEC_SLEEPABLE, attach_tracing_multi),
SEC_DEF("freplace+", EXT, 0, SEC_ATTACH_BTF, attach_trace),
SEC_DEF("lsm+", LSM, BPF_LSM_MAC, SEC_ATTACH_BTF, attach_lsm),
SEC_DEF("lsm.s+", LSM, BPF_LSM_MAC, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_lsm),
@@ -12250,6 +12257,195 @@ static int attach_uprobe_multi(const struct bpf_program *prog, long cookie, stru
return ret;
}
+#define MAX_BPF_FUNC_ARGS 12
+
+static bool btf_type_is_modifier(const struct btf_type *t)
+{
+ switch (BTF_INFO_KIND(t->info)) {
+ case BTF_KIND_TYPEDEF:
+ case BTF_KIND_VOLATILE:
+ case BTF_KIND_CONST:
+ case BTF_KIND_RESTRICT:
+ case BTF_KIND_TYPE_TAG:
+ return true;
+ default:
+ return false;
+ }
+}
+
+static bool is_allowed_func(const struct btf *btf, const struct btf_type *t)
+{
+ const struct btf_type *proto;
+ const struct btf_param *args;
+ __u32 i, nargs;
+ __s64 ret;
+
+ proto = btf_type_by_id(btf, t->type);
+ if (BTF_INFO_KIND(proto->info) != BTF_KIND_FUNC_PROTO)
+ return false;
+
+ args = (const struct btf_param *)(proto + 1);
+ nargs = btf_vlen(proto);
+ if (nargs > MAX_BPF_FUNC_ARGS)
+ return false;
+
+ /* No support for struct/union return argument type. */
+ t = btf__type_by_id(btf, proto->type);
+ while (t && btf_type_is_modifier(t))
+ t = btf__type_by_id(btf, t->type);
+
+ if (!t || btf_is_struct(t) || btf_is_union(t))
+ return false;
+
+ for (i = 0; i < nargs; i++) {
+ /* No support for variable args. */
+ if (i == nargs - 1 && args[i].type == 0)
+ return false;
+
+ /* No support of struct argument size greater than 16 bytes. */
+ ret = btf__resolve_size(btf, args[i].type);
+ if (ret < 0 || ret > 16)
+ return false;
+ }
+
+ return true;
+}
+
+static int
+collect_btf_func_ids_by_glob(const struct btf *btf, const char *pattern, __u32 **ids)
+{
+ __u32 type_id, nr_types = btf__type_cnt(btf);
+ size_t cap = 0, cnt = 0;
+
+ if (!pattern)
+ return -EINVAL;
+
+ for (type_id = 1; type_id < nr_types; type_id++) {
+ const struct btf_type *t = btf__type_by_id(btf, type_id);
+ const char *name;
+ int err;
+
+ if (btf_kind(t) != BTF_KIND_FUNC)
+ continue;
+ name = btf__name_by_offset(btf, t->name_off);
+ if (!name)
+ continue;
+
+ if (!glob_match(name, pattern))
+ continue;
+ if (!is_allowed_func(btf, t))
+ continue;
+
+ err = libbpf_ensure_mem((void **) ids, &cap, sizeof(**ids), cnt + 1);
+ if (err) {
+ free(*ids);
+ return -ENOMEM;
+ }
+ (*ids)[cnt++] = type_id;
+ }
+
+ return cnt;
+}
+
+struct bpf_link *
+bpf_program__attach_tracing_multi(const struct bpf_program *prog, const char *pattern,
+ const struct bpf_tracing_multi_opts *opts)
+{
+ LIBBPF_OPTS(bpf_link_create_opts, lopts);
+ __u32 *ids, cnt, *free_ids = NULL;
+ __u64 *cookies;
+ int prog_fd, link_fd, err;
+ struct bpf_link *link;
+
+ ids = OPTS_GET(opts, ids, NULL);
+ cookies = OPTS_GET(opts, cookies, NULL);
+ cnt = OPTS_GET(opts, cnt, 0);
+
+ if (!!ids != !!cnt)
+ return libbpf_err_ptr(-EINVAL);
+ if (pattern && (ids || cookies))
+ return libbpf_err_ptr(-EINVAL);
+ if (!pattern && !ids)
+ return libbpf_err_ptr(-EINVAL);
+
+ if (pattern) {
+ err = bpf_object__load_vmlinux_btf(prog->obj, true);
+ if (err)
+ return libbpf_err_ptr(err);
+
+ err = collect_btf_func_ids_by_glob(prog->obj->btf_vmlinux, pattern, &ids);
+ if (err < 0)
+ return libbpf_err_ptr(err);
+ if (err == 0)
+ return libbpf_err_ptr(-EINVAL);
+ cnt = err;
+ free_ids = ids;
+ }
+
+ lopts.tracing_multi.ids = ids;
+ lopts.tracing_multi.cookies = cookies;
+ lopts.tracing_multi.cnt = cnt;
+
+ link = calloc(1, sizeof(*link));
+ if (!link) {
+ err = -ENOMEM;
+ goto error;
+ }
+ link->detach = &bpf_link__detach_fd;
+
+ prog_fd = bpf_program__fd(prog);
+ link_fd = bpf_link_create(prog_fd, 0, prog->expected_attach_type, &lopts);
+ if (link_fd < 0) {
+ err = -errno;
+ pr_warn("prog '%s': failed to attach: %s\n", prog->name, errstr(err));
+ goto error;
+ }
+ link->fd = link_fd;
+ free(free_ids);
+ return link;
+
+error:
+ free(link);
+ free(free_ids);
+ return libbpf_err_ptr(err);
+}
+
+static int attach_tracing_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link)
+{
+ bool is_fexit, is_fsession;
+ const char *spec;
+ char *pattern;
+ int n;
+
+ /* Do not allow auto attach if there's no function pattern. */
+ if (strcmp(prog->sec_name, "fentry.multi") == 0 ||
+ strcmp(prog->sec_name, "fexit.multi") == 0 ||
+ strcmp(prog->sec_name, "fsession.multi") == 0 ||
+ strcmp(prog->sec_name, "fentry.multi.s") == 0 ||
+ strcmp(prog->sec_name, "fexit.multi.s") == 0 ||
+ strcmp(prog->sec_name, "fsession.multi.s") == 0)
+ return 0;
+
+ is_fexit = str_has_pfx(prog->sec_name, "fexit.multi/");
+ is_fsession = str_has_pfx(prog->sec_name, "fsession.multi/");
+
+ if (is_fsession)
+ spec = prog->sec_name + sizeof("fsession.multi/") - 1;
+ else if (is_fexit)
+ spec = prog->sec_name + sizeof("fexit.multi/") - 1;
+ else
+ spec = prog->sec_name + sizeof("fentry.multi/") - 1;
+
+ n = sscanf(spec, "%m[a-zA-Z0-9_.*?]", &pattern);
+ if (n < 1) {
+ pr_warn("tracing multi pattern is invalid: %s\n", spec);
+ return -EINVAL;
+ }
+
+ *link = bpf_program__attach_tracing_multi(prog, pattern, NULL);
+ free(pattern);
+ return libbpf_get_error(*link);
+}
+
static inline int add_uprobe_event_legacy(const char *probe_name, bool retprobe,
const char *binary_path, size_t offset)
{
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index dfc37a615578..b677aea7e592 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -701,6 +701,21 @@ bpf_program__attach_ksyscall(const struct bpf_program *prog,
const char *syscall_name,
const struct bpf_ksyscall_opts *opts);
+struct bpf_tracing_multi_opts {
+ /* size of this struct, for forward/backward compatibility */
+ size_t sz;
+ __u32 *ids;
+ __u64 *cookies;
+ size_t cnt;
+ size_t :0;
+};
+
+#define bpf_tracing_multi_opts__last_field cnt
+
+LIBBPF_API struct bpf_link *
+bpf_program__attach_tracing_multi(const struct bpf_program *prog, const char *pattern,
+ const struct bpf_tracing_multi_opts *opts);
+
struct bpf_uprobe_opts {
/* size of this struct, for forward/backward compatibility */
size_t sz;
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index d18fbcea7578..ff4d7b2c8a14 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -452,6 +452,7 @@ LIBBPF_1.7.0 {
bpf_map__set_exclusive_program;
bpf_map__exclusive_program;
bpf_prog_assoc_struct_ops;
+ bpf_program__attach_tracing_multi;
bpf_program__assoc_struct_ops;
btf__permute;
} LIBBPF_1.6.0;
--
2.52.0
^ permalink raw reply related [flat|nested] 38+ messages in thread
* [PATCH bpf-next 12/17] selftests/bpf: Add tracing multi skel/pattern/ids attach tests
2026-02-20 10:06 [PATCH bpf-next 00/17] bpf: tracing_multi link Jiri Olsa
` (10 preceding siblings ...)
2026-02-20 10:06 ` [PATCH bpf-next 11/17] libbpf: Add support to create tracing multi link Jiri Olsa
@ 2026-02-20 10:06 ` Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 13/17] selftests/bpf: Add tracing multi intersect tests Jiri Olsa
` (4 subsequent siblings)
16 siblings, 0 replies; 38+ messages in thread
From: Jiri Olsa @ 2026-02-20 10:06 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding tests for tracing_multi link attachment via all possible
libbpf APIs: skeleton, function pattern and BTF ids.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
tools/testing/selftests/bpf/Makefile | 3 +-
.../selftests/bpf/prog_tests/tracing_multi.c | 213 ++++++++++++++++++
.../bpf/progs/tracing_multi_attach.c | 26 +++
.../selftests/bpf/progs/tracing_multi_check.c | 152 +++++++++++++
4 files changed, 393 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/tracing_multi.c
create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_attach.c
create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_check.c
diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 6776158f1f3e..849c585fc2a1 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -481,7 +481,7 @@ SKEL_BLACKLIST := btf__% test_pinning_invalid.c test_sk_assign.c
LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h \
linked_vars.skel.h linked_maps.skel.h \
test_subskeleton.skel.h test_subskeleton_lib.skel.h \
- test_usdt.skel.h
+ test_usdt.skel.h tracing_multi.skel.h
LSKELS := fexit_sleep.c trace_printk.c trace_vprintk.c map_ptr_kern.c \
core_kern.c core_kern_overflow.c test_ringbuf.c \
@@ -507,6 +507,7 @@ test_usdt.skel.h-deps := test_usdt.bpf.o test_usdt_multispec.bpf.o
xsk_xdp_progs.skel.h-deps := xsk_xdp_progs.bpf.o
xdp_hw_metadata.skel.h-deps := xdp_hw_metadata.bpf.o
xdp_features.skel.h-deps := xdp_features.bpf.o
+tracing_multi.skel.h-deps := tracing_multi_attach.bpf.o tracing_multi_check.bpf.o
LINKED_BPF_OBJS := $(foreach skel,$(LINKED_SKELS),$($(skel)-deps))
LINKED_BPF_SRCS := $(patsubst %.bpf.o,%.c,$(LINKED_BPF_OBJS))
diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
new file mode 100644
index 000000000000..79b84701d38f
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
@@ -0,0 +1,213 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <test_progs.h>
+#include <bpf/btf.h>
+#include <search.h>
+#include "bpf/libbpf_internal.h"
+#include "tracing_multi.skel.h"
+#include "trace_helpers.h"
+
+static const char * const bpf_fentry_test[] = {
+ "bpf_fentry_test1",
+ "bpf_fentry_test2",
+ "bpf_fentry_test3",
+ "bpf_fentry_test4",
+ "bpf_fentry_test5",
+ "bpf_fentry_test6",
+ "bpf_fentry_test7",
+ "bpf_fentry_test8",
+ "bpf_fentry_test9",
+ "bpf_fentry_test10",
+};
+
+#define FUNCS_CNT (ARRAY_SIZE(bpf_fentry_test))
+
+static int compare(const void *ppa, const void *ppb)
+{
+ const char *pa = *(const char **) ppa;
+ const char *pb = *(const char **) ppb;
+
+ return strcmp(pa, pb);
+}
+
+static __u32 *get_ids(const char * const funcs[], int funcs_cnt)
+{
+ __u32 nr, type_id, cnt = 0;
+ void *root = NULL;
+ __u32 *ids = NULL;
+ struct btf *btf;
+ int i, err = 0;
+
+ btf = btf__load_vmlinux_btf();
+ if (!ASSERT_OK_PTR(btf, "btf__load_vmlinux_btf"))
+ return NULL;
+
+ ids = calloc(funcs_cnt, sizeof(ids[0]));
+ if (!ids)
+ goto out;
+
+ /*
+ * Store the function names in a sorted tree, so we can look
+ * up each BTF function name in it below.
+ */
+ for (i = 0; i < funcs_cnt; i++)
+ tsearch(&funcs[i], &root, compare);
+
+ nr = btf__type_cnt(btf);
+ for (type_id = 1; type_id < nr && cnt < funcs_cnt; type_id++) {
+ const struct btf_type *type;
+ const char *str, ***val;
+ unsigned int idx;
+
+ type = btf__type_by_id(btf, type_id);
+ if (!type) {
+ err = -1;
+ break;
+ }
+
+ if (BTF_INFO_KIND(type->info) != BTF_KIND_FUNC)
+ continue;
+
+ str = btf__name_by_offset(btf, type->name_off);
+ if (!str) {
+ err = -1;
+ break;
+ }
+
+ val = tfind(&str, &root, compare);
+ if (!val)
+ continue;
+
+ /*
+ * We keep a pointer for each function name so we can recover the
+ * original array index and have the resulting ids array match the
+ * original function array.
+ *
+ * Doing it this way allows us to easily test the cookies support,
+ * because each cookie is attached to a particular function/id.
+ */
+ idx = *val - funcs;
+ ids[idx] = type_id;
+ cnt++;
+ }
+
+ if (err) {
+ free(ids);
+ ids = NULL;
+ }
+
+out:
+ btf__free(btf);
+ return ids;
+}
+
+static void tracing_multi_test_run(struct tracing_multi *skel)
+{
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+ int err, prog_fd;
+
+ prog_fd = bpf_program__fd(skel->progs.test_fentry);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
+
+ ASSERT_EQ(skel->bss->test_result_fentry, FUNCS_CNT, "test_result_fentry");
+ ASSERT_EQ(skel->bss->test_result_fexit, FUNCS_CNT, "test_result_fexit");
+}
+
+static void test_skel_api(void)
+{
+ struct tracing_multi *skel = NULL;
+ int err;
+
+ skel = tracing_multi__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "tracing_multi__open_and_load"))
+ return;
+
+ skel->bss->pid = getpid();
+
+ err = tracing_multi__attach(skel);
+ if (!ASSERT_OK(err, "tracing_multi__attach"))
+ goto cleanup;
+
+ tracing_multi_test_run(skel);
+
+cleanup:
+ tracing_multi__destroy(skel);
+}
+
+static void test_link_api_pattern(void)
+{
+ struct tracing_multi *skel = NULL;
+
+ skel = tracing_multi__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "tracing_multi__open_and_load"))
+ return;
+
+ skel->bss->pid = getpid();
+
+ skel->links.test_fentry = bpf_program__attach_tracing_multi(skel->progs.test_fentry,
+ "bpf_fentry_test*", NULL);
+ if (!ASSERT_OK_PTR(skel->links.test_fentry, "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ skel->links.test_fexit = bpf_program__attach_tracing_multi(skel->progs.test_fexit,
+ "bpf_fentry_test*", NULL);
+ if (!ASSERT_OK_PTR(skel->links.test_fexit, "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ tracing_multi_test_run(skel);
+
+cleanup:
+ tracing_multi__destroy(skel);
+}
+
+static void test_link_api_ids(void)
+{
+ LIBBPF_OPTS(bpf_tracing_multi_opts, opts);
+ struct tracing_multi *skel = NULL;
+ size_t cnt = FUNCS_CNT;
+ __u32 *ids;
+
+ skel = tracing_multi__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "tracing_multi__open_and_load"))
+ return;
+
+ skel->bss->pid = getpid();
+
+ ids = get_ids(bpf_fentry_test, cnt);
+ if (!ASSERT_OK_PTR(ids, "get_ids"))
+ goto cleanup;
+
+ opts.ids = ids;
+ opts.cnt = cnt;
+
+ skel->links.test_fentry = bpf_program__attach_tracing_multi(skel->progs.test_fentry,
+ NULL, &opts);
+ if (!ASSERT_OK_PTR(skel->links.test_fentry, "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ skel->links.test_fexit = bpf_program__attach_tracing_multi(skel->progs.test_fexit,
+ NULL, &opts);
+ if (!ASSERT_OK_PTR(skel->links.test_fexit, "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ tracing_multi_test_run(skel);
+
+cleanup:
+ tracing_multi__destroy(skel);
+}
+
+void test_tracing_multi_test(void)
+{
+#ifndef __x86_64__
+ test__skip();
+ return;
+#endif
+
+ if (test__start_subtest("skel_api"))
+ test_skel_api();
+ if (test__start_subtest("link_api_pattern"))
+ test_link_api_pattern();
+ if (test__start_subtest("link_api_ids"))
+ test_link_api_ids();
+}
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_attach.c b/tools/testing/selftests/bpf/progs/tracing_multi_attach.c
new file mode 100644
index 000000000000..65b96a0d6915
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_attach.c
@@ -0,0 +1,26 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <stdbool.h>
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+__hidden extern int tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return);
+
+__u64 test_result_fentry = 0;
+__u64 test_result_fexit = 0;
+
+SEC("fentry.multi/bpf_fentry_test*")
+int BPF_PROG(test_fentry)
+{
+ tracing_multi_arg_check(ctx, &test_result_fentry, false);
+ return 0;
+}
+
+SEC("fexit.multi/bpf_fentry_test*")
+int BPF_PROG(test_fexit)
+{
+ tracing_multi_arg_check(ctx, &test_result_fexit, true);
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_check.c b/tools/testing/selftests/bpf/progs/tracing_multi_check.c
new file mode 100644
index 000000000000..fe7d1708cda5
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_check.c
@@ -0,0 +1,152 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <stdbool.h>
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+int pid = 0;
+
+extern const void bpf_fentry_test1 __ksym;
+extern const void bpf_fentry_test2 __ksym;
+extern const void bpf_fentry_test3 __ksym;
+extern const void bpf_fentry_test4 __ksym;
+extern const void bpf_fentry_test5 __ksym;
+extern const void bpf_fentry_test6 __ksym;
+extern const void bpf_fentry_test7 __ksym;
+extern const void bpf_fentry_test8 __ksym;
+extern const void bpf_fentry_test9 __ksym;
+extern const void bpf_fentry_test10 __ksym;
+
+int tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return)
+{
+ void *ip = (void *) bpf_get_func_ip(ctx);
+ __u64 value = 0, ret = 0;
+ long err = 0;
+
+ if (bpf_get_current_pid_tgid() >> 32 != pid)
+ return 0;
+
+ if (is_return)
+ err |= bpf_get_func_ret(ctx, &ret);
+
+ if (ip == &bpf_fentry_test1) {
+ int a;
+
+ err |= bpf_get_func_arg(ctx, 0, &value);
+ a = (int) value;
+
+ err |= is_return ? ret != 2 : 0;
+
+ *test_result += err == 0 && a == 1;
+ } else if (ip == &bpf_fentry_test2) {
+ __u64 b;
+ int a;
+
+ err |= bpf_get_func_arg(ctx, 0, &value);
+ a = (int) value;
+ err |= bpf_get_func_arg(ctx, 1, &value);
+ b = value;
+
+ err |= is_return ? ret != 5 : 0;
+
+ *test_result += err == 0 && a == 2 && b == 3;
+ } else if (ip == &bpf_fentry_test3) {
+ __u64 c;
+ char a;
+ int b;
+
+ err |= bpf_get_func_arg(ctx, 0, &value);
+ a = (char) value;
+ err |= bpf_get_func_arg(ctx, 1, &value);
+ b = (int) value;
+ err |= bpf_get_func_arg(ctx, 2, &value);
+ c = value;
+
+ err |= is_return ? ret != 15 : 0;
+
+ *test_result += err == 0 && a == 4 && b == 5 && c == 6;
+ } else if (ip == &bpf_fentry_test4) {
+ void *a;
+ char b;
+ int c;
+ __u64 d;
+
+ err |= bpf_get_func_arg(ctx, 0, &value);
+ a = (void *) value;
+ err |= bpf_get_func_arg(ctx, 1, &value);
+ b = (char) value;
+ err |= bpf_get_func_arg(ctx, 2, &value);
+ c = (int) value;
+ err |= bpf_get_func_arg(ctx, 3, &value);
+ d = value;
+
+ err |= is_return ? ret != 34 : 0;
+
+ *test_result += err == 0 && a == (void *) 7 && b == 8 && c == 9 && d == 10;
+ } else if (ip == &bpf_fentry_test5) {
+ __u64 a;
+ void *b;
+ short c;
+ int d;
+ __u64 e;
+
+ err |= bpf_get_func_arg(ctx, 0, &value);
+ a = value;
+ err |= bpf_get_func_arg(ctx, 1, &value);
+ b = (void *) value;
+ err |= bpf_get_func_arg(ctx, 2, &value);
+ c = (short) value;
+ err |= bpf_get_func_arg(ctx, 3, &value);
+ d = (int) value;
+ err |= bpf_get_func_arg(ctx, 4, &value);
+ e = value;
+
+ err |= is_return ? ret != 65 : 0;
+
+ *test_result += err == 0 && a == 11 && b == (void *) 12 && c == 13 && d == 14 && e == 15;
+ } else if (ip == &bpf_fentry_test6) {
+ __u64 a;
+ void *b;
+ short c;
+ int d;
+ void *e;
+ __u64 f;
+
+ err |= bpf_get_func_arg(ctx, 0, &value);
+ a = value;
+ err |= bpf_get_func_arg(ctx, 1, &value);
+ b = (void *) value;
+ err |= bpf_get_func_arg(ctx, 2, &value);
+ c = (short) value;
+ err |= bpf_get_func_arg(ctx, 3, &value);
+ d = (int) value;
+ err |= bpf_get_func_arg(ctx, 4, &value);
+ e = (void *) value;
+ err |= bpf_get_func_arg(ctx, 5, &value);
+ f = value;
+
+ err |= is_return ? ret != 111 : 0;
+
+ *test_result += err == 0 && a == 16 && b == (void *) 17 && c == 18 && d == 19 && e == (void *) 20 && f == 21;
+ } else if (ip == &bpf_fentry_test7) {
+ err |= is_return ? ret != 0 : 0;
+
+ *test_result += err == 0 ? 1 : 0;
+ } else if (ip == &bpf_fentry_test8) {
+ err |= is_return ? ret != 0 : 0;
+
+ *test_result += err == 0 ? 1 : 0;
+ } else if (ip == &bpf_fentry_test9) {
+ err |= is_return ? ret != 0 : 0;
+
+ *test_result += err == 0 ? 1 : 0;
+ } else if (ip == &bpf_fentry_test10) {
+ err |= is_return ? ret != 0 : 0;
+
+ *test_result += err == 0 ? 1 : 0;
+ }
+
+ return 0;
+}
--
2.52.0
* [PATCH bpf-next 13/17] selftests/bpf: Add tracing multi intersect tests
2026-02-20 10:06 [PATCH bpf-next 00/17] bpf: tracing_multi link Jiri Olsa
` (11 preceding siblings ...)
2026-02-20 10:06 ` [PATCH bpf-next 12/17] selftests/bpf: Add tracing multi skel/pattern/ids attach tests Jiri Olsa
@ 2026-02-20 10:06 ` Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 14/17] selftests/bpf: Add tracing multi cookies test Jiri Olsa
` (3 subsequent siblings)
16 siblings, 0 replies; 38+ messages in thread
From: Jiri Olsa @ 2026-02-20 10:06 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding tracing multi tests for intersecting attached functions.
Using bit masks (values 1 through 15) to select which of the (up to 4)
programs get attached, and randomly choosing the bpf_fentry_test*
functions they are attached to.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
tools/testing/selftests/bpf/Makefile | 4 +-
.../selftests/bpf/prog_tests/tracing_multi.c | 99 +++++++++++++++++++
.../progs/tracing_multi_intersect_attach.c | 42 ++++++++
3 files changed, 144 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_intersect_attach.c
diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 849c585fc2a1..0cbc9bcb9a2e 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -481,7 +481,8 @@ SKEL_BLACKLIST := btf__% test_pinning_invalid.c test_sk_assign.c
LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h \
linked_vars.skel.h linked_maps.skel.h \
test_subskeleton.skel.h test_subskeleton_lib.skel.h \
- test_usdt.skel.h tracing_multi.skel.h
+ test_usdt.skel.h tracing_multi.skel.h \
+ tracing_multi_intersect.skel.h
LSKELS := fexit_sleep.c trace_printk.c trace_vprintk.c map_ptr_kern.c \
core_kern.c core_kern_overflow.c test_ringbuf.c \
@@ -508,6 +509,7 @@ xsk_xdp_progs.skel.h-deps := xsk_xdp_progs.bpf.o
xdp_hw_metadata.skel.h-deps := xdp_hw_metadata.bpf.o
xdp_features.skel.h-deps := xdp_features.bpf.o
tracing_multi.skel.h-deps := tracing_multi_attach.bpf.o tracing_multi_check.bpf.o
+tracing_multi_intersect.skel.h-deps := tracing_multi_intersect_attach.bpf.o tracing_multi_check.bpf.o
LINKED_BPF_OBJS := $(foreach skel,$(LINKED_SKELS),$($(skel)-deps))
LINKED_BPF_SRCS := $(patsubst %.bpf.o,%.c,$(LINKED_BPF_OBJS))
diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
index 79b84701d38f..f6ff1668d88f 100644
--- a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
+++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
@@ -5,6 +5,7 @@
#include <search.h>
#include "bpf/libbpf_internal.h"
#include "tracing_multi.skel.h"
+#include "tracing_multi_intersect.skel.h"
#include "trace_helpers.h"
static const char * const bpf_fentry_test[] = {
@@ -22,6 +23,20 @@ static const char * const bpf_fentry_test[] = {
#define FUNCS_CNT (ARRAY_SIZE(bpf_fentry_test))
+static int get_random_funcs(const char **funcs)
+{
+ int i, cnt = 0;
+
+ for (i = 0; i < FUNCS_CNT; i++) {
+ if (rand() % 2)
+ funcs[cnt++] = bpf_fentry_test[i];
+ }
+ /* we always need at least one function */
+ if (!cnt)
+ funcs[cnt++] = bpf_fentry_test[rand() % FUNCS_CNT];
+ return cnt;
+}
+
static int compare(const void *ppa, const void *ppb)
{
const char *pa = *(const char **) ppa;
@@ -197,6 +212,88 @@ static void test_link_api_ids(void)
tracing_multi__destroy(skel);
}
+static bool is_set(__u32 mask, __u32 bit)
+{
+ return (1 << bit) & mask;
+}
+
+static void __test_intersect(__u32 mask, const struct bpf_program *progs[4], __u64 *test_results[4])
+{
+ LIBBPF_OPTS(bpf_tracing_multi_opts, opts);
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+ struct bpf_link *links[4] = { NULL };
+ const char *funcs[FUNCS_CNT];
+ __u64 expected[4];
+ __u32 *ids, i;
+ int err, cnt;
+
+ /*
+ * We have 4 programs in progs and the mask bits pick which
+ * of them gets attached to randomly chosen functions.
+ */
+ for (i = 0; i < 4; i++) {
+ if (!is_set(mask, i))
+ continue;
+
+ cnt = get_random_funcs(funcs);
+ ids = get_ids(funcs, cnt);
+ if (!ASSERT_OK_PTR(ids, "get_ids"))
+ goto cleanup;
+
+ opts.ids = ids;
+ opts.cnt = cnt;
+ links[i] = bpf_program__attach_tracing_multi(progs[i], NULL, &opts);
+ free(ids);
+
+ if (!ASSERT_OK_PTR(links[i], "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ expected[i] = *test_results[i] + cnt;
+ }
+
+ err = bpf_prog_test_run_opts(bpf_program__fd(progs[0]), &topts);
+ ASSERT_OK(err, "test_run");
+
+ for (i = 0; i < 4; i++) {
+ if (!is_set(mask, i))
+ continue;
+ ASSERT_EQ(*test_results[i], expected[i], "test_results");
+ }
+
+cleanup:
+ for (i = 0; i < 4; i++)
+ bpf_link__destroy(links[i]);
+}
+
+static void test_intersect(void)
+{
+ const struct bpf_program *progs[4];
+ struct tracing_multi_intersect *skel;
+ __u64 *test_results[4];
+ __u32 i;
+
+ skel = tracing_multi_intersect__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "tracing_multi_intersect__open_and_load"))
+ return;
+
+ skel->bss->pid = getpid();
+
+ progs[0] = skel->progs.fentry_1;
+ progs[1] = skel->progs.fexit_1;
+ progs[2] = skel->progs.fentry_2;
+ progs[3] = skel->progs.fexit_2;
+
+ test_results[0] = &skel->bss->test_result_fentry_1;
+ test_results[1] = &skel->bss->test_result_fexit_1;
+ test_results[2] = &skel->bss->test_result_fentry_2;
+ test_results[3] = &skel->bss->test_result_fexit_2;
+
+ for (i = 1; i < 16; i++)
+ __test_intersect(i, progs, test_results);
+
+ tracing_multi_intersect__destroy(skel);
+}
+
void test_tracing_multi_test(void)
{
#ifndef __x86_64__
@@ -210,4 +307,6 @@ void test_tracing_multi_test(void)
test_link_api_pattern();
if (test__start_subtest("link_api_ids"))
test_link_api_ids();
+ if (test__start_subtest("intersect"))
+ test_intersect();
}
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_intersect_attach.c b/tools/testing/selftests/bpf/progs/tracing_multi_intersect_attach.c
new file mode 100644
index 000000000000..bbd052b02559
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_intersect_attach.c
@@ -0,0 +1,42 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <stdbool.h>
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+__hidden extern int tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return);
+
+__u64 test_result_fentry_1 = 0;
+__u64 test_result_fentry_2 = 0;
+__u64 test_result_fexit_1 = 0;
+__u64 test_result_fexit_2 = 0;
+
+SEC("fentry.multi")
+int BPF_PROG(fentry_1)
+{
+ tracing_multi_arg_check(ctx, &test_result_fentry_1, false);
+ return 0;
+}
+
+SEC("fentry.multi")
+int BPF_PROG(fentry_2)
+{
+ tracing_multi_arg_check(ctx, &test_result_fentry_2, false);
+ return 0;
+}
+
+SEC("fexit.multi")
+int BPF_PROG(fexit_1)
+{
+ tracing_multi_arg_check(ctx, &test_result_fexit_1, true);
+ return 0;
+}
+
+SEC("fexit.multi")
+int BPF_PROG(fexit_2)
+{
+ tracing_multi_arg_check(ctx, &test_result_fexit_2, true);
+ return 0;
+}
--
2.52.0
* [PATCH bpf-next 14/17] selftests/bpf: Add tracing multi cookies test
2026-02-20 10:06 [PATCH bpf-next 00/17] bpf: tracing_multi link Jiri Olsa
` (12 preceding siblings ...)
2026-02-20 10:06 ` [PATCH bpf-next 13/17] selftests/bpf: Add tracing multi intersect tests Jiri Olsa
@ 2026-02-20 10:06 ` Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 15/17] selftests/bpf: Add tracing multi session test Jiri Olsa
` (2 subsequent siblings)
16 siblings, 0 replies; 38+ messages in thread
From: Jiri Olsa @ 2026-02-20 10:06 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding tests for using cookies with the tracing multi link.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
.../selftests/bpf/prog_tests/tracing_multi.c | 23 +++++++++++++++++--
.../selftests/bpf/progs/tracing_multi_check.c | 15 +++++++++++-
2 files changed, 35 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
index f6ff1668d88f..1bab4c3ea808 100644
--- a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
+++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
@@ -8,6 +8,19 @@
#include "tracing_multi_intersect.skel.h"
#include "trace_helpers.h"
+static __u64 bpf_fentry_test_cookies[] = {
+ 8, /* bpf_fentry_test1 */
+ 9, /* bpf_fentry_test2 */
+ 7, /* bpf_fentry_test3 */
+ 5, /* bpf_fentry_test4 */
+ 4, /* bpf_fentry_test5 */
+ 2, /* bpf_fentry_test6 */
+ 3, /* bpf_fentry_test7 */
+ 1, /* bpf_fentry_test8 */
+ 10, /* bpf_fentry_test9 */
+ 6, /* bpf_fentry_test10 */
+};
+
static const char * const bpf_fentry_test[] = {
"bpf_fentry_test1",
"bpf_fentry_test2",
@@ -176,7 +189,7 @@ static void test_link_api_pattern(void)
tracing_multi__destroy(skel);
}
-static void test_link_api_ids(void)
+static void test_link_api_ids(bool test_cookies)
{
LIBBPF_OPTS(bpf_tracing_multi_opts, opts);
struct tracing_multi *skel = NULL;
@@ -188,6 +201,7 @@ static void test_link_api_ids(void)
return;
skel->bss->pid = getpid();
+ skel->bss->test_cookies = test_cookies;
ids = get_ids(bpf_fentry_test, cnt);
if (!ASSERT_OK_PTR(ids, "get_ids"))
@@ -196,6 +210,9 @@ static void test_link_api_ids(void)
opts.ids = ids;
opts.cnt = cnt;
+ if (test_cookies)
+ opts.cookies = bpf_fentry_test_cookies;
+
skel->links.test_fentry = bpf_program__attach_tracing_multi(skel->progs.test_fentry,
NULL, &opts);
if (!ASSERT_OK_PTR(skel->links.test_fentry, "bpf_program__attach_tracing_multi"))
@@ -306,7 +323,9 @@ void test_tracing_multi_test(void)
if (test__start_subtest("link_api_pattern"))
test_link_api_pattern();
if (test__start_subtest("link_api_ids"))
- test_link_api_ids();
+ test_link_api_ids(false);
if (test__start_subtest("intersect"))
test_intersect();
+ if (test__start_subtest("cookies"))
+ test_link_api_ids(true);
}
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_check.c b/tools/testing/selftests/bpf/progs/tracing_multi_check.c
index fe7d1708cda5..c800e537b6b5 100644
--- a/tools/testing/selftests/bpf/progs/tracing_multi_check.c
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_check.c
@@ -7,6 +7,7 @@
char _license[] SEC("license") = "GPL";
int pid = 0;
+bool test_cookies = false;
extern const void bpf_fentry_test1 __ksym;
extern const void bpf_fentry_test2 __ksym;
@@ -22,7 +23,7 @@ extern const void bpf_fentry_test10 __ksym;
int tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return)
{
void *ip = (void *) bpf_get_func_ip(ctx);
- __u64 value = 0, ret = 0;
+ __u64 value = 0, ret = 0, cookie = 0;
long err = 0;
if (bpf_get_current_pid_tgid() >> 32 != pid)
@@ -30,6 +31,8 @@ int tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return)
if (is_return)
err |= bpf_get_func_ret(ctx, &ret);
+ if (test_cookies)
+ cookie = bpf_get_attach_cookie(ctx);
if (ip == &bpf_fentry_test1) {
int a;
@@ -38,6 +41,7 @@ int tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return)
a = (int) value;
err |= is_return ? ret != 2 : 0;
+ err |= test_cookies ? cookie != 8 : 0;
*test_result += err == 0 && a == 1;
} else if (ip == &bpf_fentry_test2) {
@@ -50,6 +54,7 @@ int tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return)
b = value;
err |= is_return ? ret != 5 : 0;
+ err |= test_cookies ? cookie != 9 : 0;
*test_result += err == 0 && a == 2 && b == 3;
} else if (ip == &bpf_fentry_test3) {
@@ -65,6 +70,7 @@ int tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return)
c = value;
err |= is_return ? ret != 15 : 0;
+ err |= test_cookies ? cookie != 7 : 0;
*test_result += err == 0 && a == 4 && b == 5 && c == 6;
} else if (ip == &bpf_fentry_test4) {
@@ -83,6 +89,7 @@ int tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return)
d = value;
err |= is_return ? ret != 34 : 0;
+ err |= test_cookies ? cookie != 5 : 0;
*test_result += err == 0 && a == (void *) 7 && b == 8 && c == 9 && d == 10;
} else if (ip == &bpf_fentry_test5) {
@@ -104,6 +111,7 @@ int tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return)
e = value;
err |= is_return ? ret != 65 : 0;
+ err |= test_cookies ? cookie != 4 : 0;
*test_result += err == 0 && a == 11 && b == (void *) 12 && c == 13 && d == 14 && e == 15;
} else if (ip == &bpf_fentry_test6) {
@@ -128,22 +136,27 @@ int tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return)
f = value;
err |= is_return ? ret != 111 : 0;
+ err |= test_cookies ? cookie != 2 : 0;
*test_result += err == 0 && a == 16 && b == (void *) 17 && c == 18 && d == 19 && e == (void *) 20 && f == 21;
} else if (ip == &bpf_fentry_test7) {
err |= is_return ? ret != 0 : 0;
+ err |= test_cookies ? cookie != 3 : 0;
*test_result += err == 0 ? 1 : 0;
} else if (ip == &bpf_fentry_test8) {
err |= is_return ? ret != 0 : 0;
+ err |= test_cookies ? cookie != 1 : 0;
*test_result += err == 0 ? 1 : 0;
} else if (ip == &bpf_fentry_test9) {
err |= is_return ? ret != 0 : 0;
+ err |= test_cookies ? cookie != 10 : 0;
*test_result += err == 0 ? 1 : 0;
} else if (ip == &bpf_fentry_test10) {
err |= is_return ? ret != 0 : 0;
+ err |= test_cookies ? cookie != 6 : 0;
*test_result += err == 0 ? 1 : 0;
}
--
2.52.0
* [PATCH bpf-next 15/17] selftests/bpf: Add tracing multi session test
2026-02-20 10:06 [PATCH bpf-next 00/17] bpf: tracing_multi link Jiri Olsa
` (13 preceding siblings ...)
2026-02-20 10:06 ` [PATCH bpf-next 14/17] selftests/bpf: Add tracing multi cookies test Jiri Olsa
@ 2026-02-20 10:06 ` Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 16/17] selftests/bpf: Add tracing multi attach fails test Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 17/17] selftests/bpf: Add tracing multi attach benchmark test Jiri Olsa
16 siblings, 0 replies; 38+ messages in thread
From: Jiri Olsa @ 2026-02-20 10:06 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding tests for the tracing multi link session support.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
tools/testing/selftests/bpf/Makefile | 4 ++-
.../selftests/bpf/prog_tests/tracing_multi.c | 31 +++++++++++++++++++
.../bpf/progs/tracing_multi_session_attach.c | 27 ++++++++++++++++
3 files changed, 61 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_session_attach.c
diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 0cbc9bcb9a2e..b415d64bf0d1 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -482,7 +482,8 @@ LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h \
linked_vars.skel.h linked_maps.skel.h \
test_subskeleton.skel.h test_subskeleton_lib.skel.h \
test_usdt.skel.h tracing_multi.skel.h \
- tracing_multi_intersect.skel.h
+ tracing_multi_intersect.skel.h \
+ tracing_multi_session.skel.h
LSKELS := fexit_sleep.c trace_printk.c trace_vprintk.c map_ptr_kern.c \
core_kern.c core_kern_overflow.c test_ringbuf.c \
@@ -510,6 +511,7 @@ xdp_hw_metadata.skel.h-deps := xdp_hw_metadata.bpf.o
xdp_features.skel.h-deps := xdp_features.bpf.o
tracing_multi.skel.h-deps := tracing_multi_attach.bpf.o tracing_multi_check.bpf.o
tracing_multi_intersect.skel.h-deps := tracing_multi_intersect_attach.bpf.o tracing_multi_check.bpf.o
+tracing_multi_session.skel.h-deps := tracing_multi_session_attach.bpf.o tracing_multi_check.bpf.o
LINKED_BPF_OBJS := $(foreach skel,$(LINKED_SKELS),$($(skel)-deps))
LINKED_BPF_SRCS := $(patsubst %.bpf.o,%.c,$(LINKED_BPF_OBJS))
diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
index 1bab4c3ea808..3d9327b80e88 100644
--- a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
+++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
@@ -6,6 +6,7 @@
#include "bpf/libbpf_internal.h"
#include "tracing_multi.skel.h"
#include "tracing_multi_intersect.skel.h"
+#include "tracing_multi_session.skel.h"
#include "trace_helpers.h"
static __u64 bpf_fentry_test_cookies[] = {
@@ -311,6 +312,34 @@ static void test_intersect(void)
tracing_multi_intersect__destroy(skel);
}
+static void test_session(void)
+{
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+ struct tracing_multi_session *skel;
+ int err, prog_fd;
+
+ skel = tracing_multi_session__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "tracing_multi_session__open_and_load"))
+ return;
+
+ skel->bss->pid = getpid();
+
+ err = tracing_multi_session__attach(skel);
+ if (!ASSERT_OK(err, "tracing_multi_session__attach"))
+ goto cleanup;
+
+ prog_fd = bpf_program__fd(skel->progs.test_session);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
+
+ ASSERT_EQ(skel->bss->test_result_fentry, 10, "test_result_fentry");
+ /* extra count for test_result_fexit cookie */
+ ASSERT_EQ(skel->bss->test_result_fexit, 20, "test_result_fexit");
+
+cleanup:
+ tracing_multi_session__destroy(skel);
+}
+
void test_tracing_multi_test(void)
{
#ifndef __x86_64__
@@ -328,4 +357,6 @@ void test_tracing_multi_test(void)
test_intersect();
if (test__start_subtest("cookies"))
test_link_api_ids(true);
+ if (test__start_subtest("session"))
+ test_session();
}
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_session_attach.c b/tools/testing/selftests/bpf/progs/tracing_multi_session_attach.c
new file mode 100644
index 000000000000..9d717018a00f
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_session_attach.c
@@ -0,0 +1,27 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+__hidden extern int tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return);
+
+__u64 test_result_fentry = 0;
+__u64 test_result_fexit = 0;
+
+SEC("fsession.multi/bpf_fentry_test*")
+int BPF_PROG(test_session)
+{
+ volatile __u64 *cookie = bpf_session_cookie(ctx);
+
+ if (bpf_session_is_return(ctx)) {
+ tracing_multi_arg_check(ctx, &test_result_fexit, true);
+ /* extra count for test_result_fexit cookie */
+ test_result_fexit += *cookie == 0xbeafbeafbeafbeaf;
+ } else {
+ tracing_multi_arg_check(ctx, &test_result_fentry, false);
+ *cookie = 0xbeafbeafbeafbeaf;
+ }
+ return 0;
+}
--
2.52.0
^ permalink raw reply related [flat|nested] 38+ messages in thread
* [PATCH bpf-next 16/17] selftests/bpf: Add tracing multi attach fails test
2026-02-20 10:06 [PATCH bpf-next 00/17] bpf: tracing_multi link Jiri Olsa
` (14 preceding siblings ...)
2026-02-20 10:06 ` [PATCH bpf-next 15/17] selftests/bpf: Add tracing multi session test Jiri Olsa
@ 2026-02-20 10:06 ` Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 17/17] selftests/bpf: Add tracing multi attach benchmark test Jiri Olsa
16 siblings, 0 replies; 38+ messages in thread
From: Jiri Olsa @ 2026-02-20 10:06 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding tests for attach failures on the tracing multi link.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
.../selftests/bpf/prog_tests/tracing_multi.c | 56 +++++++++++++++++++
1 file changed, 56 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
index 3d9327b80e88..ba86a88844e6 100644
--- a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
+++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
@@ -340,6 +340,60 @@ static void test_session(void)
tracing_multi_session__destroy(skel);
}
+static void test_attach_api_fails(void)
+{
+ LIBBPF_OPTS(bpf_tracing_multi_opts, opts);
+ struct tracing_multi *skel = NULL;
+ __u64 cookies[2];
+ __u32 ids[2];
+
+ skel = tracing_multi__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "tracing_multi__open_and_load"))
+ return;
+
+ /* fail#1 pattern and opts NULL */
+ skel->links.test_fentry = bpf_program__attach_tracing_multi(skel->progs.test_fentry,
+ NULL, NULL);
+ if (!ASSERT_ERR_PTR(skel->links.test_fentry, "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ /* fail#2 pattern and ids */
+ opts.ids = ids;
+ opts.cnt = 2;
+
+ skel->links.test_fentry = bpf_program__attach_tracing_multi(skel->progs.test_fentry,
+ "bpf_fentry_test*", &opts);
+ if (!ASSERT_ERR_PTR(skel->links.test_fentry, "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ /* fail#3 pattern and cookies */
+ opts.ids = NULL;
+ opts.cnt = 2;
+ opts.cookies = cookies;
+
+ skel->links.test_fentry = bpf_program__attach_tracing_multi(skel->progs.test_fentry,
+ "bpf_fentry_test*", &opts);
+ if (!ASSERT_ERR_PTR(skel->links.test_fentry, "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ /* fail#4 bogus pattern */
+ skel->links.test_fentry = bpf_program__attach_tracing_multi(skel->progs.test_fentry,
+ "bpf_not_really_a_function*", NULL);
+ if (!ASSERT_ERR_PTR(skel->links.test_fentry, "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ /* fail#5 abnormal cnt */
+ opts.ids = ids;
+ opts.cnt = INT_MAX;
+
+ skel->links.test_fentry = bpf_program__attach_tracing_multi(skel->progs.test_fentry,
+ NULL, &opts);
+ ASSERT_ERR_PTR(skel->links.test_fentry, "bpf_program__attach_tracing_multi");
+
+cleanup:
+ tracing_multi__destroy(skel);
+}
+
void test_tracing_multi_test(void)
{
#ifndef __x86_64__
@@ -359,4 +413,6 @@ void test_tracing_multi_test(void)
test_link_api_ids(true);
if (test__start_subtest("session"))
test_session();
+ if (test__start_subtest("attach_api_fails"))
+ test_attach_api_fails();
}
--
2.52.0
* [PATCH bpf-next 17/17] selftests/bpf: Add tracing multi attach benchmark test
2026-02-20 10:06 [PATCH bpf-next 00/17] bpf: tracing_multi link Jiri Olsa
` (15 preceding siblings ...)
2026-02-20 10:06 ` [PATCH bpf-next 16/17] selftests/bpf: Add tracing multi attach fails test Jiri Olsa
@ 2026-02-20 10:06 ` Jiri Olsa
16 siblings, 0 replies; 38+ messages in thread
From: Jiri Olsa @ 2026-02-20 10:06 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding a benchmark test that attaches to (almost) all allowed tracing
functions and displays the attach/detach times.
# ./test_progs -t tracing_multi_bench_attach -v
bpf_testmod.ko is already unloaded.
Loading bpf_testmod.ko...
Successfully loaded bpf_testmod.ko.
serial_test_tracing_multi_bench_attach:PASS:btf__load_vmlinux_btf 0 nsec
serial_test_tracing_multi_bench_attach:PASS:tracing_multi_bench__open_and_load 0 nsec
serial_test_tracing_multi_bench_attach:PASS:get_syms 0 nsec
serial_test_tracing_multi_bench_attach:PASS:bpf_program__attach_tracing_multi 0 nsec
serial_test_tracing_multi_bench_attach: found 51186 functions
serial_test_tracing_multi_bench_attach: attached in 1.295s
serial_test_tracing_multi_bench_attach: detached in 0.243s
#507 tracing_multi_bench_attach:OK
Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED
Successfully unloaded bpf_testmod.ko.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
.../selftests/bpf/prog_tests/tracing_multi.c | 177 ++++++++++++++++++
.../selftests/bpf/progs/tracing_multi_bench.c | 13 ++
2 files changed, 190 insertions(+)
create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_bench.c
diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
index ba86a88844e6..ac8ef41586f0 100644
--- a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
+++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
@@ -7,6 +7,7 @@
#include "tracing_multi.skel.h"
#include "tracing_multi_intersect.skel.h"
#include "tracing_multi_session.skel.h"
+#include "tracing_multi_bench.skel.h"
#include "trace_helpers.h"
static __u64 bpf_fentry_test_cookies[] = {
@@ -394,6 +395,182 @@ static void test_attach_api_fails(void)
tracing_multi__destroy(skel);
}
+/*
+ * Skip several kernel symbols that might not be safe or could cause delays.
+ */
+static bool skip_symbol(char *name)
+{
+ if (!strcmp(name, "arch_cpu_idle"))
+ return true;
+ if (!strcmp(name, "default_idle"))
+ return true;
+ if (!strncmp(name, "rcu_", 4))
+ return true;
+ if (!strcmp(name, "bpf_dispatcher_xdp_func"))
+ return true;
+ if (strstr(name, "rcu"))
+ return true;
+ if (strstr(name, "trace"))
+ return true;
+ if (strstr(name, "irq"))
+ return true;
+ if (strstr(name, "bpf_lsm_"))
+ return true;
+ if (!strcmp(name, "migrate_enable"))
+ return true;
+ if (!strcmp(name, "migrate_disable"))
+ return true;
+ if (!strcmp(name, "preempt_count_sub"))
+ return true;
+ if (!strcmp(name, "preempt_count_add"))
+ return true;
+ return false;
+}
+
+#define MAX_BPF_FUNC_ARGS 12
+
+static bool btf_type_is_modifier(const struct btf_type *t)
+{
+ switch (BTF_INFO_KIND(t->info)) {
+ case BTF_KIND_TYPEDEF:
+ case BTF_KIND_VOLATILE:
+ case BTF_KIND_CONST:
+ case BTF_KIND_RESTRICT:
+ case BTF_KIND_TYPE_TAG:
+ return true;
+ }
+ return false;
+}
+
+static bool is_allowed_func(const struct btf *btf, const struct btf_type *t)
+{
+ const struct btf_type *proto;
+ const struct btf_param *args;
+ __u32 i, nargs;
+ __s64 ret;
+
+ proto = btf_type_by_id(btf, t->type);
+ if (BTF_INFO_KIND(proto->info) != BTF_KIND_FUNC_PROTO)
+ return false;
+
+ args = (const struct btf_param *)(proto + 1);
+ nargs = btf_vlen(proto);
+ if (nargs > MAX_BPF_FUNC_ARGS)
+ return false;
+
+ t = btf__type_by_id(btf, proto->type);
+ while (t && btf_type_is_modifier(t))
+ t = btf__type_by_id(btf, t->type);
+
+ if (btf_is_struct(t) || btf_is_union(t))
+ return false;
+
+ for (i = 0; i < nargs; i++) {
+ /* No support for variable args */
+ if (i == nargs - 1 && args[i].type == 0)
+ return false;
+
+ /* No support for struct arguments larger than 16 bytes */
+ ret = btf__resolve_size(btf, args[i].type);
+ if (ret < 0 || ret > 16)
+ return false;
+ }
+
+ return true;
+}
+
+void serial_test_tracing_multi_bench_attach(void)
+{
+ LIBBPF_OPTS(bpf_tracing_multi_opts, opts);
+ struct tracing_multi_bench *skel = NULL;
+ size_t i, syms_cnt, cap = 0, cnt = 0;
+ long attach_start_ns, attach_end_ns;
+ long detach_start_ns, detach_end_ns;
+ double attach_delta, detach_delta;
+ struct bpf_link *link = NULL;
+ void *root = NULL;
+ __u32 *ids = NULL;
+ __u32 nr, type_id;
+ struct btf *btf;
+ char **syms;
+ int err;
+
+#ifndef __x86_64__
+ test__skip();
+ return;
+#endif
+
+ btf = btf__load_vmlinux_btf();
+ if (!ASSERT_OK_PTR(btf, "btf__load_vmlinux_btf"))
+ return;
+
+ skel = tracing_multi_bench__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "tracing_multi_bench__open_and_load"))
+ goto cleanup;
+
+ if (!ASSERT_OK(bpf_get_ksyms(&syms, &syms_cnt, true), "get_syms"))
+ goto cleanup;
+
+ for (i = 0; i < syms_cnt; i++) {
+ if (skip_symbol(syms[i]))
+ continue;
+ tsearch(&syms[i], &root, compare);
+ }
+
+ nr = btf__type_cnt(btf);
+ for (type_id = 1; type_id < nr; type_id++) {
+ const struct btf_type *type;
+ const char *str;
+
+ type = btf__type_by_id(btf, type_id);
+ if (!type)
+ break;
+
+ if (BTF_INFO_KIND(type->info) != BTF_KIND_FUNC)
+ continue;
+
+ str = btf__name_by_offset(btf, type->name_off);
+ if (!str)
+ break;
+
+ if (!tfind(&str, &root, compare))
+ continue;
+
+ if (!is_allowed_func(btf, type))
+ continue;
+
+ err = libbpf_ensure_mem((void **) &ids, &cap, sizeof(*ids), cnt + 1);
+ if (err)
+ break;
+
+ ids[cnt++] = type_id;
+ }
+
+ opts.ids = ids;
+ opts.cnt = cnt;
+
+ attach_start_ns = get_time_ns();
+ link = bpf_program__attach_tracing_multi(skel->progs.bench, NULL, &opts);
+ attach_end_ns = get_time_ns();
+
+ if (!ASSERT_OK_PTR(link, "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ detach_start_ns = get_time_ns();
+ bpf_link__destroy(link);
+ detach_end_ns = get_time_ns();
+
+ attach_delta = (attach_end_ns - attach_start_ns) / 1000000000.0;
+ detach_delta = (detach_end_ns - detach_start_ns) / 1000000000.0;
+
+ printf("%s: found %lu functions\n", __func__, cnt);
+ printf("%s: attached in %7.3lfs\n", __func__, attach_delta);
+ printf("%s: detached in %7.3lfs\n", __func__, detach_delta);
+
+cleanup:
+ tracing_multi_bench__destroy(skel);
+}
+
void test_tracing_multi_test(void)
{
#ifndef __x86_64__
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_bench.c b/tools/testing/selftests/bpf/progs/tracing_multi_bench.c
new file mode 100644
index 000000000000..067ba668489b
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_bench.c
@@ -0,0 +1,13 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <stdbool.h>
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+SEC("fentry.multi")
+int BPF_PROG(bench)
+{
+ return 0;
+}
--
2.52.0
* Re: [PATCH bpf-next 02/17] bpf: Use mutex lock pool for bpf trampolines
2026-02-20 10:06 ` [PATCH bpf-next 02/17] bpf: Use mutex lock pool for bpf trampolines Jiri Olsa
@ 2026-02-20 10:57 ` bot+bpf-ci
2026-02-22 14:33 ` Jiri Olsa
2026-02-20 19:58 ` Alexei Starovoitov
1 sibling, 1 reply; 38+ messages in thread
From: bot+bpf-ci @ 2026-02-20 10:57 UTC (permalink / raw)
To: jolsa, ast, daniel, andrii
Cc: bpf, linux-trace-kernel, kafai, eddyz87, songliubraving, yhs,
menglong8.dong, rostedt, ast, andrii, daniel, martin.lau, eddyz87,
yonghong.song, clm, ihor.solodrai
[-- Attachment #1: Type: text/plain, Size: 2986 bytes --]
> diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> --- a/kernel/bpf/trampoline.c
> +++ b/kernel/bpf/trampoline.c
[ ... ]
> @@ -71,7 +110,7 @@ static int bpf_tramp_ftrace_ops_func(struct ftrace_ops *ops, unsigned long ip,
> /* This is called inside register_ftrace_direct_multi(), so
> * tr->mutex is already locked.
> */
> - lockdep_assert_held_once(&tr->mutex);
> + lockdep_assert_held_once(trampoline_locks_lookup(tr));
This comment still references tr->mutex, but the field was removed from
struct bpf_trampoline by this patch. Should the comment say the
trampoline pool lock is already held instead?
[ ... ]
> @@ -102,7 +141,7 @@ static int bpf_tramp_ftrace_ops_func(struct ftrace_ops *ops, unsigned long ip,
> /* The normal locking order is
> * tr->mutex => direct_mutex (ftrace.c) => ftrace_lock (ftrace.c)
> *
> * The following two commands are called from
> *
> * prepare_direct_functions_for_ipmodify
> * cleanup_direct_functions_after_ipmodify
> *
> * In both cases, direct_mutex is already locked. Use
> * mutex_trylock(&tr->mutex) to avoid deadlock in race condition
> * (something else is making changes to this same trampoline).
> */
> - if (!mutex_trylock(&tr->mutex)) {
> + if (!mutex_trylock(trampoline_locks_lookup(tr))) {
> /* sleep 1 ms to make sure whatever holding tr->mutex makes
> * some progress.
> */
The locking order comment and the two references to tr->mutex in this
block also appear to need updating. The locking order is now pool lock
rather than tr->mutex, and the trylock and sleep comments still name the
removed field.
[ ... ]
> @@ -1026,11 +1065,11 @@ struct bpf_attach_target_info tgt_info = {};
> /* note, we're still holding tr refcnt from above */
>
> - mutex_unlock(&tr->mutex);
> + trampoline_unlock(tr);
>
> return 0;
> err:
> - mutex_unlock(&tr->mutex);
> + trampoline_unlock(tr);
>
> if (shim_link)
> bpf_link_put(&shim_link->link.link);
>
> /* have to release tr while _not_ holding its mutex */
> bpf_trampoline_put(tr); /* bpf_trampoline_get above */
Similarly, this comment in bpf_trampoline_link_cgroup_shim() references
"its mutex" which no longer exists as a per-trampoline field.
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/22220470181
AI-authorship-score: low
AI-authorship-explanation: Well-motivated infrastructure patch by known BPF maintainer with collaborative development history (Suggested-by tag) and idiomatic kernel coding patterns; stale comments are a typical human oversight.
issues-found: 1
issue-severity-score: low
issue-severity-explanation: Stale comments in bpf_tramp_ftrace_ops_func reference the removed tr->mutex field in locking order documentation, which could mislead future developers but has no runtime impact.
* Re: [PATCH bpf-next 11/17] libbpf: Add support to create tracing multi link
2026-02-20 10:06 ` [PATCH bpf-next 11/17] libbpf: Add support to create tracing multi link Jiri Olsa
@ 2026-02-20 10:57 ` bot+bpf-ci
2026-02-22 14:36 ` Jiri Olsa
0 siblings, 1 reply; 38+ messages in thread
From: bot+bpf-ci @ 2026-02-20 10:57 UTC (permalink / raw)
To: jolsa, ast, daniel, andrii
Cc: bpf, linux-trace-kernel, kafai, eddyz87, songliubraving, yhs,
menglong8.dong, rostedt, ast, andrii, daniel, martin.lau, eddyz87,
yonghong.song, clm, ihor.solodrai
> The provided <pattern> is used as 'pattern' argument in
> bpf_program__attach_kprobe_multi_opts function.
This isn't a bug, but the commit message references
bpf_program__attach_kprobe_multi_opts when the code actually calls
bpf_program__attach_tracing_multi. Looks like a leftover from
the kprobe_multi code this was modeled after.
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
[ ... ]
> +struct bpf_link *
> +bpf_program__attach_tracing_multi(const struct bpf_program *prog, const char *pattern,
> + const struct bpf_tracing_multi_opts *opts)
> +{
> + LIBBPF_OPTS(bpf_link_create_opts, lopts);
> + __u32 *ids, cnt, *free_ids = NULL;
cnt is declared as __u32 here, but collect_btf_func_ids_by_glob()
returns int. When that function returns a negative error such as
-ENOMEM, the value wraps to a large unsigned number.
> + __u64 *cookies;
> + int prog_fd, link_fd, err;
> + struct bpf_link *link;
[ ... ]
> + if (pattern) {
> + err = bpf_object__load_vmlinux_btf(prog->obj, true);
> + if (err)
> + return libbpf_err_ptr(err);
> +
> + cnt = collect_btf_func_ids_by_glob(prog->obj->btf_vmlinux, pattern, &ids);
> + if (cnt < 0)
> + return libbpf_err_ptr(cnt);
Since cnt is __u32, this comparison is always false. On an allocation
failure inside collect_btf_func_ids_by_glob(), the error is silently
ignored. That function also does free(*ids) before returning -ENOMEM,
so ids becomes a dangling pointer. The code then continues with a
freed ids pointer and a garbage cnt, and eventually double-frees ids
through the error label via free_ids.
Should cnt be declared as int instead of __u32?
> + if (cnt == 0)
> + return libbpf_err_ptr(-EINVAL);
> + free_ids = ids;
> + }
[ ... ]
> +error:
> + free(link);
> + free(free_ids);
> + return libbpf_err_ptr(err);
> +}
> +
> +static int attach_tracing_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link)
> +{
> + bool is_fexit, is_fsession;
> + const char *spec;
> + char *pattern;
> + int n;
> +
> + /* Do not allow auto attach if there's no function pattern. */
> + if (strcmp(prog->sec_name, "fentry.multi") == 0 ||
> + strcmp(prog->sec_name, "fexit.multi") == 0 ||
> + strcmp(prog->sec_name, "fsession.multi") == 0 ||
> + strcmp(prog->sec_name, "fentry.multi.s") == 0 ||
> + strcmp(prog->sec_name, "fexit.multi.s") == 0 ||
> + strcmp(prog->sec_name, "fsession.multi.s") == 0)
> + return 0;
> +
> + is_fexit = str_has_pfx(prog->sec_name, "fexit.multi/");
> + is_fsession = str_has_pfx(prog->sec_name, "fsession.multi/");
> +
> + if (is_fsession)
> + spec = prog->sec_name + sizeof("fsession.multi/") - 1;
> + else if (is_fexit)
> + spec = prog->sec_name + sizeof("fexit.multi/") - 1;
> + else
> + spec = prog->sec_name + sizeof("fentry.multi/") - 1;
The sleepable variants with patterns don't seem to be handled here.
For a section name like "fentry.multi.s/my_func", none of the
str_has_pfx checks match because "fexit.multi/" and "fsession.multi/"
are not prefixes of "fentry.multi.s/..." (the character at position
11 or 14 is '.' not '/').
The else branch then computes spec using sizeof("fentry.multi/") - 1
which gives an offset of 13, pointing into "s/my_func" instead of
"my_func". The sscanf then extracts "s" as the pattern, and the
program attaches to any function matching the glob "s" rather than
"my_func".
Similarly for "fexit.multi.s/X" and "fsession.multi.s/X", the
offsets are wrong and produce either error returns or incorrect
patterns.
The SEC_DEF entries register all six sleepable variants, and
sec_def_matches() correctly distinguishes "fentry.multi.s+" from
"fentry.multi+" using the '/' separator check, so the matching
works but the pattern extraction here does not.
Would it work to also check for the ".s/" variants, for example:
is_fexit = str_has_pfx(prog->sec_name, "fexit.multi/") ||
str_has_pfx(prog->sec_name, "fexit.multi.s/");
with the corresponding sizeof adjustments for the spec offset?
> +
> + n = sscanf(spec, "%m[a-zA-Z0-9_.*?]", &pattern);
> + if (n < 1) {
> + pr_warn("tracing multi pattern is invalid: %s\n", spec);
> + return -EINVAL;
> + }
> +
> + *link = bpf_program__attach_tracing_multi(prog, pattern, NULL);
> + free(pattern);
> + return libbpf_get_error(*link);
> +}
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/22220470181
AI-authorship-score: low
AI-authorship-explanation: Written by a well-known BPF maintainer with long contribution history; bugs are typical human copy-paste errors from modeling after existing kprobe_multi code, and the patch evolved through multiple RFC revisions on LKML.
issues-found: 3
issue-severity-score: medium
issue-severity-explanation: The __u32 cnt type bug creates dead error handling that can lead to a double-free under memory pressure, and the sleepable section parsing bug causes wrong functions to be attached silently; both affect correctness of a new public API.
* Re: [PATCH bpf-next 07/17] bpf: Add bpf_trampoline_multi_attach/detach functions
2026-02-20 10:06 ` [PATCH bpf-next 07/17] bpf: Add bpf_trampoline_multi_attach/detach functions Jiri Olsa
@ 2026-02-20 10:57 ` bot+bpf-ci
2026-02-22 14:34 ` Jiri Olsa
0 siblings, 1 reply; 38+ messages in thread
From: bot+bpf-ci @ 2026-02-20 10:57 UTC (permalink / raw)
To: jolsa, ast, daniel, andrii
Cc: bpf, linux-trace-kernel, kafai, eddyz87, songliubraving, yhs,
menglong8.dong, rostedt, ast, andrii, daniel, martin.lau, eddyz87,
yonghong.song, clm, ihor.solodrai
> diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> index 2d701bc6e1a5..c32205adfebe 100644
> --- a/kernel/bpf/trampoline.c
> +++ b/kernel/bpf/trampoline.c
[ ... ]
> +static int unregister_fentry_multi(struct bpf_trampoline *tr, u32 orig_flags, void *old_addr,
> + void *ptr)
> +{
> + unsigned long addr = (unsigned long) old_addr;
> + unsigned long ip = ftrace_location(tr->ip);
> + struct fentry_multi_data *data = ptr;
> +
> + if (bpf_trampoline_use_jmp(tr->flags))
> + addr = ftrace_jmp_set(addr);
> + return add_ftrace_hash_entry_direct(data->unreg, ip, addr) ? 0 : -ENOMEM;
> +}
[ ... ]
> +int bpf_trampoline_multi_attach(struct bpf_prog *prog, u32 *ids,
> + struct bpf_tracing_multi_link *link)
> +{
> + struct bpf_attach_target_info tgt_info = {};
> + struct bpf_tracing_multi_node *mnode;
> + int j, i, err, cnt = link->nodes_cnt;
> + struct fentry_multi_data data = {};
> + struct bpf_trampoline *tr;
> + u64 key;
> +
> + data.reg = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
> + if (!data.reg)
> + return -ENOMEM;
> +
> + data.modify = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
> + if (!data.modify) {
> + free_ftrace_hash(data.reg);
> + return -ENOMEM;
> + }
data.unreg is left as NULL here. In the forward path this is
fine since only data.reg and data.modify are used, but the
rollback path can reach unregister_fentry_multi() which calls
add_ftrace_hash_entry_direct(data->unreg, ...) with a NULL hash.
This happens when rollback_unlink calls
__bpf_trampoline_unlink_prog() on a trampoline that had no
other BPF programs before this attach. Removing the only
program brings total to 0, so bpf_trampoline_update_ops()
calls ops->unregister_fentry, reaching:
unregister_fentry_multi()
add_ftrace_hash_entry_direct(data->unreg, ip, addr)
__add_hash_entry(hash, entry)
ftrace_hash_key(hash, ...) <-- NULL dereference
Should data.unreg also be allocated here to cover the rollback
path?
> +
> + for (i = 0; i < cnt; i++) {
> + mnode = &link->nodes[i];
> + err = bpf_check_attach_target(NULL, prog, NULL, ids[i], &tgt_info);
> + if (err)
> + goto rollback_put;
> +
> + key = bpf_trampoline_compute_key(NULL, prog->aux->attach_btf, ids[i]);
> +
> + tr = bpf_trampoline_get(key, &tgt_info);
> + if (!tr) {
> + err = -ENOMEM;
> + goto rollback_put;
> + }
> +
> + mnode->trampoline = tr;
> + mnode->node.link = &link->link;
> + }
After this loop completes, all cnt trampolines have been
obtained via bpf_trampoline_get() with refcounts incremented.
> +
> + trampoline_lock_all();
> +
> + for (i = 0; i < cnt; i++) {
> + mnode = &link->nodes[i];
> + err = __bpf_trampoline_link_prog(&mnode->node, mnode->trampoline, NULL,
> + &trampoline_multi_ops, &data);
> + if (err)
> + goto rollback_unlink;
> + }
[ ... ]
> +rollback_unlink:
> + for (j = 0; j < i; j++) {
> + mnode = &link->nodes[j];
> + WARN_ON_ONCE(__bpf_trampoline_unlink_prog(&mnode->node, mnode->trampoline,
> + NULL, &trampoline_multi_ops, &data));
> + }
> + trampoline_unlock_all();
> +
> +rollback_put:
> + for (j = 0; j < i; j++)
> + bpf_trampoline_put(link->nodes[j].trampoline);
When __bpf_trampoline_link_prog() fails at index i in the
second loop, this falls through from rollback_unlink to
rollback_put with i still set to the failing index. But all
cnt trampolines were obtained in the first loop above.
The rollback_put loop only puts trampolines 0..i-1, leaving
trampolines i..cnt-1 with elevated refcounts that are never
released.
When rollback_put is reached directly from the first loop
(bpf_trampoline_get or bpf_check_attach_target failure), the
j < i bound is correct since only 0..i-1 were obtained.
Could the rollback_unlink path set i = cnt before falling
through, or use a separate loop bound for the put?
> +
> + free_fentry_multi_data(&data);
> + return err;
> +}
[ ... ]
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/22220470181
AI-authorship-score: low
AI-authorship-explanation: Natural kernel development style from a known BPF maintainer, with human-typical error handling oversights in batch operation code.
issues-found: 2
issue-severity-score: high
issue-severity-explanation: NULL pointer dereference in error rollback path causes kernel crash, combined with trampoline refcount leak preventing resource cleanup.
* Re: [PATCH bpf-next 10/17] bpf: Add support for tracing_multi link session
2026-02-20 10:06 ` [PATCH bpf-next 10/17] bpf: Add support for tracing_multi link session Jiri Olsa
@ 2026-02-20 10:57 ` bot+bpf-ci
2026-02-22 14:35 ` Jiri Olsa
0 siblings, 1 reply; 38+ messages in thread
From: bot+bpf-ci @ 2026-02-20 10:57 UTC (permalink / raw)
To: jolsa, ast, daniel, andrii
Cc: bpf, linux-trace-kernel, kafai, eddyz87, songliubraving, yhs,
menglong8.dong, rostedt, ast, andrii, daniel, martin.lau, eddyz87,
yonghong.song, clm, ihor.solodrai
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index 5892dca20b7e..1cd6c1457bd3 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -3611,6 +3611,7 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
> if (prog->expected_attach_type != BPF_TRACE_FENTRY &&
> prog->expected_attach_type != BPF_TRACE_FEXIT &&
> prog->expected_attach_type != BPF_TRACE_FSESSION &&
> + prog->expected_attach_type != BPF_TRACE_FSESSION_MULTI &&
> prog->expected_attach_type != BPF_TRACE_FENTRY_MULTI &&
> prog->expected_attach_type != BPF_TRACE_FEXIT_MULTI &&
> prog->expected_attach_type != BPF_MODIFY_RETURN) {
BPF_TRACE_FSESSION_MULTI is now accepted here, which means
a program with this type can enter bpf_tracing_prog_attach()
via BPF_RAW_TRACEPOINT_OPEN:
bpf_raw_tracepoint_open()
bpf_raw_tp_link_attach() /* name == NULL */
bpf_tracing_prog_attach() /* BPF_TRACE_FSESSION_MULTI */
Further down in bpf_tracing_prog_attach(), the fexit node
initialization only checks for BPF_TRACE_FSESSION:
kernel/bpf/syscall.c:bpf_tracing_prog_attach() {
...
if (prog->expected_attach_type == BPF_TRACE_FSESSION) {
link->fexit.link = &link->link.link;
link->fexit.cookie = bpf_cookie;
}
...
}
So for BPF_TRACE_FSESSION_MULTI, link->fexit.link stays NULL
(from kzalloc). When __bpf_trampoline_link_prog() later calls
fsession_exit(), it returns &link->fexit with a NULL link
field. This node gets added to the trampoline FEXIT list, and
bpf_trampoline_get_progs() then dereferences it:
kernel/bpf/trampoline.c:bpf_trampoline_get_progs() {
...
hlist_for_each_entry(node, &tr->progs_hlist[kind], tramp_hlist) {
*ip_arg |= node->link->prog->call_get_func_ip;
^^^^^^^^^^
...
}
Would it make sense to either add BPF_TRACE_FSESSION_MULTI to
the fexit initialization, or reject this type in
bpf_tracing_prog_attach() since it should only be used through
bpf_tracing_multi_attach()?
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/22220470181
AI-authorship-score: low
AI-authorship-explanation: Code follows established kernel BPF patterns for adding new attach types, written by a well-known BPF maintainer with terse, practical commit message style.
issues-found: 1
issue-severity-score: high
issue-severity-explanation: NULL pointer dereference reachable from BPF_RAW_TRACEPOINT_OPEN syscall when BPF_TRACE_FSESSION_MULTI enters bpf_tracing_prog_attach without fexit initialization, leading to kernel crash.
* Re: [PATCH bpf-next 08/17] bpf: Add support for tracing multi link
2026-02-20 10:06 ` [PATCH bpf-next 08/17] bpf: Add support for tracing multi link Jiri Olsa
@ 2026-02-20 10:57 ` bot+bpf-ci
2026-02-22 14:35 ` Jiri Olsa
0 siblings, 1 reply; 38+ messages in thread
From: bot+bpf-ci @ 2026-02-20 10:57 UTC (permalink / raw)
To: jolsa, ast, daniel, andrii
Cc: bpf, linux-trace-kernel, kafai, eddyz87, songliubraving, yhs,
menglong8.dong, rostedt, ast, andrii, daniel, martin.lau, eddyz87,
yonghong.song, clm, ihor.solodrai
> diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
> --- a/include/linux/bpf_types.h
> +++ b/include/linux/bpf_types.h
[ ... ]
> +BPF_LINK_TYPE(BPF_LINK_TYPE_TRACING_MULTI, tracing)
The existing BPF_LINK_TYPE_TRACING also uses "tracing" as its name
argument:
BPF_LINK_TYPE(BPF_LINK_TYPE_TRACING, tracing)
This means bpf_link_type_strs[] will contain "tracing" for both
BPF_LINK_TYPE_TRACING and BPF_LINK_TYPE_TRACING_MULTI, making them
indistinguishable in fdinfo output via bpf_link_show_fdinfo().
The libbpf side correctly uses "tracing_multi":
[BPF_LINK_TYPE_TRACING_MULTI] = "tracing_multi",
Should this be BPF_LINK_TYPE(BPF_LINK_TYPE_TRACING_MULTI, tracing_multi)
to match?
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
[ ... ]
> +int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
> +{
[ ... ]
> + ids = kvmalloc_array(cnt, sizeof(*ids), GFP_KERNEL);
[ ... ]
> + link = kzalloc(struct_size(link, nodes, cnt), GFP_KERNEL);
^^^^^^
The ids array above uses kvmalloc_array() to handle large allocations,
but the link struct uses kzalloc(). With MAX_TRACING_MULTI_CNT at 1M
and each bpf_tracing_multi_node being roughly 40 bytes,
struct_size() can request up to ~40MB of contiguous memory. kzalloc()
is limited to KMALLOC_MAX_SIZE (~4MB on x86_64), so this will fail
for cnt values above roughly 100K.
Would kvzalloc() work here instead? bpf_tracing_multi_link_dealloc()
would need to use kvfree() accordingly:
> +static void bpf_tracing_multi_link_dealloc(struct bpf_link *link)
> +{
> + struct bpf_tracing_multi_link *tr_link =
> + container_of(link, struct bpf_tracing_multi_link, link);
> +
> + kfree(tr_link);
^^^^^
---
AI reviewed your patch. Please fix the bug or reply by email explaining why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/22220470181
AI-authorship-score: low
AI-authorship-explanation: Code follows established BPF subsystem patterns and Jiri Olsa's terse kernel commit style, with no AI-characteristic verbosity or over-documentation.
issues-found: 2
issue-severity-score: low
issue-severity-explanation: BPF_LINK_TYPE naming collision makes fdinfo output ambiguous between tracing and tracing_multi link types, and kzalloc limits the flex-array link allocation well below the advertised MAX_TRACING_MULTI_CNT; neither causes crashes or data corruption.
* Re: [PATCH bpf-next 04/17] bpf: Add struct bpf_tramp_node object
2026-02-20 10:06 ` [PATCH bpf-next 04/17] bpf: Add struct bpf_tramp_node object Jiri Olsa
@ 2026-02-20 10:58 ` bot+bpf-ci
2026-02-22 14:34 ` Jiri Olsa
2026-02-20 19:52 ` kernel test robot
` (2 subsequent siblings)
3 siblings, 1 reply; 38+ messages in thread
From: bot+bpf-ci @ 2026-02-20 10:58 UTC (permalink / raw)
To: jolsa, ast, daniel, andrii
Cc: bpf, linux-trace-kernel, kafai, eddyz87, songliubraving, yhs,
menglong8.dong, rostedt, ast, andrii, daniel, martin.lau, eddyz87,
yonghong.song, clm, ihor.solodrai
[-- Attachment #1: Type: text/plain, Size: 1566 bytes --]

> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
[ ... ]
> +struct bpf_tramp_node {
> + struct bpf_link *link;
> + struct hlist_node tramp_hlist;
> + u64 cookie;
> +};
During the RFC review, Andrii Nakryiko suggested that instead of
co-locating a prog/link pointer in bpf_tramp_node, the JIT functions
like invoke_bpf_prog() could accept the prog pointer as a separate
argument, keeping bpf_tramp_node to just {tramp_hlist, cookie}:
https://lore.kernel.org/bpf/CAEf4BzY=vqpnjrDNtLVvC3_nU5=Z7iROU_He=xbkEJDuGCoqug@mail.gmail.com/
The RFC had `struct bpf_prog *prog` in bpf_tramp_node and this v1
changed it to `struct bpf_link *link`, which still co-locates the
path to the program inside the node. Was the refactoring to pass
prog separately to invoke_bpf_prog() considered and ruled out, or
is that planned for a follow-up?
---
AI reviewed your patch. Please fix the bug or reply by email explaining why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/22220470181
AI-authorship-score: low
AI-authorship-explanation: Mechanical refactoring with lore history dating back to 2021, consistent with the author's known kernel development style and iterative review process.
issues-found: 1
issue-severity-score: low
issue-severity-explanation: Unaddressed RFC review comment from maintainer about struct design; not a code bug but a design concern about co-locating link pointer in bpf_tramp_node.
* Re: [PATCH bpf-next 04/17] bpf: Add struct bpf_tramp_node object
2026-02-20 10:06 ` [PATCH bpf-next 04/17] bpf: Add struct bpf_tramp_node object Jiri Olsa
2026-02-20 10:58 ` bot+bpf-ci
@ 2026-02-20 19:52 ` kernel test robot
2026-02-20 21:05 ` kernel test robot
2026-02-21 3:00 ` kernel test robot
3 siblings, 0 replies; 38+ messages in thread
From: kernel test robot @ 2026-02-20 19:52 UTC (permalink / raw)
To: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: llvm, oe-kbuild-all, bpf, linux-trace-kernel, Martin KaFai Lau,
Eduard Zingerman, Song Liu, Yonghong Song, Menglong Dong,
Steven Rostedt
Hi Jiri,
kernel test robot noticed the following build errors:
[auto build test ERROR on bpf-next/master]
url: https://github.com/intel-lab-lkp/linux/commits/Jiri-Olsa/ftrace-Add-ftrace_hash_count-function/20260220-181324
base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link: https://lore.kernel.org/r/20260220100649.628307-5-jolsa%40kernel.org
patch subject: [PATCH bpf-next 04/17] bpf: Add struct bpf_tramp_node object
config: riscv-allyesconfig (https://download.01.org/0day-ci/archive/20260221/202602210330.ukNZdClO-lkp@intel.com/config)
compiler: clang version 16.0.6 (https://github.com/llvm/llvm-project 7cbf1a2591520c2491aa35339f227775f4d3adf6)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260221/202602210330.ukNZdClO-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202602210330.ukNZdClO-lkp@intel.com/
All errors (new ones prefixed by >>):
arch/riscv/net/bpf_jit_comp64.c:944:9: error: no member named 'cookie' in 'struct bpf_tramp_link'
if (l->cookie)
~ ^
arch/riscv/net/bpf_jit_comp64.c:945:67: error: no member named 'cookie' in 'struct bpf_tramp_link'
emit_store_stack_imm64(RV_REG_T1, -run_ctx_off + cookie_off, l->cookie, ctx);
~ ^
arch/riscv/net/bpf_jit_comp64.c:999:30: warning: declaration of 'struct bpf_tramp_links' will not be visible outside of this function [-Wvisibility]
static int invoke_bpf(struct bpf_tramp_links *tl, int args_off, int retval_off,
^
arch/riscv/net/bpf_jit_comp64.c:1005:20: error: incomplete definition of type 'struct bpf_tramp_links'
for (i = 0; i < tl->nr_links; i++) {
~~^
arch/riscv/net/bpf_jit_comp64.c:999:30: note: forward declaration of 'struct bpf_tramp_links'
static int invoke_bpf(struct bpf_tramp_links *tl, int args_off, int retval_off,
^
arch/riscv/net/bpf_jit_comp64.c:1008:39: error: incomplete definition of type 'struct bpf_tramp_links'
if (bpf_prog_calls_session_cookie(tl->links[i])) {
~~^
arch/riscv/net/bpf_jit_comp64.c:999:30: note: forward declaration of 'struct bpf_tramp_links'
static int invoke_bpf(struct bpf_tramp_links *tl, int args_off, int retval_off,
^
arch/riscv/net/bpf_jit_comp64.c:1014:27: error: incomplete definition of type 'struct bpf_tramp_links'
err = invoke_bpf_prog(tl->links[i], args_off, retval_off, run_ctx_off,
~~^
arch/riscv/net/bpf_jit_comp64.c:999:30: note: forward declaration of 'struct bpf_tramp_links'
static int invoke_bpf(struct bpf_tramp_links *tl, int args_off, int retval_off,
^
arch/riscv/net/bpf_jit_comp64.c:1024:14: warning: declaration of 'struct bpf_tramp_links' will not be visible outside of this function [-Wvisibility]
struct bpf_tramp_links *tlinks,
^
arch/riscv/net/bpf_jit_comp64.c:1033:42: error: subscript of pointer to incomplete type 'struct bpf_tramp_links'
struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
~~~~~~^
arch/riscv/net/bpf_jit_comp64.c:1024:14: note: forward declaration of 'struct bpf_tramp_links'
struct bpf_tramp_links *tlinks,
^
arch/riscv/net/bpf_jit_comp64.c:1034:41: error: subscript of pointer to incomplete type 'struct bpf_tramp_links'
struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
~~~~~~^
arch/riscv/net/bpf_jit_comp64.c:1024:14: note: forward declaration of 'struct bpf_tramp_links'
struct bpf_tramp_links *tlinks,
^
arch/riscv/net/bpf_jit_comp64.c:1035:44: error: subscript of pointer to incomplete type 'struct bpf_tramp_links'
struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
~~~~~~^
arch/riscv/net/bpf_jit_comp64.c:1024:14: note: forward declaration of 'struct bpf_tramp_links'
struct bpf_tramp_links *tlinks,
^
>> arch/riscv/net/bpf_jit_comp64.c:1118:39: error: incompatible pointer types passing 'struct bpf_tramp_links *' to parameter of type 'struct bpf_tramp_nodes *' [-Werror,-Wincompatible-pointer-types]
cookie_cnt = bpf_fsession_cookie_cnt(tlinks);
^~~~~~
include/linux/bpf.h:2207:67: note: passing argument to parameter 'nodes' here
static inline int bpf_fsession_cookie_cnt(struct bpf_tramp_nodes *nodes)
^
arch/riscv/net/bpf_jit_comp64.c:1175:23: error: incompatible pointer types passing 'struct bpf_tramp_links *' to parameter of type 'struct bpf_tramp_nodes *' [-Werror,-Wincompatible-pointer-types]
if (bpf_fsession_cnt(tlinks)) {
^~~~~~
include/linux/bpf.h:2189:60: note: passing argument to parameter 'nodes' here
static inline int bpf_fsession_cnt(struct bpf_tramp_nodes *nodes)
^
arch/riscv/net/bpf_jit_comp64.c:1190:12: error: incomplete definition of type 'struct bpf_tramp_links'
if (fentry->nr_links) {
~~~~~~^
arch/riscv/net/bpf_jit_comp64.c:1024:14: note: forward declaration of 'struct bpf_tramp_links'
struct bpf_tramp_links *tlinks,
^
arch/riscv/net/bpf_jit_comp64.c:1191:20: error: incompatible pointer types passing 'struct bpf_tramp_links *' to parameter of type 'struct bpf_tramp_links *' [-Werror,-Wincompatible-pointer-types]
ret = invoke_bpf(fentry, args_off, retval_off, run_ctx_off, func_meta_off,
^~~~~~
arch/riscv/net/bpf_jit_comp64.c:999:47: note: passing argument to parameter 'tl' here
static int invoke_bpf(struct bpf_tramp_links *tl, int args_off, int retval_off,
^
arch/riscv/net/bpf_jit_comp64.c:1197:14: error: incomplete definition of type 'struct bpf_tramp_links'
if (fmod_ret->nr_links) {
~~~~~~~~^
arch/riscv/net/bpf_jit_comp64.c:1024:14: note: forward declaration of 'struct bpf_tramp_links'
struct bpf_tramp_links *tlinks,
^
arch/riscv/net/bpf_jit_comp64.c:1198:34: error: incomplete definition of type 'struct bpf_tramp_links'
branches_off = kcalloc(fmod_ret->nr_links, sizeof(int), GFP_KERNEL);
~~~~~~~~^
include/linux/slab.h:1154:48: note: expanded from macro 'kcalloc'
#define kcalloc(n, size, flags) kmalloc_array(n, size, (flags) | __GFP_ZERO)
^
include/linux/slab.h:1115:63: note: expanded from macro 'kmalloc_array'
#define kmalloc_array(...) alloc_hooks(kmalloc_array_noprof(__VA_ARGS__))
^~~~~~~~~~~
include/linux/alloc_tag.h:265:31: note: expanded from macro 'alloc_hooks'
alloc_hooks_tag(&_alloc_tag, _do_alloc); \
^~~~~~~~~
include/linux/alloc_tag.h:251:9: note: expanded from macro 'alloc_hooks_tag'
typeof(_do_alloc) _res; \
^~~~~~~~~
arch/riscv/net/bpf_jit_comp64.c:1024:14: note: forward declaration of 'struct bpf_tramp_links'
struct bpf_tramp_links *tlinks,
^
arch/riscv/net/bpf_jit_comp64.c:1198:34: error: incomplete definition of type 'struct bpf_tramp_links'
branches_off = kcalloc(fmod_ret->nr_links, sizeof(int), GFP_KERNEL);
~~~~~~~~^
include/linux/slab.h:1154:48: note: expanded from macro 'kcalloc'
#define kcalloc(n, size, flags) kmalloc_array(n, size, (flags) | __GFP_ZERO)
^
include/linux/slab.h:1115:63: note: expanded from macro 'kmalloc_array'
#define kmalloc_array(...) alloc_hooks(kmalloc_array_noprof(__VA_ARGS__))
^~~~~~~~~~~
include/linux/alloc_tag.h:265:31: note: expanded from macro 'alloc_hooks'
alloc_hooks_tag(&_alloc_tag, _do_alloc); \
^~~~~~~~~
include/linux/alloc_tag.h:255:10: note: expanded from macro 'alloc_hooks_tag'
_res = _do_alloc; \
^~~~~~~~~
arch/riscv/net/bpf_jit_comp64.c:1024:14: note: forward declaration of 'struct bpf_tramp_links'
struct bpf_tramp_links *tlinks,
^
arch/riscv/net/bpf_jit_comp64.c:1198:34: error: incomplete definition of type 'struct bpf_tramp_links'
branches_off = kcalloc(fmod_ret->nr_links, sizeof(int), GFP_KERNEL);
~~~~~~~~^
include/linux/slab.h:1154:48: note: expanded from macro 'kcalloc'
#define kcalloc(n, size, flags) kmalloc_array(n, size, (flags) | __GFP_ZERO)
^
include/linux/slab.h:1115:63: note: expanded from macro 'kmalloc_array'
#define kmalloc_array(...) alloc_hooks(kmalloc_array_noprof(__VA_ARGS__))
^~~~~~~~~~~
include/linux/alloc_tag.h:265:31: note: expanded from macro 'alloc_hooks'
alloc_hooks_tag(&_alloc_tag, _do_alloc); \
^~~~~~~~~
include/linux/alloc_tag.h:258:10: note: expanded from macro 'alloc_hooks_tag'
_res = _do_alloc; \
^~~~~~~~~
arch/riscv/net/bpf_jit_comp64.c:1024:14: note: forward declaration of 'struct bpf_tramp_links'
struct bpf_tramp_links *tlinks,
^
arch/riscv/net/bpf_jit_comp64.c:1204:27: error: incomplete definition of type 'struct bpf_tramp_links'
for (i = 0; i < fmod_ret->nr_links; i++) {
~~~~~~~~^
arch/riscv/net/bpf_jit_comp64.c:1024:14: note: forward declaration of 'struct bpf_tramp_links'
struct bpf_tramp_links *tlinks,
^
arch/riscv/net/bpf_jit_comp64.c:1205:34: error: incomplete definition of type 'struct bpf_tramp_links'
ret = invoke_bpf_prog(fmod_ret->links[i], args_off, retval_off,
~~~~~~~~^
arch/riscv/net/bpf_jit_comp64.c:1024:14: note: forward declaration of 'struct bpf_tramp_links'
struct bpf_tramp_links *tlinks,
^
arch/riscv/net/bpf_jit_comp64.c:1233:40: error: incomplete definition of type 'struct bpf_tramp_links'
for (i = 0; ctx->insns && i < fmod_ret->nr_links; i++) {
~~~~~~~~^
arch/riscv/net/bpf_jit_comp64.c:1024:14: note: forward declaration of 'struct bpf_tramp_links'
struct bpf_tramp_links *tlinks,
vim +1118 arch/riscv/net/bpf_jit_comp64.c
35b3515be0ecb9 Menglong Dong 2026-02-08 1021
49b5e77ae3e214 Pu Lehui 2023-02-15 1022 static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
49b5e77ae3e214 Pu Lehui 2023-02-15 1023 const struct btf_func_model *m,
49b5e77ae3e214 Pu Lehui 2023-02-15 1024 struct bpf_tramp_links *tlinks,
49b5e77ae3e214 Pu Lehui 2023-02-15 1025 void *func_addr, u32 flags,
49b5e77ae3e214 Pu Lehui 2023-02-15 1026 struct rv_jit_context *ctx)
49b5e77ae3e214 Pu Lehui 2023-02-15 1027 {
49b5e77ae3e214 Pu Lehui 2023-02-15 1028 int i, ret, offset;
49b5e77ae3e214 Pu Lehui 2023-02-15 1029 int *branches_off = NULL;
6801b0aef79db4 Pu Lehui 2024-07-02 1030 int stack_size = 0, nr_arg_slots = 0;
35b3515be0ecb9 Menglong Dong 2026-02-08 1031 int retval_off, args_off, func_meta_off, ip_off, run_ctx_off, sreg_off, stk_arg_off;
35b3515be0ecb9 Menglong Dong 2026-02-08 1032 int cookie_off, cookie_cnt;
49b5e77ae3e214 Pu Lehui 2023-02-15 1033 struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
49b5e77ae3e214 Pu Lehui 2023-02-15 1034 struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
49b5e77ae3e214 Pu Lehui 2023-02-15 1035 struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
1732ebc4a26181 Pu Lehui 2024-01-23 1036 bool is_struct_ops = flags & BPF_TRAMP_F_INDIRECT;
49b5e77ae3e214 Pu Lehui 2023-02-15 1037 void *orig_call = func_addr;
49b5e77ae3e214 Pu Lehui 2023-02-15 1038 bool save_ret;
35b3515be0ecb9 Menglong Dong 2026-02-08 1039 u64 func_meta;
49b5e77ae3e214 Pu Lehui 2023-02-15 1040 u32 insn;
49b5e77ae3e214 Pu Lehui 2023-02-15 1041
25ad10658dc106 Pu Lehui 2023-07-21 1042 /* Two types of generated trampoline stack layout:
25ad10658dc106 Pu Lehui 2023-07-21 1043 *
25ad10658dc106 Pu Lehui 2023-07-21 1044 * 1. trampoline called from function entry
25ad10658dc106 Pu Lehui 2023-07-21 1045 * --------------------------------------
25ad10658dc106 Pu Lehui 2023-07-21 1046 * FP + 8 [ RA to parent func ] return address to parent
25ad10658dc106 Pu Lehui 2023-07-21 1047 * function
25ad10658dc106 Pu Lehui 2023-07-21 1048 * FP + 0 [ FP of parent func ] frame pointer of parent
25ad10658dc106 Pu Lehui 2023-07-21 1049 * function
25ad10658dc106 Pu Lehui 2023-07-21 1050 * FP - 8 [ T0 to traced func ] return address of traced
25ad10658dc106 Pu Lehui 2023-07-21 1051 * function
25ad10658dc106 Pu Lehui 2023-07-21 1052 * FP - 16 [ FP of traced func ] frame pointer of traced
25ad10658dc106 Pu Lehui 2023-07-21 1053 * function
25ad10658dc106 Pu Lehui 2023-07-21 1054 * --------------------------------------
49b5e77ae3e214 Pu Lehui 2023-02-15 1055 *
25ad10658dc106 Pu Lehui 2023-07-21 1056 * 2. trampoline called directly
25ad10658dc106 Pu Lehui 2023-07-21 1057 * --------------------------------------
25ad10658dc106 Pu Lehui 2023-07-21 1058 * FP - 8 [ RA to caller func ] return address to caller
49b5e77ae3e214 Pu Lehui 2023-02-15 1059 * function
25ad10658dc106 Pu Lehui 2023-07-21 1060 * FP - 16 [ FP of caller func ] frame pointer of caller
49b5e77ae3e214 Pu Lehui 2023-02-15 1061 * function
25ad10658dc106 Pu Lehui 2023-07-21 1062 * --------------------------------------
49b5e77ae3e214 Pu Lehui 2023-02-15 1063 *
49b5e77ae3e214 Pu Lehui 2023-02-15 1064 * FP - retval_off [ return value ] BPF_TRAMP_F_CALL_ORIG or
49b5e77ae3e214 Pu Lehui 2023-02-15 1065 * BPF_TRAMP_F_RET_FENTRY_RET
49b5e77ae3e214 Pu Lehui 2023-02-15 1066 * [ argN ]
49b5e77ae3e214 Pu Lehui 2023-02-15 1067 * [ ... ]
49b5e77ae3e214 Pu Lehui 2023-02-15 1068 * FP - args_off [ arg1 ]
49b5e77ae3e214 Pu Lehui 2023-02-15 1069 *
35b3515be0ecb9 Menglong Dong 2026-02-08 1070 * FP - func_meta_off [ regs count, etc ]
49b5e77ae3e214 Pu Lehui 2023-02-15 1071 *
49b5e77ae3e214 Pu Lehui 2023-02-15 1072 * FP - ip_off [ traced func ] BPF_TRAMP_F_IP_ARG
49b5e77ae3e214 Pu Lehui 2023-02-15 1073 *
35b3515be0ecb9 Menglong Dong 2026-02-08 1074 * [ stack cookie N ]
35b3515be0ecb9 Menglong Dong 2026-02-08 1075 * [ ... ]
35b3515be0ecb9 Menglong Dong 2026-02-08 1076 * FP - cookie_off [ stack cookie 1 ]
35b3515be0ecb9 Menglong Dong 2026-02-08 1077 *
49b5e77ae3e214 Pu Lehui 2023-02-15 1078 * FP - run_ctx_off [ bpf_tramp_run_ctx ]
49b5e77ae3e214 Pu Lehui 2023-02-15 1079 *
49b5e77ae3e214 Pu Lehui 2023-02-15 1080 * FP - sreg_off [ callee saved reg ]
49b5e77ae3e214 Pu Lehui 2023-02-15 1081 *
49b5e77ae3e214 Pu Lehui 2023-02-15 1082 * [ pads ] pads for 16 bytes alignment
6801b0aef79db4 Pu Lehui 2024-07-02 1083 *
6801b0aef79db4 Pu Lehui 2024-07-02 1084 * [ stack_argN ]
6801b0aef79db4 Pu Lehui 2024-07-02 1085 * [ ... ]
6801b0aef79db4 Pu Lehui 2024-07-02 1086 * FP - stk_arg_off [ stack_arg1 ] BPF_TRAMP_F_CALL_ORIG
49b5e77ae3e214 Pu Lehui 2023-02-15 1087 */
49b5e77ae3e214 Pu Lehui 2023-02-15 1088
49b5e77ae3e214 Pu Lehui 2023-02-15 1089 if (flags & (BPF_TRAMP_F_ORIG_STACK | BPF_TRAMP_F_SHARE_IPMODIFY))
49b5e77ae3e214 Pu Lehui 2023-02-15 1090 return -ENOTSUPP;
49b5e77ae3e214 Pu Lehui 2023-02-15 1091
6801b0aef79db4 Pu Lehui 2024-07-02 1092 if (m->nr_args > MAX_BPF_FUNC_ARGS)
49b5e77ae3e214 Pu Lehui 2023-02-15 1093 return -ENOTSUPP;
49b5e77ae3e214 Pu Lehui 2023-02-15 1094
6801b0aef79db4 Pu Lehui 2024-07-02 1095 for (i = 0; i < m->nr_args; i++)
6801b0aef79db4 Pu Lehui 2024-07-02 1096 nr_arg_slots += round_up(m->arg_size[i], 8) / 8;
6801b0aef79db4 Pu Lehui 2024-07-02 1097
25ad10658dc106 Pu Lehui 2023-07-21 1098 /* room of trampoline frame to store return address and frame pointer */
25ad10658dc106 Pu Lehui 2023-07-21 1099 stack_size += 16;
49b5e77ae3e214 Pu Lehui 2023-02-15 1100
49b5e77ae3e214 Pu Lehui 2023-02-15 1101 save_ret = flags & (BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_RET_FENTRY_RET);
d0bf7cd5df1846 Chenghao Duan 2025-09-22 1102 if (save_ret)
7112cd26e606c7 Björn Töpel 2023-10-04 1103 stack_size += 16; /* Save both A5 (BPF R0) and A0 */
49b5e77ae3e214 Pu Lehui 2023-02-15 1104 retval_off = stack_size;
49b5e77ae3e214 Pu Lehui 2023-02-15 1105
6801b0aef79db4 Pu Lehui 2024-07-02 1106 stack_size += nr_arg_slots * 8;
49b5e77ae3e214 Pu Lehui 2023-02-15 1107 args_off = stack_size;
49b5e77ae3e214 Pu Lehui 2023-02-15 1108
35b3515be0ecb9 Menglong Dong 2026-02-08 1109 /* function metadata, such as regs count */
49b5e77ae3e214 Pu Lehui 2023-02-15 1110 stack_size += 8;
35b3515be0ecb9 Menglong Dong 2026-02-08 1111 func_meta_off = stack_size;
49b5e77ae3e214 Pu Lehui 2023-02-15 1112
49b5e77ae3e214 Pu Lehui 2023-02-15 1113 if (flags & BPF_TRAMP_F_IP_ARG) {
49b5e77ae3e214 Pu Lehui 2023-02-15 1114 stack_size += 8;
49b5e77ae3e214 Pu Lehui 2023-02-15 1115 ip_off = stack_size;
49b5e77ae3e214 Pu Lehui 2023-02-15 1116 }
49b5e77ae3e214 Pu Lehui 2023-02-15 1117
35b3515be0ecb9 Menglong Dong 2026-02-08 @1118 cookie_cnt = bpf_fsession_cookie_cnt(tlinks);
35b3515be0ecb9 Menglong Dong 2026-02-08 1119 /* room for session cookies */
35b3515be0ecb9 Menglong Dong 2026-02-08 1120 stack_size += cookie_cnt * 8;
35b3515be0ecb9 Menglong Dong 2026-02-08 1121 cookie_off = stack_size;
35b3515be0ecb9 Menglong Dong 2026-02-08 1122
49b5e77ae3e214 Pu Lehui 2023-02-15 1123 stack_size += round_up(sizeof(struct bpf_tramp_run_ctx), 8);
49b5e77ae3e214 Pu Lehui 2023-02-15 1124 run_ctx_off = stack_size;
49b5e77ae3e214 Pu Lehui 2023-02-15 1125
49b5e77ae3e214 Pu Lehui 2023-02-15 1126 stack_size += 8;
49b5e77ae3e214 Pu Lehui 2023-02-15 1127 sreg_off = stack_size;
49b5e77ae3e214 Pu Lehui 2023-02-15 1128
a5912c37faf723 Puranjay Mohan 2024-07-08 1129 if ((flags & BPF_TRAMP_F_CALL_ORIG) && (nr_arg_slots - RV_MAX_REG_ARGS > 0))
6801b0aef79db4 Pu Lehui 2024-07-02 1130 stack_size += (nr_arg_slots - RV_MAX_REG_ARGS) * 8;
6801b0aef79db4 Pu Lehui 2024-07-02 1131
e944fc8152744a Xiao Wang 2024-05-23 1132 stack_size = round_up(stack_size, STACK_ALIGN);
49b5e77ae3e214 Pu Lehui 2023-02-15 1133
6801b0aef79db4 Pu Lehui 2024-07-02 1134 /* room for args on stack must be at the top of stack */
6801b0aef79db4 Pu Lehui 2024-07-02 1135 stk_arg_off = stack_size;
6801b0aef79db4 Pu Lehui 2024-07-02 1136
1732ebc4a26181 Pu Lehui 2024-01-23 1137 if (!is_struct_ops) {
25ad10658dc106 Pu Lehui 2023-07-21 1138 /* For the trampoline called from function entry,
25ad10658dc106 Pu Lehui 2023-07-21 1139 * the frame of traced function and the frame of
25ad10658dc106 Pu Lehui 2023-07-21 1140 * trampoline need to be considered.
25ad10658dc106 Pu Lehui 2023-07-21 1141 */
25ad10658dc106 Pu Lehui 2023-07-21 1142 emit_addi(RV_REG_SP, RV_REG_SP, -16, ctx);
25ad10658dc106 Pu Lehui 2023-07-21 1143 emit_sd(RV_REG_SP, 8, RV_REG_RA, ctx);
25ad10658dc106 Pu Lehui 2023-07-21 1144 emit_sd(RV_REG_SP, 0, RV_REG_FP, ctx);
25ad10658dc106 Pu Lehui 2023-07-21 1145 emit_addi(RV_REG_FP, RV_REG_SP, 16, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 1146
25ad10658dc106 Pu Lehui 2023-07-21 1147 emit_addi(RV_REG_SP, RV_REG_SP, -stack_size, ctx);
25ad10658dc106 Pu Lehui 2023-07-21 1148 emit_sd(RV_REG_SP, stack_size - 8, RV_REG_T0, ctx);
25ad10658dc106 Pu Lehui 2023-07-21 1149 emit_sd(RV_REG_SP, stack_size - 16, RV_REG_FP, ctx);
25ad10658dc106 Pu Lehui 2023-07-21 1150 emit_addi(RV_REG_FP, RV_REG_SP, stack_size, ctx);
25ad10658dc106 Pu Lehui 2023-07-21 1151 } else {
e63985ecd22681 Puranjay Mohan 2024-03-03 1152 /* emit kcfi hash */
e63985ecd22681 Puranjay Mohan 2024-03-03 1153 emit_kcfi(cfi_get_func_hash(func_addr), ctx);
25ad10658dc106 Pu Lehui 2023-07-21 1154 /* For the trampoline called directly, just handle
25ad10658dc106 Pu Lehui 2023-07-21 1155 * the frame of trampoline.
25ad10658dc106 Pu Lehui 2023-07-21 1156 */
25ad10658dc106 Pu Lehui 2023-07-21 1157 emit_addi(RV_REG_SP, RV_REG_SP, -stack_size, ctx);
25ad10658dc106 Pu Lehui 2023-07-21 1158 emit_sd(RV_REG_SP, stack_size - 8, RV_REG_RA, ctx);
25ad10658dc106 Pu Lehui 2023-07-21 1159 emit_sd(RV_REG_SP, stack_size - 16, RV_REG_FP, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 1160 emit_addi(RV_REG_FP, RV_REG_SP, stack_size, ctx);
25ad10658dc106 Pu Lehui 2023-07-21 1161 }
49b5e77ae3e214 Pu Lehui 2023-02-15 1162
49b5e77ae3e214 Pu Lehui 2023-02-15 1163 /* callee saved register S1 to pass start time */
49b5e77ae3e214 Pu Lehui 2023-02-15 1164 emit_sd(RV_REG_FP, -sreg_off, RV_REG_S1, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 1165
49b5e77ae3e214 Pu Lehui 2023-02-15 1166 /* store ip address of the traced function */
93fd420d71beed Menglong Dong 2026-02-08 1167 if (flags & BPF_TRAMP_F_IP_ARG)
93fd420d71beed Menglong Dong 2026-02-08 1168 emit_store_stack_imm64(RV_REG_T1, -ip_off, (u64)func_addr, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 1169
35b3515be0ecb9 Menglong Dong 2026-02-08 1170 func_meta = nr_arg_slots;
35b3515be0ecb9 Menglong Dong 2026-02-08 1171 emit_store_stack_imm64(RV_REG_T1, -func_meta_off, func_meta, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 1172
6801b0aef79db4 Pu Lehui 2024-07-02 1173 store_args(nr_arg_slots, args_off, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 1174
35b3515be0ecb9 Menglong Dong 2026-02-08 1175 if (bpf_fsession_cnt(tlinks)) {
35b3515be0ecb9 Menglong Dong 2026-02-08 1176 /* clear all session cookies' value */
35b3515be0ecb9 Menglong Dong 2026-02-08 1177 for (i = 0; i < cookie_cnt; i++)
35b3515be0ecb9 Menglong Dong 2026-02-08 1178 emit_sd(RV_REG_FP, -cookie_off + 8 * i, RV_REG_ZERO, ctx);
35b3515be0ecb9 Menglong Dong 2026-02-08 1179 /* clear return value to make sure fentry always get 0 */
35b3515be0ecb9 Menglong Dong 2026-02-08 1180 emit_sd(RV_REG_FP, -retval_off, RV_REG_ZERO, ctx);
35b3515be0ecb9 Menglong Dong 2026-02-08 1181 }
35b3515be0ecb9 Menglong Dong 2026-02-08 1182
49b5e77ae3e214 Pu Lehui 2023-02-15 1183 if (flags & BPF_TRAMP_F_CALL_ORIG) {
9f1e16fb1fc982 Pu Lehui 2024-06-22 1184 emit_imm(RV_REG_A0, ctx->insns ? (const s64)im : RV_MAX_COUNT_IMM, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 1185 ret = emit_call((const u64)__bpf_tramp_enter, true, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 1186 if (ret)
49b5e77ae3e214 Pu Lehui 2023-02-15 1187 return ret;
49b5e77ae3e214 Pu Lehui 2023-02-15 1188 }
49b5e77ae3e214 Pu Lehui 2023-02-15 1189
35b3515be0ecb9 Menglong Dong 2026-02-08 1190 if (fentry->nr_links) {
35b3515be0ecb9 Menglong Dong 2026-02-08 1191 ret = invoke_bpf(fentry, args_off, retval_off, run_ctx_off, func_meta_off,
35b3515be0ecb9 Menglong Dong 2026-02-08 1192 flags & BPF_TRAMP_F_RET_FENTRY_RET, func_meta, cookie_off, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 1193 if (ret)
49b5e77ae3e214 Pu Lehui 2023-02-15 1194 return ret;
49b5e77ae3e214 Pu Lehui 2023-02-15 1195 }
49b5e77ae3e214 Pu Lehui 2023-02-15 1196
49b5e77ae3e214 Pu Lehui 2023-02-15 1197 if (fmod_ret->nr_links) {
49b5e77ae3e214 Pu Lehui 2023-02-15 1198 branches_off = kcalloc(fmod_ret->nr_links, sizeof(int), GFP_KERNEL);
49b5e77ae3e214 Pu Lehui 2023-02-15 1199 if (!branches_off)
49b5e77ae3e214 Pu Lehui 2023-02-15 1200 return -ENOMEM;
49b5e77ae3e214 Pu Lehui 2023-02-15 1201
49b5e77ae3e214 Pu Lehui 2023-02-15 1202 /* cleanup to avoid garbage return value confusion */
49b5e77ae3e214 Pu Lehui 2023-02-15 1203 emit_sd(RV_REG_FP, -retval_off, RV_REG_ZERO, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 1204 for (i = 0; i < fmod_ret->nr_links; i++) {
49b5e77ae3e214 Pu Lehui 2023-02-15 1205 ret = invoke_bpf_prog(fmod_ret->links[i], args_off, retval_off,
49b5e77ae3e214 Pu Lehui 2023-02-15 1206 run_ctx_off, true, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 1207 if (ret)
49b5e77ae3e214 Pu Lehui 2023-02-15 1208 goto out;
49b5e77ae3e214 Pu Lehui 2023-02-15 1209 emit_ld(RV_REG_T1, -retval_off, RV_REG_FP, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 1210 branches_off[i] = ctx->ninsns;
49b5e77ae3e214 Pu Lehui 2023-02-15 1211 /* nop reserved for conditional jump */
49b5e77ae3e214 Pu Lehui 2023-02-15 1212 emit(rv_nop(), ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 1213 }
49b5e77ae3e214 Pu Lehui 2023-02-15 1214 }
49b5e77ae3e214 Pu Lehui 2023-02-15 1215
49b5e77ae3e214 Pu Lehui 2023-02-15 1216 if (flags & BPF_TRAMP_F_CALL_ORIG) {
8f3e00af8e52c0 Menglong Dong 2025-12-19 1217 /* skip to actual body of traced function */
8f3e00af8e52c0 Menglong Dong 2025-12-19 1218 orig_call += RV_FENTRY_NINSNS * 4;
6801b0aef79db4 Pu Lehui 2024-07-02 1219 restore_args(min_t(int, nr_arg_slots, RV_MAX_REG_ARGS), args_off, ctx);
6801b0aef79db4 Pu Lehui 2024-07-02 1220 restore_stack_args(nr_arg_slots - RV_MAX_REG_ARGS, args_off, stk_arg_off, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 1221 ret = emit_call((const u64)orig_call, true, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 1222 if (ret)
49b5e77ae3e214 Pu Lehui 2023-02-15 1223 goto out;
49b5e77ae3e214 Pu Lehui 2023-02-15 1224 emit_sd(RV_REG_FP, -retval_off, RV_REG_A0, ctx);
7112cd26e606c7 Björn Töpel 2023-10-04 1225 emit_sd(RV_REG_FP, -(retval_off - 8), regmap[BPF_REG_0], ctx);
2382a405c581ae Pu Lehui 2024-06-22 1226 im->ip_after_call = ctx->ro_insns + ctx->ninsns;
49b5e77ae3e214 Pu Lehui 2023-02-15 1227 /* 2 nops reserved for auipc+jalr pair */
49b5e77ae3e214 Pu Lehui 2023-02-15 1228 emit(rv_nop(), ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 1229 emit(rv_nop(), ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 1230 }
49b5e77ae3e214 Pu Lehui 2023-02-15 1231
49b5e77ae3e214 Pu Lehui 2023-02-15 1232 /* update branches saved in invoke_bpf_mod_ret with bnez */
49b5e77ae3e214 Pu Lehui 2023-02-15 1233 for (i = 0; ctx->insns && i < fmod_ret->nr_links; i++) {
49b5e77ae3e214 Pu Lehui 2023-02-15 1234 offset = ninsns_rvoff(ctx->ninsns - branches_off[i]);
49b5e77ae3e214 Pu Lehui 2023-02-15 1235 insn = rv_bne(RV_REG_T1, RV_REG_ZERO, offset >> 1);
49b5e77ae3e214 Pu Lehui 2023-02-15 1236 *(u32 *)(ctx->insns + branches_off[i]) = insn;
49b5e77ae3e214 Pu Lehui 2023-02-15 1237 }
49b5e77ae3e214 Pu Lehui 2023-02-15 1238
35b3515be0ecb9 Menglong Dong 2026-02-08 1239 /* set "is_return" flag for fsession */
35b3515be0ecb9 Menglong Dong 2026-02-08 1240 func_meta |= (1ULL << BPF_TRAMP_IS_RETURN_SHIFT);
35b3515be0ecb9 Menglong Dong 2026-02-08 1241 if (bpf_fsession_cnt(tlinks))
35b3515be0ecb9 Menglong Dong 2026-02-08 1242 emit_store_stack_imm64(RV_REG_T1, -func_meta_off, func_meta, ctx);
35b3515be0ecb9 Menglong Dong 2026-02-08 1243
35b3515be0ecb9 Menglong Dong 2026-02-08 1244 if (fexit->nr_links) {
35b3515be0ecb9 Menglong Dong 2026-02-08 1245 ret = invoke_bpf(fexit, args_off, retval_off, run_ctx_off, func_meta_off,
35b3515be0ecb9 Menglong Dong 2026-02-08 1246 false, func_meta, cookie_off, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 1247 if (ret)
49b5e77ae3e214 Pu Lehui 2023-02-15 1248 goto out;
49b5e77ae3e214 Pu Lehui 2023-02-15 1249 }
49b5e77ae3e214 Pu Lehui 2023-02-15 1250
49b5e77ae3e214 Pu Lehui 2023-02-15 1251 if (flags & BPF_TRAMP_F_CALL_ORIG) {
2382a405c581ae Pu Lehui 2024-06-22 1252 im->ip_epilogue = ctx->ro_insns + ctx->ninsns;
9f1e16fb1fc982 Pu Lehui 2024-06-22 1253 emit_imm(RV_REG_A0, ctx->insns ? (const s64)im : RV_MAX_COUNT_IMM, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 1254 ret = emit_call((const u64)__bpf_tramp_exit, true, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 1255 if (ret)
49b5e77ae3e214 Pu Lehui 2023-02-15 1256 goto out;
49b5e77ae3e214 Pu Lehui 2023-02-15 1257 }
49b5e77ae3e214 Pu Lehui 2023-02-15 1258
49b5e77ae3e214 Pu Lehui 2023-02-15 1259 if (flags & BPF_TRAMP_F_RESTORE_REGS)
6801b0aef79db4 Pu Lehui 2024-07-02 1260 restore_args(min_t(int, nr_arg_slots, RV_MAX_REG_ARGS), args_off, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 1261
7112cd26e606c7 Björn Töpel 2023-10-04 1262 if (save_ret) {
7112cd26e606c7 Björn Töpel 2023-10-04 1263 emit_ld(regmap[BPF_REG_0], -(retval_off - 8), RV_REG_FP, ctx);
fd2e08128944a7 Hengqi Chen 2025-09-08 1264 if (is_struct_ops) {
fd2e08128944a7 Hengqi Chen 2025-09-08 1265 ret = sign_extend(RV_REG_A0, regmap[BPF_REG_0], m->ret_size,
fd2e08128944a7 Hengqi Chen 2025-09-08 1266 m->ret_flags & BTF_FMODEL_SIGNED_ARG, ctx);
fd2e08128944a7 Hengqi Chen 2025-09-08 1267 if (ret)
fd2e08128944a7 Hengqi Chen 2025-09-08 1268 goto out;
fd2e08128944a7 Hengqi Chen 2025-09-08 1269 } else {
fd2e08128944a7 Hengqi Chen 2025-09-08 1270 emit_ld(RV_REG_A0, -retval_off, RV_REG_FP, ctx);
fd2e08128944a7 Hengqi Chen 2025-09-08 1271 }
7112cd26e606c7 Björn Töpel 2023-10-04 1272 }
49b5e77ae3e214 Pu Lehui 2023-02-15 1273
49b5e77ae3e214 Pu Lehui 2023-02-15 1274 emit_ld(RV_REG_S1, -sreg_off, RV_REG_FP, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 1275
1732ebc4a26181 Pu Lehui 2024-01-23 1276 if (!is_struct_ops) {
25ad10658dc106 Pu Lehui 2023-07-21 1277 /* trampoline called from function entry */
25ad10658dc106 Pu Lehui 2023-07-21 1278 emit_ld(RV_REG_T0, stack_size - 8, RV_REG_SP, ctx);
25ad10658dc106 Pu Lehui 2023-07-21 1279 emit_ld(RV_REG_FP, stack_size - 16, RV_REG_SP, ctx);
25ad10658dc106 Pu Lehui 2023-07-21 1280 emit_addi(RV_REG_SP, RV_REG_SP, stack_size, ctx);
25ad10658dc106 Pu Lehui 2023-07-21 1281
25ad10658dc106 Pu Lehui 2023-07-21 1282 emit_ld(RV_REG_RA, 8, RV_REG_SP, ctx);
25ad10658dc106 Pu Lehui 2023-07-21 1283 emit_ld(RV_REG_FP, 0, RV_REG_SP, ctx);
25ad10658dc106 Pu Lehui 2023-07-21 1284 emit_addi(RV_REG_SP, RV_REG_SP, 16, ctx);
25ad10658dc106 Pu Lehui 2023-07-21 1285
49b5e77ae3e214 Pu Lehui 2023-02-15 1286 if (flags & BPF_TRAMP_F_SKIP_FRAME)
25ad10658dc106 Pu Lehui 2023-07-21 1287 /* return to parent function */
25ad10658dc106 Pu Lehui 2023-07-21 1288 emit_jalr(RV_REG_ZERO, RV_REG_RA, 0, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 1289 else
25ad10658dc106 Pu Lehui 2023-07-21 1290 /* return to traced function */
25ad10658dc106 Pu Lehui 2023-07-21 1291 emit_jalr(RV_REG_ZERO, RV_REG_T0, 0, ctx);
25ad10658dc106 Pu Lehui 2023-07-21 1292 } else {
25ad10658dc106 Pu Lehui 2023-07-21 1293 /* trampoline called directly */
25ad10658dc106 Pu Lehui 2023-07-21 1294 emit_ld(RV_REG_RA, stack_size - 8, RV_REG_SP, ctx);
25ad10658dc106 Pu Lehui 2023-07-21 1295 emit_ld(RV_REG_FP, stack_size - 16, RV_REG_SP, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 1296 emit_addi(RV_REG_SP, RV_REG_SP, stack_size, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 1297
49b5e77ae3e214 Pu Lehui 2023-02-15 1298 emit_jalr(RV_REG_ZERO, RV_REG_RA, 0, ctx);
25ad10658dc106 Pu Lehui 2023-07-21 1299 }
49b5e77ae3e214 Pu Lehui 2023-02-15 1300
49b5e77ae3e214 Pu Lehui 2023-02-15 1301 ret = ctx->ninsns;
49b5e77ae3e214 Pu Lehui 2023-02-15 1302 out:
49b5e77ae3e214 Pu Lehui 2023-02-15 1303 kfree(branches_off);
49b5e77ae3e214 Pu Lehui 2023-02-15 1304 return ret;
49b5e77ae3e214 Pu Lehui 2023-02-15 1305 }
49b5e77ae3e214 Pu Lehui 2023-02-15 1306
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH bpf-next 02/17] bpf: Use mutex lock pool for bpf trampolines
2026-02-20 10:06 ` [PATCH bpf-next 02/17] bpf: Use mutex lock pool for bpf trampolines Jiri Olsa
2026-02-20 10:57 ` bot+bpf-ci
@ 2026-02-20 19:58 ` Alexei Starovoitov
2026-02-22 14:34 ` Jiri Olsa
1 sibling, 1 reply; 38+ messages in thread
From: Alexei Starovoitov @ 2026-02-20 19:58 UTC (permalink / raw)
To: Jiri Olsa
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
Yonghong Song, Menglong Dong, Steven Rostedt
On Fri, Feb 20, 2026 at 2:07 AM Jiri Olsa <jolsa@kernel.org> wrote:
>
> Add a mutex lock pool to replace the per-trampoline mutex.
>
> For the tracing_multi link coming in the following changes we need to
> lock all the involved trampolines during attachment. This could mean
> taking thousands of mutex locks, which does not scale.
>
> As suggested by Andrii, we can replace the bpf trampoline mutex with a
> mutex pool, where each trampoline is hashed to one of the locks in the pool.
>
> It's cheaper to lock all the pool mutexes (64 at the moment) than
> thousands of per-trampoline ones.
>
> Remove the mutex_is_locked check in bpf_trampoline_put, because the
> mutex is gone from struct bpf_trampoline.
>
> Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
> include/linux/bpf.h | 2 --
> kernel/bpf/trampoline.c | 74 +++++++++++++++++++++++++++++++----------
> 2 files changed, 56 insertions(+), 20 deletions(-)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index cd9b96434904..46bf3d86bdb2 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1335,8 +1335,6 @@ struct bpf_trampoline {
> /* hlist for trampoline_ip_table */
> struct hlist_node hlist_ip;
> struct ftrace_ops *fops;
> - /* serializes access to fields of this trampoline */
> - struct mutex mutex;
> refcount_t refcnt;
> u32 flags;
> u64 key;
> diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> index 952cd7932461..05dc0358654d 100644
> --- a/kernel/bpf/trampoline.c
> +++ b/kernel/bpf/trampoline.c
> @@ -30,6 +30,45 @@ static struct hlist_head trampoline_ip_table[TRAMPOLINE_TABLE_SIZE];
> /* serializes access to trampoline tables */
> static DEFINE_MUTEX(trampoline_mutex);
>
> +#define TRAMPOLINE_LOCKS_BITS 6
> +#define TRAMPOLINE_LOCKS_TABLE_SIZE (1 << TRAMPOLINE_LOCKS_BITS)
> +
> +static struct {
> + struct mutex mutex;
> + struct lock_class_key key;
> +} *trampoline_locks;
> +
> +static struct mutex *trampoline_locks_lookup(struct bpf_trampoline *tr)
select_trampoline_lock() ?
> +{
> + return &trampoline_locks[hash_64((u64) tr, TRAMPOLINE_LOCKS_BITS)].mutex;
> +}
> +
> +static void trampoline_lock(struct bpf_trampoline *tr)
> +{
> + mutex_lock(trampoline_locks_lookup(tr));
> +}
> +
> +static void trampoline_unlock(struct bpf_trampoline *tr)
> +{
> + mutex_unlock(trampoline_locks_lookup(tr));
> +}
> +
> +static int __init trampoline_locks_init(void)
> +{
> + int i;
> +
> + trampoline_locks = kmalloc_array(TRAMPOLINE_LOCKS_TABLE_SIZE,
> + sizeof(trampoline_locks[0]), GFP_KERNEL);
why bother with memory allocation? This is just 64 mutexes.
> + if (!trampoline_locks)
> + return -ENOMEM;
> +
> + for (i = 0; i < TRAMPOLINE_LOCKS_TABLE_SIZE; i++) {
> + lockdep_register_key(&trampoline_locks[i].key);
why special key?
* Re: [PATCH bpf-next 04/17] bpf: Add struct bpf_tramp_node object
2026-02-20 10:06 ` [PATCH bpf-next 04/17] bpf: Add struct bpf_tramp_node object Jiri Olsa
2026-02-20 10:58 ` bot+bpf-ci
2026-02-20 19:52 ` kernel test robot
@ 2026-02-20 21:05 ` kernel test robot
2026-02-21 3:00 ` kernel test robot
3 siblings, 0 replies; 38+ messages in thread
From: kernel test robot @ 2026-02-20 21:05 UTC (permalink / raw)
To: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: oe-kbuild-all, bpf, linux-trace-kernel, Martin KaFai Lau,
Eduard Zingerman, Song Liu, Yonghong Song, Menglong Dong,
Steven Rostedt
Hi Jiri,
kernel test robot noticed the following build errors:
[auto build test ERROR on bpf-next/master]
url: https://github.com/intel-lab-lkp/linux/commits/Jiri-Olsa/ftrace-Add-ftrace_hash_count-function/20260220-181324
base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link: https://lore.kernel.org/r/20260220100649.628307-5-jolsa%40kernel.org
patch subject: [PATCH bpf-next 04/17] bpf: Add struct bpf_tramp_node object
config: riscv-allnoconfig-bpf (https://download.01.org/0day-ci/archive/20260220/202602202212.yC5wLunx-lkp@intel.com/config)
compiler: riscv64-linux-gnu-gcc (Debian 14.2.0-19) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260220/202602202212.yC5wLunx-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202602202212.yC5wLunx-lkp@intel.com/
All errors (new ones prefixed by >>):
arch/riscv/net/bpf_jit_comp64.c: In function 'invoke_bpf_prog':
>> arch/riscv/net/bpf_jit_comp64.c:944:14: error: 'struct bpf_tramp_link' has no member named 'cookie'
944 | if (l->cookie)
| ^~
arch/riscv/net/bpf_jit_comp64.c:945:79: error: 'struct bpf_tramp_link' has no member named 'cookie'
945 | emit_store_stack_imm64(RV_REG_T1, -run_ctx_off + cookie_off, l->cookie, ctx);
| ^~
arch/riscv/net/bpf_jit_comp64.c: At top level:
arch/riscv/net/bpf_jit_comp64.c:999:30: warning: 'struct bpf_tramp_links' declared inside parameter list will not be visible outside of this definition or declaration
999 | static int invoke_bpf(struct bpf_tramp_links *tl, int args_off, int retval_off,
| ^~~~~~~~~~~~~~~
arch/riscv/net/bpf_jit_comp64.c: In function 'invoke_bpf':
>> arch/riscv/net/bpf_jit_comp64.c:1005:27: error: invalid use of undefined type 'struct bpf_tramp_links'
1005 | for (i = 0; i < tl->nr_links; i++) {
| ^~
arch/riscv/net/bpf_jit_comp64.c:1008:53: error: invalid use of undefined type 'struct bpf_tramp_links'
1008 | if (bpf_prog_calls_session_cookie(tl->links[i])) {
| ^~
arch/riscv/net/bpf_jit_comp64.c:1014:41: error: invalid use of undefined type 'struct bpf_tramp_links'
1014 | err = invoke_bpf_prog(tl->links[i], args_off, retval_off, run_ctx_off,
| ^~
arch/riscv/net/bpf_jit_comp64.c: At top level:
arch/riscv/net/bpf_jit_comp64.c:1024:49: warning: 'struct bpf_tramp_links' declared inside parameter list will not be visible outside of this definition or declaration
1024 | struct bpf_tramp_links *tlinks,
| ^~~~~~~~~~~~~~~
arch/riscv/net/bpf_jit_comp64.c: In function '__arch_prepare_bpf_trampoline':
arch/riscv/net/bpf_jit_comp64.c:1033:49: error: invalid use of undefined type 'struct bpf_tramp_links'
1033 | struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
| ^
arch/riscv/net/bpf_jit_comp64.c:1034:48: error: invalid use of undefined type 'struct bpf_tramp_links'
1034 | struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
| ^
arch/riscv/net/bpf_jit_comp64.c:1035:51: error: invalid use of undefined type 'struct bpf_tramp_links'
1035 | struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
| ^
arch/riscv/net/bpf_jit_comp64.c:1118:46: error: passing argument 1 of 'bpf_fsession_cookie_cnt' from incompatible pointer type [-Wincompatible-pointer-types]
1118 | cookie_cnt = bpf_fsession_cookie_cnt(tlinks);
| ^~~~~~
| |
| struct bpf_tramp_links *
In file included from arch/riscv/net/bpf_jit_comp64.c:9:
./include/linux/bpf.h:2207:67: note: expected 'struct bpf_tramp_nodes *' but argument is of type 'struct bpf_tramp_links *'
2207 | static inline int bpf_fsession_cookie_cnt(struct bpf_tramp_nodes *nodes)
| ~~~~~~~~~~~~~~~~~~~~~~~~^~~~~
arch/riscv/net/bpf_jit_comp64.c:1175:30: error: passing argument 1 of 'bpf_fsession_cnt' from incompatible pointer type [-Wincompatible-pointer-types]
1175 | if (bpf_fsession_cnt(tlinks)) {
| ^~~~~~
| |
| struct bpf_tramp_links *
./include/linux/bpf.h:2189:60: note: expected 'struct bpf_tramp_nodes *' but argument is of type 'struct bpf_tramp_links *'
2189 | static inline int bpf_fsession_cnt(struct bpf_tramp_nodes *nodes)
| ~~~~~~~~~~~~~~~~~~~~~~~~^~~~~
arch/riscv/net/bpf_jit_comp64.c:1190:19: error: invalid use of undefined type 'struct bpf_tramp_links'
1190 | if (fentry->nr_links) {
| ^~
arch/riscv/net/bpf_jit_comp64.c:1191:34: error: passing argument 1 of 'invoke_bpf' from incompatible pointer type [-Wincompatible-pointer-types]
1191 | ret = invoke_bpf(fentry, args_off, retval_off, run_ctx_off, func_meta_off,
| ^~~~~~
| |
| struct bpf_tramp_links *
arch/riscv/net/bpf_jit_comp64.c:999:47: note: expected 'struct bpf_tramp_links *' but argument is of type 'struct bpf_tramp_links *'
999 | static int invoke_bpf(struct bpf_tramp_links *tl, int args_off, int retval_off,
| ~~~~~~~~~~~~~~~~~~~~~~~~^~
arch/riscv/net/bpf_jit_comp64.c:1197:21: error: invalid use of undefined type 'struct bpf_tramp_links'
1197 | if (fmod_ret->nr_links) {
| ^~
In file included from ./include/linux/workqueue.h:9,
from ./include/linux/bpf.h:11:
arch/riscv/net/bpf_jit_comp64.c:1198:48: error: invalid use of undefined type 'struct bpf_tramp_links'
1198 | branches_off = kcalloc(fmod_ret->nr_links, sizeof(int), GFP_KERNEL);
| ^~
./include/linux/alloc_tag.h:251:16: note: in definition of macro 'alloc_hooks_tag'
251 | typeof(_do_alloc) _res; \
| ^~~~~~~~~
./include/linux/slab.h:1115:49: note: in expansion of macro 'alloc_hooks'
1115 | #define kmalloc_array(...) alloc_hooks(kmalloc_array_noprof(__VA_ARGS__))
| ^~~~~~~~~~~
./include/linux/slab.h:1154:41: note: in expansion of macro 'kmalloc_array'
1154 | #define kcalloc(n, size, flags) kmalloc_array(n, size, (flags) | __GFP_ZERO)
| ^~~~~~~~~~~~~
arch/riscv/net/bpf_jit_comp64.c:1198:32: note: in expansion of macro 'kcalloc'
1198 | branches_off = kcalloc(fmod_ret->nr_links, sizeof(int), GFP_KERNEL);
| ^~~~~~~
arch/riscv/net/bpf_jit_comp64.c:1198:48: error: invalid use of undefined type 'struct bpf_tramp_links'
1198 | branches_off = kcalloc(fmod_ret->nr_links, sizeof(int), GFP_KERNEL);
| ^~
./include/linux/alloc_tag.h:255:24: note: in definition of macro 'alloc_hooks_tag'
255 | _res = _do_alloc; \
| ^~~~~~~~~
./include/linux/slab.h:1115:49: note: in expansion of macro 'alloc_hooks'
1115 | #define kmalloc_array(...) alloc_hooks(kmalloc_array_noprof(__VA_ARGS__))
| ^~~~~~~~~~~
./include/linux/slab.h:1154:41: note: in expansion of macro 'kmalloc_array'
1154 | #define kcalloc(n, size, flags) kmalloc_array(n, size, (flags) | __GFP_ZERO)
| ^~~~~~~~~~~~~
arch/riscv/net/bpf_jit_comp64.c:1198:32: note: in expansion of macro 'kcalloc'
1198 | branches_off = kcalloc(fmod_ret->nr_links, sizeof(int), GFP_KERNEL);
| ^~~~~~~
arch/riscv/net/bpf_jit_comp64.c:1198:48: error: invalid use of undefined type 'struct bpf_tramp_links'
1198 | branches_off = kcalloc(fmod_ret->nr_links, sizeof(int), GFP_KERNEL);
| ^~
./include/linux/alloc_tag.h:258:24: note: in definition of macro 'alloc_hooks_tag'
258 | _res = _do_alloc; \
| ^~~~~~~~~
./include/linux/slab.h:1115:49: note: in expansion of macro 'alloc_hooks'
1115 | #define kmalloc_array(...) alloc_hooks(kmalloc_array_noprof(__VA_ARGS__))
| ^~~~~~~~~~~
./include/linux/slab.h:1154:41: note: in expansion of macro 'kmalloc_array'
1154 | #define kcalloc(n, size, flags) kmalloc_array(n, size, (flags) | __GFP_ZERO)
| ^~~~~~~~~~~~~
arch/riscv/net/bpf_jit_comp64.c:1198:32: note: in expansion of macro 'kcalloc'
1198 | branches_off = kcalloc(fmod_ret->nr_links, sizeof(int), GFP_KERNEL);
| ^~~~~~~
vim +944 arch/riscv/net/bpf_jit_comp64.c
93fd420d71beed Menglong Dong 2026-02-08 936
49b5e77ae3e214 Pu Lehui 2023-02-15 937 static int invoke_bpf_prog(struct bpf_tramp_link *l, int args_off, int retval_off,
49b5e77ae3e214 Pu Lehui 2023-02-15 938 int run_ctx_off, bool save_ret, struct rv_jit_context *ctx)
49b5e77ae3e214 Pu Lehui 2023-02-15 939 {
49b5e77ae3e214 Pu Lehui 2023-02-15 940 int ret, branch_off;
49b5e77ae3e214 Pu Lehui 2023-02-15 941 struct bpf_prog *p = l->link.prog;
49b5e77ae3e214 Pu Lehui 2023-02-15 942 int cookie_off = offsetof(struct bpf_tramp_run_ctx, bpf_cookie);
49b5e77ae3e214 Pu Lehui 2023-02-15 943
93fd420d71beed Menglong Dong 2026-02-08 @944 if (l->cookie)
93fd420d71beed Menglong Dong 2026-02-08 945 emit_store_stack_imm64(RV_REG_T1, -run_ctx_off + cookie_off, l->cookie, ctx);
93fd420d71beed Menglong Dong 2026-02-08 946 else
49b5e77ae3e214 Pu Lehui 2023-02-15 947 emit_sd(RV_REG_FP, -run_ctx_off + cookie_off, RV_REG_ZERO, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 948
49b5e77ae3e214 Pu Lehui 2023-02-15 949 /* arg1: prog */
49b5e77ae3e214 Pu Lehui 2023-02-15 950 emit_imm(RV_REG_A0, (const s64)p, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 951 /* arg2: &run_ctx */
49b5e77ae3e214 Pu Lehui 2023-02-15 952 emit_addi(RV_REG_A1, RV_REG_FP, -run_ctx_off, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 953 ret = emit_call((const u64)bpf_trampoline_enter(p), true, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 954 if (ret)
49b5e77ae3e214 Pu Lehui 2023-02-15 955 return ret;
49b5e77ae3e214 Pu Lehui 2023-02-15 956
10541b374aa05c Xu Kuohai 2024-04-16 957 /* store prog start time */
10541b374aa05c Xu Kuohai 2024-04-16 958 emit_mv(RV_REG_S1, RV_REG_A0, ctx);
10541b374aa05c Xu Kuohai 2024-04-16 959
49b5e77ae3e214 Pu Lehui 2023-02-15 960 /* if (__bpf_prog_enter(prog) == 0)
49b5e77ae3e214 Pu Lehui 2023-02-15 961 * goto skip_exec_of_prog;
49b5e77ae3e214 Pu Lehui 2023-02-15 962 */
49b5e77ae3e214 Pu Lehui 2023-02-15 963 branch_off = ctx->ninsns;
49b5e77ae3e214 Pu Lehui 2023-02-15 964 /* nop reserved for conditional jump */
49b5e77ae3e214 Pu Lehui 2023-02-15 965 emit(rv_nop(), ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 966
49b5e77ae3e214 Pu Lehui 2023-02-15 967 /* arg1: &args_off */
49b5e77ae3e214 Pu Lehui 2023-02-15 968 emit_addi(RV_REG_A0, RV_REG_FP, -args_off, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 969 if (!p->jited)
49b5e77ae3e214 Pu Lehui 2023-02-15 970 /* arg2: progs[i]->insnsi for interpreter */
49b5e77ae3e214 Pu Lehui 2023-02-15 971 emit_imm(RV_REG_A1, (const s64)p->insnsi, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 972 ret = emit_call((const u64)p->bpf_func, true, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 973 if (ret)
49b5e77ae3e214 Pu Lehui 2023-02-15 974 return ret;
49b5e77ae3e214 Pu Lehui 2023-02-15 975
7112cd26e606c7 Björn Töpel 2023-10-04 976 if (save_ret) {
7112cd26e606c7 Björn Töpel 2023-10-04 977 emit_sd(RV_REG_FP, -retval_off, RV_REG_A0, ctx);
7112cd26e606c7 Björn Töpel 2023-10-04 978 emit_sd(RV_REG_FP, -(retval_off - 8), regmap[BPF_REG_0], ctx);
7112cd26e606c7 Björn Töpel 2023-10-04 979 }
49b5e77ae3e214 Pu Lehui 2023-02-15 980
49b5e77ae3e214 Pu Lehui 2023-02-15 981 /* update branch with beqz */
49b5e77ae3e214 Pu Lehui 2023-02-15 982 if (ctx->insns) {
49b5e77ae3e214 Pu Lehui 2023-02-15 983 int offset = ninsns_rvoff(ctx->ninsns - branch_off);
49b5e77ae3e214 Pu Lehui 2023-02-15 984 u32 insn = rv_beq(RV_REG_A0, RV_REG_ZERO, offset >> 1);
49b5e77ae3e214 Pu Lehui 2023-02-15 985 *(u32 *)(ctx->insns + branch_off) = insn;
49b5e77ae3e214 Pu Lehui 2023-02-15 986 }
49b5e77ae3e214 Pu Lehui 2023-02-15 987
49b5e77ae3e214 Pu Lehui 2023-02-15 988 /* arg1: prog */
49b5e77ae3e214 Pu Lehui 2023-02-15 989 emit_imm(RV_REG_A0, (const s64)p, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 990 /* arg2: prog start time */
49b5e77ae3e214 Pu Lehui 2023-02-15 991 emit_mv(RV_REG_A1, RV_REG_S1, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 992 /* arg3: &run_ctx */
49b5e77ae3e214 Pu Lehui 2023-02-15 993 emit_addi(RV_REG_A2, RV_REG_FP, -run_ctx_off, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 994 ret = emit_call((const u64)bpf_trampoline_exit(p), true, ctx);
49b5e77ae3e214 Pu Lehui 2023-02-15 995
49b5e77ae3e214 Pu Lehui 2023-02-15 996 return ret;
49b5e77ae3e214 Pu Lehui 2023-02-15 997 }
49b5e77ae3e214 Pu Lehui 2023-02-15 998
35b3515be0ecb9 Menglong Dong 2026-02-08 999 static int invoke_bpf(struct bpf_tramp_links *tl, int args_off, int retval_off,
35b3515be0ecb9 Menglong Dong 2026-02-08 1000 int run_ctx_off, int func_meta_off, bool save_ret, u64 func_meta,
35b3515be0ecb9 Menglong Dong 2026-02-08 1001 int cookie_off, struct rv_jit_context *ctx)
35b3515be0ecb9 Menglong Dong 2026-02-08 1002 {
35b3515be0ecb9 Menglong Dong 2026-02-08 1003 int i, cur_cookie = (cookie_off - args_off) / 8;
35b3515be0ecb9 Menglong Dong 2026-02-08 1004
35b3515be0ecb9 Menglong Dong 2026-02-08 @1005 for (i = 0; i < tl->nr_links; i++) {
35b3515be0ecb9 Menglong Dong 2026-02-08 1006 int err;
35b3515be0ecb9 Menglong Dong 2026-02-08 1007
35b3515be0ecb9 Menglong Dong 2026-02-08 1008 if (bpf_prog_calls_session_cookie(tl->links[i])) {
35b3515be0ecb9 Menglong Dong 2026-02-08 1009 u64 meta = func_meta | ((u64)cur_cookie << BPF_TRAMP_COOKIE_INDEX_SHIFT);
35b3515be0ecb9 Menglong Dong 2026-02-08 1010
35b3515be0ecb9 Menglong Dong 2026-02-08 1011 emit_store_stack_imm64(RV_REG_T1, -func_meta_off, meta, ctx);
35b3515be0ecb9 Menglong Dong 2026-02-08 1012 cur_cookie--;
35b3515be0ecb9 Menglong Dong 2026-02-08 1013 }
35b3515be0ecb9 Menglong Dong 2026-02-08 1014 err = invoke_bpf_prog(tl->links[i], args_off, retval_off, run_ctx_off,
35b3515be0ecb9 Menglong Dong 2026-02-08 1015 save_ret, ctx);
35b3515be0ecb9 Menglong Dong 2026-02-08 1016 if (err)
35b3515be0ecb9 Menglong Dong 2026-02-08 1017 return err;
35b3515be0ecb9 Menglong Dong 2026-02-08 1018 }
35b3515be0ecb9 Menglong Dong 2026-02-08 1019 return 0;
35b3515be0ecb9 Menglong Dong 2026-02-08 1020 }
35b3515be0ecb9 Menglong Dong 2026-02-08 1021
* Re: [PATCH bpf-next 04/17] bpf: Add struct bpf_tramp_node object
2026-02-20 10:06 ` [PATCH bpf-next 04/17] bpf: Add struct bpf_tramp_node object Jiri Olsa
` (2 preceding siblings ...)
2026-02-20 21:05 ` kernel test robot
@ 2026-02-21 3:00 ` kernel test robot
3 siblings, 0 replies; 38+ messages in thread
From: kernel test robot @ 2026-02-21 3:00 UTC (permalink / raw)
To: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: oe-kbuild-all, bpf, linux-trace-kernel, Martin KaFai Lau,
Eduard Zingerman, Song Liu, Yonghong Song, Menglong Dong,
Steven Rostedt
Hi Jiri,
kernel test robot noticed the following build errors:
[auto build test ERROR on bpf-next/master]
url: https://github.com/intel-lab-lkp/linux/commits/Jiri-Olsa/ftrace-Add-ftrace_hash_count-function/20260220-181324
base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link: https://lore.kernel.org/r/20260220100649.628307-5-jolsa%40kernel.org
patch subject: [PATCH bpf-next 04/17] bpf: Add struct bpf_tramp_node object
config: riscv-randconfig-001-20260221 (https://download.01.org/0day-ci/archive/20260221/202602211023.EiuS4wkF-lkp@intel.com/config)
compiler: riscv64-linux-gcc (GCC) 8.5.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260221/202602211023.EiuS4wkF-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202602211023.EiuS4wkF-lkp@intel.com/
All errors (new ones prefixed by >>):
arch/riscv/net/bpf_jit_comp64.c: In function 'invoke_bpf_prog':
>> arch/riscv/net/bpf_jit_comp64.c:944:7: error: 'struct bpf_tramp_link' has no member named 'cookie'
if (l->cookie)
^~
arch/riscv/net/bpf_jit_comp64.c:945:65: error: 'struct bpf_tramp_link' has no member named 'cookie'
emit_store_stack_imm64(RV_REG_T1, -run_ctx_off + cookie_off, l->cookie, ctx);
^~
arch/riscv/net/bpf_jit_comp64.c: At top level:
arch/riscv/net/bpf_jit_comp64.c:999:30: warning: 'struct bpf_tramp_links' declared inside parameter list will not be visible outside of this definition or declaration
static int invoke_bpf(struct bpf_tramp_links *tl, int args_off, int retval_off,
^~~~~~~~~~~~~~~
arch/riscv/net/bpf_jit_comp64.c: In function 'invoke_bpf':
>> arch/riscv/net/bpf_jit_comp64.c:1005:20: error: dereferencing pointer to incomplete type 'struct bpf_tramp_links'
for (i = 0; i < tl->nr_links; i++) {
^~
arch/riscv/net/bpf_jit_comp64.c: At top level:
arch/riscv/net/bpf_jit_comp64.c:1024:14: warning: 'struct bpf_tramp_links' declared inside parameter list will not be visible outside of this definition or declaration
struct bpf_tramp_links *tlinks,
^~~~~~~~~~~~~~~
arch/riscv/net/bpf_jit_comp64.c: In function '__arch_prepare_bpf_trampoline':
>> arch/riscv/net/bpf_jit_comp64.c:1033:42: error: invalid use of undefined type 'struct bpf_tramp_links'
struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
^
arch/riscv/net/bpf_jit_comp64.c:1033:42: error: dereferencing pointer to incomplete type 'struct bpf_tramp_links'
arch/riscv/net/bpf_jit_comp64.c:1034:41: error: invalid use of undefined type 'struct bpf_tramp_links'
struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
^
arch/riscv/net/bpf_jit_comp64.c:1035:44: error: invalid use of undefined type 'struct bpf_tramp_links'
struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
^
arch/riscv/net/bpf_jit_comp64.c:1118:39: error: passing argument 1 of 'bpf_fsession_cookie_cnt' from incompatible pointer type [-Werror=incompatible-pointer-types]
cookie_cnt = bpf_fsession_cookie_cnt(tlinks);
^~~~~~
In file included from arch/riscv/net/bpf_jit_comp64.c:9:
include/linux/bpf.h:2207:67: note: expected 'struct bpf_tramp_nodes *' but argument is of type 'struct bpf_tramp_links *'
static inline int bpf_fsession_cookie_cnt(struct bpf_tramp_nodes *nodes)
~~~~~~~~~~~~~~~~~~~~~~~~^~~~~
arch/riscv/net/bpf_jit_comp64.c:1175:23: error: passing argument 1 of 'bpf_fsession_cnt' from incompatible pointer type [-Werror=incompatible-pointer-types]
if (bpf_fsession_cnt(tlinks)) {
^~~~~~
In file included from arch/riscv/net/bpf_jit_comp64.c:9:
include/linux/bpf.h:2189:60: note: expected 'struct bpf_tramp_nodes *' but argument is of type 'struct bpf_tramp_links *'
static inline int bpf_fsession_cnt(struct bpf_tramp_nodes *nodes)
~~~~~~~~~~~~~~~~~~~~~~~~^~~~~
arch/riscv/net/bpf_jit_comp64.c:1191:20: error: passing argument 1 of 'invoke_bpf' from incompatible pointer type [-Werror=incompatible-pointer-types]
ret = invoke_bpf(fentry, args_off, retval_off, run_ctx_off, func_meta_off,
^~~~~~
arch/riscv/net/bpf_jit_comp64.c:999:47: note: expected 'struct bpf_tramp_links *' but argument is of type 'struct bpf_tramp_links *'
static int invoke_bpf(struct bpf_tramp_links *tl, int args_off, int retval_off,
~~~~~~~~~~~~~~~~~~~~~~~~^~
arch/riscv/net/bpf_jit_comp64.c:1198:16: warning: assignment to 'int *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
branches_off = kcalloc(fmod_ret->nr_links, sizeof(int), GFP_KERNEL);
^
arch/riscv/net/bpf_jit_comp64.c:1241:23: error: passing argument 1 of 'bpf_fsession_cnt' from incompatible pointer type [-Werror=incompatible-pointer-types]
if (bpf_fsession_cnt(tlinks))
^~~~~~
In file included from arch/riscv/net/bpf_jit_comp64.c:9:
include/linux/bpf.h:2189:60: note: expected 'struct bpf_tramp_nodes *' but argument is of type 'struct bpf_tramp_links *'
static inline int bpf_fsession_cnt(struct bpf_tramp_nodes *nodes)
~~~~~~~~~~~~~~~~~~~~~~~~^~~~~
arch/riscv/net/bpf_jit_comp64.c:1245:20: error: passing argument 1 of 'invoke_bpf' from incompatible pointer type [-Werror=incompatible-pointer-types]
ret = invoke_bpf(fexit, args_off, retval_off, run_ctx_off, func_meta_off,
^~~~~
arch/riscv/net/bpf_jit_comp64.c:999:47: note: expected 'struct bpf_tramp_links *' but argument is of type 'struct bpf_tramp_links *'
static int invoke_bpf(struct bpf_tramp_links *tl, int args_off, int retval_off,
~~~~~~~~~~~~~~~~~~~~~~~~^~
arch/riscv/net/bpf_jit_comp64.c: At top level:
arch/riscv/net/bpf_jit_comp64.c:1308:16: warning: 'struct bpf_tramp_links' declared inside parameter list will not be visible outside of this definition or declaration
struct bpf_tramp_links *tlinks, void *func_addr)
^~~~~~~~~~~~~~~
arch/riscv/net/bpf_jit_comp64.c:1307:5: error: conflicting types for 'arch_bpf_trampoline_size'
int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
^~~~~~~~~~~~~~~~~~~~~~~~
In file included from arch/riscv/net/bpf_jit_comp64.c:9:
include/linux/bpf.h:1271:5: note: previous declaration of 'arch_bpf_trampoline_size' was here
int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
^~~~~~~~~~~~~~~~~~~~~~~~
arch/riscv/net/bpf_jit_comp64.c: In function 'arch_bpf_trampoline_size':
arch/riscv/net/bpf_jit_comp64.c:1317:46: error: passing argument 3 of '__arch_prepare_bpf_trampoline' from incompatible pointer type [-Werror=incompatible-pointer-types]
ret = __arch_prepare_bpf_trampoline(&im, m, tlinks, func_addr, flags, &ctx);
^~~~~~
arch/riscv/net/bpf_jit_comp64.c:1024:31: note: expected 'struct bpf_tramp_links *' but argument is of type 'struct bpf_tramp_links *'
struct bpf_tramp_links *tlinks,
~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~
arch/riscv/net/bpf_jit_comp64.c: At top level:
arch/riscv/net/bpf_jit_comp64.c:1334:23: warning: 'struct bpf_tramp_links' declared inside parameter list will not be visible outside of this definition or declaration
u32 flags, struct bpf_tramp_links *tlinks,
^~~~~~~~~~~~~~~
arch/riscv/net/bpf_jit_comp64.c:1332:5: error: conflicting types for 'arch_prepare_bpf_trampoline'
int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *ro_image,
^~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from arch/riscv/net/bpf_jit_comp64.c:9:
include/linux/bpf.h:1264:5: note: previous declaration of 'arch_prepare_bpf_trampoline' was here
int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *image_end,
^~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/riscv/net/bpf_jit_comp64.c: In function 'arch_prepare_bpf_trampoline':
arch/riscv/net/bpf_jit_comp64.c:1349:45: error: passing argument 3 of '__arch_prepare_bpf_trampoline' from incompatible pointer type [-Werror=incompatible-pointer-types]
ret = __arch_prepare_bpf_trampoline(im, m, tlinks, func_addr, flags, &ctx);
^~~~~~
arch/riscv/net/bpf_jit_comp64.c:1024:31: note: expected 'struct bpf_tramp_links *' but argument is of type 'struct bpf_tramp_links *'
struct bpf_tramp_links *tlinks,
~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~
cc1: some warnings being treated as errors
vim +944 arch/riscv/net/bpf_jit_comp64.c
93fd420d71beed5 Menglong Dong 2026-02-08 936
49b5e77ae3e214a Pu Lehui 2023-02-15 937 static int invoke_bpf_prog(struct bpf_tramp_link *l, int args_off, int retval_off,
49b5e77ae3e214a Pu Lehui 2023-02-15 938 int run_ctx_off, bool save_ret, struct rv_jit_context *ctx)
49b5e77ae3e214a Pu Lehui 2023-02-15 939 {
49b5e77ae3e214a Pu Lehui 2023-02-15 940 int ret, branch_off;
49b5e77ae3e214a Pu Lehui 2023-02-15 941 struct bpf_prog *p = l->link.prog;
49b5e77ae3e214a Pu Lehui 2023-02-15 942 int cookie_off = offsetof(struct bpf_tramp_run_ctx, bpf_cookie);
49b5e77ae3e214a Pu Lehui 2023-02-15 943
93fd420d71beed5 Menglong Dong 2026-02-08 @944 if (l->cookie)
93fd420d71beed5 Menglong Dong 2026-02-08 945 emit_store_stack_imm64(RV_REG_T1, -run_ctx_off + cookie_off, l->cookie, ctx);
93fd420d71beed5 Menglong Dong 2026-02-08 946 else
49b5e77ae3e214a Pu Lehui 2023-02-15 947 emit_sd(RV_REG_FP, -run_ctx_off + cookie_off, RV_REG_ZERO, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 948
49b5e77ae3e214a Pu Lehui 2023-02-15 949 /* arg1: prog */
49b5e77ae3e214a Pu Lehui 2023-02-15 950 emit_imm(RV_REG_A0, (const s64)p, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 951 /* arg2: &run_ctx */
49b5e77ae3e214a Pu Lehui 2023-02-15 952 emit_addi(RV_REG_A1, RV_REG_FP, -run_ctx_off, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 953 ret = emit_call((const u64)bpf_trampoline_enter(p), true, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 954 if (ret)
49b5e77ae3e214a Pu Lehui 2023-02-15 955 return ret;
49b5e77ae3e214a Pu Lehui 2023-02-15 956
10541b374aa05c8 Xu Kuohai 2024-04-16 957 /* store prog start time */
10541b374aa05c8 Xu Kuohai 2024-04-16 958 emit_mv(RV_REG_S1, RV_REG_A0, ctx);
10541b374aa05c8 Xu Kuohai 2024-04-16 959
49b5e77ae3e214a Pu Lehui 2023-02-15 960 /* if (__bpf_prog_enter(prog) == 0)
49b5e77ae3e214a Pu Lehui 2023-02-15 961 * goto skip_exec_of_prog;
49b5e77ae3e214a Pu Lehui 2023-02-15 962 */
49b5e77ae3e214a Pu Lehui 2023-02-15 963 branch_off = ctx->ninsns;
49b5e77ae3e214a Pu Lehui 2023-02-15 964 /* nop reserved for conditional jump */
49b5e77ae3e214a Pu Lehui 2023-02-15 965 emit(rv_nop(), ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 966
49b5e77ae3e214a Pu Lehui 2023-02-15 967 /* arg1: &args_off */
49b5e77ae3e214a Pu Lehui 2023-02-15 968 emit_addi(RV_REG_A0, RV_REG_FP, -args_off, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 969 if (!p->jited)
49b5e77ae3e214a Pu Lehui 2023-02-15 970 /* arg2: progs[i]->insnsi for interpreter */
49b5e77ae3e214a Pu Lehui 2023-02-15 971 emit_imm(RV_REG_A1, (const s64)p->insnsi, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 972 ret = emit_call((const u64)p->bpf_func, true, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 973 if (ret)
49b5e77ae3e214a Pu Lehui 2023-02-15 974 return ret;
49b5e77ae3e214a Pu Lehui 2023-02-15 975
7112cd26e606c7b Björn Töpel 2023-10-04 976 if (save_ret) {
7112cd26e606c7b Björn Töpel 2023-10-04 977 emit_sd(RV_REG_FP, -retval_off, RV_REG_A0, ctx);
7112cd26e606c7b Björn Töpel 2023-10-04 978 emit_sd(RV_REG_FP, -(retval_off - 8), regmap[BPF_REG_0], ctx);
7112cd26e606c7b Björn Töpel 2023-10-04 979 }
49b5e77ae3e214a Pu Lehui 2023-02-15 980
49b5e77ae3e214a Pu Lehui 2023-02-15 981 /* update branch with beqz */
49b5e77ae3e214a Pu Lehui 2023-02-15 982 if (ctx->insns) {
49b5e77ae3e214a Pu Lehui 2023-02-15 983 int offset = ninsns_rvoff(ctx->ninsns - branch_off);
49b5e77ae3e214a Pu Lehui 2023-02-15 984 u32 insn = rv_beq(RV_REG_A0, RV_REG_ZERO, offset >> 1);
49b5e77ae3e214a Pu Lehui 2023-02-15 985 *(u32 *)(ctx->insns + branch_off) = insn;
49b5e77ae3e214a Pu Lehui 2023-02-15 986 }
49b5e77ae3e214a Pu Lehui 2023-02-15 987
49b5e77ae3e214a Pu Lehui 2023-02-15 988 /* arg1: prog */
49b5e77ae3e214a Pu Lehui 2023-02-15 989 emit_imm(RV_REG_A0, (const s64)p, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 990 /* arg2: prog start time */
49b5e77ae3e214a Pu Lehui 2023-02-15 991 emit_mv(RV_REG_A1, RV_REG_S1, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 992 /* arg3: &run_ctx */
49b5e77ae3e214a Pu Lehui 2023-02-15 993 emit_addi(RV_REG_A2, RV_REG_FP, -run_ctx_off, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 994 ret = emit_call((const u64)bpf_trampoline_exit(p), true, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 995
49b5e77ae3e214a Pu Lehui 2023-02-15 996 return ret;
49b5e77ae3e214a Pu Lehui 2023-02-15 997 }
49b5e77ae3e214a Pu Lehui 2023-02-15 998
35b3515be0ecb9d Menglong Dong 2026-02-08 999 static int invoke_bpf(struct bpf_tramp_links *tl, int args_off, int retval_off,
35b3515be0ecb9d Menglong Dong 2026-02-08 1000 int run_ctx_off, int func_meta_off, bool save_ret, u64 func_meta,
35b3515be0ecb9d Menglong Dong 2026-02-08 1001 int cookie_off, struct rv_jit_context *ctx)
35b3515be0ecb9d Menglong Dong 2026-02-08 1002 {
35b3515be0ecb9d Menglong Dong 2026-02-08 1003 int i, cur_cookie = (cookie_off - args_off) / 8;
35b3515be0ecb9d Menglong Dong 2026-02-08 1004
35b3515be0ecb9d Menglong Dong 2026-02-08 @1005 for (i = 0; i < tl->nr_links; i++) {
35b3515be0ecb9d Menglong Dong 2026-02-08 1006 int err;
35b3515be0ecb9d Menglong Dong 2026-02-08 1007
35b3515be0ecb9d Menglong Dong 2026-02-08 1008 if (bpf_prog_calls_session_cookie(tl->links[i])) {
35b3515be0ecb9d Menglong Dong 2026-02-08 1009 u64 meta = func_meta | ((u64)cur_cookie << BPF_TRAMP_COOKIE_INDEX_SHIFT);
35b3515be0ecb9d Menglong Dong 2026-02-08 1010
35b3515be0ecb9d Menglong Dong 2026-02-08 1011 emit_store_stack_imm64(RV_REG_T1, -func_meta_off, meta, ctx);
35b3515be0ecb9d Menglong Dong 2026-02-08 1012 cur_cookie--;
35b3515be0ecb9d Menglong Dong 2026-02-08 1013 }
35b3515be0ecb9d Menglong Dong 2026-02-08 1014 err = invoke_bpf_prog(tl->links[i], args_off, retval_off, run_ctx_off,
35b3515be0ecb9d Menglong Dong 2026-02-08 1015 save_ret, ctx);
35b3515be0ecb9d Menglong Dong 2026-02-08 1016 if (err)
35b3515be0ecb9d Menglong Dong 2026-02-08 1017 return err;
35b3515be0ecb9d Menglong Dong 2026-02-08 1018 }
35b3515be0ecb9d Menglong Dong 2026-02-08 1019 return 0;
35b3515be0ecb9d Menglong Dong 2026-02-08 1020 }
35b3515be0ecb9d Menglong Dong 2026-02-08 1021
49b5e77ae3e214a Pu Lehui 2023-02-15 1022 static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
49b5e77ae3e214a Pu Lehui 2023-02-15 1023 const struct btf_func_model *m,
49b5e77ae3e214a Pu Lehui 2023-02-15 1024 struct bpf_tramp_links *tlinks,
49b5e77ae3e214a Pu Lehui 2023-02-15 1025 void *func_addr, u32 flags,
49b5e77ae3e214a Pu Lehui 2023-02-15 1026 struct rv_jit_context *ctx)
49b5e77ae3e214a Pu Lehui 2023-02-15 1027 {
49b5e77ae3e214a Pu Lehui 2023-02-15 1028 int i, ret, offset;
49b5e77ae3e214a Pu Lehui 2023-02-15 1029 int *branches_off = NULL;
6801b0aef79db47 Pu Lehui 2024-07-02 1030 int stack_size = 0, nr_arg_slots = 0;
35b3515be0ecb9d Menglong Dong 2026-02-08 1031 int retval_off, args_off, func_meta_off, ip_off, run_ctx_off, sreg_off, stk_arg_off;
35b3515be0ecb9d Menglong Dong 2026-02-08 1032 int cookie_off, cookie_cnt;
49b5e77ae3e214a Pu Lehui 2023-02-15 @1033 struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
49b5e77ae3e214a Pu Lehui 2023-02-15 1034 struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
49b5e77ae3e214a Pu Lehui 2023-02-15 1035 struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
1732ebc4a26181c Pu Lehui 2024-01-23 1036 bool is_struct_ops = flags & BPF_TRAMP_F_INDIRECT;
49b5e77ae3e214a Pu Lehui 2023-02-15 1037 void *orig_call = func_addr;
49b5e77ae3e214a Pu Lehui 2023-02-15 1038 bool save_ret;
35b3515be0ecb9d Menglong Dong 2026-02-08 1039 u64 func_meta;
49b5e77ae3e214a Pu Lehui 2023-02-15 1040 u32 insn;
49b5e77ae3e214a Pu Lehui 2023-02-15 1041
25ad10658dc1068 Pu Lehui 2023-07-21 1042 /* Two types of generated trampoline stack layout:
25ad10658dc1068 Pu Lehui 2023-07-21 1043 *
25ad10658dc1068 Pu Lehui 2023-07-21 1044 * 1. trampoline called from function entry
25ad10658dc1068 Pu Lehui 2023-07-21 1045 * --------------------------------------
25ad10658dc1068 Pu Lehui 2023-07-21 1046 * FP + 8 [ RA to parent func ] return address to parent
25ad10658dc1068 Pu Lehui 2023-07-21 1047 * function
25ad10658dc1068 Pu Lehui 2023-07-21 1048 * FP + 0 [ FP of parent func ] frame pointer of parent
25ad10658dc1068 Pu Lehui 2023-07-21 1049 * function
25ad10658dc1068 Pu Lehui 2023-07-21 1050 * FP - 8 [ T0 to traced func ] return address of traced
25ad10658dc1068 Pu Lehui 2023-07-21 1051 * function
25ad10658dc1068 Pu Lehui 2023-07-21 1052 * FP - 16 [ FP of traced func ] frame pointer of traced
25ad10658dc1068 Pu Lehui 2023-07-21 1053 * function
25ad10658dc1068 Pu Lehui 2023-07-21 1054 * --------------------------------------
49b5e77ae3e214a Pu Lehui 2023-02-15 1055 *
25ad10658dc1068 Pu Lehui 2023-07-21 1056 * 2. trampoline called directly
25ad10658dc1068 Pu Lehui 2023-07-21 1057 * --------------------------------------
25ad10658dc1068 Pu Lehui 2023-07-21 1058 * FP - 8 [ RA to caller func ] return address to caller
49b5e77ae3e214a Pu Lehui 2023-02-15 1059 * function
25ad10658dc1068 Pu Lehui 2023-07-21 1060 * FP - 16 [ FP of caller func ] frame pointer of caller
49b5e77ae3e214a Pu Lehui 2023-02-15 1061 * function
25ad10658dc1068 Pu Lehui 2023-07-21 1062 * --------------------------------------
49b5e77ae3e214a Pu Lehui 2023-02-15 1063 *
49b5e77ae3e214a Pu Lehui 2023-02-15 1064 * FP - retval_off [ return value ] BPF_TRAMP_F_CALL_ORIG or
49b5e77ae3e214a Pu Lehui 2023-02-15 1065 * BPF_TRAMP_F_RET_FENTRY_RET
49b5e77ae3e214a Pu Lehui 2023-02-15 1066 * [ argN ]
49b5e77ae3e214a Pu Lehui 2023-02-15 1067 * [ ... ]
49b5e77ae3e214a Pu Lehui 2023-02-15 1068 * FP - args_off [ arg1 ]
49b5e77ae3e214a Pu Lehui 2023-02-15 1069 *
35b3515be0ecb9d Menglong Dong 2026-02-08 1070 * FP - func_meta_off [ regs count, etc ]
49b5e77ae3e214a Pu Lehui 2023-02-15 1071 *
49b5e77ae3e214a Pu Lehui 2023-02-15 1072 * FP - ip_off [ traced func ] BPF_TRAMP_F_IP_ARG
49b5e77ae3e214a Pu Lehui 2023-02-15 1073 *
35b3515be0ecb9d Menglong Dong 2026-02-08 1074 * [ stack cookie N ]
35b3515be0ecb9d Menglong Dong 2026-02-08 1075 * [ ... ]
35b3515be0ecb9d Menglong Dong 2026-02-08 1076 * FP - cookie_off [ stack cookie 1 ]
35b3515be0ecb9d Menglong Dong 2026-02-08 1077 *
49b5e77ae3e214a Pu Lehui 2023-02-15 1078 * FP - run_ctx_off [ bpf_tramp_run_ctx ]
49b5e77ae3e214a Pu Lehui 2023-02-15 1079 *
49b5e77ae3e214a Pu Lehui 2023-02-15 1080 * FP - sreg_off [ callee saved reg ]
49b5e77ae3e214a Pu Lehui 2023-02-15 1081 *
49b5e77ae3e214a Pu Lehui 2023-02-15 1082 * [ pads ] pads for 16 bytes alignment
6801b0aef79db47 Pu Lehui 2024-07-02 1083 *
6801b0aef79db47 Pu Lehui 2024-07-02 1084 * [ stack_argN ]
6801b0aef79db47 Pu Lehui 2024-07-02 1085 * [ ... ]
6801b0aef79db47 Pu Lehui 2024-07-02 1086 * FP - stk_arg_off [ stack_arg1 ] BPF_TRAMP_F_CALL_ORIG
49b5e77ae3e214a Pu Lehui 2023-02-15 1087 */
49b5e77ae3e214a Pu Lehui 2023-02-15 1088
49b5e77ae3e214a Pu Lehui 2023-02-15 1089 if (flags & (BPF_TRAMP_F_ORIG_STACK | BPF_TRAMP_F_SHARE_IPMODIFY))
49b5e77ae3e214a Pu Lehui 2023-02-15 1090 return -ENOTSUPP;
49b5e77ae3e214a Pu Lehui 2023-02-15 1091
6801b0aef79db47 Pu Lehui 2024-07-02 1092 if (m->nr_args > MAX_BPF_FUNC_ARGS)
49b5e77ae3e214a Pu Lehui 2023-02-15 1093 return -ENOTSUPP;
49b5e77ae3e214a Pu Lehui 2023-02-15 1094
6801b0aef79db47 Pu Lehui 2024-07-02 1095 for (i = 0; i < m->nr_args; i++)
6801b0aef79db47 Pu Lehui 2024-07-02 1096 nr_arg_slots += round_up(m->arg_size[i], 8) / 8;
6801b0aef79db47 Pu Lehui 2024-07-02 1097
25ad10658dc1068 Pu Lehui 2023-07-21 1098 /* room of trampoline frame to store return address and frame pointer */
25ad10658dc1068 Pu Lehui 2023-07-21 1099 stack_size += 16;
49b5e77ae3e214a Pu Lehui 2023-02-15 1100
49b5e77ae3e214a Pu Lehui 2023-02-15 1101 save_ret = flags & (BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_RET_FENTRY_RET);
d0bf7cd5df18466 Chenghao Duan 2025-09-22 1102 if (save_ret)
7112cd26e606c7b Björn Töpel 2023-10-04 1103 stack_size += 16; /* Save both A5 (BPF R0) and A0 */
49b5e77ae3e214a Pu Lehui 2023-02-15 1104 retval_off = stack_size;
49b5e77ae3e214a Pu Lehui 2023-02-15 1105
6801b0aef79db47 Pu Lehui 2024-07-02 1106 stack_size += nr_arg_slots * 8;
49b5e77ae3e214a Pu Lehui 2023-02-15 1107 args_off = stack_size;
49b5e77ae3e214a Pu Lehui 2023-02-15 1108
35b3515be0ecb9d Menglong Dong 2026-02-08 1109 /* function metadata, such as regs count */
49b5e77ae3e214a Pu Lehui 2023-02-15 1110 stack_size += 8;
35b3515be0ecb9d Menglong Dong 2026-02-08 1111 func_meta_off = stack_size;
49b5e77ae3e214a Pu Lehui 2023-02-15 1112
49b5e77ae3e214a Pu Lehui 2023-02-15 1113 if (flags & BPF_TRAMP_F_IP_ARG) {
49b5e77ae3e214a Pu Lehui 2023-02-15 1114 stack_size += 8;
49b5e77ae3e214a Pu Lehui 2023-02-15 1115 ip_off = stack_size;
49b5e77ae3e214a Pu Lehui 2023-02-15 1116 }
49b5e77ae3e214a Pu Lehui 2023-02-15 1117
35b3515be0ecb9d Menglong Dong 2026-02-08 1118 cookie_cnt = bpf_fsession_cookie_cnt(tlinks);
35b3515be0ecb9d Menglong Dong 2026-02-08 1119 /* room for session cookies */
35b3515be0ecb9d Menglong Dong 2026-02-08 1120 stack_size += cookie_cnt * 8;
35b3515be0ecb9d Menglong Dong 2026-02-08 1121 cookie_off = stack_size;
35b3515be0ecb9d Menglong Dong 2026-02-08 1122
49b5e77ae3e214a Pu Lehui 2023-02-15 1123 stack_size += round_up(sizeof(struct bpf_tramp_run_ctx), 8);
49b5e77ae3e214a Pu Lehui 2023-02-15 1124 run_ctx_off = stack_size;
49b5e77ae3e214a Pu Lehui 2023-02-15 1125
49b5e77ae3e214a Pu Lehui 2023-02-15 1126 stack_size += 8;
49b5e77ae3e214a Pu Lehui 2023-02-15 1127 sreg_off = stack_size;
49b5e77ae3e214a Pu Lehui 2023-02-15 1128
a5912c37faf723c Puranjay Mohan 2024-07-08 1129 if ((flags & BPF_TRAMP_F_CALL_ORIG) && (nr_arg_slots - RV_MAX_REG_ARGS > 0))
6801b0aef79db47 Pu Lehui 2024-07-02 1130 stack_size += (nr_arg_slots - RV_MAX_REG_ARGS) * 8;
6801b0aef79db47 Pu Lehui 2024-07-02 1131
e944fc8152744a4 Xiao Wang 2024-05-23 1132 stack_size = round_up(stack_size, STACK_ALIGN);
49b5e77ae3e214a Pu Lehui 2023-02-15 1133
6801b0aef79db47 Pu Lehui 2024-07-02 1134 /* room for args on stack must be at the top of stack */
6801b0aef79db47 Pu Lehui 2024-07-02 1135 stk_arg_off = stack_size;
6801b0aef79db47 Pu Lehui 2024-07-02 1136
1732ebc4a26181c Pu Lehui 2024-01-23 1137 if (!is_struct_ops) {
25ad10658dc1068 Pu Lehui 2023-07-21 1138 /* For the trampoline called from function entry,
25ad10658dc1068 Pu Lehui 2023-07-21 1139 * the frame of traced function and the frame of
25ad10658dc1068 Pu Lehui 2023-07-21 1140 * trampoline need to be considered.
25ad10658dc1068 Pu Lehui 2023-07-21 1141 */
25ad10658dc1068 Pu Lehui 2023-07-21 1142 emit_addi(RV_REG_SP, RV_REG_SP, -16, ctx);
25ad10658dc1068 Pu Lehui 2023-07-21 1143 emit_sd(RV_REG_SP, 8, RV_REG_RA, ctx);
25ad10658dc1068 Pu Lehui 2023-07-21 1144 emit_sd(RV_REG_SP, 0, RV_REG_FP, ctx);
25ad10658dc1068 Pu Lehui 2023-07-21 1145 emit_addi(RV_REG_FP, RV_REG_SP, 16, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 1146
25ad10658dc1068 Pu Lehui 2023-07-21 1147 emit_addi(RV_REG_SP, RV_REG_SP, -stack_size, ctx);
25ad10658dc1068 Pu Lehui 2023-07-21 1148 emit_sd(RV_REG_SP, stack_size - 8, RV_REG_T0, ctx);
25ad10658dc1068 Pu Lehui 2023-07-21 1149 emit_sd(RV_REG_SP, stack_size - 16, RV_REG_FP, ctx);
25ad10658dc1068 Pu Lehui 2023-07-21 1150 emit_addi(RV_REG_FP, RV_REG_SP, stack_size, ctx);
25ad10658dc1068 Pu Lehui 2023-07-21 1151 } else {
e63985ecd22681c Puranjay Mohan 2024-03-03 1152 /* emit kcfi hash */
e63985ecd22681c Puranjay Mohan 2024-03-03 1153 emit_kcfi(cfi_get_func_hash(func_addr), ctx);
25ad10658dc1068 Pu Lehui 2023-07-21 1154 /* For the trampoline called directly, just handle
25ad10658dc1068 Pu Lehui 2023-07-21 1155 * the frame of trampoline.
25ad10658dc1068 Pu Lehui 2023-07-21 1156 */
25ad10658dc1068 Pu Lehui 2023-07-21 1157 emit_addi(RV_REG_SP, RV_REG_SP, -stack_size, ctx);
25ad10658dc1068 Pu Lehui 2023-07-21 1158 emit_sd(RV_REG_SP, stack_size - 8, RV_REG_RA, ctx);
25ad10658dc1068 Pu Lehui 2023-07-21 1159 emit_sd(RV_REG_SP, stack_size - 16, RV_REG_FP, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 1160 emit_addi(RV_REG_FP, RV_REG_SP, stack_size, ctx);
25ad10658dc1068 Pu Lehui 2023-07-21 1161 }
49b5e77ae3e214a Pu Lehui 2023-02-15 1162
49b5e77ae3e214a Pu Lehui 2023-02-15 1163 /* callee saved register S1 to pass start time */
49b5e77ae3e214a Pu Lehui 2023-02-15 1164 emit_sd(RV_REG_FP, -sreg_off, RV_REG_S1, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 1165
49b5e77ae3e214a Pu Lehui 2023-02-15 1166 /* store ip address of the traced function */
93fd420d71beed5 Menglong Dong 2026-02-08 1167 if (flags & BPF_TRAMP_F_IP_ARG)
93fd420d71beed5 Menglong Dong 2026-02-08 1168 emit_store_stack_imm64(RV_REG_T1, -ip_off, (u64)func_addr, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 1169
35b3515be0ecb9d Menglong Dong 2026-02-08 1170 func_meta = nr_arg_slots;
35b3515be0ecb9d Menglong Dong 2026-02-08 1171 emit_store_stack_imm64(RV_REG_T1, -func_meta_off, func_meta, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 1172
6801b0aef79db47 Pu Lehui 2024-07-02 1173 store_args(nr_arg_slots, args_off, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 1174
35b3515be0ecb9d Menglong Dong 2026-02-08 1175 if (bpf_fsession_cnt(tlinks)) {
35b3515be0ecb9d Menglong Dong 2026-02-08 1176 /* clear all session cookies' value */
35b3515be0ecb9d Menglong Dong 2026-02-08 1177 for (i = 0; i < cookie_cnt; i++)
35b3515be0ecb9d Menglong Dong 2026-02-08 1178 emit_sd(RV_REG_FP, -cookie_off + 8 * i, RV_REG_ZERO, ctx);
35b3515be0ecb9d Menglong Dong 2026-02-08 1179 /* clear return value to make sure fentry always get 0 */
35b3515be0ecb9d Menglong Dong 2026-02-08 1180 emit_sd(RV_REG_FP, -retval_off, RV_REG_ZERO, ctx);
35b3515be0ecb9d Menglong Dong 2026-02-08 1181 }
35b3515be0ecb9d Menglong Dong 2026-02-08 1182
49b5e77ae3e214a Pu Lehui 2023-02-15 1183 if (flags & BPF_TRAMP_F_CALL_ORIG) {
9f1e16fb1fc9826 Pu Lehui 2024-06-22 1184 emit_imm(RV_REG_A0, ctx->insns ? (const s64)im : RV_MAX_COUNT_IMM, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 1185 ret = emit_call((const u64)__bpf_tramp_enter, true, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 1186 if (ret)
49b5e77ae3e214a Pu Lehui 2023-02-15 1187 return ret;
49b5e77ae3e214a Pu Lehui 2023-02-15 1188 }
49b5e77ae3e214a Pu Lehui 2023-02-15 1189
35b3515be0ecb9d Menglong Dong 2026-02-08 1190 if (fentry->nr_links) {
35b3515be0ecb9d Menglong Dong 2026-02-08 1191 ret = invoke_bpf(fentry, args_off, retval_off, run_ctx_off, func_meta_off,
35b3515be0ecb9d Menglong Dong 2026-02-08 1192 flags & BPF_TRAMP_F_RET_FENTRY_RET, func_meta, cookie_off, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 1193 if (ret)
49b5e77ae3e214a Pu Lehui 2023-02-15 1194 return ret;
49b5e77ae3e214a Pu Lehui 2023-02-15 1195 }
49b5e77ae3e214a Pu Lehui 2023-02-15 1196
49b5e77ae3e214a Pu Lehui 2023-02-15 1197 if (fmod_ret->nr_links) {
49b5e77ae3e214a Pu Lehui 2023-02-15 1198 branches_off = kcalloc(fmod_ret->nr_links, sizeof(int), GFP_KERNEL);
49b5e77ae3e214a Pu Lehui 2023-02-15 1199 if (!branches_off)
49b5e77ae3e214a Pu Lehui 2023-02-15 1200 return -ENOMEM;
49b5e77ae3e214a Pu Lehui 2023-02-15 1201
49b5e77ae3e214a Pu Lehui 2023-02-15 1202 /* cleanup to avoid garbage return value confusion */
49b5e77ae3e214a Pu Lehui 2023-02-15 1203 emit_sd(RV_REG_FP, -retval_off, RV_REG_ZERO, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 1204 for (i = 0; i < fmod_ret->nr_links; i++) {
49b5e77ae3e214a Pu Lehui 2023-02-15 1205 ret = invoke_bpf_prog(fmod_ret->links[i], args_off, retval_off,
49b5e77ae3e214a Pu Lehui 2023-02-15 1206 run_ctx_off, true, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 1207 if (ret)
49b5e77ae3e214a Pu Lehui 2023-02-15 1208 goto out;
49b5e77ae3e214a Pu Lehui 2023-02-15 1209 emit_ld(RV_REG_T1, -retval_off, RV_REG_FP, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 1210 branches_off[i] = ctx->ninsns;
49b5e77ae3e214a Pu Lehui 2023-02-15 1211 /* nop reserved for conditional jump */
49b5e77ae3e214a Pu Lehui 2023-02-15 1212 emit(rv_nop(), ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 1213 }
49b5e77ae3e214a Pu Lehui 2023-02-15 1214 }
49b5e77ae3e214a Pu Lehui 2023-02-15 1215
49b5e77ae3e214a Pu Lehui 2023-02-15 1216 if (flags & BPF_TRAMP_F_CALL_ORIG) {
8f3e00af8e52c0d Menglong Dong 2025-12-19 1217 /* skip to actual body of traced function */
8f3e00af8e52c0d Menglong Dong 2025-12-19 1218 orig_call += RV_FENTRY_NINSNS * 4;
6801b0aef79db47 Pu Lehui 2024-07-02 1219 restore_args(min_t(int, nr_arg_slots, RV_MAX_REG_ARGS), args_off, ctx);
6801b0aef79db47 Pu Lehui 2024-07-02 1220 restore_stack_args(nr_arg_slots - RV_MAX_REG_ARGS, args_off, stk_arg_off, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 1221 ret = emit_call((const u64)orig_call, true, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 1222 if (ret)
49b5e77ae3e214a Pu Lehui 2023-02-15 1223 goto out;
49b5e77ae3e214a Pu Lehui 2023-02-15 1224 emit_sd(RV_REG_FP, -retval_off, RV_REG_A0, ctx);
7112cd26e606c7b Björn Töpel 2023-10-04 1225 emit_sd(RV_REG_FP, -(retval_off - 8), regmap[BPF_REG_0], ctx);
2382a405c581ae8 Pu Lehui 2024-06-22 1226 im->ip_after_call = ctx->ro_insns + ctx->ninsns;
49b5e77ae3e214a Pu Lehui 2023-02-15 1227 /* 2 nops reserved for auipc+jalr pair */
49b5e77ae3e214a Pu Lehui 2023-02-15 1228 emit(rv_nop(), ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 1229 emit(rv_nop(), ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 1230 }
49b5e77ae3e214a Pu Lehui 2023-02-15 1231
49b5e77ae3e214a Pu Lehui 2023-02-15 1232 /* update branches saved in invoke_bpf_mod_ret with bnez */
49b5e77ae3e214a Pu Lehui 2023-02-15 1233 for (i = 0; ctx->insns && i < fmod_ret->nr_links; i++) {
49b5e77ae3e214a Pu Lehui 2023-02-15 1234 offset = ninsns_rvoff(ctx->ninsns - branches_off[i]);
49b5e77ae3e214a Pu Lehui 2023-02-15 1235 insn = rv_bne(RV_REG_T1, RV_REG_ZERO, offset >> 1);
49b5e77ae3e214a Pu Lehui 2023-02-15 1236 *(u32 *)(ctx->insns + branches_off[i]) = insn;
49b5e77ae3e214a Pu Lehui 2023-02-15 1237 }
49b5e77ae3e214a Pu Lehui 2023-02-15 1238
35b3515be0ecb9d Menglong Dong 2026-02-08 1239 /* set "is_return" flag for fsession */
35b3515be0ecb9d Menglong Dong 2026-02-08 1240 func_meta |= (1ULL << BPF_TRAMP_IS_RETURN_SHIFT);
35b3515be0ecb9d Menglong Dong 2026-02-08 1241 if (bpf_fsession_cnt(tlinks))
35b3515be0ecb9d Menglong Dong 2026-02-08 1242 emit_store_stack_imm64(RV_REG_T1, -func_meta_off, func_meta, ctx);
35b3515be0ecb9d Menglong Dong 2026-02-08 1243
35b3515be0ecb9d Menglong Dong 2026-02-08 1244 if (fexit->nr_links) {
35b3515be0ecb9d Menglong Dong 2026-02-08 1245 ret = invoke_bpf(fexit, args_off, retval_off, run_ctx_off, func_meta_off,
35b3515be0ecb9d Menglong Dong 2026-02-08 1246 false, func_meta, cookie_off, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 1247 if (ret)
49b5e77ae3e214a Pu Lehui 2023-02-15 1248 goto out;
49b5e77ae3e214a Pu Lehui 2023-02-15 1249 }
49b5e77ae3e214a Pu Lehui 2023-02-15 1250
49b5e77ae3e214a Pu Lehui 2023-02-15 1251 if (flags & BPF_TRAMP_F_CALL_ORIG) {
2382a405c581ae8 Pu Lehui 2024-06-22 1252 im->ip_epilogue = ctx->ro_insns + ctx->ninsns;
9f1e16fb1fc9826 Pu Lehui 2024-06-22 1253 emit_imm(RV_REG_A0, ctx->insns ? (const s64)im : RV_MAX_COUNT_IMM, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 1254 ret = emit_call((const u64)__bpf_tramp_exit, true, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 1255 if (ret)
49b5e77ae3e214a Pu Lehui 2023-02-15 1256 goto out;
49b5e77ae3e214a Pu Lehui 2023-02-15 1257 }
49b5e77ae3e214a Pu Lehui 2023-02-15 1258
49b5e77ae3e214a Pu Lehui 2023-02-15 1259 if (flags & BPF_TRAMP_F_RESTORE_REGS)
6801b0aef79db47 Pu Lehui 2024-07-02 1260 restore_args(min_t(int, nr_arg_slots, RV_MAX_REG_ARGS), args_off, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 1261
7112cd26e606c7b Björn Töpel 2023-10-04 1262 if (save_ret) {
7112cd26e606c7b Björn Töpel 2023-10-04 1263 emit_ld(regmap[BPF_REG_0], -(retval_off - 8), RV_REG_FP, ctx);
fd2e08128944a76 Hengqi Chen 2025-09-08 1264 if (is_struct_ops) {
fd2e08128944a76 Hengqi Chen 2025-09-08 1265 ret = sign_extend(RV_REG_A0, regmap[BPF_REG_0], m->ret_size,
fd2e08128944a76 Hengqi Chen 2025-09-08 1266 m->ret_flags & BTF_FMODEL_SIGNED_ARG, ctx);
fd2e08128944a76 Hengqi Chen 2025-09-08 1267 if (ret)
fd2e08128944a76 Hengqi Chen 2025-09-08 1268 goto out;
fd2e08128944a76 Hengqi Chen 2025-09-08 1269 } else {
fd2e08128944a76 Hengqi Chen 2025-09-08 1270 emit_ld(RV_REG_A0, -retval_off, RV_REG_FP, ctx);
fd2e08128944a76 Hengqi Chen 2025-09-08 1271 }
7112cd26e606c7b Björn Töpel 2023-10-04 1272 }
49b5e77ae3e214a Pu Lehui 2023-02-15 1273
49b5e77ae3e214a Pu Lehui 2023-02-15 1274 emit_ld(RV_REG_S1, -sreg_off, RV_REG_FP, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 1275
1732ebc4a26181c Pu Lehui 2024-01-23 1276 if (!is_struct_ops) {
25ad10658dc1068 Pu Lehui 2023-07-21 1277 /* trampoline called from function entry */
25ad10658dc1068 Pu Lehui 2023-07-21 1278 emit_ld(RV_REG_T0, stack_size - 8, RV_REG_SP, ctx);
25ad10658dc1068 Pu Lehui 2023-07-21 1279 emit_ld(RV_REG_FP, stack_size - 16, RV_REG_SP, ctx);
25ad10658dc1068 Pu Lehui 2023-07-21 1280 emit_addi(RV_REG_SP, RV_REG_SP, stack_size, ctx);
25ad10658dc1068 Pu Lehui 2023-07-21 1281
25ad10658dc1068 Pu Lehui 2023-07-21 1282 emit_ld(RV_REG_RA, 8, RV_REG_SP, ctx);
25ad10658dc1068 Pu Lehui 2023-07-21 1283 emit_ld(RV_REG_FP, 0, RV_REG_SP, ctx);
25ad10658dc1068 Pu Lehui 2023-07-21 1284 emit_addi(RV_REG_SP, RV_REG_SP, 16, ctx);
25ad10658dc1068 Pu Lehui 2023-07-21 1285
49b5e77ae3e214a Pu Lehui 2023-02-15 1286 if (flags & BPF_TRAMP_F_SKIP_FRAME)
25ad10658dc1068 Pu Lehui 2023-07-21 1287 /* return to parent function */
25ad10658dc1068 Pu Lehui 2023-07-21 1288 emit_jalr(RV_REG_ZERO, RV_REG_RA, 0, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 1289 else
25ad10658dc1068 Pu Lehui 2023-07-21 1290 /* return to traced function */
25ad10658dc1068 Pu Lehui 2023-07-21 1291 emit_jalr(RV_REG_ZERO, RV_REG_T0, 0, ctx);
25ad10658dc1068 Pu Lehui 2023-07-21 1292 } else {
25ad10658dc1068 Pu Lehui 2023-07-21 1293 /* trampoline called directly */
25ad10658dc1068 Pu Lehui 2023-07-21 1294 emit_ld(RV_REG_RA, stack_size - 8, RV_REG_SP, ctx);
25ad10658dc1068 Pu Lehui 2023-07-21 1295 emit_ld(RV_REG_FP, stack_size - 16, RV_REG_SP, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 1296 emit_addi(RV_REG_SP, RV_REG_SP, stack_size, ctx);
49b5e77ae3e214a Pu Lehui 2023-02-15 1297
49b5e77ae3e214a Pu Lehui 2023-02-15 1298 emit_jalr(RV_REG_ZERO, RV_REG_RA, 0, ctx);
25ad10658dc1068 Pu Lehui 2023-07-21 1299 }
49b5e77ae3e214a Pu Lehui 2023-02-15 1300
49b5e77ae3e214a Pu Lehui 2023-02-15 1301 ret = ctx->ninsns;
49b5e77ae3e214a Pu Lehui 2023-02-15 1302 out:
49b5e77ae3e214a Pu Lehui 2023-02-15 1303 kfree(branches_off);
49b5e77ae3e214a Pu Lehui 2023-02-15 1304 return ret;
49b5e77ae3e214a Pu Lehui 2023-02-15 1305 }
49b5e77ae3e214a Pu Lehui 2023-02-15 1306
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH bpf-next 02/17] bpf: Use mutex lock pool for bpf trampolines
2026-02-20 10:57 ` bot+bpf-ci
@ 2026-02-22 14:33 ` Jiri Olsa
0 siblings, 0 replies; 38+ messages in thread
From: Jiri Olsa @ 2026-02-22 14:33 UTC (permalink / raw)
To: bot+bpf-ci
Cc: ast, daniel, andrii, bpf, linux-trace-kernel, kafai, eddyz87,
songliubraving, yhs, menglong8.dong, rostedt, martin.lau,
yonghong.song, clm, ihor.solodrai
On Fri, Feb 20, 2026 at 10:57:51AM +0000, bot+bpf-ci@kernel.org wrote:
> > diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> > --- a/kernel/bpf/trampoline.c
> > +++ b/kernel/bpf/trampoline.c
>
> [ ... ]
>
> > @@ -71,7 +110,7 @@ static int bpf_tramp_ftrace_ops_func(struct ftrace_ops *ops, unsigned long ip,
> > /* This is called inside register_ftrace_direct_multi(), so
> > * tr->mutex is already locked.
> > */
> > - lockdep_assert_held_once(&tr->mutex);
> > + lockdep_assert_held_once(trampoline_locks_lookup(tr));
>
> This comment still references tr->mutex, but the field was removed from
> struct bpf_trampoline by this patch. Should the comment say the
> trampoline pool lock is already held instead?
yes, I overlooked the comments..
>
> [ ... ]
>
> > @@ -102,7 +141,7 @@ static int bpf_tramp_ftrace_ops_func(struct ftrace_ops *ops, unsigned long ip,
> > /* The normal locking order is
> > * tr->mutex => direct_mutex (ftrace.c) => ftrace_lock (ftrace.c)
> > *
> > * The following two commands are called from
> > *
> > * prepare_direct_functions_for_ipmodify
> > * cleanup_direct_functions_after_ipmodify
> > *
> > * In both cases, direct_mutex is already locked. Use
> > * mutex_trylock(&tr->mutex) to avoid deadlock in race condition
> > * (something else is making changes to this same trampoline).
> > */
> > - if (!mutex_trylock(&tr->mutex)) {
> > + if (!mutex_trylock(trampoline_locks_lookup(tr))) {
> > /* sleep 1 ms to make sure whatever holding tr->mutex makes
> > * some progress.
> > */
>
> The locking order comment and the two references to tr->mutex in this
> block also appear to need updating. The locking order is now pool lock
> rather than tr->mutex, and the trylock and sleep comments still name the
> removed field.
yes, will fix
>
> [ ... ]
>
> > @@ -1026,11 +1065,11 @@ struct bpf_attach_target_info tgt_info = {};
> > /* note, we're still holding tr refcnt from above */
> >
> > - mutex_unlock(&tr->mutex);
> > + trampoline_unlock(tr);
> >
> > return 0;
> > err:
> > - mutex_unlock(&tr->mutex);
> > + trampoline_unlock(tr);
> >
> > if (shim_link)
> > bpf_link_put(&shim_link->link.link);
> >
> > /* have to release tr while _not_ holding its mutex */
> > bpf_trampoline_put(tr); /* bpf_trampoline_get above */
>
> Similarly, this comment in bpf_trampoline_link_cgroup_shim() references
> "its mutex" which no longer exists as a per-trampoline field.
right, will fix
thanks,
jirka
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH bpf-next 02/17] bpf: Use mutex lock pool for bpf trampolines
2026-02-20 19:58 ` Alexei Starovoitov
@ 2026-02-22 14:34 ` Jiri Olsa
2026-02-23 19:35 ` Alexei Starovoitov
0 siblings, 1 reply; 38+ messages in thread
From: Jiri Olsa @ 2026-02-22 14:34 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
Yonghong Song, Menglong Dong, Steven Rostedt
On Fri, Feb 20, 2026 at 11:58:13AM -0800, Alexei Starovoitov wrote:
> On Fri, Feb 20, 2026 at 2:07 AM Jiri Olsa <jolsa@kernel.org> wrote:
> >
> > Adding mutex lock pool that replaces bpf trampolines mutex.
> >
> > For tracing_multi link coming in following changes we need to lock all
> > the involved trampolines during the attachment. This could mean thousands
> > of mutex locks, which is not convenient.
> >
> > As suggested by Andrii we can replace bpf trampolines mutex with mutex
> > pool, where each trampoline is hash-ed to one of the locks from the pool.
> >
> > It's better to lock all the pool mutexes (64 at the moment) than
> > thousands of them.
> >
> > Removing the mutex_is_locked in bpf_trampoline_put, because we removed
> > the mutex from bpf_trampoline.
> >
> > Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> > include/linux/bpf.h | 2 --
> > kernel/bpf/trampoline.c | 74 +++++++++++++++++++++++++++++++----------
> > 2 files changed, 56 insertions(+), 20 deletions(-)
> >
> > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > index cd9b96434904..46bf3d86bdb2 100644
> > --- a/include/linux/bpf.h
> > +++ b/include/linux/bpf.h
> > @@ -1335,8 +1335,6 @@ struct bpf_trampoline {
> > /* hlist for trampoline_ip_table */
> > struct hlist_node hlist_ip;
> > struct ftrace_ops *fops;
> > - /* serializes access to fields of this trampoline */
> > - struct mutex mutex;
> > refcount_t refcnt;
> > u32 flags;
> > u64 key;
> > diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> > index 952cd7932461..05dc0358654d 100644
> > --- a/kernel/bpf/trampoline.c
> > +++ b/kernel/bpf/trampoline.c
> > @@ -30,6 +30,45 @@ static struct hlist_head trampoline_ip_table[TRAMPOLINE_TABLE_SIZE];
> > /* serializes access to trampoline tables */
> > static DEFINE_MUTEX(trampoline_mutex);
> >
> > +#define TRAMPOLINE_LOCKS_BITS 6
> > +#define TRAMPOLINE_LOCKS_TABLE_SIZE (1 << TRAMPOLINE_LOCKS_BITS)
> > +
> > +static struct {
> > + struct mutex mutex;
> > + struct lock_class_key key;
> > +} *trampoline_locks;
> > +
> > +static struct mutex *trampoline_locks_lookup(struct bpf_trampoline *tr)
>
> select_trampoline_lock() ?
ok
>
> > +{
> > + return &trampoline_locks[hash_64((u64) tr, TRAMPOLINE_LOCKS_BITS)].mutex;
> > +}
> > +
> > +static void trampoline_lock(struct bpf_trampoline *tr)
> > +{
> > + mutex_lock(trampoline_locks_lookup(tr));
> > +}
> > +
> > +static void trampoline_unlock(struct bpf_trampoline *tr)
> > +{
> > + mutex_unlock(trampoline_locks_lookup(tr));
> > +}
> > +
> > +static int __init trampoline_locks_init(void)
> > +{
> > + int i;
> > +
> > + trampoline_locks = kmalloc_array(TRAMPOLINE_LOCKS_TABLE_SIZE,
> > + sizeof(trampoline_locks[0]), GFP_KERNEL);
>
> why bother with memory allocation? This is just 64 mutexes.
ok, I could probably use __mutex_init directly for static key
about 64.. not sure how I missed that, but there's a lockdep limit
for the maximum lock depth and it's 48.. so we'll need to use 32 locks,
which is probably still ok
>
> > + if (!trampoline_locks)
> > + return -ENOMEM;
> > +
> > + for (i = 0; i < TRAMPOLINE_LOCKS_TABLE_SIZE; i++) {
> > + lockdep_register_key(&trampoline_locks[i].key);
>
> why special key?
if we keep a single key we will get a lockdep 'recursive locking' warning
during bpf_trampoline_multi_attach, because lockdep will think we are
locking the same mutex
there's support to annotate nested locking with mutex_lock_nested, but
it allows a maximum of 8 nested instances
jirka
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH bpf-next 04/17] bpf: Add struct bpf_tramp_node object
2026-02-20 10:58 ` bot+bpf-ci
@ 2026-02-22 14:34 ` Jiri Olsa
0 siblings, 0 replies; 38+ messages in thread
From: Jiri Olsa @ 2026-02-22 14:34 UTC (permalink / raw)
To: bot+bpf-ci
Cc: ast, daniel, andrii, bpf, linux-trace-kernel, kafai, eddyz87,
songliubraving, yhs, menglong8.dong, rostedt, martin.lau,
yonghong.song, clm, ihor.solodrai
On Fri, Feb 20, 2026 at 10:58:00AM +0000, bot+bpf-ci@kernel.org wrote:
> > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > --- a/include/linux/bpf.h
> > +++ b/include/linux/bpf.h
>
> [ ... ]
>
> > +struct bpf_tramp_node {
> > + struct bpf_link *link;
> > + struct hlist_node tramp_hlist;
> > + u64 cookie;
> > +};
>
> During the RFC review, Andrii Nakryiko suggested that instead of
> co-locating a prog/link pointer in bpf_tramp_node, the JIT functions
> like invoke_bpf_prog() could accept the prog pointer as a separate
> argument, keeping bpf_tramp_node to just {tramp_hlist, cookie}:
>
> https://lore.kernel.org/bpf/CAEf4BzY=vqpnjrDNtLVvC3_nU5=Z7iROU_He=xbkEJDuGCoqug@mail.gmail.com/
>
> The RFC had `struct bpf_prog *prog` in bpf_tramp_node and this v1
> changed it to `struct bpf_link *link`, which still co-locates the
> path to the program inside the node. Was the refactoring to pass
> prog separately to invoke_bpf_prog() considered and ruled out, or
> is that planned for a follow-up?
yep, I ended up with a different change and mentioned it in the cover letter,
I'm open to discussing a different solution if it's needed
jirka
>
>
> ---
> AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
> See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
>
> CI run summary: https://github.com/kernel-patches/bpf/actions/runs/22220470181
>
> AI-authorship-score: low
> AI-authorship-explanation: Mechanical refactoring with lore history dating back to 2021, consistent with the author's known kernel development style and iterative review process.
> issues-found: 1
> issue-severity-score: low
> issue-severity-explanation: Unaddressed RFC review comment from maintainer about struct design; not a code bug but a design concern about co-locating link pointer in bpf_tramp_node.
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH bpf-next 07/17] bpf: Add bpf_trampoline_multi_attach/detach functions
2026-02-20 10:57 ` bot+bpf-ci
@ 2026-02-22 14:34 ` Jiri Olsa
0 siblings, 0 replies; 38+ messages in thread
From: Jiri Olsa @ 2026-02-22 14:34 UTC (permalink / raw)
To: bot+bpf-ci
Cc: ast, daniel, andrii, bpf, linux-trace-kernel, kafai, eddyz87,
songliubraving, yhs, menglong8.dong, rostedt, martin.lau,
yonghong.song, clm, ihor.solodrai
On Fri, Feb 20, 2026 at 10:57:54AM +0000, bot+bpf-ci@kernel.org wrote:
> > diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> > index 2d701bc6e1a5..c32205adfebe 100644
> > --- a/kernel/bpf/trampoline.c
> > +++ b/kernel/bpf/trampoline.c
>
> [ ... ]
>
> > +static int unregister_fentry_multi(struct bpf_trampoline *tr, u32 orig_flags, void *old_addr,
> > + void *ptr)
> > +{
> > + unsigned long addr = (unsigned long) old_addr;
> > + unsigned long ip = ftrace_location(tr->ip);
> > + struct fentry_multi_data *data = ptr;
> > +
> > + if (bpf_trampoline_use_jmp(tr->flags))
> > + addr = ftrace_jmp_set(addr);
> > + return add_ftrace_hash_entry_direct(data->unreg, ip, addr) ? 0 : -ENOMEM;
> > +}
>
> [ ... ]
>
> > +int bpf_trampoline_multi_attach(struct bpf_prog *prog, u32 *ids,
> > + struct bpf_tracing_multi_link *link)
> > +{
> > + struct bpf_attach_target_info tgt_info = {};
> > + struct bpf_tracing_multi_node *mnode;
> > + int j, i, err, cnt = link->nodes_cnt;
> > + struct fentry_multi_data data = {};
> > + struct bpf_trampoline *tr;
> > + u64 key;
> > +
> > + data.reg = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
> > + if (!data.reg)
> > + return -ENOMEM;
> > +
> > + data.modify = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
> > + if (!data.modify) {
> > + free_ftrace_hash(data.reg);
> > + return -ENOMEM;
> > + }
>
> data.unreg is left as NULL here. In the forward path this is
> fine since only data.reg and data.modify are used, but the
> rollback path can reach unregister_fentry_multi() which calls
> add_ftrace_hash_entry_direct(data->unreg, ...) with a NULL hash.
>
> This happens when rollback_unlink calls
> __bpf_trampoline_unlink_prog() on a trampoline that had no
> other BPF programs before this attach. Removing the only
> program brings total to 0, so bpf_trampoline_update_ops()
> calls ops->unregister_fentry, reaching:
>
> unregister_fentry_multi()
> add_ftrace_hash_entry_direct(data->unreg, ip, addr)
> __add_hash_entry(hash, entry)
> ftrace_hash_key(hash, ...) <-- NULL dereference
>
> Should data.unreg also be allocated here to cover the rollback
> path?
I knew I should have added rollback tests ;-) right, we need the unreg
hash as well.. it's on my todo, will add a fix and tests in the next version
>
> > +
> > + for (i = 0; i < cnt; i++) {
> > + mnode = &link->nodes[i];
> > + err = bpf_check_attach_target(NULL, prog, NULL, ids[i], &tgt_info);
> > + if (err)
> > + goto rollback_put;
> > +
> > + key = bpf_trampoline_compute_key(NULL, prog->aux->attach_btf, ids[i]);
> > +
> > + tr = bpf_trampoline_get(key, &tgt_info);
> > + if (!tr) {
> > + err = -ENOMEM;
> > + goto rollback_put;
> > + }
> > +
> > + mnode->trampoline = tr;
> > + mnode->node.link = &link->link;
> > + }
>
> After this loop completes, all cnt trampolines have been
> obtained via bpf_trampoline_get() with refcounts incremented.
>
> > +
> > + trampoline_lock_all();
> > +
> > + for (i = 0; i < cnt; i++) {
> > + mnode = &link->nodes[i];
> > + err = __bpf_trampoline_link_prog(&mnode->node, mnode->trampoline, NULL,
> > + &trampoline_multi_ops, &data);
> > + if (err)
> > + goto rollback_unlink;
> > + }
>
> [ ... ]
>
> > +rollback_unlink:
> > + for (j = 0; j < i; j++) {
> > + mnode = &link->nodes[j];
> > + WARN_ON_ONCE(__bpf_trampoline_unlink_prog(&mnode->node, mnode->trampoline,
> > + NULL, &trampoline_multi_ops, &data));
> > + }
> > + trampoline_unlock_all();
> > +
> > +rollback_put:
> > + for (j = 0; j < i; j++)
> > + bpf_trampoline_put(link->nodes[j].trampoline);
>
> When __bpf_trampoline_link_prog() fails at index i in the
> second loop, this falls through from rollback_unlink to
> rollback_put with i still set to the failing index. But all
> cnt trampolines were obtained in the first loop above.
>
> The rollback_put loop only puts trampolines 0..i-1, leaving
> trampolines i..cnt-1 with elevated refcounts that are never
> released.
>
> When rollback_put is reached directly from the first loop
> (bpf_trampoline_get or bpf_check_attach_target failure), the
> j < i bound is correct since only 0..i-1 were obtained.
>
> Could the rollback_unlink path set i = cnt before falling
> through, or use a separate loop bound for the put?
good catch, I think that's a good fix, thnx
jirka
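The loop-bound fix agreed on above can be sketched as a userspace simulation (all names and the refcount model are illustrative, not the kernel code): after the get loop completes, all cnt trampolines hold a reference, so when the link loop fails the put loop must run over 0..cnt-1, not 0..i-1. Setting i = cnt before falling through does exactly that:

```c
#include <assert.h>
#include <string.h>

#define CNT 4

/* refcnt[i] models the reference taken by bpf_trampoline_get() */
static int refcnt[CNT];
static int linked[CNT];

/* fail_get: index where the get loop fails (< 0 for none);
 * fail_link: index where the link loop fails (< 0 for none) */
static int attach(int fail_get, int fail_link)
{
	int i, j, err = 0, cnt = CNT;

	memset(refcnt, 0, sizeof(refcnt));
	memset(linked, 0, sizeof(linked));

	/* first loop: obtain all trampolines */
	for (i = 0; i < cnt; i++) {
		if (i == fail_get) {
			err = -1;
			goto rollback_put;
		}
		refcnt[i]++;		/* bpf_trampoline_get() */
	}

	/* second loop: link the program to each trampoline */
	for (i = 0; i < cnt; i++) {
		if (i == fail_link) {
			err = -1;
			goto rollback_unlink;
		}
		linked[i] = 1;
	}
	return 0;

rollback_unlink:
	for (j = 0; j < i; j++)
		linked[j] = 0;	/* __bpf_trampoline_unlink_prog() */
	/* the fix discussed above: all cnt trampolines were obtained
	 * in the first loop, so widen the bound before falling through */
	i = cnt;

rollback_put:
	for (j = 0; j < i; j++)
		refcnt[j]--;	/* bpf_trampoline_put() */
	return err;
}
```

Whether the failure happens in the get loop (where only 0..i-1 hold references) or in the link loop (where all cnt do), every reference taken is dropped exactly once.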
* Re: [PATCH bpf-next 08/17] bpf: Add support for tracing multi link
2026-02-20 10:57 ` bot+bpf-ci
@ 2026-02-22 14:35 ` Jiri Olsa
0 siblings, 0 replies; 38+ messages in thread
From: Jiri Olsa @ 2026-02-22 14:35 UTC (permalink / raw)
To: bot+bpf-ci
Cc: ast, daniel, andrii, bpf, linux-trace-kernel, kafai, eddyz87,
songliubraving, yhs, menglong8.dong, rostedt, martin.lau,
yonghong.song, clm, ihor.solodrai
On Fri, Feb 20, 2026 at 10:57:58AM +0000, bot+bpf-ci@kernel.org wrote:
> > diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
> > --- a/include/linux/bpf_types.h
> > +++ b/include/linux/bpf_types.h
>
> [ ... ]
>
> > +BPF_LINK_TYPE(BPF_LINK_TYPE_TRACING_MULTI, tracing)
>
> The existing BPF_LINK_TYPE_TRACING also uses "tracing" as its name
> argument:
>
> BPF_LINK_TYPE(BPF_LINK_TYPE_TRACING, tracing)
>
> This means bpf_link_type_strs[] will contain "tracing" for both
> BPF_LINK_TYPE_TRACING and BPF_LINK_TYPE_TRACING_MULTI, making them
> indistinguishable in fdinfo output via bpf_link_show_fdinfo().
>
> The libbpf side correctly uses "tracing_multi":
>
> [BPF_LINK_TYPE_TRACING_MULTI] = "tracing_multi",
>
> Should this be BPF_LINK_TYPE(BPF_LINK_TYPE_TRACING_MULTI, tracing_multi)
> to match?
it should, fixed
>
> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
>
> [ ... ]
>
> > +int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
> > +{
>
> [ ... ]
>
> > + ids = kvmalloc_array(cnt, sizeof(*ids), GFP_KERNEL);
>
> [ ... ]
>
> > + link = kzalloc(struct_size(link, nodes, cnt), GFP_KERNEL);
> ^^^^^^
>
> The ids array above uses kvmalloc_array() to handle large allocations,
> but the link struct uses kzalloc(). With MAX_TRACING_MULTI_CNT at 1M
> and each bpf_tracing_multi_node being roughly 40 bytes,
> struct_size() can request up to ~40MB of contiguous memory. kzalloc()
> is limited to KMALLOC_MAX_SIZE (~4MB on x86_64), so this will fail
> for cnt values above roughly 100K.
>
> Would kvzalloc() work here instead? bpf_tracing_multi_link_dealloc()
> would need to use kvfree() accordingly:
>
> > +static void bpf_tracing_multi_link_dealloc(struct bpf_link *link)
> > +{
> > + struct bpf_tracing_multi_link *tr_link =
> > + container_of(link, struct bpf_tracing_multi_link, link);
> > +
> > + kfree(tr_link);
yes, makes sense, will fix, thnx
jirka
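The arithmetic behind the allocator choice above can be checked directly. The sketch below is a userspace back-of-the-envelope calculation, assuming the approximate numbers quoted in the review (~40-byte nodes, a 4 MiB KMALLOC_MAX_SIZE on x86_64, a 64-byte link header); the macro names mirror the discussion and are not the kernel definitions:

```c
#include <assert.h>
#include <stddef.h>

#define NODE_SIZE		40UL		/* ~sizeof(struct bpf_tracing_multi_node) */
#define MAX_TRACING_MULTI_CNT	(1UL << 20)	/* 1M functions */
#define KMALLOC_MAX_SIZE	(1UL << 22)	/* ~4 MiB contiguous limit */

/* what struct_size(link, nodes, cnt) would compute, given a
 * hypothetical header size hdr for the flexible-array struct */
static size_t link_alloc_size(size_t hdr, size_t cnt)
{
	return hdr + cnt * NODE_SIZE;
}
```

At the maximum count the request is ~40 MiB, an order of magnitude over the kzalloc() limit, while ~100K functions still fits — which matches the review's observation that kzalloc() fails "for cnt values above roughly 100K" and motivates kvzalloc()/kvfree().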
* Re: [PATCH bpf-next 10/17] bpf: Add support for tracing_multi link session
2026-02-20 10:57 ` bot+bpf-ci
@ 2026-02-22 14:35 ` Jiri Olsa
0 siblings, 0 replies; 38+ messages in thread
From: Jiri Olsa @ 2026-02-22 14:35 UTC (permalink / raw)
To: bot+bpf-ci
Cc: ast, daniel, andrii, bpf, linux-trace-kernel, kafai, eddyz87,
songliubraving, yhs, menglong8.dong, rostedt, martin.lau,
yonghong.song, clm, ihor.solodrai
On Fri, Feb 20, 2026 at 10:57:56AM +0000, bot+bpf-ci@kernel.org wrote:
> > diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> > index 5892dca20b7e..1cd6c1457bd3 100644
> > --- a/kernel/bpf/syscall.c
> > +++ b/kernel/bpf/syscall.c
> > @@ -3611,6 +3611,7 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
> > if (prog->expected_attach_type != BPF_TRACE_FENTRY &&
> > prog->expected_attach_type != BPF_TRACE_FEXIT &&
> > prog->expected_attach_type != BPF_TRACE_FSESSION &&
> > + prog->expected_attach_type != BPF_TRACE_FSESSION_MULTI &&
> > prog->expected_attach_type != BPF_TRACE_FENTRY_MULTI &&
> > prog->expected_attach_type != BPF_TRACE_FEXIT_MULTI &&
> > prog->expected_attach_type != BPF_MODIFY_RETURN) {
>
> BPF_TRACE_FSESSION_MULTI is now accepted here, which means
> a program with this type can enter bpf_tracing_prog_attach()
> via BPF_RAW_TRACEPOINT_OPEN:
>
> bpf_raw_tracepoint_open()
> bpf_raw_tp_link_attach() /* name == NULL */
> bpf_tracing_prog_attach() /* BPF_TRACE_FSESSION_MULTI */
>
> Further down in bpf_tracing_prog_attach(), the fexit node
> initialization only checks for BPF_TRACE_FSESSION:
>
> kernel/bpf/syscall.c:bpf_tracing_prog_attach() {
> ...
> if (prog->expected_attach_type == BPF_TRACE_FSESSION) {
> link->fexit.link = &link->link.link;
> link->fexit.cookie = bpf_cookie;
> }
> ...
> }
>
> So for BPF_TRACE_FSESSION_MULTI, link->fexit.link stays NULL
> (from kzalloc). When __bpf_trampoline_link_prog() later calls
> fsession_exit(), it returns &link->fexit with a NULL link
> field. This node gets added to the trampoline FEXIT list, and
> bpf_trampoline_get_progs() then dereferences it:
>
> kernel/bpf/trampoline.c:bpf_trampoline_get_progs() {
> ...
> hlist_for_each_entry(node, &tr->progs_hlist[kind], tramp_hlist) {
> *ip_arg |= node->link->prog->call_get_func_ip;
> ^^^^^^^^^^
> ...
> }
>
> Would it make sense to either add BPF_TRACE_FSESSION_MULTI to
> the fexit initialization, or reject this type in
> bpf_tracing_prog_attach() since it should only be used through
> bpf_tracing_multi_attach()?
yes, I think we need to reject the MULTI types in bpf_raw_tracepoint_open,
they're allowed to attach only through the new multi_link attach api,
good catch, thnx
jirka
>
>
> ---
> AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
> See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
>
> CI run summary: https://github.com/kernel-patches/bpf/actions/runs/22220470181
>
> AI-authorship-score: low
> AI-authorship-explanation: Code follows established kernel BPF patterns for adding new attach types, written by a well-known BPF maintainer with terse, practical commit message style.
> issues-found: 1
> issue-severity-score: high
> issue-severity-explanation: NULL pointer dereference reachable from BPF_RAW_TRACEPOINT_OPEN syscall when BPF_TRACE_FSESSION_MULTI enters bpf_tracing_prog_attach without fexit initialization, leading to kernel crash.
* Re: [PATCH bpf-next 11/17] libbpf: Add support to create tracing multi link
2026-02-20 10:57 ` bot+bpf-ci
@ 2026-02-22 14:36 ` Jiri Olsa
0 siblings, 0 replies; 38+ messages in thread
From: Jiri Olsa @ 2026-02-22 14:36 UTC (permalink / raw)
To: bot+bpf-ci
Cc: ast, daniel, andrii, bpf, linux-trace-kernel, kafai, eddyz87,
songliubraving, yhs, menglong8.dong, rostedt, martin.lau,
yonghong.song, clm, ihor.solodrai
On Fri, Feb 20, 2026 at 10:57:53AM +0000, bot+bpf-ci@kernel.org wrote:
> > The provided <pattern> is used as 'pattern' argument in
> > bpf_program__attach_kprobe_multi_opts function.
>
> This isn't a bug, but the commit message references
> bpf_program__attach_kprobe_multi_opts when the code actually calls
> bpf_program__attach_tracing_multi. Looks like a leftover from
> the kprobe_multi code this was modeled after.
copy&paste error :-\ thnx
>
> > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > --- a/tools/lib/bpf/libbpf.c
> > +++ b/tools/lib/bpf/libbpf.c
>
> [ ... ]
>
> > +struct bpf_link *
> > +bpf_program__attach_tracing_multi(const struct bpf_program *prog, const char *pattern,
> > + const struct bpf_tracing_multi_opts *opts)
> > +{
> > + LIBBPF_OPTS(bpf_link_create_opts, lopts);
> > + __u32 *ids, cnt, *free_ids = NULL;
>
> cnt is declared as __u32 here, but collect_btf_func_ids_by_glob()
> returns int. When that function returns a negative error such as
> -ENOMEM, the value wraps to a large unsigned number.
yes, will fix
>
> > + __u64 *cookies;
> > + int prog_fd, link_fd, err;
> > + struct bpf_link *link;
>
> [ ... ]
>
> > + if (pattern) {
> > + err = bpf_object__load_vmlinux_btf(prog->obj, true);
> > + if (err)
> > + return libbpf_err_ptr(err);
> > +
> > + cnt = collect_btf_func_ids_by_glob(prog->obj->btf_vmlinux, pattern, &ids);
> > + if (cnt < 0)
> > + return libbpf_err_ptr(cnt);
>
> Since cnt is __u32, this comparison is always false. On an allocation
> failure inside collect_btf_func_ids_by_glob(), the error is silently
> ignored. That function also does free(*ids) before returning -ENOMEM,
> so ids becomes a dangling pointer. The code then continues with a
> freed ids pointer and a garbage cnt, and eventually double-frees ids
> through the error label via free_ids.
>
> Should cnt be declared as int instead of __u32?
it should, will fix
>
> > + if (cnt == 0)
> > + return libbpf_err_ptr(-EINVAL);
> > + free_ids = ids;
> > + }
>
> [ ... ]
>
> > +error:
> > + free(link);
> > + free(free_ids);
> > + return libbpf_err_ptr(err);
> > +}
> > +
> > +static int attach_tracing_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link)
> > +{
> > + bool is_fexit, is_fsession;
> > + const char *spec;
> > + char *pattern;
> > + int n;
> > +
> > + /* Do not allow auto attach if there's no function pattern. */
> > + if (strcmp(prog->sec_name, "fentry.multi") == 0 ||
> > + strcmp(prog->sec_name, "fexit.multi") == 0 ||
> > + strcmp(prog->sec_name, "fsession.multi") == 0 ||
> > + strcmp(prog->sec_name, "fentry.multi.s") == 0 ||
> > + strcmp(prog->sec_name, "fexit.multi.s") == 0 ||
> > + strcmp(prog->sec_name, "fsession.multi.s") == 0)
> > + return 0;
> > +
> > + is_fexit = str_has_pfx(prog->sec_name, "fexit.multi/");
> > + is_fsession = str_has_pfx(prog->sec_name, "fsession.multi/");
> > +
> > + if (is_fsession)
> > + spec = prog->sec_name + sizeof("fsession.multi/") - 1;
> > + else if (is_fexit)
> > + spec = prog->sec_name + sizeof("fexit.multi/") - 1;
> > + else
> > + spec = prog->sec_name + sizeof("fentry.multi/") - 1;
>
> The sleepable variants with patterns don't seem to be handled here.
> For a section name like "fentry.multi.s/my_func", none of the
> str_has_pfx checks match because "fexit.multi/" and "fsession.multi/"
> are not prefixes of "fentry.multi.s/..." (the character at position
> 11 or 14 is '.' not '/').
>
> The else branch then computes spec using sizeof("fentry.multi/") - 1
> which gives an offset of 13, pointing into "s/my_func" instead of
> "my_func". The sscanf then extracts "s" as the pattern, and the
> program attaches to any function matching the glob "s" rather than
> "my_func".
>
> Similarly for "fexit.multi.s/X" and "fsession.multi.s/X", the
> offsets are wrong and produce either error returns or incorrect
> patterns.
>
> The SEC_DEF entries register all six sleepable variants, and
> sec_def_matches() correctly distinguishes "fentry.multi.s+" from
> "fentry.multi+" using the '/' separator check, so the matching
> works but the pattern extraction here does not.
>
> Would it work to also check for the ".s/" variants, for example:
>
> is_fexit = str_has_pfx(prog->sec_name, "fexit.multi/") ||
> str_has_pfx(prog->sec_name, "fexit.multi.s/");
>
> with the corresponding sizeof adjustments for the spec offset?
yes, I mentioned in the cover letter that I did not add tests for
sleepable function attachments.. I should have ;-) will fix
thanks,
jirka
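The pattern-extraction fix suggested above can be sketched in userspace C. The table-driven form below handles all six section prefixes, including the ".s/" sleepable variants the original offset arithmetic missed; multi_spec() is a hypothetical helper name, not the libbpf code:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

static bool str_has_pfx(const char *s, const char *pfx)
{
	return strncmp(s, pfx, strlen(pfx)) == 0;
}

/* return the glob pattern part of sec_name, covering both the plain
 * and the ".s/" sleepable multi variants; NULL for a bare section
 * name with no pattern (no auto-attach) */
static const char *multi_spec(const char *sec_name)
{
	static const char * const pfxs[] = {
		"fentry.multi.s/", "fexit.multi.s/", "fsession.multi.s/",
		"fentry.multi/",   "fexit.multi/",   "fsession.multi/",
	};
	size_t i;

	for (i = 0; i < sizeof(pfxs) / sizeof(pfxs[0]); i++) {
		if (str_has_pfx(sec_name, pfxs[i]))
			return sec_name + strlen(pfxs[i]);
	}
	return NULL;
}
```

With a prefix table there is no offset to get wrong: "fentry.multi.s/my_func" yields "my_func" rather than the truncated "s" the review describes, and a bare "fentry.multi" yields NULL so auto-attach is skipped.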
* Re: [PATCH bpf-next 02/17] bpf: Use mutex lock pool for bpf trampolines
2026-02-22 14:34 ` Jiri Olsa
@ 2026-02-23 19:35 ` Alexei Starovoitov
2026-02-24 12:27 ` Jiri Olsa
0 siblings, 1 reply; 38+ messages in thread
From: Alexei Starovoitov @ 2026-02-23 19:35 UTC (permalink / raw)
To: Jiri Olsa
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
Yonghong Song, Menglong Dong, Steven Rostedt
On Sun, Feb 22, 2026 at 6:34 AM Jiri Olsa <olsajiri@gmail.com> wrote:
>
> On Fri, Feb 20, 2026 at 11:58:13AM -0800, Alexei Starovoitov wrote:
> > On Fri, Feb 20, 2026 at 2:07 AM Jiri Olsa <jolsa@kernel.org> wrote:
> > >
> > > Adding mutex lock pool that replaces bpf trampolines mutex.
> > >
> > > For tracing_multi link coming in following changes we need to lock all
> > > the involved trampolines during the attachment. This could mean thousands
> > > of mutex locks, which is not convenient.
> > >
> > > As suggested by Andrii we can replace bpf trampolines mutex with mutex
> > > pool, where each trampoline is hash-ed to one of the locks from the pool.
> > >
> > > It's better to lock all the pool mutexes (64 at the moment) than
> > > thousands of them.
> > >
> > > Removing the mutex_is_locked in bpf_trampoline_put, because we removed
> > > the mutex from bpf_trampoline.
> > >
> > > Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> > > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > > ---
> > > include/linux/bpf.h | 2 --
> > > kernel/bpf/trampoline.c | 74 +++++++++++++++++++++++++++++++----------
> > > 2 files changed, 56 insertions(+), 20 deletions(-)
> > >
> > > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > > index cd9b96434904..46bf3d86bdb2 100644
> > > --- a/include/linux/bpf.h
> > > +++ b/include/linux/bpf.h
> > > @@ -1335,8 +1335,6 @@ struct bpf_trampoline {
> > > /* hlist for trampoline_ip_table */
> > > struct hlist_node hlist_ip;
> > > struct ftrace_ops *fops;
> > > - /* serializes access to fields of this trampoline */
> > > - struct mutex mutex;
> > > refcount_t refcnt;
> > > u32 flags;
> > > u64 key;
> > > diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> > > index 952cd7932461..05dc0358654d 100644
> > > --- a/kernel/bpf/trampoline.c
> > > +++ b/kernel/bpf/trampoline.c
> > > @@ -30,6 +30,45 @@ static struct hlist_head trampoline_ip_table[TRAMPOLINE_TABLE_SIZE];
> > > /* serializes access to trampoline tables */
> > > static DEFINE_MUTEX(trampoline_mutex);
> > >
> > > +#define TRAMPOLINE_LOCKS_BITS 6
> > > +#define TRAMPOLINE_LOCKS_TABLE_SIZE (1 << TRAMPOLINE_LOCKS_BITS)
> > > +
> > > +static struct {
> > > + struct mutex mutex;
> > > + struct lock_class_key key;
> > > +} *trampoline_locks;
> > > +
> > > +static struct mutex *trampoline_locks_lookup(struct bpf_trampoline *tr)
> >
> > select_trampoline_lock() ?
>
> ok
>
> >
> > > +{
> > > + return &trampoline_locks[hash_64((u64) tr, TRAMPOLINE_LOCKS_BITS)].mutex;
> > > +}
> > > +
> > > +static void trampoline_lock(struct bpf_trampoline *tr)
> > > +{
> > > + mutex_lock(trampoline_locks_lookup(tr));
> > > +}
> > > +
> > > +static void trampoline_unlock(struct bpf_trampoline *tr)
> > > +{
> > > + mutex_unlock(trampoline_locks_lookup(tr));
> > > +}
> > > +
> > > +static int __init trampoline_locks_init(void)
> > > +{
> > > + int i;
> > > +
> > > + trampoline_locks = kmalloc_array(TRAMPOLINE_LOCKS_TABLE_SIZE,
> > > + sizeof(trampoline_locks[0]), GFP_KERNEL);
> >
> > why bother with memory allocation? This is just 64 mutexes.
>
> ok, I could probably use __mutex_init directly for static key
>
> about 64.. not sure how I missed that but there's lockdep limit for
> maximum locks depth and it's 48.. so we'll need to use 32 locks,
> which is probably still ok
>
> >
> > > + if (!trampoline_locks)
> > > + return -ENOMEM;
> > > +
> > > + for (i = 0; i < TRAMPOLINE_LOCKS_TABLE_SIZE; i++) {
> > > + lockdep_register_key(&trampoline_locks[i].key);
> >
> > why special key?
>
> if we keep single key we will get lockdep 'recursive locking' warning
> during bpf_trampoline_multi_attach, because lockdep will think we lock
> the same mutex
>
> there's support to annotate nested locking with mutex_lock_nested but
> it allows maximum of 8 nested instances
yeah. subclass limit of 8 is there for a different use case.
I guess you never validated your earlier approach of "let's take
all trampoline mutexes" with lockdep ? ;)
MAX_LOCK_DEPTH is indeed 48.
See fs/configfs/inode.c and default_group_class.
It does:
lockdep_set_class(&inode->i_rwsem,
&default_group_class[depth - 1]);
the idea here is that the number of lockdep keys doesn't have
to be equal to the actual number of mutexes.
I guess we can keep a total of 32 mutexes to avoid making it too fancy.
Please add a comment explaining 32 and why it needs lockdep_key.
I thought declaring all mutexes as static will avoid the need for the key,
but DEFINE_MUTEX doesn't support an array.
So since we need a loop anyway to init the mutex and the key,
let's keep the kmalloc_array() above, which is now renamed to kmalloc_objs()
after 7.0-rc1.
* Re: [PATCH bpf-next 02/17] bpf: Use mutex lock pool for bpf trampolines
2026-02-23 19:35 ` Alexei Starovoitov
@ 2026-02-24 12:27 ` Jiri Olsa
2026-02-24 17:13 ` Alexei Starovoitov
0 siblings, 1 reply; 38+ messages in thread
From: Jiri Olsa @ 2026-02-24 12:27 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
On Mon, Feb 23, 2026 at 11:35:29AM -0800, Alexei Starovoitov wrote:
> On Sun, Feb 22, 2026 at 6:34 AM Jiri Olsa <olsajiri@gmail.com> wrote:
> >
> > On Fri, Feb 20, 2026 at 11:58:13AM -0800, Alexei Starovoitov wrote:
> > > On Fri, Feb 20, 2026 at 2:07 AM Jiri Olsa <jolsa@kernel.org> wrote:
> > > >
> > > > Adding mutex lock pool that replaces bpf trampolines mutex.
> > > >
> > > > For tracing_multi link coming in following changes we need to lock all
> > > > the involved trampolines during the attachment. This could mean thousands
> > > > of mutex locks, which is not convenient.
> > > >
> > > > As suggested by Andrii we can replace bpf trampolines mutex with mutex
> > > > pool, where each trampoline is hash-ed to one of the locks from the pool.
> > > >
> > > > It's better to lock all the pool mutexes (64 at the moment) than
> > > > thousands of them.
> > > >
> > > > Removing the mutex_is_locked in bpf_trampoline_put, because we removed
> > > > the mutex from bpf_trampoline.
> > > >
> > > > Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> > > > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > > > ---
> > > > include/linux/bpf.h | 2 --
> > > > kernel/bpf/trampoline.c | 74 +++++++++++++++++++++++++++++++----------
> > > > 2 files changed, 56 insertions(+), 20 deletions(-)
> > > >
> > > > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > > > index cd9b96434904..46bf3d86bdb2 100644
> > > > --- a/include/linux/bpf.h
> > > > +++ b/include/linux/bpf.h
> > > > @@ -1335,8 +1335,6 @@ struct bpf_trampoline {
> > > > /* hlist for trampoline_ip_table */
> > > > struct hlist_node hlist_ip;
> > > > struct ftrace_ops *fops;
> > > > - /* serializes access to fields of this trampoline */
> > > > - struct mutex mutex;
> > > > refcount_t refcnt;
> > > > u32 flags;
> > > > u64 key;
> > > > diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> > > > index 952cd7932461..05dc0358654d 100644
> > > > --- a/kernel/bpf/trampoline.c
> > > > +++ b/kernel/bpf/trampoline.c
> > > > @@ -30,6 +30,45 @@ static struct hlist_head trampoline_ip_table[TRAMPOLINE_TABLE_SIZE];
> > > > /* serializes access to trampoline tables */
> > > > static DEFINE_MUTEX(trampoline_mutex);
> > > >
> > > > +#define TRAMPOLINE_LOCKS_BITS 6
> > > > +#define TRAMPOLINE_LOCKS_TABLE_SIZE (1 << TRAMPOLINE_LOCKS_BITS)
> > > > +
> > > > +static struct {
> > > > + struct mutex mutex;
> > > > + struct lock_class_key key;
> > > > +} *trampoline_locks;
> > > > +
> > > > +static struct mutex *trampoline_locks_lookup(struct bpf_trampoline *tr)
> > >
> > > select_trampoline_lock() ?
> >
> > ok
> >
> > >
> > > > +{
> > > > + return &trampoline_locks[hash_64((u64) tr, TRAMPOLINE_LOCKS_BITS)].mutex;
> > > > +}
> > > > +
> > > > +static void trampoline_lock(struct bpf_trampoline *tr)
> > > > +{
> > > > + mutex_lock(trampoline_locks_lookup(tr));
> > > > +}
> > > > +
> > > > +static void trampoline_unlock(struct bpf_trampoline *tr)
> > > > +{
> > > > + mutex_unlock(trampoline_locks_lookup(tr));
> > > > +}
> > > > +
> > > > +static int __init trampoline_locks_init(void)
> > > > +{
> > > > + int i;
> > > > +
> > > > + trampoline_locks = kmalloc_array(TRAMPOLINE_LOCKS_TABLE_SIZE,
> > > > + sizeof(trampoline_locks[0]), GFP_KERNEL);
> > >
> > > why bother with memory allocation? This is just 64 mutexes.
> >
> > ok, I could probably use __mutex_init directly for static key
> >
> > about 64.. not sure how I missed that but there's lockdep limit for
> > maximum locks depth and it's 48.. so we'll need to use 32 locks,
> > which is probably still ok
> >
> > >
> > > > + if (!trampoline_locks)
> > > > + return -ENOMEM;
> > > > +
> > > > + for (i = 0; i < TRAMPOLINE_LOCKS_TABLE_SIZE; i++) {
> > > > + lockdep_register_key(&trampoline_locks[i].key);
> > >
> > > why special key?
> >
> > if we keep single key we will get lockdep 'recursive locking' warning
> > during bpf_trampoline_multi_attach, because lockdep will think we lock
> > the same mutex
> >
> > there's support to annotate nested locking with mutex_lock_nested but
> > it allows maximum of 8 nested instances
>
> yeah. subclass limit of 8 is there for a different use case.
>
>
> I guess you never validated your earlier approach of "let's take
> all trampoline mutexes" with lockdep ? ;)
nope, the rfc had a workaround for lockdep ;-)
+#ifdef CONFIG_LOCKDEP
+ mutex_init_with_key(&tr->mutex, &__lockdep_no_track__);
+#else
but I overlooked lockdep config for this version
> MAX_LOCK_DEPTH is indeed 48.
>
> See fs/configfs/inode.c and default_group_class.
> It does:
> lockdep_set_class(&inode->i_rwsem,
> &default_group_class[depth - 1]);
>
> the idea here is that the number of lockdep keys doesn't have
> to be equal to the actual number of mutexes.
I see, thanks for the pointer
>
> I guess we can keep a total of 32 mutexes to avoid making it too fancy.
> Please add a comment explaining 32 and why it needs lockdep_key.
ok
>
> I thought declaring all mutexes as static will avoid the need for the key,
> but DEFINE_MUTEX doesn't support an array.
> So since we need a loop anyway to init mutex and the key,
> let's keep kmalloc_array() above. Which is now renamed to kmalloc_objs()
> after 7.0-rc1.
I don't mind either way, meanwhile I used this version:
static struct {
struct mutex mutex;
struct lock_class_key key;
} trampoline_locks[TRAMPOLINE_LOCKS_TABLE_SIZE];
for (i = 0; i < TRAMPOLINE_LOCKS_TABLE_SIZE; i++)
__mutex_init(&trampoline_locks[i].mutex, "trampoline_lock", &trampoline_locks[i].key);
thanks,
jirka
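The lock-pool idea settled on above can be sketched in userspace, with pthread mutexes standing in for kernel mutexes and the kernel's hash_64() multiplier reproduced directly; the names and the per-index init loop mirror the discussion but are illustrative only:

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

/* 32 locks: MAX_LOCK_DEPTH is 48, so taking the whole pool at once
 * (as trampoline_lock_all() does) stays within lockdep's limit */
#define TRAMPOLINE_LOCKS_BITS		5
#define TRAMPOLINE_LOCKS_TABLE_SIZE	(1 << TRAMPOLINE_LOCKS_BITS)

static pthread_mutex_t trampoline_locks[TRAMPOLINE_LOCKS_TABLE_SIZE];

/* stand-in for the kernel's per-lock __mutex_init() loop; in the
 * kernel each slot also gets its own lock_class_key so lockdep does
 * not see nested pool locks as recursive locking of one class */
static void trampoline_locks_init(void)
{
	int i;

	for (i = 0; i < TRAMPOLINE_LOCKS_TABLE_SIZE; i++)
		pthread_mutex_init(&trampoline_locks[i], NULL);
}

/* userspace stand-in for hash_64(val, bits): multiply by the 64-bit
 * golden-ratio constant and keep the top bits */
static unsigned int lock_index(const void *tr)
{
	return (unsigned int)(((uint64_t)(uintptr_t)tr * 0x61C8864680B583EBULL)
			      >> (64 - TRAMPOLINE_LOCKS_BITS));
}

static pthread_mutex_t *select_trampoline_lock(const void *tr)
{
	return &trampoline_locks[lock_index(tr)];
}
```

The hash is stable, so a given trampoline always maps to the same pool slot, and attaching to thousands of trampolines needs at most 32 mutex_lock() calls instead of thousands.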
* Re: [PATCH bpf-next 02/17] bpf: Use mutex lock pool for bpf trampolines
2026-02-24 12:27 ` Jiri Olsa
@ 2026-02-24 17:13 ` Alexei Starovoitov
0 siblings, 0 replies; 38+ messages in thread
From: Alexei Starovoitov @ 2026-02-24 17:13 UTC (permalink / raw)
To: Jiri Olsa
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
Yonghong Song, Menglong Dong, Steven Rostedt
On Tue, Feb 24, 2026 at 4:27 AM Jiri Olsa <olsajiri@gmail.com> wrote:
>
> nope, the rfc had workaround for lockdep ;-)
>
> +#ifdef CONFIG_LOCKDEP
> + mutex_init_with_key(&tr->mutex, &__lockdep_no_track__);
> +#else
The only user of lockdep_set_notrack_class() was removed from the tree.
We shouldn't introduce new special cases.
> but I overlooked lockdep config for this version
>
> > MAX_LOCK_DEPTH is indeed 48.
> >
> > See fs/configfs/inode.c and default_group_class.
> > It does:
> > lockdep_set_class(&inode->i_rwsem,
> > &default_group_class[depth - 1]);
> >
> > the idea here is that the number of lockdep keys doesn't have
> > to be equal to the actual number of mutexes.
>
> I see, thanks for the pointer
>
> >
> > I guess we can keep a total of 32 mutexes to avoid making it too fancy.
> > Please add a comment explaining 32 and why it needs lockdep_key.
>
> ok
>
> >
> > I thought declaring all mutexes as static will avoid the need for the key,
> > but DEFINE_MUTEX doesn't support an array.
> > So since we need a loop anyway to init mutex and the key,
> > let's keep kmalloc_array() above. Which is now renamed to kmalloc_objs()
> > after 7.0-rc1.
>
> I don't mind either way, meanwhile I used this version:
>
> static struct {
> struct mutex mutex;
> struct lock_class_key key;
> } trampoline_locks[TRAMPOLINE_LOCKS_TABLE_SIZE];
>
> for (i = 0; i < TRAMPOLINE_LOCKS_TABLE_SIZE; i++)
> __mutex_init(&trampoline_locks[i].mutex, "trampoline_lock", &trampoline_locks[i].key);
works for me.
end of thread, other threads: [~2026-02-24 17:13 UTC | newest]
Thread overview: 38+ messages
2026-02-20 10:06 [PATCH bpf-next 00/17] bpf: tracing_multi link Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 01/17] ftrace: Add ftrace_hash_count function Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 02/17] bpf: Use mutex lock pool for bpf trampolines Jiri Olsa
2026-02-20 10:57 ` bot+bpf-ci
2026-02-22 14:33 ` Jiri Olsa
2026-02-20 19:58 ` Alexei Starovoitov
2026-02-22 14:34 ` Jiri Olsa
2026-02-23 19:35 ` Alexei Starovoitov
2026-02-24 12:27 ` Jiri Olsa
2026-02-24 17:13 ` Alexei Starovoitov
2026-02-20 10:06 ` [PATCH bpf-next 03/17] bpf: Add struct bpf_trampoline_ops object Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 04/17] bpf: Add struct bpf_tramp_node object Jiri Olsa
2026-02-20 10:58 ` bot+bpf-ci
2026-02-22 14:34 ` Jiri Olsa
2026-02-20 19:52 ` kernel test robot
2026-02-20 21:05 ` kernel test robot
2026-02-21 3:00 ` kernel test robot
2026-02-20 10:06 ` [PATCH bpf-next 05/17] bpf: Factor fsession link to use struct bpf_tramp_node Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 06/17] bpf: Add multi tracing attach types Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 07/17] bpf: Add bpf_trampoline_multi_attach/detach functions Jiri Olsa
2026-02-20 10:57 ` bot+bpf-ci
2026-02-22 14:34 ` Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 08/17] bpf: Add support for tracing multi link Jiri Olsa
2026-02-20 10:57 ` bot+bpf-ci
2026-02-22 14:35 ` Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 09/17] bpf: Add support for tracing_multi link cookies Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 10/17] bpf: Add support for tracing_multi link session Jiri Olsa
2026-02-20 10:57 ` bot+bpf-ci
2026-02-22 14:35 ` Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 11/17] libbpf: Add support to create tracing multi link Jiri Olsa
2026-02-20 10:57 ` bot+bpf-ci
2026-02-22 14:36 ` Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 12/17] selftests/bpf: Add tracing multi skel/pattern/ids attach tests Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 13/17] selftests/bpf: Add tracing multi intersect tests Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 14/17] selftests/bpf: Add tracing multi cookies test Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 15/17] selftests/bpf: Add tracing multi session test Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 16/17] selftests/bpf: Add tracing multi attach fails test Jiri Olsa
2026-02-20 10:06 ` [PATCH bpf-next 17/17] selftests/bpf: Add tracing multi attach benchmark test Jiri Olsa