* [PATCHv4 bpf-next 01/25] ftrace: Add ftrace_hash_count function
2026-03-24 8:18 [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Jiri Olsa
@ 2026-03-24 8:18 ` Jiri Olsa
2026-03-24 8:18 ` [PATCHv4 bpf-next 02/25] ftrace: Make ftrace_hash_clear global Jiri Olsa
` (24 subsequent siblings)
25 siblings, 0 replies; 41+ messages in thread
From: Jiri Olsa @ 2026-03-24 8:18 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Add an external ftrace_hash_count function so the hash count can be
obtained outside of the ftrace object.
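The new helper follows the usual NULL-safe accessor pattern (return 0 for a NULL hash). A minimal userspace sketch of the same pattern; the struct layout here is an illustrative stand-in, not the kernel's real struct ftrace_hash:

```c
#include <stddef.h>

/* simplified stand-in for the kernel's struct ftrace_hash */
struct ftrace_hash {
	unsigned long count;	/* number of entries in the hash */
};

/* NULL-safe count accessor, mirroring ftrace_hash_count() */
static unsigned long ftrace_hash_count(struct ftrace_hash *hash)
{
	return hash ? hash->count : 0;
}
```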
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/ftrace.h | 1 +
kernel/trace/ftrace.c | 7 ++++++-
2 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index c242fe49af4c..401f8dfd05d3 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -415,6 +415,7 @@ struct ftrace_hash *alloc_ftrace_hash(int size_bits);
void free_ftrace_hash(struct ftrace_hash *hash);
struct ftrace_func_entry *add_ftrace_hash_entry_direct(struct ftrace_hash *hash,
unsigned long ip, unsigned long direct);
+unsigned long ftrace_hash_count(struct ftrace_hash *hash);
/* The hash used to know what functions callbacks trace */
struct ftrace_ops_hash {
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 413310912609..68a071e80f32 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -6288,11 +6288,16 @@ int modify_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
}
EXPORT_SYMBOL_GPL(modify_ftrace_direct);
-static unsigned long hash_count(struct ftrace_hash *hash)
+static inline unsigned long hash_count(struct ftrace_hash *hash)
{
return hash ? hash->count : 0;
}
+unsigned long ftrace_hash_count(struct ftrace_hash *hash)
+{
+ return hash_count(hash);
+}
+
/**
* hash_add - adds two struct ftrace_hash and returns the result
* @a: struct ftrace_hash object
--
2.53.0
* [PATCHv4 bpf-next 02/25] ftrace: Make ftrace_hash_clear global
2026-03-24 8:18 [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Jiri Olsa
2026-03-24 8:18 ` [PATCHv4 bpf-next 01/25] ftrace: Add ftrace_hash_count function Jiri Olsa
@ 2026-03-24 8:18 ` Jiri Olsa
2026-03-24 8:18 ` [PATCHv4 bpf-next 03/25] bpf: Use mutex lock pool for bpf trampolines Jiri Olsa
` (23 subsequent siblings)
25 siblings, 0 replies; 41+ messages in thread
From: Jiri Olsa @ 2026-03-24 8:18 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Make the ftrace_hash_clear function global; it will be used outside of
ftrace.c in following changes.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/ftrace.h | 1 +
kernel/trace/ftrace.c | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 401f8dfd05d3..c1265f26a73d 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -416,6 +416,7 @@ void free_ftrace_hash(struct ftrace_hash *hash);
struct ftrace_func_entry *add_ftrace_hash_entry_direct(struct ftrace_hash *hash,
unsigned long ip, unsigned long direct);
unsigned long ftrace_hash_count(struct ftrace_hash *hash);
+void ftrace_hash_clear(struct ftrace_hash *hash);
/* The hash used to know what functions callbacks trace */
struct ftrace_ops_hash {
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 68a071e80f32..632f5fad60ec 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1249,7 +1249,7 @@ remove_hash_entry(struct ftrace_hash *hash,
hash->count--;
}
-static void ftrace_hash_clear(struct ftrace_hash *hash)
+void ftrace_hash_clear(struct ftrace_hash *hash)
{
struct hlist_head *hhd;
struct hlist_node *tn;
--
2.53.0
* [PATCHv4 bpf-next 03/25] bpf: Use mutex lock pool for bpf trampolines
2026-03-24 8:18 [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Jiri Olsa
2026-03-24 8:18 ` [PATCHv4 bpf-next 01/25] ftrace: Add ftrace_hash_count function Jiri Olsa
2026-03-24 8:18 ` [PATCHv4 bpf-next 02/25] ftrace: Make ftrace_hash_clear global Jiri Olsa
@ 2026-03-24 8:18 ` Jiri Olsa
2026-03-24 8:18 ` [PATCHv4 bpf-next 04/25] bpf: Add struct bpf_trampoline_ops object Jiri Olsa
` (22 subsequent siblings)
25 siblings, 0 replies; 41+ messages in thread
From: Jiri Olsa @ 2026-03-24 8:18 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Add a mutex lock pool that replaces the bpf trampoline mutex.
For the tracing_multi link coming in the following changes we need to
lock all the involved trampolines during attachment. This could mean
taking thousands of mutex locks, which is not practical.
As suggested by Andrii, we can replace the bpf trampoline mutex with a
mutex pool, where each trampoline is hashed to one of the locks from
the pool. It is better to lock all the pool mutexes (32 at the moment)
than thousands of them.
There is a limit of 48 (MAX_LOCK_DEPTH) locks allowed to be held
simultaneously by a task, so we keep 32 mutexes (5 bits) in the pool;
when we lock them all in the following changes, lockdep won't complain.
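The pool scheme can be sketched in plain C with pthread mutexes. hash_64() below reproduces the kernel's multiplicative hash (GOLDEN_RATIO_64); the pool size and lock-selection logic follow the patch, while the pthread plumbing is illustrative:

```c
#include <pthread.h>
#include <stdint.h>

#define TRAMPOLINE_LOCKS_BITS		5
#define TRAMPOLINE_LOCKS_TABLE_SIZE	(1 << TRAMPOLINE_LOCKS_BITS)

static pthread_mutex_t trampoline_locks[TRAMPOLINE_LOCKS_TABLE_SIZE];

static void trampoline_locks_init(void)
{
	for (int i = 0; i < TRAMPOLINE_LOCKS_TABLE_SIZE; i++)
		pthread_mutex_init(&trampoline_locks[i], NULL);
}

/* same multiplicative hash as the kernel's hash_64():
 * multiply by the 64-bit golden ratio, keep the top 'bits' bits */
static uint64_t hash_64(uint64_t val, unsigned int bits)
{
	return (val * 0x61C8864680B583EBULL) >> (64 - bits);
}

/* each trampoline pointer maps deterministically to one pool slot,
 * so two lockers of the same trampoline always contend on the same
 * mutex, and locking all 32 slots covers every trampoline */
static pthread_mutex_t *select_trampoline_lock(const void *tr)
{
	return &trampoline_locks[hash_64((uint64_t)(uintptr_t)tr,
					 TRAMPOLINE_LOCKS_BITS)];
}
```

Distinct trampolines may share a lock (that is the point of the pool); the only requirement is that the mapping is stable for a given pointer.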
Also remove the mutex_is_locked check in bpf_trampoline_put, because
the mutex was removed from struct bpf_trampoline.
Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/bpf.h | 2 --
kernel/bpf/trampoline.c | 76 ++++++++++++++++++++++++++++-------------
2 files changed, 52 insertions(+), 26 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 05b34a6355b0..1d900f49aff5 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1335,8 +1335,6 @@ struct bpf_trampoline {
/* hlist for trampoline_ip_table */
struct hlist_node hlist_ip;
struct ftrace_ops *fops;
- /* serializes access to fields of this trampoline */
- struct mutex mutex;
refcount_t refcnt;
u32 flags;
u64 key;
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index f02254a21585..eb4ea78ff77f 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -30,6 +30,34 @@ static struct hlist_head trampoline_ip_table[TRAMPOLINE_TABLE_SIZE];
/* serializes access to trampoline tables */
static DEFINE_MUTEX(trampoline_mutex);
+/*
+ * We keep 32 trampoline locks (5 bits) in the pool, because there is
+ * a limit of 48 (MAX_LOCK_DEPTH) locks allowed to be simultaneously
+ * held by a task. Each lock has its own lockdep key to keep it simple.
+ */
+#define TRAMPOLINE_LOCKS_BITS 5
+#define TRAMPOLINE_LOCKS_TABLE_SIZE (1 << TRAMPOLINE_LOCKS_BITS)
+
+static struct {
+ struct mutex mutex;
+ struct lock_class_key key;
+} trampoline_locks[TRAMPOLINE_LOCKS_TABLE_SIZE];
+
+static struct mutex *select_trampoline_lock(struct bpf_trampoline *tr)
+{
+ return &trampoline_locks[hash_64((u64)(uintptr_t) tr, TRAMPOLINE_LOCKS_BITS)].mutex;
+}
+
+static void trampoline_lock(struct bpf_trampoline *tr)
+{
+ mutex_lock(select_trampoline_lock(tr));
+}
+
+static void trampoline_unlock(struct bpf_trampoline *tr)
+{
+ mutex_unlock(select_trampoline_lock(tr));
+}
+
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mutex);
@@ -69,9 +97,9 @@ static int bpf_tramp_ftrace_ops_func(struct ftrace_ops *ops, unsigned long ip,
if (cmd == FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_SELF) {
/* This is called inside register_ftrace_direct_multi(), so
- * tr->mutex is already locked.
+ * trampoline's mutex is already locked.
*/
- lockdep_assert_held_once(&tr->mutex);
+ lockdep_assert_held_once(select_trampoline_lock(tr));
/* Instead of updating the trampoline here, we propagate
* -EAGAIN to register_ftrace_direct(). Then we can
@@ -91,7 +119,7 @@ static int bpf_tramp_ftrace_ops_func(struct ftrace_ops *ops, unsigned long ip,
}
/* The normal locking order is
- * tr->mutex => direct_mutex (ftrace.c) => ftrace_lock (ftrace.c)
+ * select_trampoline_lock(tr) => direct_mutex (ftrace.c) => ftrace_lock (ftrace.c)
*
* The following two commands are called from
*
@@ -99,12 +127,12 @@ static int bpf_tramp_ftrace_ops_func(struct ftrace_ops *ops, unsigned long ip,
* cleanup_direct_functions_after_ipmodify
*
* In both cases, direct_mutex is already locked. Use
- * mutex_trylock(&tr->mutex) to avoid deadlock in race condition
- * (something else is making changes to this same trampoline).
+ * mutex_trylock(select_trampoline_lock(tr)) to avoid deadlock in race condition
+ * (something else holds the same pool lock).
*/
- if (!mutex_trylock(&tr->mutex)) {
- /* sleep 1 ms to make sure whatever holding tr->mutex makes
- * some progress.
+ if (!mutex_trylock(select_trampoline_lock(tr))) {
+ /* sleep 1 ms to make sure whatever is holding select_trampoline_lock(tr)
+ * makes some progress.
*/
msleep(1);
return -EAGAIN;
@@ -129,7 +157,7 @@ static int bpf_tramp_ftrace_ops_func(struct ftrace_ops *ops, unsigned long ip,
break;
}
- mutex_unlock(&tr->mutex);
+ trampoline_unlock(tr);
return ret;
}
#endif
@@ -359,7 +387,6 @@ static struct bpf_trampoline *bpf_trampoline_lookup(u64 key, unsigned long ip)
head = &trampoline_ip_table[hash_64(tr->ip, TRAMPOLINE_HASH_BITS)];
hlist_add_head(&tr->hlist_ip, head);
refcount_set(&tr->refcnt, 1);
- mutex_init(&tr->mutex);
for (i = 0; i < BPF_TRAMP_MAX; i++)
INIT_HLIST_HEAD(&tr->progs_hlist[i]);
out:
@@ -844,9 +871,9 @@ int bpf_trampoline_link_prog(struct bpf_tramp_link *link,
{
int err;
- mutex_lock(&tr->mutex);
+ trampoline_lock(tr);
err = __bpf_trampoline_link_prog(link, tr, tgt_prog);
- mutex_unlock(&tr->mutex);
+ trampoline_unlock(tr);
return err;
}
@@ -887,9 +914,9 @@ int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
{
int err;
- mutex_lock(&tr->mutex);
+ trampoline_lock(tr);
err = __bpf_trampoline_unlink_prog(link, tr, tgt_prog);
- mutex_unlock(&tr->mutex);
+ trampoline_unlock(tr);
return err;
}
@@ -999,12 +1026,12 @@ int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog,
if (!tr)
return -ENOMEM;
- mutex_lock(&tr->mutex);
+ trampoline_lock(tr);
shim_link = cgroup_shim_find(tr, bpf_func);
if (shim_link && !IS_ERR(bpf_link_inc_not_zero(&shim_link->link.link))) {
/* Reusing existing shim attached by the other program. */
- mutex_unlock(&tr->mutex);
+ trampoline_unlock(tr);
bpf_trampoline_put(tr); /* bpf_trampoline_get above */
return 0;
}
@@ -1024,16 +1051,16 @@ int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog,
shim_link->trampoline = tr;
/* note, we're still holding tr refcnt from above */
- mutex_unlock(&tr->mutex);
+ trampoline_unlock(tr);
return 0;
err:
- mutex_unlock(&tr->mutex);
+ trampoline_unlock(tr);
if (shim_link)
bpf_link_put(&shim_link->link.link);
- /* have to release tr while _not_ holding its mutex */
+ /* have to release tr while _not_ holding pool mutex for trampoline */
bpf_trampoline_put(tr); /* bpf_trampoline_get above */
return err;
@@ -1054,9 +1081,9 @@ void bpf_trampoline_unlink_cgroup_shim(struct bpf_prog *prog)
if (WARN_ON_ONCE(!tr))
return;
- mutex_lock(&tr->mutex);
+ trampoline_lock(tr);
shim_link = cgroup_shim_find(tr, bpf_func);
- mutex_unlock(&tr->mutex);
+ trampoline_unlock(tr);
if (shim_link)
bpf_link_put(&shim_link->link.link);
@@ -1074,14 +1101,14 @@ struct bpf_trampoline *bpf_trampoline_get(u64 key,
if (!tr)
return NULL;
- mutex_lock(&tr->mutex);
+ trampoline_lock(tr);
if (tr->func.addr)
goto out;
memcpy(&tr->func.model, &tgt_info->fmodel, sizeof(tgt_info->fmodel));
tr->func.addr = (void *)tgt_info->tgt_addr;
out:
- mutex_unlock(&tr->mutex);
+ trampoline_unlock(tr);
return tr;
}
@@ -1094,7 +1121,6 @@ void bpf_trampoline_put(struct bpf_trampoline *tr)
mutex_lock(&trampoline_mutex);
if (!refcount_dec_and_test(&tr->refcnt))
goto out;
- WARN_ON_ONCE(mutex_is_locked(&tr->mutex));
for (i = 0; i < BPF_TRAMP_MAX; i++)
if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[i])))
@@ -1380,6 +1406,8 @@ static int __init init_trampolines(void)
INIT_HLIST_HEAD(&trampoline_key_table[i]);
for (i = 0; i < TRAMPOLINE_TABLE_SIZE; i++)
INIT_HLIST_HEAD(&trampoline_ip_table[i]);
+ for (i = 0; i < TRAMPOLINE_LOCKS_TABLE_SIZE; i++)
+ __mutex_init(&trampoline_locks[i].mutex, "trampoline_lock", &trampoline_locks[i].key);
return 0;
}
late_initcall(init_trampolines);
--
2.53.0
* [PATCHv4 bpf-next 04/25] bpf: Add struct bpf_trampoline_ops object
2026-03-24 8:18 [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Jiri Olsa
` (2 preceding siblings ...)
2026-03-24 8:18 ` [PATCHv4 bpf-next 03/25] bpf: Use mutex lock pool for bpf trampolines Jiri Olsa
@ 2026-03-24 8:18 ` Jiri Olsa
2026-03-24 8:18 ` [PATCHv4 bpf-next 05/25] bpf: Add struct bpf_tramp_node object Jiri Olsa
` (21 subsequent siblings)
25 siblings, 0 replies; 41+ messages in thread
From: Jiri Olsa @ 2026-03-24 8:18 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
In the following changes we will need to override the ftrace direct
attachment behaviour. To do that, add a struct bpf_trampoline_ops
object that defines callbacks for ftrace direct attachment:
register_fentry
unregister_fentry
modify_fentry
The new struct bpf_trampoline_ops object is passed as an argument to
the __bpf_trampoline_link/unlink_prog functions.
At the moment the default trampoline_ops is set to the current ftrace
direct attachment functions, so there's no functional change for the
current code.
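The callback-table indirection described above can be sketched as follows. Names and signatures are simplified stand-ins for the patch's definitions (flags, lock_direct_mutex, and the real ftrace calls are omitted), and the default backend here just returns success instead of calling into ftrace:

```c
#include <stddef.h>

struct bpf_trampoline;		/* opaque for this sketch */

/* callback table for fentry attachment; the 'data' argument lets an
 * alternative backend (e.g. a multi-attach one) carry private state */
struct bpf_trampoline_ops {
	int (*register_fentry)(struct bpf_trampoline *tr, void *new_addr,
			       void *data);
	int (*unregister_fentry)(struct bpf_trampoline *tr, void *old_addr,
				 void *data);
};

/* default backend: stands in for the current ftrace direct attachment */
static int default_register_fentry(struct bpf_trampoline *tr,
				   void *new_addr, void *data)
{
	(void)tr; (void)new_addr; (void)data;
	return 0;	/* pretend the ftrace attachment succeeded */
}

static int default_unregister_fentry(struct bpf_trampoline *tr,
				     void *old_addr, void *data)
{
	(void)tr; (void)old_addr; (void)data;
	return 0;
}

static struct bpf_trampoline_ops trampoline_ops = {
	.register_fentry   = default_register_fentry,
	.unregister_fentry = default_unregister_fentry,
};

/* update paths call through the ops table instead of calling the
 * ftrace helpers directly, so the backend can be swapped per caller */
static int trampoline_update(struct bpf_trampoline *tr,
			     struct bpf_trampoline_ops *ops, void *data)
{
	return ops->register_fentry(tr, NULL, data);
}
```

Passing the default table everywhere keeps the existing behaviour, which is why the patch introduces no functional change.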
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
kernel/bpf/trampoline.c | 59 ++++++++++++++++++++++++++++-------------
1 file changed, 41 insertions(+), 18 deletions(-)
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index eb4ea78ff77f..a76e093c9092 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -58,8 +58,18 @@ static void trampoline_unlock(struct bpf_trampoline *tr)
mutex_unlock(select_trampoline_lock(tr));
}
+struct bpf_trampoline_ops {
+ int (*register_fentry)(struct bpf_trampoline *tr, void *new_addr, void *data);
+ int (*unregister_fentry)(struct bpf_trampoline *tr, u32 orig_flags, void *old_addr,
+ void *data);
+ int (*modify_fentry)(struct bpf_trampoline *tr, u32 orig_flags, void *old_addr,
+ void *new_addr, bool lock_direct_mutex, void *data);
+};
+
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
-static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mutex);
+static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mutex,
+ struct bpf_trampoline_ops *ops, void *data);
+static struct bpf_trampoline_ops trampoline_ops;
#ifdef CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS
static struct bpf_trampoline *direct_ops_ip_lookup(struct ftrace_ops *ops, unsigned long ip)
@@ -144,13 +154,15 @@ static int bpf_tramp_ftrace_ops_func(struct ftrace_ops *ops, unsigned long ip,
if ((tr->flags & BPF_TRAMP_F_CALL_ORIG) &&
!(tr->flags & BPF_TRAMP_F_ORIG_STACK))
- ret = bpf_trampoline_update(tr, false /* lock_direct_mutex */);
+ ret = bpf_trampoline_update(tr, false /* lock_direct_mutex */,
+ &trampoline_ops, NULL);
break;
case FTRACE_OPS_CMD_DISABLE_SHARE_IPMODIFY_PEER:
tr->flags &= ~BPF_TRAMP_F_SHARE_IPMODIFY;
if (tr->flags & BPF_TRAMP_F_ORIG_STACK)
- ret = bpf_trampoline_update(tr, false /* lock_direct_mutex */);
+ ret = bpf_trampoline_update(tr, false /* lock_direct_mutex */,
+ &trampoline_ops, NULL);
break;
default:
ret = -EINVAL;
@@ -414,7 +426,7 @@ static int bpf_trampoline_update_fentry(struct bpf_trampoline *tr, u32 orig_flag
}
static int unregister_fentry(struct bpf_trampoline *tr, u32 orig_flags,
- void *old_addr)
+ void *old_addr, void *data)
{
int ret;
@@ -428,7 +440,7 @@ static int unregister_fentry(struct bpf_trampoline *tr, u32 orig_flags,
static int modify_fentry(struct bpf_trampoline *tr, u32 orig_flags,
void *old_addr, void *new_addr,
- bool lock_direct_mutex)
+ bool lock_direct_mutex, void *data __maybe_unused)
{
int ret;
@@ -442,7 +454,7 @@ static int modify_fentry(struct bpf_trampoline *tr, u32 orig_flags,
}
/* first time registering */
-static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
+static int register_fentry(struct bpf_trampoline *tr, void *new_addr, void *data __maybe_unused)
{
void *ip = tr->func.addr;
unsigned long faddr;
@@ -464,6 +476,12 @@ static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
return ret;
}
+static struct bpf_trampoline_ops trampoline_ops = {
+ .register_fentry = register_fentry,
+ .unregister_fentry = unregister_fentry,
+ .modify_fentry = modify_fentry,
+};
+
static struct bpf_tramp_links *
bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total, bool *ip_arg)
{
@@ -631,7 +649,8 @@ static struct bpf_tramp_image *bpf_tramp_image_alloc(u64 key, int size)
return ERR_PTR(err);
}
-static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mutex)
+static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mutex,
+ struct bpf_trampoline_ops *ops, void *data)
{
struct bpf_tramp_image *im;
struct bpf_tramp_links *tlinks;
@@ -644,7 +663,7 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
return PTR_ERR(tlinks);
if (total == 0) {
- err = unregister_fentry(tr, orig_flags, tr->cur_image->image);
+ err = ops->unregister_fentry(tr, orig_flags, tr->cur_image->image, data);
bpf_tramp_image_put(tr->cur_image);
tr->cur_image = NULL;
goto out;
@@ -715,11 +734,11 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
WARN_ON(tr->cur_image && total == 0);
if (tr->cur_image)
/* progs already running at this address */
- err = modify_fentry(tr, orig_flags, tr->cur_image->image,
- im->image, lock_direct_mutex);
+ err = ops->modify_fentry(tr, orig_flags, tr->cur_image->image,
+ im->image, lock_direct_mutex, data);
else
/* first time registering */
- err = register_fentry(tr, im->image);
+ err = ops->register_fentry(tr, im->image, data);
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
if (err == -EAGAIN) {
@@ -793,7 +812,9 @@ static int bpf_freplace_check_tgt_prog(struct bpf_prog *tgt_prog)
static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link,
struct bpf_trampoline *tr,
- struct bpf_prog *tgt_prog)
+ struct bpf_prog *tgt_prog,
+ struct bpf_trampoline_ops *ops,
+ void *data)
{
struct bpf_fsession_link *fslink = NULL;
enum bpf_tramp_prog_type kind;
@@ -851,7 +872,7 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link,
} else {
tr->progs_cnt[kind]++;
}
- err = bpf_trampoline_update(tr, true /* lock_direct_mutex */);
+ err = bpf_trampoline_update(tr, true /* lock_direct_mutex */, ops, data);
if (err) {
hlist_del_init(&link->tramp_hlist);
if (kind == BPF_TRAMP_FSESSION) {
@@ -872,14 +893,16 @@ int bpf_trampoline_link_prog(struct bpf_tramp_link *link,
int err;
trampoline_lock(tr);
- err = __bpf_trampoline_link_prog(link, tr, tgt_prog);
+ err = __bpf_trampoline_link_prog(link, tr, tgt_prog, &trampoline_ops, NULL);
trampoline_unlock(tr);
return err;
}
static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
struct bpf_trampoline *tr,
- struct bpf_prog *tgt_prog)
+ struct bpf_prog *tgt_prog,
+ struct bpf_trampoline_ops *ops,
+ void *data)
{
enum bpf_tramp_prog_type kind;
int err;
@@ -904,7 +927,7 @@ static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
}
hlist_del_init(&link->tramp_hlist);
tr->progs_cnt[kind]--;
- return bpf_trampoline_update(tr, true /* lock_direct_mutex */);
+ return bpf_trampoline_update(tr, true /* lock_direct_mutex */, ops, data);
}
/* bpf_trampoline_unlink_prog() should never fail. */
@@ -915,7 +938,7 @@ int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
int err;
trampoline_lock(tr);
- err = __bpf_trampoline_unlink_prog(link, tr, tgt_prog);
+ err = __bpf_trampoline_unlink_prog(link, tr, tgt_prog, &trampoline_ops, NULL);
trampoline_unlock(tr);
return err;
}
@@ -1044,7 +1067,7 @@ int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog,
goto err;
}
- err = __bpf_trampoline_link_prog(&shim_link->link, tr, NULL);
+ err = __bpf_trampoline_link_prog(&shim_link->link, tr, NULL, &trampoline_ops, NULL);
if (err)
goto err;
--
2.53.0
* [PATCHv4 bpf-next 05/25] bpf: Add struct bpf_tramp_node object
2026-03-24 8:18 [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Jiri Olsa
` (3 preceding siblings ...)
2026-03-24 8:18 ` [PATCHv4 bpf-next 04/25] bpf: Add struct bpf_trampoline_ops object Jiri Olsa
@ 2026-03-24 8:18 ` Jiri Olsa
2026-03-24 8:18 ` [PATCHv4 bpf-next 06/25] bpf: Factor fsession link to use struct bpf_tramp_node Jiri Olsa
` (20 subsequent siblings)
25 siblings, 0 replies; 41+ messages in thread
From: Jiri Olsa @ 2026-03-24 8:18 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: Hengqi Chen, bpf, linux-trace-kernel, Martin KaFai Lau,
Eduard Zingerman, Song Liu, Yonghong Song, Menglong Dong,
Steven Rostedt
Add struct bpf_tramp_node to decouple the link from the trampoline
attachment info.
At the moment the object for attaching bpf program to the trampoline is
'struct bpf_tramp_link':
struct bpf_tramp_link {
struct bpf_link link;
struct hlist_node tramp_hlist;
u64 cookie;
}
The link holds the bpf_prog pointer and forces one-link-one-program
binding logic. In the following changes we want to attach a program to
multiple trampolines while keeping just one bpf_link object.
Split struct bpf_tramp_link into:
struct bpf_tramp_link {
struct bpf_link link;
struct bpf_tramp_node node;
};
struct bpf_tramp_node {
struct bpf_link *link;
struct hlist_node tramp_hlist;
u64 cookie;
};
The 'struct bpf_tramp_link' defines the standard single-trampoline
link, and 'struct bpf_tramp_node' is the trampoline attachment object
with a pointer to the bpf_link object.
This will allow us to define a link for multiple trampolines, like:
struct bpf_tracing_multi_link {
struct bpf_link link;
...
int nodes_cnt;
struct bpf_tracing_multi_node nodes[] __counted_by(nodes_cnt);
};
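The split above can be sketched in plain C. The struct shapes follow the commit message (the __counted_by annotation is dropped here since it is a compiler attribute); multi_link_alloc() and the plain-int link body are illustrative additions, not part of the patch:

```c
#include <stdlib.h>

struct bpf_link { int id; };	/* stand-in for the real bpf_link */

/* attachment node: points back to its owning link */
struct bpf_tramp_node {
	struct bpf_link *link;
	unsigned long long cookie;
};

/* standard single-trampoline link: one embedded node */
struct bpf_tramp_link {
	struct bpf_link link;
	struct bpf_tramp_node node;
};

/* multi-trampoline link: one bpf_link shared by many nodes */
struct bpf_tracing_multi_link {
	struct bpf_link link;
	int nodes_cnt;
	struct bpf_tramp_node nodes[];	/* flexible array member */
};

static struct bpf_tracing_multi_link *multi_link_alloc(int cnt)
{
	struct bpf_tracing_multi_link *ml;

	ml = calloc(1, sizeof(*ml) + cnt * sizeof(struct bpf_tramp_node));
	if (!ml)
		return NULL;
	ml->nodes_cnt = cnt;
	for (int i = 0; i < cnt; i++)
		ml->nodes[i].link = &ml->link;	/* all nodes share one link */
	return ml;
}
```

Each node can sit on a different trampoline's hlist while every node resolves back to the same bpf_link (and hence the same bpf_prog).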
Cc: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
arch/arm64/net/bpf_jit_comp.c | 58 +++++++++---------
arch/loongarch/net/bpf_jit.c | 44 ++++++-------
arch/powerpc/net/bpf_jit_comp.c | 46 +++++++-------
arch/riscv/net/bpf_jit_comp64.c | 52 ++++++++--------
arch/s390/net/bpf_jit_comp.c | 44 ++++++-------
arch/x86/net/bpf_jit_comp.c | 54 ++++++++--------
include/linux/bpf.h | 60 +++++++++++-------
kernel/bpf/bpf_struct_ops.c | 27 ++++----
kernel/bpf/syscall.c | 39 ++++++------
kernel/bpf/trampoline.c | 105 ++++++++++++++++----------------
net/bpf/bpf_dummy_struct_ops.c | 14 ++---
11 files changed, 281 insertions(+), 262 deletions(-)
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index adf84962d579..6d08a6f08a0c 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -2288,24 +2288,24 @@ bool bpf_jit_supports_subprog_tailcalls(void)
return true;
}
-static void invoke_bpf_prog(struct jit_ctx *ctx, struct bpf_tramp_link *l,
+static void invoke_bpf_prog(struct jit_ctx *ctx, struct bpf_tramp_node *node,
int bargs_off, int retval_off, int run_ctx_off,
bool save_ret)
{
__le32 *branch;
u64 enter_prog;
u64 exit_prog;
- struct bpf_prog *p = l->link.prog;
+ struct bpf_prog *p = node->link->prog;
int cookie_off = offsetof(struct bpf_tramp_run_ctx, bpf_cookie);
enter_prog = (u64)bpf_trampoline_enter(p);
exit_prog = (u64)bpf_trampoline_exit(p);
- if (l->cookie == 0) {
+ if (node->cookie == 0) {
/* if cookie is zero, one instruction is enough to store it */
emit(A64_STR64I(A64_ZR, A64_SP, run_ctx_off + cookie_off), ctx);
} else {
- emit_a64_mov_i64(A64_R(10), l->cookie, ctx);
+ emit_a64_mov_i64(A64_R(10), node->cookie, ctx);
emit(A64_STR64I(A64_R(10), A64_SP, run_ctx_off + cookie_off),
ctx);
}
@@ -2355,7 +2355,7 @@ static void invoke_bpf_prog(struct jit_ctx *ctx, struct bpf_tramp_link *l,
emit_call(exit_prog, ctx);
}
-static void invoke_bpf_mod_ret(struct jit_ctx *ctx, struct bpf_tramp_links *tl,
+static void invoke_bpf_mod_ret(struct jit_ctx *ctx, struct bpf_tramp_nodes *tn,
int bargs_off, int retval_off, int run_ctx_off,
__le32 **branches)
{
@@ -2365,8 +2365,8 @@ static void invoke_bpf_mod_ret(struct jit_ctx *ctx, struct bpf_tramp_links *tl,
* Set this to 0 to avoid confusing the program.
*/
emit(A64_STR64I(A64_ZR, A64_SP, retval_off), ctx);
- for (i = 0; i < tl->nr_links; i++) {
- invoke_bpf_prog(ctx, tl->links[i], bargs_off, retval_off,
+ for (i = 0; i < tn->nr_nodes; i++) {
+ invoke_bpf_prog(ctx, tn->nodes[i], bargs_off, retval_off,
run_ctx_off, true);
/* if (*(u64 *)(sp + retval_off) != 0)
* goto do_fexit;
@@ -2497,10 +2497,10 @@ static void restore_args(struct jit_ctx *ctx, int bargs_off, int nregs)
}
}
-static bool is_struct_ops_tramp(const struct bpf_tramp_links *fentry_links)
+static bool is_struct_ops_tramp(const struct bpf_tramp_nodes *fentry_nodes)
{
- return fentry_links->nr_links == 1 &&
- fentry_links->links[0]->link.type == BPF_LINK_TYPE_STRUCT_OPS;
+ return fentry_nodes->nr_nodes == 1 &&
+ fentry_nodes->nodes[0]->link->type == BPF_LINK_TYPE_STRUCT_OPS;
}
static void store_func_meta(struct jit_ctx *ctx, u64 func_meta, int func_meta_off)
@@ -2521,7 +2521,7 @@ static void store_func_meta(struct jit_ctx *ctx, u64 func_meta, int func_meta_of
*
*/
static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
- struct bpf_tramp_links *tlinks, void *func_addr,
+ struct bpf_tramp_nodes *tnodes, void *func_addr,
const struct btf_func_model *m,
const struct arg_aux *a,
u32 flags)
@@ -2537,14 +2537,14 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
int run_ctx_off;
int oargs_off;
int nfuncargs;
- struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
- struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
- struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
+ struct bpf_tramp_nodes *fentry = &tnodes[BPF_TRAMP_FENTRY];
+ struct bpf_tramp_nodes *fexit = &tnodes[BPF_TRAMP_FEXIT];
+ struct bpf_tramp_nodes *fmod_ret = &tnodes[BPF_TRAMP_MODIFY_RETURN];
bool save_ret;
__le32 **branches = NULL;
bool is_struct_ops = is_struct_ops_tramp(fentry);
int cookie_off, cookie_cnt, cookie_bargs_off;
- int fsession_cnt = bpf_fsession_cnt(tlinks);
+ int fsession_cnt = bpf_fsession_cnt(tnodes);
u64 func_meta;
/* trampoline stack layout:
@@ -2590,7 +2590,7 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
cookie_off = stack_size;
/* room for session cookies */
- cookie_cnt = bpf_fsession_cookie_cnt(tlinks);
+ cookie_cnt = bpf_fsession_cookie_cnt(tnodes);
stack_size += cookie_cnt * 8;
ip_off = stack_size;
@@ -2687,20 +2687,20 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
}
cookie_bargs_off = (bargs_off - cookie_off) / 8;
- for (i = 0; i < fentry->nr_links; i++) {
- if (bpf_prog_calls_session_cookie(fentry->links[i])) {
+ for (i = 0; i < fentry->nr_nodes; i++) {
+ if (bpf_prog_calls_session_cookie(fentry->nodes[i])) {
u64 meta = func_meta | (cookie_bargs_off << BPF_TRAMP_COOKIE_INDEX_SHIFT);
store_func_meta(ctx, meta, func_meta_off);
cookie_bargs_off--;
}
- invoke_bpf_prog(ctx, fentry->links[i], bargs_off,
+ invoke_bpf_prog(ctx, fentry->nodes[i], bargs_off,
retval_off, run_ctx_off,
flags & BPF_TRAMP_F_RET_FENTRY_RET);
}
- if (fmod_ret->nr_links) {
- branches = kcalloc(fmod_ret->nr_links, sizeof(__le32 *),
+ if (fmod_ret->nr_nodes) {
+ branches = kcalloc(fmod_ret->nr_nodes, sizeof(__le32 *),
GFP_KERNEL);
if (!branches)
return -ENOMEM;
@@ -2724,7 +2724,7 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
}
/* update the branches saved in invoke_bpf_mod_ret with cbnz */
- for (i = 0; i < fmod_ret->nr_links && ctx->image != NULL; i++) {
+ for (i = 0; i < fmod_ret->nr_nodes && ctx->image != NULL; i++) {
int offset = &ctx->image[ctx->idx] - branches[i];
*branches[i] = cpu_to_le32(A64_CBNZ(1, A64_R(10), offset));
}
@@ -2735,14 +2735,14 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
store_func_meta(ctx, func_meta, func_meta_off);
cookie_bargs_off = (bargs_off - cookie_off) / 8;
- for (i = 0; i < fexit->nr_links; i++) {
- if (bpf_prog_calls_session_cookie(fexit->links[i])) {
+ for (i = 0; i < fexit->nr_nodes; i++) {
+ if (bpf_prog_calls_session_cookie(fexit->nodes[i])) {
u64 meta = func_meta | (cookie_bargs_off << BPF_TRAMP_COOKIE_INDEX_SHIFT);
store_func_meta(ctx, meta, func_meta_off);
cookie_bargs_off--;
}
- invoke_bpf_prog(ctx, fexit->links[i], bargs_off, retval_off,
+ invoke_bpf_prog(ctx, fexit->nodes[i], bargs_off, retval_off,
run_ctx_off, false);
}
@@ -2800,7 +2800,7 @@ bool bpf_jit_supports_fsession(void)
}
int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
- struct bpf_tramp_links *tlinks, void *func_addr)
+ struct bpf_tramp_nodes *tnodes, void *func_addr)
{
struct jit_ctx ctx = {
.image = NULL,
@@ -2814,7 +2814,7 @@ int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
if (ret < 0)
return ret;
- ret = prepare_trampoline(&ctx, &im, tlinks, func_addr, m, &aaux, flags);
+ ret = prepare_trampoline(&ctx, &im, tnodes, func_addr, m, &aaux, flags);
if (ret < 0)
return ret;
@@ -2838,7 +2838,7 @@ int arch_protect_bpf_trampoline(void *image, unsigned int size)
int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *ro_image,
void *ro_image_end, const struct btf_func_model *m,
- u32 flags, struct bpf_tramp_links *tlinks,
+ u32 flags, struct bpf_tramp_nodes *tnodes,
void *func_addr)
{
u32 size = ro_image_end - ro_image;
@@ -2865,7 +2865,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *ro_image,
ret = calc_arg_aux(m, &aaux);
if (ret)
goto out;
- ret = prepare_trampoline(&ctx, im, tlinks, func_addr, m, &aaux, flags);
+ ret = prepare_trampoline(&ctx, im, tnodes, func_addr, m, &aaux, flags);
if (ret > 0 && validate_code(&ctx) < 0) {
ret = -EINVAL;
diff --git a/arch/loongarch/net/bpf_jit.c b/arch/loongarch/net/bpf_jit.c
index 9cb796e16379..af586827ad84 100644
--- a/arch/loongarch/net/bpf_jit.c
+++ b/arch/loongarch/net/bpf_jit.c
@@ -1486,16 +1486,16 @@ static void restore_args(struct jit_ctx *ctx, int nargs, int args_off)
}
}
-static int invoke_bpf_prog(struct jit_ctx *ctx, struct bpf_tramp_link *l,
+static int invoke_bpf_prog(struct jit_ctx *ctx, struct bpf_tramp_node *n,
int args_off, int retval_off, int run_ctx_off, bool save_ret)
{
int ret;
u32 *branch;
- struct bpf_prog *p = l->link.prog;
+ struct bpf_prog *p = n->link->prog;
int cookie_off = offsetof(struct bpf_tramp_run_ctx, bpf_cookie);
- if (l->cookie) {
- move_imm(ctx, LOONGARCH_GPR_T1, l->cookie, false);
+ if (n->cookie) {
+ move_imm(ctx, LOONGARCH_GPR_T1, n->cookie, false);
emit_insn(ctx, std, LOONGARCH_GPR_T1, LOONGARCH_GPR_FP, -run_ctx_off + cookie_off);
} else {
emit_insn(ctx, std, LOONGARCH_GPR_ZERO, LOONGARCH_GPR_FP, -run_ctx_off + cookie_off);
@@ -1550,14 +1550,14 @@ static int invoke_bpf_prog(struct jit_ctx *ctx, struct bpf_tramp_link *l,
return ret;
}
-static void invoke_bpf_mod_ret(struct jit_ctx *ctx, struct bpf_tramp_links *tl,
+static void invoke_bpf_mod_ret(struct jit_ctx *ctx, struct bpf_tramp_nodes *tn,
int args_off, int retval_off, int run_ctx_off, u32 **branches)
{
int i;
emit_insn(ctx, std, LOONGARCH_GPR_ZERO, LOONGARCH_GPR_FP, -retval_off);
- for (i = 0; i < tl->nr_links; i++) {
- invoke_bpf_prog(ctx, tl->links[i], args_off, retval_off, run_ctx_off, true);
+ for (i = 0; i < tn->nr_nodes; i++) {
+ invoke_bpf_prog(ctx, tn->nodes[i], args_off, retval_off, run_ctx_off, true);
emit_insn(ctx, ldd, LOONGARCH_GPR_T1, LOONGARCH_GPR_FP, -retval_off);
branches[i] = (u32 *)ctx->image + ctx->idx;
emit_insn(ctx, nop);
@@ -1611,7 +1611,7 @@ static void sign_extend(struct jit_ctx *ctx, int rd, int rj, u8 size, bool sign)
}
static int __arch_prepare_bpf_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
- const struct btf_func_model *m, struct bpf_tramp_links *tlinks,
+ const struct btf_func_model *m, struct bpf_tramp_nodes *tnodes,
void *func_addr, u32 flags)
{
int i, ret, save_ret;
@@ -1619,9 +1619,9 @@ static int __arch_prepare_bpf_trampoline(struct jit_ctx *ctx, struct bpf_tramp_i
int retval_off, args_off, nargs_off, ip_off, run_ctx_off, sreg_off, tcc_ptr_off;
bool is_struct_ops = flags & BPF_TRAMP_F_INDIRECT;
void *orig_call = func_addr;
- struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
- struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
- struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
+ struct bpf_tramp_nodes *fentry = &tnodes[BPF_TRAMP_FENTRY];
+ struct bpf_tramp_nodes *fexit = &tnodes[BPF_TRAMP_FEXIT];
+ struct bpf_tramp_nodes *fmod_ret = &tnodes[BPF_TRAMP_MODIFY_RETURN];
u32 **branches = NULL;
/*
@@ -1764,14 +1764,14 @@ static int __arch_prepare_bpf_trampoline(struct jit_ctx *ctx, struct bpf_tramp_i
return ret;
}
- for (i = 0; i < fentry->nr_links; i++) {
- ret = invoke_bpf_prog(ctx, fentry->links[i], args_off, retval_off,
+ for (i = 0; i < fentry->nr_nodes; i++) {
+ ret = invoke_bpf_prog(ctx, fentry->nodes[i], args_off, retval_off,
run_ctx_off, flags & BPF_TRAMP_F_RET_FENTRY_RET);
if (ret)
return ret;
}
- if (fmod_ret->nr_links) {
- branches = kcalloc(fmod_ret->nr_links, sizeof(u32 *), GFP_KERNEL);
+ if (fmod_ret->nr_nodes) {
+ branches = kcalloc(fmod_ret->nr_nodes, sizeof(u32 *), GFP_KERNEL);
if (!branches)
return -ENOMEM;
@@ -1795,13 +1795,13 @@ static int __arch_prepare_bpf_trampoline(struct jit_ctx *ctx, struct bpf_tramp_i
emit_insn(ctx, nop);
}
- for (i = 0; ctx->image && i < fmod_ret->nr_links; i++) {
+ for (i = 0; ctx->image && i < fmod_ret->nr_nodes; i++) {
int offset = (void *)(&ctx->image[ctx->idx]) - (void *)branches[i];
*branches[i] = larch_insn_gen_bne(LOONGARCH_GPR_T1, LOONGARCH_GPR_ZERO, offset);
}
- for (i = 0; i < fexit->nr_links; i++) {
- ret = invoke_bpf_prog(ctx, fexit->links[i], args_off, retval_off, run_ctx_off, false);
+ for (i = 0; i < fexit->nr_nodes; i++) {
+ ret = invoke_bpf_prog(ctx, fexit->nodes[i], args_off, retval_off, run_ctx_off, false);
if (ret)
goto out;
}
@@ -1869,7 +1869,7 @@ static int __arch_prepare_bpf_trampoline(struct jit_ctx *ctx, struct bpf_tramp_i
int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *ro_image,
void *ro_image_end, const struct btf_func_model *m,
- u32 flags, struct bpf_tramp_links *tlinks, void *func_addr)
+ u32 flags, struct bpf_tramp_nodes *tnodes, void *func_addr)
{
int ret, size;
void *image, *tmp;
@@ -1885,7 +1885,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *ro_image,
ctx.idx = 0;
jit_fill_hole(image, (unsigned int)(ro_image_end - ro_image));
- ret = __arch_prepare_bpf_trampoline(&ctx, im, m, tlinks, func_addr, flags);
+ ret = __arch_prepare_bpf_trampoline(&ctx, im, m, tnodes, func_addr, flags);
if (ret < 0)
goto out;
@@ -1906,7 +1906,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *ro_image,
}
int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
- struct bpf_tramp_links *tlinks, void *func_addr)
+ struct bpf_tramp_nodes *tnodes, void *func_addr)
{
int ret;
struct jit_ctx ctx;
@@ -1915,7 +1915,7 @@ int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
ctx.image = NULL;
ctx.idx = 0;
- ret = __arch_prepare_bpf_trampoline(&ctx, &im, m, tlinks, func_addr, flags);
+ ret = __arch_prepare_bpf_trampoline(&ctx, &im, m, tnodes, func_addr, flags);
return ret < 0 ? ret : ret * LOONGARCH_INSN_SIZE;
}
diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index a62a9a92b7b5..6a6199d8615e 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -512,22 +512,22 @@ int arch_protect_bpf_trampoline(void *image, unsigned int size)
}
static int invoke_bpf_prog(u32 *image, u32 *ro_image, struct codegen_context *ctx,
- struct bpf_tramp_link *l, int regs_off, int retval_off,
+ struct bpf_tramp_node *n, int regs_off, int retval_off,
int run_ctx_off, bool save_ret)
{
- struct bpf_prog *p = l->link.prog;
+ struct bpf_prog *p = n->link->prog;
ppc_inst_t branch_insn;
u32 jmp_idx;
int ret = 0;
/* Save cookie */
if (IS_ENABLED(CONFIG_PPC64)) {
- PPC_LI64(_R3, l->cookie);
+ PPC_LI64(_R3, n->cookie);
EMIT(PPC_RAW_STD(_R3, _R1, run_ctx_off + offsetof(struct bpf_tramp_run_ctx,
bpf_cookie)));
} else {
- PPC_LI32(_R3, l->cookie >> 32);
- PPC_LI32(_R4, l->cookie);
+ PPC_LI32(_R3, n->cookie >> 32);
+ PPC_LI32(_R4, n->cookie);
EMIT(PPC_RAW_STW(_R3, _R1,
run_ctx_off + offsetof(struct bpf_tramp_run_ctx, bpf_cookie)));
EMIT(PPC_RAW_STW(_R4, _R1,
@@ -594,7 +594,7 @@ static int invoke_bpf_prog(u32 *image, u32 *ro_image, struct codegen_context *ct
}
static int invoke_bpf_mod_ret(u32 *image, u32 *ro_image, struct codegen_context *ctx,
- struct bpf_tramp_links *tl, int regs_off, int retval_off,
+ struct bpf_tramp_nodes *tn, int regs_off, int retval_off,
int run_ctx_off, u32 *branches)
{
int i;
@@ -605,8 +605,8 @@ static int invoke_bpf_mod_ret(u32 *image, u32 *ro_image, struct codegen_context
*/
EMIT(PPC_RAW_LI(_R3, 0));
EMIT(PPC_RAW_STL(_R3, _R1, retval_off));
- for (i = 0; i < tl->nr_links; i++) {
- if (invoke_bpf_prog(image, ro_image, ctx, tl->links[i], regs_off, retval_off,
+ for (i = 0; i < tn->nr_nodes; i++) {
+ if (invoke_bpf_prog(image, ro_image, ctx, tn->nodes[i], regs_off, retval_off,
run_ctx_off, true))
return -EINVAL;
@@ -722,13 +722,13 @@ static void bpf_trampoline_restore_args_stack(u32 *image, struct codegen_context
static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_image,
void *rw_image_end, void *ro_image,
const struct btf_func_model *m, u32 flags,
- struct bpf_tramp_links *tlinks,
+ struct bpf_tramp_nodes *tnodes,
void *func_addr)
{
int regs_off, nregs_off, ip_off, run_ctx_off, retval_off, nvr_off, alt_lr_off, r4_off = 0;
- struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
- struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
- struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
+ struct bpf_tramp_nodes *fmod_ret = &tnodes[BPF_TRAMP_MODIFY_RETURN];
+ struct bpf_tramp_nodes *fentry = &tnodes[BPF_TRAMP_FENTRY];
+ struct bpf_tramp_nodes *fexit = &tnodes[BPF_TRAMP_FEXIT];
int i, ret, nr_regs, retaddr_off, bpf_frame_size = 0;
struct codegen_context codegen_ctx, *ctx;
u32 *image = (u32 *)rw_image;
@@ -924,13 +924,13 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
return ret;
}
- for (i = 0; i < fentry->nr_links; i++)
- if (invoke_bpf_prog(image, ro_image, ctx, fentry->links[i], regs_off, retval_off,
+ for (i = 0; i < fentry->nr_nodes; i++)
+ if (invoke_bpf_prog(image, ro_image, ctx, fentry->nodes[i], regs_off, retval_off,
run_ctx_off, flags & BPF_TRAMP_F_RET_FENTRY_RET))
return -EINVAL;
- if (fmod_ret->nr_links) {
- branches = kcalloc(fmod_ret->nr_links, sizeof(u32), GFP_KERNEL);
+ if (fmod_ret->nr_nodes) {
+ branches = kcalloc(fmod_ret->nr_nodes, sizeof(u32), GFP_KERNEL);
if (!branches)
return -ENOMEM;
@@ -979,7 +979,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
}
/* Update branches saved in invoke_bpf_mod_ret with address of do_fexit */
- for (i = 0; i < fmod_ret->nr_links && image; i++) {
+ for (i = 0; i < fmod_ret->nr_nodes && image; i++) {
if (create_cond_branch(&branch_insn, &image[branches[i]],
(unsigned long)&image[ctx->idx], COND_NE << 16)) {
ret = -EINVAL;
@@ -989,8 +989,8 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
image[branches[i]] = ppc_inst_val(branch_insn);
}
- for (i = 0; i < fexit->nr_links; i++)
- if (invoke_bpf_prog(image, ro_image, ctx, fexit->links[i], regs_off, retval_off,
+ for (i = 0; i < fexit->nr_nodes; i++)
+ if (invoke_bpf_prog(image, ro_image, ctx, fexit->nodes[i], regs_off, retval_off,
run_ctx_off, false)) {
ret = -EINVAL;
goto cleanup;
@@ -1056,18 +1056,18 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
}
int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
- struct bpf_tramp_links *tlinks, void *func_addr)
+ struct bpf_tramp_nodes *tnodes, void *func_addr)
{
struct bpf_tramp_image im;
int ret;
- ret = __arch_prepare_bpf_trampoline(&im, NULL, NULL, NULL, m, flags, tlinks, func_addr);
+ ret = __arch_prepare_bpf_trampoline(&im, NULL, NULL, NULL, m, flags, tnodes, func_addr);
return ret;
}
int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *image_end,
const struct btf_func_model *m, u32 flags,
- struct bpf_tramp_links *tlinks,
+ struct bpf_tramp_nodes *tnodes,
void *func_addr)
{
u32 size = image_end - image;
@@ -1083,7 +1083,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
return -ENOMEM;
ret = __arch_prepare_bpf_trampoline(im, rw_image, rw_image + size, image, m,
- flags, tlinks, func_addr);
+ flags, tnodes, func_addr);
if (ret < 0)
goto out;
diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
index 2f1109dbf105..461b902a5f92 100644
--- a/arch/riscv/net/bpf_jit_comp64.c
+++ b/arch/riscv/net/bpf_jit_comp64.c
@@ -934,15 +934,15 @@ static void emit_store_stack_imm64(u8 reg, int stack_off, u64 imm64,
emit_sd(RV_REG_FP, stack_off, reg, ctx);
}
-static int invoke_bpf_prog(struct bpf_tramp_link *l, int args_off, int retval_off,
+static int invoke_bpf_prog(struct bpf_tramp_node *node, int args_off, int retval_off,
int run_ctx_off, bool save_ret, struct rv_jit_context *ctx)
{
int ret, branch_off;
- struct bpf_prog *p = l->link.prog;
+ struct bpf_prog *p = node->link->prog;
int cookie_off = offsetof(struct bpf_tramp_run_ctx, bpf_cookie);
- if (l->cookie)
- emit_store_stack_imm64(RV_REG_T1, -run_ctx_off + cookie_off, l->cookie, ctx);
+ if (node->cookie)
+ emit_store_stack_imm64(RV_REG_T1, -run_ctx_off + cookie_off, node->cookie, ctx);
else
emit_sd(RV_REG_FP, -run_ctx_off + cookie_off, RV_REG_ZERO, ctx);
@@ -996,22 +996,22 @@ static int invoke_bpf_prog(struct bpf_tramp_link *l, int args_off, int retval_of
return ret;
}
-static int invoke_bpf(struct bpf_tramp_links *tl, int args_off, int retval_off,
+static int invoke_bpf(struct bpf_tramp_nodes *tn, int args_off, int retval_off,
int run_ctx_off, int func_meta_off, bool save_ret, u64 func_meta,
int cookie_off, struct rv_jit_context *ctx)
{
int i, cur_cookie = (cookie_off - args_off) / 8;
- for (i = 0; i < tl->nr_links; i++) {
+ for (i = 0; i < tn->nr_nodes; i++) {
int err;
- if (bpf_prog_calls_session_cookie(tl->links[i])) {
+ if (bpf_prog_calls_session_cookie(tn->nodes[i])) {
u64 meta = func_meta | ((u64)cur_cookie << BPF_TRAMP_COOKIE_INDEX_SHIFT);
emit_store_stack_imm64(RV_REG_T1, -func_meta_off, meta, ctx);
cur_cookie--;
}
- err = invoke_bpf_prog(tl->links[i], args_off, retval_off, run_ctx_off,
+ err = invoke_bpf_prog(tn->nodes[i], args_off, retval_off, run_ctx_off,
save_ret, ctx);
if (err)
return err;
@@ -1021,7 +1021,7 @@ static int invoke_bpf(struct bpf_tramp_links *tl, int args_off, int retval_off,
static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
const struct btf_func_model *m,
- struct bpf_tramp_links *tlinks,
+ struct bpf_tramp_nodes *tnodes,
void *func_addr, u32 flags,
struct rv_jit_context *ctx)
{
@@ -1030,9 +1030,9 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
int stack_size = 0, nr_arg_slots = 0;
int retval_off, args_off, func_meta_off, ip_off, run_ctx_off, sreg_off, stk_arg_off;
int cookie_off, cookie_cnt;
- struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
- struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
- struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
+ struct bpf_tramp_nodes *fentry = &tnodes[BPF_TRAMP_FENTRY];
+ struct bpf_tramp_nodes *fexit = &tnodes[BPF_TRAMP_FEXIT];
+ struct bpf_tramp_nodes *fmod_ret = &tnodes[BPF_TRAMP_MODIFY_RETURN];
bool is_struct_ops = flags & BPF_TRAMP_F_INDIRECT;
void *orig_call = func_addr;
bool save_ret;
@@ -1115,7 +1115,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
ip_off = stack_size;
}
- cookie_cnt = bpf_fsession_cookie_cnt(tlinks);
+ cookie_cnt = bpf_fsession_cookie_cnt(tnodes);
/* room for session cookies */
stack_size += cookie_cnt * 8;
cookie_off = stack_size;
@@ -1172,7 +1172,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
store_args(nr_arg_slots, args_off, ctx);
- if (bpf_fsession_cnt(tlinks)) {
+ if (bpf_fsession_cnt(tnodes)) {
/* clear all session cookies' value */
for (i = 0; i < cookie_cnt; i++)
emit_sd(RV_REG_FP, -cookie_off + 8 * i, RV_REG_ZERO, ctx);
@@ -1187,22 +1187,22 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
return ret;
}
- if (fentry->nr_links) {
+ if (fentry->nr_nodes) {
ret = invoke_bpf(fentry, args_off, retval_off, run_ctx_off, func_meta_off,
flags & BPF_TRAMP_F_RET_FENTRY_RET, func_meta, cookie_off, ctx);
if (ret)
return ret;
}
- if (fmod_ret->nr_links) {
- branches_off = kzalloc_objs(int, fmod_ret->nr_links);
+ if (fmod_ret->nr_nodes) {
+ branches_off = kzalloc_objs(int, fmod_ret->nr_nodes);
if (!branches_off)
return -ENOMEM;
/* cleanup to avoid garbage return value confusion */
emit_sd(RV_REG_FP, -retval_off, RV_REG_ZERO, ctx);
- for (i = 0; i < fmod_ret->nr_links; i++) {
- ret = invoke_bpf_prog(fmod_ret->links[i], args_off, retval_off,
+ for (i = 0; i < fmod_ret->nr_nodes; i++) {
+ ret = invoke_bpf_prog(fmod_ret->nodes[i], args_off, retval_off,
run_ctx_off, true, ctx);
if (ret)
goto out;
@@ -1230,7 +1230,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
}
/* update branches saved in invoke_bpf_mod_ret with bnez */
- for (i = 0; ctx->insns && i < fmod_ret->nr_links; i++) {
+ for (i = 0; ctx->insns && i < fmod_ret->nr_nodes; i++) {
offset = ninsns_rvoff(ctx->ninsns - branches_off[i]);
insn = rv_bne(RV_REG_T1, RV_REG_ZERO, offset >> 1);
*(u32 *)(ctx->insns + branches_off[i]) = insn;
@@ -1238,10 +1238,10 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
/* set "is_return" flag for fsession */
func_meta |= (1ULL << BPF_TRAMP_IS_RETURN_SHIFT);
- if (bpf_fsession_cnt(tlinks))
+ if (bpf_fsession_cnt(tnodes))
emit_store_stack_imm64(RV_REG_T1, -func_meta_off, func_meta, ctx);
- if (fexit->nr_links) {
+ if (fexit->nr_nodes) {
ret = invoke_bpf(fexit, args_off, retval_off, run_ctx_off, func_meta_off,
false, func_meta, cookie_off, ctx);
if (ret)
@@ -1305,7 +1305,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
}
int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
- struct bpf_tramp_links *tlinks, void *func_addr)
+ struct bpf_tramp_nodes *tnodes, void *func_addr)
{
struct bpf_tramp_image im;
struct rv_jit_context ctx;
@@ -1314,7 +1314,7 @@ int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
ctx.ninsns = 0;
ctx.insns = NULL;
ctx.ro_insns = NULL;
- ret = __arch_prepare_bpf_trampoline(&im, m, tlinks, func_addr, flags, &ctx);
+ ret = __arch_prepare_bpf_trampoline(&im, m, tnodes, func_addr, flags, &ctx);
return ret < 0 ? ret : ninsns_rvoff(ctx.ninsns);
}
@@ -1331,7 +1331,7 @@ void arch_free_bpf_trampoline(void *image, unsigned int size)
int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *ro_image,
void *ro_image_end, const struct btf_func_model *m,
- u32 flags, struct bpf_tramp_links *tlinks,
+ u32 flags, struct bpf_tramp_nodes *tnodes,
void *func_addr)
{
int ret;
@@ -1346,7 +1346,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *ro_image,
ctx.ninsns = 0;
ctx.insns = image;
ctx.ro_insns = ro_image;
- ret = __arch_prepare_bpf_trampoline(im, m, tlinks, func_addr, flags, &ctx);
+ ret = __arch_prepare_bpf_trampoline(im, m, tnodes, func_addr, flags, &ctx);
if (ret < 0)
goto out;
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index d08d159b6319..cfdba742660a 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -2531,19 +2531,19 @@ static void emit_store_stack_imm64(struct bpf_jit *jit, int tmp_reg, int stack_o
static int invoke_bpf_prog(struct bpf_tramp_jit *tjit,
const struct btf_func_model *m,
- struct bpf_tramp_link *tlink, bool save_ret)
+ struct bpf_tramp_node *node, bool save_ret)
{
struct bpf_jit *jit = &tjit->common;
int cookie_off = tjit->run_ctx_off +
offsetof(struct bpf_tramp_run_ctx, bpf_cookie);
- struct bpf_prog *p = tlink->link.prog;
+ struct bpf_prog *p = node->link->prog;
int patch;
/*
- * run_ctx.cookie = tlink->cookie;
+ * run_ctx.cookie = node->cookie;
*/
- emit_store_stack_imm64(jit, REG_W0, cookie_off, tlink->cookie);
+ emit_store_stack_imm64(jit, REG_W0, cookie_off, node->cookie);
/*
* if ((start = __bpf_prog_enter(p, &run_ctx)) == 0)
@@ -2603,20 +2603,20 @@ static int invoke_bpf_prog(struct bpf_tramp_jit *tjit,
static int invoke_bpf(struct bpf_tramp_jit *tjit,
const struct btf_func_model *m,
- struct bpf_tramp_links *tl, bool save_ret,
+ struct bpf_tramp_nodes *tn, bool save_ret,
u64 func_meta, int cookie_off)
{
int i, cur_cookie = (tjit->bpf_args_off - cookie_off) / sizeof(u64);
struct bpf_jit *jit = &tjit->common;
- for (i = 0; i < tl->nr_links; i++) {
- if (bpf_prog_calls_session_cookie(tl->links[i])) {
+ for (i = 0; i < tn->nr_nodes; i++) {
+ if (bpf_prog_calls_session_cookie(tn->nodes[i])) {
u64 meta = func_meta | ((u64)cur_cookie << BPF_TRAMP_COOKIE_INDEX_SHIFT);
emit_store_stack_imm64(jit, REG_0, tjit->func_meta_off, meta);
cur_cookie--;
}
- if (invoke_bpf_prog(tjit, m, tl->links[i], save_ret))
+ if (invoke_bpf_prog(tjit, m, tn->nodes[i], save_ret))
return -EINVAL;
}
@@ -2645,12 +2645,12 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
struct bpf_tramp_jit *tjit,
const struct btf_func_model *m,
u32 flags,
- struct bpf_tramp_links *tlinks,
+ struct bpf_tramp_nodes *tnodes,
void *func_addr)
{
- struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
- struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
- struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
+ struct bpf_tramp_nodes *fmod_ret = &tnodes[BPF_TRAMP_MODIFY_RETURN];
+ struct bpf_tramp_nodes *fentry = &tnodes[BPF_TRAMP_FENTRY];
+ struct bpf_tramp_nodes *fexit = &tnodes[BPF_TRAMP_FEXIT];
int nr_bpf_args, nr_reg_args, nr_stack_args;
int cookie_cnt, cookie_off, fsession_cnt;
struct bpf_jit *jit = &tjit->common;
@@ -2687,8 +2687,8 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
return -ENOTSUPP;
}
- cookie_cnt = bpf_fsession_cookie_cnt(tlinks);
- fsession_cnt = bpf_fsession_cnt(tlinks);
+ cookie_cnt = bpf_fsession_cookie_cnt(tnodes);
+ fsession_cnt = bpf_fsession_cnt(tnodes);
/*
* Calculate the stack layout.
@@ -2823,7 +2823,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
func_meta, cookie_off))
return -EINVAL;
- if (fmod_ret->nr_links) {
+ if (fmod_ret->nr_nodes) {
/*
* retval = 0;
*/
@@ -2832,8 +2832,8 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
_EMIT6(0xd707f000 | tjit->retval_off,
0xf000 | tjit->retval_off);
- for (i = 0; i < fmod_ret->nr_links; i++) {
- if (invoke_bpf_prog(tjit, m, fmod_ret->links[i], true))
+ for (i = 0; i < fmod_ret->nr_nodes; i++) {
+ if (invoke_bpf_prog(tjit, m, fmod_ret->nodes[i], true))
return -EINVAL;
/*
@@ -2958,7 +2958,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
}
int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
- struct bpf_tramp_links *tlinks, void *orig_call)
+ struct bpf_tramp_nodes *tnodes, void *orig_call)
{
struct bpf_tramp_image im;
struct bpf_tramp_jit tjit;
@@ -2967,14 +2967,14 @@ int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
memset(&tjit, 0, sizeof(tjit));
ret = __arch_prepare_bpf_trampoline(&im, &tjit, m, flags,
- tlinks, orig_call);
+ tnodes, orig_call);
return ret < 0 ? ret : tjit.common.prg;
}
int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image,
void *image_end, const struct btf_func_model *m,
- u32 flags, struct bpf_tramp_links *tlinks,
+ u32 flags, struct bpf_tramp_nodes *tnodes,
void *func_addr)
{
struct bpf_tramp_jit tjit;
@@ -2983,7 +2983,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image,
/* Compute offsets, check whether the code fits. */
memset(&tjit, 0, sizeof(tjit));
ret = __arch_prepare_bpf_trampoline(im, &tjit, m, flags,
- tlinks, func_addr);
+ tnodes, func_addr);
if (ret < 0)
return ret;
@@ -2997,7 +2997,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image,
tjit.common.prg = 0;
tjit.common.prg_buf = image;
ret = __arch_prepare_bpf_trampoline(im, &tjit, m, flags,
- tlinks, func_addr);
+ tnodes, func_addr);
return ret < 0 ? ret : tjit.common.prg;
}
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index e9b78040d703..dc3f2e8d5ca7 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -2969,15 +2969,15 @@ static void restore_regs(const struct btf_func_model *m, u8 **prog,
}
static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
- struct bpf_tramp_link *l, int stack_size,
+ struct bpf_tramp_node *node, int stack_size,
int run_ctx_off, bool save_ret,
void *image, void *rw_image)
{
u8 *prog = *pprog;
u8 *jmp_insn;
int ctx_cookie_off = offsetof(struct bpf_tramp_run_ctx, bpf_cookie);
- struct bpf_prog *p = l->link.prog;
- u64 cookie = l->cookie;
+ struct bpf_prog *p = node->link->prog;
+ u64 cookie = node->cookie;
/* mov rdi, cookie */
emit_mov_imm64(&prog, BPF_REG_1, (long) cookie >> 32, (u32) (long) cookie);
@@ -3084,7 +3084,7 @@ static int emit_cond_near_jump(u8 **pprog, void *func, void *ip, u8 jmp_cond)
}
static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
- struct bpf_tramp_links *tl, int stack_size,
+ struct bpf_tramp_nodes *tl, int stack_size,
int run_ctx_off, int func_meta_off, bool save_ret,
void *image, void *rw_image, u64 func_meta,
int cookie_off)
@@ -3092,13 +3092,13 @@ static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
int i, cur_cookie = (cookie_off - stack_size) / 8;
u8 *prog = *pprog;
- for (i = 0; i < tl->nr_links; i++) {
- if (tl->links[i]->link.prog->call_session_cookie) {
+ for (i = 0; i < tl->nr_nodes; i++) {
+ if (tl->nodes[i]->link->prog->call_session_cookie) {
emit_store_stack_imm64(&prog, BPF_REG_0, -func_meta_off,
func_meta | (cur_cookie << BPF_TRAMP_COOKIE_INDEX_SHIFT));
cur_cookie--;
}
- if (invoke_bpf_prog(m, &prog, tl->links[i], stack_size,
+ if (invoke_bpf_prog(m, &prog, tl->nodes[i], stack_size,
run_ctx_off, save_ret, image, rw_image))
return -EINVAL;
}
@@ -3107,7 +3107,7 @@ static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
}
static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
- struct bpf_tramp_links *tl, int stack_size,
+ struct bpf_tramp_nodes *tl, int stack_size,
int run_ctx_off, u8 **branches,
void *image, void *rw_image)
{
@@ -3119,8 +3119,8 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
*/
emit_mov_imm32(&prog, false, BPF_REG_0, 0);
emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
- for (i = 0; i < tl->nr_links; i++) {
- if (invoke_bpf_prog(m, &prog, tl->links[i], stack_size, run_ctx_off, true,
+ for (i = 0; i < tl->nr_nodes; i++) {
+ if (invoke_bpf_prog(m, &prog, tl->nodes[i], stack_size, run_ctx_off, true,
image, rw_image))
return -EINVAL;
@@ -3211,14 +3211,14 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_image,
void *rw_image_end, void *image,
const struct btf_func_model *m, u32 flags,
- struct bpf_tramp_links *tlinks,
+ struct bpf_tramp_nodes *tnodes,
void *func_addr)
{
int i, ret, nr_regs = m->nr_args, stack_size = 0;
int regs_off, func_meta_off, ip_off, run_ctx_off, arg_stack_off, rbx_off;
- struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
- struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
- struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
+ struct bpf_tramp_nodes *fentry = &tnodes[BPF_TRAMP_FENTRY];
+ struct bpf_tramp_nodes *fexit = &tnodes[BPF_TRAMP_FEXIT];
+ struct bpf_tramp_nodes *fmod_ret = &tnodes[BPF_TRAMP_MODIFY_RETURN];
void *orig_call = func_addr;
int cookie_off, cookie_cnt;
u8 **branches = NULL;
@@ -3290,7 +3290,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
ip_off = stack_size;
- cookie_cnt = bpf_fsession_cookie_cnt(tlinks);
+ cookie_cnt = bpf_fsession_cookie_cnt(tnodes);
/* room for session cookies */
stack_size += cookie_cnt * 8;
cookie_off = stack_size;
@@ -3383,7 +3383,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
}
}
- if (bpf_fsession_cnt(tlinks)) {
+ if (bpf_fsession_cnt(tnodes)) {
/* clear all the session cookies' value */
for (int i = 0; i < cookie_cnt; i++)
emit_store_stack_imm64(&prog, BPF_REG_0, -cookie_off + 8 * i, 0);
@@ -3391,15 +3391,15 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
emit_store_stack_imm64(&prog, BPF_REG_0, -8, 0);
}
- if (fentry->nr_links) {
+ if (fentry->nr_nodes) {
if (invoke_bpf(m, &prog, fentry, regs_off, run_ctx_off, func_meta_off,
flags & BPF_TRAMP_F_RET_FENTRY_RET, image, rw_image,
func_meta, cookie_off))
return -EINVAL;
}
- if (fmod_ret->nr_links) {
- branches = kcalloc(fmod_ret->nr_links, sizeof(u8 *),
+ if (fmod_ret->nr_nodes) {
+ branches = kcalloc(fmod_ret->nr_nodes, sizeof(u8 *),
GFP_KERNEL);
if (!branches)
return -ENOMEM;
@@ -3438,7 +3438,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
emit_nops(&prog, X86_PATCH_SIZE);
}
- if (fmod_ret->nr_links) {
+ if (fmod_ret->nr_nodes) {
/* From Intel 64 and IA-32 Architectures Optimization
* Reference Manual, 3.4.1.4 Code Alignment, Assembly/Compiler
* Coding Rule 11: All branch targets should be 16-byte
@@ -3448,7 +3448,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
/* Update the branches saved in invoke_bpf_mod_ret with the
* aligned address of do_fexit.
*/
- for (i = 0; i < fmod_ret->nr_links; i++) {
+ for (i = 0; i < fmod_ret->nr_nodes; i++) {
emit_cond_near_jump(&branches[i], image + (prog - (u8 *)rw_image),
image + (branches[i] - (u8 *)rw_image), X86_JNE);
}
@@ -3456,10 +3456,10 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
/* set the "is_return" flag for fsession */
func_meta |= (1ULL << BPF_TRAMP_IS_RETURN_SHIFT);
- if (bpf_fsession_cnt(tlinks))
+ if (bpf_fsession_cnt(tnodes))
emit_store_stack_imm64(&prog, BPF_REG_0, -func_meta_off, func_meta);
- if (fexit->nr_links) {
+ if (fexit->nr_nodes) {
if (invoke_bpf(m, &prog, fexit, regs_off, run_ctx_off, func_meta_off,
false, image, rw_image, func_meta, cookie_off)) {
ret = -EINVAL;
@@ -3533,7 +3533,7 @@ int arch_protect_bpf_trampoline(void *image, unsigned int size)
int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *image_end,
const struct btf_func_model *m, u32 flags,
- struct bpf_tramp_links *tlinks,
+ struct bpf_tramp_nodes *tnodes,
void *func_addr)
{
void *rw_image, *tmp;
@@ -3548,7 +3548,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
return -ENOMEM;
ret = __arch_prepare_bpf_trampoline(im, rw_image, rw_image + size, image, m,
- flags, tlinks, func_addr);
+ flags, tnodes, func_addr);
if (ret < 0)
goto out;
@@ -3561,7 +3561,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
}
int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
- struct bpf_tramp_links *tlinks, void *func_addr)
+ struct bpf_tramp_nodes *tnodes, void *func_addr)
{
struct bpf_tramp_image im;
void *image;
@@ -3579,7 +3579,7 @@ int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
return -ENOMEM;
ret = __arch_prepare_bpf_trampoline(&im, image, image + PAGE_SIZE, image,
- m, flags, tlinks, func_addr);
+ m, flags, tnodes, func_addr);
bpf_jit_free_exec(image);
return ret;
}
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 1d900f49aff5..f97aa34ee4c2 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1233,9 +1233,9 @@ enum {
#define BPF_TRAMP_COOKIE_INDEX_SHIFT 8
#define BPF_TRAMP_IS_RETURN_SHIFT 63
-struct bpf_tramp_links {
- struct bpf_tramp_link *links[BPF_MAX_TRAMP_LINKS];
- int nr_links;
+struct bpf_tramp_nodes {
+ struct bpf_tramp_node *nodes[BPF_MAX_TRAMP_LINKS];
+ int nr_nodes;
};
struct bpf_tramp_run_ctx;
@@ -1263,13 +1263,13 @@ struct bpf_tramp_run_ctx;
struct bpf_tramp_image;
int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *image_end,
const struct btf_func_model *m, u32 flags,
- struct bpf_tramp_links *tlinks,
+ struct bpf_tramp_nodes *tnodes,
void *func_addr);
void *arch_alloc_bpf_trampoline(unsigned int size);
void arch_free_bpf_trampoline(void *image, unsigned int size);
int __must_check arch_protect_bpf_trampoline(void *image, unsigned int size);
int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
- struct bpf_tramp_links *tlinks, void *func_addr);
+ struct bpf_tramp_nodes *tnodes, void *func_addr);
u64 notrace __bpf_prog_enter_sleepable_recur(struct bpf_prog *prog,
struct bpf_tramp_run_ctx *run_ctx);
@@ -1453,10 +1453,10 @@ static inline int bpf_dynptr_check_off_len(const struct bpf_dynptr_kern *ptr, u6
}
#ifdef CONFIG_BPF_JIT
-int bpf_trampoline_link_prog(struct bpf_tramp_link *link,
+int bpf_trampoline_link_prog(struct bpf_tramp_node *node,
struct bpf_trampoline *tr,
struct bpf_prog *tgt_prog);
-int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
+int bpf_trampoline_unlink_prog(struct bpf_tramp_node *node,
struct bpf_trampoline *tr,
struct bpf_prog *tgt_prog);
struct bpf_trampoline *bpf_trampoline_get(u64 key,
@@ -1540,13 +1540,13 @@ int bpf_jit_charge_modmem(u32 size);
void bpf_jit_uncharge_modmem(u32 size);
bool bpf_prog_has_trampoline(const struct bpf_prog *prog);
#else
-static inline int bpf_trampoline_link_prog(struct bpf_tramp_link *link,
+static inline int bpf_trampoline_link_prog(struct bpf_tramp_node *node,
struct bpf_trampoline *tr,
struct bpf_prog *tgt_prog)
{
return -ENOTSUPP;
}
-static inline int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
+static inline int bpf_trampoline_unlink_prog(struct bpf_tramp_node *node,
struct bpf_trampoline *tr,
struct bpf_prog *tgt_prog)
{
@@ -1865,12 +1865,17 @@ struct bpf_link_ops {
__poll_t (*poll)(struct file *file, struct poll_table_struct *pts);
};
-struct bpf_tramp_link {
- struct bpf_link link;
+struct bpf_tramp_node {
+ struct bpf_link *link;
struct hlist_node tramp_hlist;
u64 cookie;
};
+struct bpf_tramp_link {
+ struct bpf_link link;
+ struct bpf_tramp_node node;
+};
+
struct bpf_shim_tramp_link {
struct bpf_tramp_link link;
struct bpf_trampoline *trampoline;
@@ -2088,8 +2093,8 @@ void bpf_struct_ops_put(const void *kdata);
int bpf_struct_ops_supported(const struct bpf_struct_ops *st_ops, u32 moff);
int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, void *key,
void *value);
-int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_links *tlinks,
- struct bpf_tramp_link *link,
+int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_nodes *tnodes,
+ struct bpf_tramp_node *node,
const struct btf_func_model *model,
void *stub_func,
void **image, u32 *image_off,
@@ -2181,31 +2186,31 @@ static inline void bpf_struct_ops_desc_release(struct bpf_struct_ops_desc *st_op
#endif
-static inline int bpf_fsession_cnt(struct bpf_tramp_links *links)
+static inline int bpf_fsession_cnt(struct bpf_tramp_nodes *nodes)
{
- struct bpf_tramp_links fentries = links[BPF_TRAMP_FENTRY];
+ struct bpf_tramp_nodes fentries = nodes[BPF_TRAMP_FENTRY];
int cnt = 0;
- for (int i = 0; i < links[BPF_TRAMP_FENTRY].nr_links; i++) {
- if (fentries.links[i]->link.prog->expected_attach_type == BPF_TRACE_FSESSION)
+ for (int i = 0; i < nodes[BPF_TRAMP_FENTRY].nr_nodes; i++) {
+ if (fentries.nodes[i]->link->prog->expected_attach_type == BPF_TRACE_FSESSION)
cnt++;
}
return cnt;
}
-static inline bool bpf_prog_calls_session_cookie(struct bpf_tramp_link *link)
+static inline bool bpf_prog_calls_session_cookie(struct bpf_tramp_node *node)
{
- return link->link.prog->call_session_cookie;
+ return node->link->prog->call_session_cookie;
}
-static inline int bpf_fsession_cookie_cnt(struct bpf_tramp_links *links)
+static inline int bpf_fsession_cookie_cnt(struct bpf_tramp_nodes *nodes)
{
- struct bpf_tramp_links fentries = links[BPF_TRAMP_FENTRY];
+ struct bpf_tramp_nodes fentries = nodes[BPF_TRAMP_FENTRY];
int cnt = 0;
- for (int i = 0; i < links[BPF_TRAMP_FENTRY].nr_links; i++) {
- if (bpf_prog_calls_session_cookie(fentries.links[i]))
+ for (int i = 0; i < nodes[BPF_TRAMP_FENTRY].nr_nodes; i++) {
+ if (bpf_prog_calls_session_cookie(fentries.nodes[i]))
cnt++;
}
@@ -2758,6 +2763,9 @@ void bpf_link_init(struct bpf_link *link, enum bpf_link_type type,
void bpf_link_init_sleepable(struct bpf_link *link, enum bpf_link_type type,
const struct bpf_link_ops *ops, struct bpf_prog *prog,
enum bpf_attach_type attach_type, bool sleepable);
+void bpf_tramp_link_init(struct bpf_tramp_link *link, enum bpf_link_type type,
+ const struct bpf_link_ops *ops, struct bpf_prog *prog,
+ enum bpf_attach_type attach_type, u64 cookie);
int bpf_link_prime(struct bpf_link *link, struct bpf_link_primer *primer);
int bpf_link_settle(struct bpf_link_primer *primer);
void bpf_link_cleanup(struct bpf_link_primer *primer);
@@ -3123,6 +3131,12 @@ static inline void bpf_link_init_sleepable(struct bpf_link *link, enum bpf_link_
{
}
+static inline void bpf_tramp_link_init(struct bpf_tramp_link *link, enum bpf_link_type type,
+ const struct bpf_link_ops *ops, struct bpf_prog *prog,
+ enum bpf_attach_type attach_type, u64 cookie)
+{
+}
+
static inline int bpf_link_prime(struct bpf_link *link,
struct bpf_link_primer *primer)
{
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index 05b366b821c3..10a9301615ba 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -594,8 +594,8 @@ const struct bpf_link_ops bpf_struct_ops_link_lops = {
.dealloc = bpf_struct_ops_link_dealloc,
};
-int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_links *tlinks,
- struct bpf_tramp_link *link,
+int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_nodes *tnodes,
+ struct bpf_tramp_node *node,
const struct btf_func_model *model,
void *stub_func,
void **_image, u32 *_image_off,
@@ -605,13 +605,13 @@ int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_links *tlinks,
void *image = *_image;
int size;
- tlinks[BPF_TRAMP_FENTRY].links[0] = link;
- tlinks[BPF_TRAMP_FENTRY].nr_links = 1;
+ tnodes[BPF_TRAMP_FENTRY].nodes[0] = node;
+ tnodes[BPF_TRAMP_FENTRY].nr_nodes = 1;
if (model->ret_size > 0)
flags |= BPF_TRAMP_F_RET_FENTRY_RET;
- size = arch_bpf_trampoline_size(model, flags, tlinks, stub_func);
+ size = arch_bpf_trampoline_size(model, flags, tnodes, stub_func);
if (size <= 0)
return size ? : -EFAULT;
@@ -628,7 +628,7 @@ int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_links *tlinks,
size = arch_prepare_bpf_trampoline(NULL, image + image_off,
image + image_off + size,
- model, flags, tlinks, stub_func);
+ model, flags, tnodes, stub_func);
if (size <= 0) {
if (image != *_image)
bpf_struct_ops_image_free(image);
@@ -693,7 +693,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
const struct btf_type *module_type;
const struct btf_member *member;
const struct btf_type *t = st_ops_desc->type;
- struct bpf_tramp_links *tlinks;
+ struct bpf_tramp_nodes *tnodes;
void *udata, *kdata;
int prog_fd, err;
u32 i, trampoline_start, image_off = 0;
@@ -720,8 +720,8 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
if (uvalue->common.state || refcount_read(&uvalue->common.refcnt))
return -EINVAL;
- tlinks = kzalloc_objs(*tlinks, BPF_TRAMP_MAX);
- if (!tlinks)
+ tnodes = kzalloc_objs(*tnodes, BPF_TRAMP_MAX);
+ if (!tnodes)
return -ENOMEM;
uvalue = (struct bpf_struct_ops_value *)st_map->uvalue;
@@ -820,8 +820,9 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
err = -ENOMEM;
goto reset_unlock;
}
- bpf_link_init(&link->link, BPF_LINK_TYPE_STRUCT_OPS,
- &bpf_struct_ops_link_lops, prog, prog->expected_attach_type);
+ bpf_tramp_link_init(link, BPF_LINK_TYPE_STRUCT_OPS,
+ &bpf_struct_ops_link_lops, prog, prog->expected_attach_type, 0);
+
*plink++ = &link->link;
ksym = kzalloc_obj(*ksym, GFP_USER);
@@ -832,7 +833,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
*pksym++ = ksym;
trampoline_start = image_off;
- err = bpf_struct_ops_prepare_trampoline(tlinks, link,
+ err = bpf_struct_ops_prepare_trampoline(tnodes, &link->node,
&st_ops->func_models[i],
*(void **)(st_ops->cfi_stubs + moff),
&image, &image_off,
@@ -910,7 +911,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
memset(uvalue, 0, map->value_size);
memset(kvalue, 0, map->value_size);
unlock:
- kfree(tlinks);
+ kfree(tnodes);
mutex_unlock(&st_map->lock);
if (!err)
bpf_struct_ops_map_add_ksyms(st_map);
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 274039e36465..6db6d1e74379 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -3209,6 +3209,15 @@ void bpf_link_init(struct bpf_link *link, enum bpf_link_type type,
bpf_link_init_sleepable(link, type, ops, prog, attach_type, false);
}
+void bpf_tramp_link_init(struct bpf_tramp_link *link, enum bpf_link_type type,
+ const struct bpf_link_ops *ops, struct bpf_prog *prog,
+ enum bpf_attach_type attach_type, u64 cookie)
+{
+ bpf_link_init(&link->link, type, ops, prog, attach_type);
+ link->node.link = &link->link;
+ link->node.cookie = cookie;
+}
+
static void bpf_link_free_id(int id)
{
if (!id)
@@ -3502,7 +3511,7 @@ static void bpf_tracing_link_release(struct bpf_link *link)
struct bpf_tracing_link *tr_link =
container_of(link, struct bpf_tracing_link, link.link);
- WARN_ON_ONCE(bpf_trampoline_unlink_prog(&tr_link->link,
+ WARN_ON_ONCE(bpf_trampoline_unlink_prog(&tr_link->link.node,
tr_link->trampoline,
tr_link->tgt_prog));
@@ -3515,8 +3524,7 @@ static void bpf_tracing_link_release(struct bpf_link *link)
static void bpf_tracing_link_dealloc(struct bpf_link *link)
{
- struct bpf_tracing_link *tr_link =
- container_of(link, struct bpf_tracing_link, link.link);
+ struct bpf_tracing_link *tr_link = container_of(link, struct bpf_tracing_link, link.link);
kfree(tr_link);
}
@@ -3524,8 +3532,8 @@ static void bpf_tracing_link_dealloc(struct bpf_link *link)
static void bpf_tracing_link_show_fdinfo(const struct bpf_link *link,
struct seq_file *seq)
{
- struct bpf_tracing_link *tr_link =
- container_of(link, struct bpf_tracing_link, link.link);
+ struct bpf_tracing_link *tr_link = container_of(link, struct bpf_tracing_link, link.link);
+
u32 target_btf_id, target_obj_id;
bpf_trampoline_unpack_key(tr_link->trampoline->key,
@@ -3538,17 +3546,16 @@ static void bpf_tracing_link_show_fdinfo(const struct bpf_link *link,
link->attach_type,
target_obj_id,
target_btf_id,
- tr_link->link.cookie);
+ tr_link->link.node.cookie);
}
static int bpf_tracing_link_fill_link_info(const struct bpf_link *link,
struct bpf_link_info *info)
{
- struct bpf_tracing_link *tr_link =
- container_of(link, struct bpf_tracing_link, link.link);
+ struct bpf_tracing_link *tr_link = container_of(link, struct bpf_tracing_link, link.link);
info->tracing.attach_type = link->attach_type;
- info->tracing.cookie = tr_link->link.cookie;
+ info->tracing.cookie = tr_link->link.node.cookie;
bpf_trampoline_unpack_key(tr_link->trampoline->key,
&info->tracing.target_obj_id,
&info->tracing.target_btf_id);
@@ -3635,9 +3642,9 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
fslink = kzalloc_obj(*fslink, GFP_USER);
if (fslink) {
- bpf_link_init(&fslink->fexit.link, BPF_LINK_TYPE_TRACING,
- &bpf_tracing_link_lops, prog, attach_type);
- fslink->fexit.cookie = bpf_cookie;
+ bpf_tramp_link_init(&fslink->fexit, BPF_LINK_TYPE_TRACING,
+ &bpf_tracing_link_lops, prog, attach_type,
+ bpf_cookie);
link = &fslink->link;
} else {
link = NULL;
@@ -3649,10 +3656,8 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
err = -ENOMEM;
goto out_put_prog;
}
- bpf_link_init(&link->link.link, BPF_LINK_TYPE_TRACING,
- &bpf_tracing_link_lops, prog, attach_type);
-
- link->link.cookie = bpf_cookie;
+ bpf_tramp_link_init(&link->link, BPF_LINK_TYPE_TRACING,
+ &bpf_tracing_link_lops, prog, attach_type, bpf_cookie);
mutex_lock(&prog->aux->dst_mutex);
@@ -3738,7 +3743,7 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
if (err)
goto out_unlock;
- err = bpf_trampoline_link_prog(&link->link, tr, tgt_prog);
+ err = bpf_trampoline_link_prog(&link->link.node, tr, tgt_prog);
if (err) {
bpf_link_cleanup(&link_primer);
link = NULL;
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index a76e093c9092..1a2c99bf491e 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -482,30 +482,29 @@ static struct bpf_trampoline_ops trampoline_ops = {
.modify_fentry = modify_fentry,
};
-static struct bpf_tramp_links *
+static struct bpf_tramp_nodes *
bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total, bool *ip_arg)
{
- struct bpf_tramp_link *link;
- struct bpf_tramp_links *tlinks;
- struct bpf_tramp_link **links;
+ struct bpf_tramp_node *node, **nodes;
+ struct bpf_tramp_nodes *tnodes;
int kind;
*total = 0;
- tlinks = kzalloc_objs(*tlinks, BPF_TRAMP_MAX);
- if (!tlinks)
+ tnodes = kzalloc_objs(*tnodes, BPF_TRAMP_MAX);
+ if (!tnodes)
return ERR_PTR(-ENOMEM);
for (kind = 0; kind < BPF_TRAMP_MAX; kind++) {
- tlinks[kind].nr_links = tr->progs_cnt[kind];
+ tnodes[kind].nr_nodes = tr->progs_cnt[kind];
*total += tr->progs_cnt[kind];
- links = tlinks[kind].links;
+ nodes = tnodes[kind].nodes;
- hlist_for_each_entry(link, &tr->progs_hlist[kind], tramp_hlist) {
- *ip_arg |= link->link.prog->call_get_func_ip;
- *links++ = link;
+ hlist_for_each_entry(node, &tr->progs_hlist[kind], tramp_hlist) {
+ *ip_arg |= node->link->prog->call_get_func_ip;
+ *nodes++ = node;
}
}
- return tlinks;
+ return tnodes;
}
static void bpf_tramp_image_free(struct bpf_tramp_image *im)
@@ -653,14 +652,14 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
struct bpf_trampoline_ops *ops, void *data)
{
struct bpf_tramp_image *im;
- struct bpf_tramp_links *tlinks;
+ struct bpf_tramp_nodes *tnodes;
u32 orig_flags = tr->flags;
bool ip_arg = false;
int err, total, size;
- tlinks = bpf_trampoline_get_progs(tr, &total, &ip_arg);
- if (IS_ERR(tlinks))
- return PTR_ERR(tlinks);
+ tnodes = bpf_trampoline_get_progs(tr, &total, &ip_arg);
+ if (IS_ERR(tnodes))
+ return PTR_ERR(tnodes);
if (total == 0) {
err = ops->unregister_fentry(tr, orig_flags, tr->cur_image->image, data);
@@ -672,8 +671,8 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
/* clear all bits except SHARE_IPMODIFY and TAIL_CALL_CTX */
tr->flags &= (BPF_TRAMP_F_SHARE_IPMODIFY | BPF_TRAMP_F_TAIL_CALL_CTX);
- if (tlinks[BPF_TRAMP_FEXIT].nr_links ||
- tlinks[BPF_TRAMP_MODIFY_RETURN].nr_links) {
+ if (tnodes[BPF_TRAMP_FEXIT].nr_nodes ||
+ tnodes[BPF_TRAMP_MODIFY_RETURN].nr_nodes) {
/* NOTE: BPF_TRAMP_F_RESTORE_REGS and BPF_TRAMP_F_SKIP_FRAME
* should not be set together.
*/
@@ -704,7 +703,7 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
#endif
size = arch_bpf_trampoline_size(&tr->func.model, tr->flags,
- tlinks, tr->func.addr);
+ tnodes, tr->func.addr);
if (size < 0) {
err = size;
goto out;
@@ -722,7 +721,7 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
}
err = arch_prepare_bpf_trampoline(im, im->image, im->image + size,
- &tr->func.model, tr->flags, tlinks,
+ &tr->func.model, tr->flags, tnodes,
tr->func.addr);
if (err < 0)
goto out_free;
@@ -760,7 +759,7 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
/* If any error happens, restore previous flags */
if (err)
tr->flags = orig_flags;
- kfree(tlinks);
+ kfree(tnodes);
return err;
out_free:
@@ -810,7 +809,7 @@ static int bpf_freplace_check_tgt_prog(struct bpf_prog *tgt_prog)
return 0;
}
-static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link,
+static int __bpf_trampoline_link_prog(struct bpf_tramp_node *node,
struct bpf_trampoline *tr,
struct bpf_prog *tgt_prog,
struct bpf_trampoline_ops *ops,
@@ -818,12 +817,12 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link,
{
struct bpf_fsession_link *fslink = NULL;
enum bpf_tramp_prog_type kind;
- struct bpf_tramp_link *link_exiting;
+ struct bpf_tramp_node *node_existing;
struct hlist_head *prog_list;
int err = 0;
int cnt = 0, i;
- kind = bpf_attach_type_to_tramp(link->link.prog);
+ kind = bpf_attach_type_to_tramp(node->link->prog);
if (tr->extension_prog)
/* cannot attach fentry/fexit if extension prog is attached.
* cannot overwrite extension prog either.
@@ -840,10 +839,10 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link,
err = bpf_freplace_check_tgt_prog(tgt_prog);
if (err)
return err;
- tr->extension_prog = link->link.prog;
+ tr->extension_prog = node->link->prog;
return bpf_arch_text_poke(tr->func.addr, BPF_MOD_NOP,
BPF_MOD_JUMP, NULL,
- link->link.prog->bpf_func);
+ node->link->prog->bpf_func);
}
if (kind == BPF_TRAMP_FSESSION) {
prog_list = &tr->progs_hlist[BPF_TRAMP_FENTRY];
@@ -853,31 +852,31 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link,
}
if (cnt >= BPF_MAX_TRAMP_LINKS)
return -E2BIG;
- if (!hlist_unhashed(&link->tramp_hlist))
+ if (!hlist_unhashed(&node->tramp_hlist))
/* prog already linked */
return -EBUSY;
- hlist_for_each_entry(link_exiting, prog_list, tramp_hlist) {
- if (link_exiting->link.prog != link->link.prog)
+ hlist_for_each_entry(node_existing, prog_list, tramp_hlist) {
+ if (node_existing->link->prog != node->link->prog)
continue;
/* prog already linked */
return -EBUSY;
}
- hlist_add_head(&link->tramp_hlist, prog_list);
+ hlist_add_head(&node->tramp_hlist, prog_list);
if (kind == BPF_TRAMP_FSESSION) {
tr->progs_cnt[BPF_TRAMP_FENTRY]++;
- fslink = container_of(link, struct bpf_fsession_link, link.link);
- hlist_add_head(&fslink->fexit.tramp_hlist, &tr->progs_hlist[BPF_TRAMP_FEXIT]);
+ fslink = container_of(node, struct bpf_fsession_link, link.link.node);
+ hlist_add_head(&fslink->fexit.node.tramp_hlist, &tr->progs_hlist[BPF_TRAMP_FEXIT]);
tr->progs_cnt[BPF_TRAMP_FEXIT]++;
} else {
tr->progs_cnt[kind]++;
}
err = bpf_trampoline_update(tr, true /* lock_direct_mutex */, ops, data);
if (err) {
- hlist_del_init(&link->tramp_hlist);
+ hlist_del_init(&node->tramp_hlist);
if (kind == BPF_TRAMP_FSESSION) {
tr->progs_cnt[BPF_TRAMP_FENTRY]--;
- hlist_del_init(&fslink->fexit.tramp_hlist);
+ hlist_del_init(&fslink->fexit.node.tramp_hlist);
tr->progs_cnt[BPF_TRAMP_FEXIT]--;
} else {
tr->progs_cnt[kind]--;
@@ -886,19 +885,19 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link,
return err;
}
-int bpf_trampoline_link_prog(struct bpf_tramp_link *link,
+int bpf_trampoline_link_prog(struct bpf_tramp_node *node,
struct bpf_trampoline *tr,
struct bpf_prog *tgt_prog)
{
int err;
trampoline_lock(tr);
- err = __bpf_trampoline_link_prog(link, tr, tgt_prog, &trampoline_ops, NULL);
+ err = __bpf_trampoline_link_prog(node, tr, tgt_prog, &trampoline_ops, NULL);
trampoline_unlock(tr);
return err;
}
-static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
+static int __bpf_trampoline_unlink_prog(struct bpf_tramp_node *node,
struct bpf_trampoline *tr,
struct bpf_prog *tgt_prog,
struct bpf_trampoline_ops *ops,
@@ -907,7 +906,7 @@ static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
enum bpf_tramp_prog_type kind;
int err;
- kind = bpf_attach_type_to_tramp(link->link.prog);
+ kind = bpf_attach_type_to_tramp(node->link->prog);
if (kind == BPF_TRAMP_REPLACE) {
WARN_ON_ONCE(!tr->extension_prog);
err = bpf_arch_text_poke(tr->func.addr, BPF_MOD_JUMP,
@@ -919,26 +918,26 @@ static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
return err;
} else if (kind == BPF_TRAMP_FSESSION) {
struct bpf_fsession_link *fslink =
- container_of(link, struct bpf_fsession_link, link.link);
+ container_of(node, struct bpf_fsession_link, link.link.node);
- hlist_del_init(&fslink->fexit.tramp_hlist);
+ hlist_del_init(&fslink->fexit.node.tramp_hlist);
tr->progs_cnt[BPF_TRAMP_FEXIT]--;
kind = BPF_TRAMP_FENTRY;
}
- hlist_del_init(&link->tramp_hlist);
+ hlist_del_init(&node->tramp_hlist);
tr->progs_cnt[kind]--;
return bpf_trampoline_update(tr, true /* lock_direct_mutex */, ops, data);
}
/* bpf_trampoline_unlink_prog() should never fail. */
-int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
+int bpf_trampoline_unlink_prog(struct bpf_tramp_node *node,
struct bpf_trampoline *tr,
struct bpf_prog *tgt_prog)
{
int err;
trampoline_lock(tr);
- err = __bpf_trampoline_unlink_prog(link, tr, tgt_prog, &trampoline_ops, NULL);
+ err = __bpf_trampoline_unlink_prog(node, tr, tgt_prog, &trampoline_ops, NULL);
trampoline_unlock(tr);
return err;
}
@@ -953,7 +952,7 @@ static void bpf_shim_tramp_link_release(struct bpf_link *link)
if (!shim_link->trampoline)
return;
- WARN_ON_ONCE(bpf_trampoline_unlink_prog(&shim_link->link, shim_link->trampoline, NULL));
+ WARN_ON_ONCE(bpf_trampoline_unlink_prog(&shim_link->link.node, shim_link->trampoline, NULL));
bpf_trampoline_put(shim_link->trampoline);
}
@@ -999,8 +998,8 @@ static struct bpf_shim_tramp_link *cgroup_shim_alloc(const struct bpf_prog *prog
p->type = BPF_PROG_TYPE_LSM;
p->expected_attach_type = BPF_LSM_MAC;
bpf_prog_inc(p);
- bpf_link_init(&shim_link->link.link, BPF_LINK_TYPE_UNSPEC,
- &bpf_shim_tramp_link_lops, p, attach_type);
+ bpf_tramp_link_init(&shim_link->link, BPF_LINK_TYPE_UNSPEC,
+ &bpf_shim_tramp_link_lops, p, attach_type, 0);
bpf_cgroup_atype_get(p->aux->attach_btf_id, cgroup_atype);
return shim_link;
@@ -1009,15 +1008,15 @@ static struct bpf_shim_tramp_link *cgroup_shim_alloc(const struct bpf_prog *prog
static struct bpf_shim_tramp_link *cgroup_shim_find(struct bpf_trampoline *tr,
bpf_func_t bpf_func)
{
- struct bpf_tramp_link *link;
+ struct bpf_tramp_node *node;
int kind;
for (kind = 0; kind < BPF_TRAMP_MAX; kind++) {
- hlist_for_each_entry(link, &tr->progs_hlist[kind], tramp_hlist) {
- struct bpf_prog *p = link->link.prog;
+ hlist_for_each_entry(node, &tr->progs_hlist[kind], tramp_hlist) {
+ struct bpf_prog *p = node->link->prog;
if (p->bpf_func == bpf_func)
- return container_of(link, struct bpf_shim_tramp_link, link);
+ return container_of(node, struct bpf_shim_tramp_link, link.node);
}
}
@@ -1067,7 +1066,7 @@ int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog,
goto err;
}
- err = __bpf_trampoline_link_prog(&shim_link->link, tr, NULL, &trampoline_ops, NULL);
+ err = __bpf_trampoline_link_prog(&shim_link->link.node, tr, NULL, &trampoline_ops, NULL);
if (err)
goto err;
@@ -1382,7 +1381,7 @@ bpf_trampoline_exit_t bpf_trampoline_exit(const struct bpf_prog *prog)
int __weak
arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *image_end,
const struct btf_func_model *m, u32 flags,
- struct bpf_tramp_links *tlinks,
+ struct bpf_tramp_nodes *tnodes,
void *func_addr)
{
return -ENOTSUPP;
@@ -1416,7 +1415,7 @@ int __weak arch_protect_bpf_trampoline(void *image, unsigned int size)
}
int __weak arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
- struct bpf_tramp_links *tlinks, void *func_addr)
+ struct bpf_tramp_nodes *tnodes, void *func_addr)
{
return -ENOTSUPP;
}
diff --git a/net/bpf/bpf_dummy_struct_ops.c b/net/bpf/bpf_dummy_struct_ops.c
index ae5a54c350b9..191a6b3ee254 100644
--- a/net/bpf/bpf_dummy_struct_ops.c
+++ b/net/bpf/bpf_dummy_struct_ops.c
@@ -132,7 +132,7 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
const struct bpf_struct_ops *st_ops = &bpf_bpf_dummy_ops;
const struct btf_type *func_proto;
struct bpf_dummy_ops_test_args *args;
- struct bpf_tramp_links *tlinks = NULL;
+ struct bpf_tramp_nodes *tnodes = NULL;
struct bpf_tramp_link *link = NULL;
void *image = NULL;
unsigned int op_idx;
@@ -158,8 +158,8 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
if (err)
goto out;
- tlinks = kzalloc_objs(*tlinks, BPF_TRAMP_MAX);
- if (!tlinks) {
+ tnodes = kzalloc_objs(*tnodes, BPF_TRAMP_MAX);
+ if (!tnodes) {
err = -ENOMEM;
goto out;
}
@@ -171,11 +171,11 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
}
/* prog doesn't take the ownership of the reference from caller */
bpf_prog_inc(prog);
- bpf_link_init(&link->link, BPF_LINK_TYPE_STRUCT_OPS, &bpf_struct_ops_link_lops, prog,
- prog->expected_attach_type);
+ bpf_tramp_link_init(link, BPF_LINK_TYPE_STRUCT_OPS, &bpf_struct_ops_link_lops,
+ prog, prog->expected_attach_type, 0);
op_idx = prog->expected_attach_type;
- err = bpf_struct_ops_prepare_trampoline(tlinks, link,
+ err = bpf_struct_ops_prepare_trampoline(tnodes, &link->node,
&st_ops->func_models[op_idx],
&dummy_ops_test_ret_function,
&image, &image_off,
@@ -198,7 +198,7 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
bpf_struct_ops_image_free(image);
if (link)
bpf_link_put(&link->link);
- kfree(tlinks);
+ kfree(tnodes);
return err;
}
--
2.53.0
* [PATCHv4 bpf-next 06/25] bpf: Factor fsession link to use struct bpf_tramp_node
From: Jiri Olsa @ 2026-03-24 8:18 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Now that we have split the trampoline attachment object (bpf_tramp_node) from
the link object (bpf_tramp_link), we can use bpf_tramp_node as the fsession's
fexit attachment object and get rid of the bpf_fsession_link object.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/bpf.h | 6 +-----
kernel/bpf/syscall.c | 21 ++++++---------------
kernel/bpf/trampoline.c | 14 +++++++-------
3 files changed, 14 insertions(+), 27 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index f97aa34ee4c2..d536640aef41 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1883,15 +1883,11 @@ struct bpf_shim_tramp_link {
struct bpf_tracing_link {
struct bpf_tramp_link link;
+ struct bpf_tramp_node fexit;
struct bpf_trampoline *trampoline;
struct bpf_prog *tgt_prog;
};
-struct bpf_fsession_link {
- struct bpf_tracing_link link;
- struct bpf_tramp_link fexit;
-};
-
struct bpf_raw_tp_link {
struct bpf_link link;
struct bpf_raw_event_map *btp;
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 6db6d1e74379..003ad95940c9 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -3637,21 +3637,7 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
key = bpf_trampoline_compute_key(tgt_prog, NULL, btf_id);
}
- if (prog->expected_attach_type == BPF_TRACE_FSESSION) {
- struct bpf_fsession_link *fslink;
-
- fslink = kzalloc_obj(*fslink, GFP_USER);
- if (fslink) {
- bpf_tramp_link_init(&fslink->fexit, BPF_LINK_TYPE_TRACING,
- &bpf_tracing_link_lops, prog, attach_type,
- bpf_cookie);
- link = &fslink->link;
- } else {
- link = NULL;
- }
- } else {
- link = kzalloc_obj(*link, GFP_USER);
- }
+ link = kzalloc_obj(*link, GFP_USER);
if (!link) {
err = -ENOMEM;
goto out_put_prog;
@@ -3659,6 +3645,11 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
bpf_tramp_link_init(&link->link, BPF_LINK_TYPE_TRACING,
&bpf_tracing_link_lops, prog, attach_type, bpf_cookie);
+ if (prog->expected_attach_type == BPF_TRACE_FSESSION) {
+ link->fexit.link = &link->link.link;
+ link->fexit.cookie = bpf_cookie;
+ }
+
mutex_lock(&prog->aux->dst_mutex);
/* There are a few possible cases here:
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 1a2c99bf491e..de4598984e3e 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -815,7 +815,7 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_node *node,
struct bpf_trampoline_ops *ops,
void *data)
{
- struct bpf_fsession_link *fslink = NULL;
+ struct bpf_tracing_link *tr_link = NULL;
enum bpf_tramp_prog_type kind;
struct bpf_tramp_node *node_existing;
struct hlist_head *prog_list;
@@ -865,8 +865,8 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_node *node,
hlist_add_head(&node->tramp_hlist, prog_list);
if (kind == BPF_TRAMP_FSESSION) {
tr->progs_cnt[BPF_TRAMP_FENTRY]++;
- fslink = container_of(node, struct bpf_fsession_link, link.link.node);
- hlist_add_head(&fslink->fexit.node.tramp_hlist, &tr->progs_hlist[BPF_TRAMP_FEXIT]);
+ tr_link = container_of(node, struct bpf_tracing_link, link.node);
+ hlist_add_head(&tr_link->fexit.tramp_hlist, &tr->progs_hlist[BPF_TRAMP_FEXIT]);
tr->progs_cnt[BPF_TRAMP_FEXIT]++;
} else {
tr->progs_cnt[kind]++;
@@ -876,7 +876,7 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_node *node,
hlist_del_init(&node->tramp_hlist);
if (kind == BPF_TRAMP_FSESSION) {
tr->progs_cnt[BPF_TRAMP_FENTRY]--;
- hlist_del_init(&fslink->fexit.node.tramp_hlist);
+ hlist_del_init(&tr_link->fexit.tramp_hlist);
tr->progs_cnt[BPF_TRAMP_FEXIT]--;
} else {
tr->progs_cnt[kind]--;
@@ -917,10 +917,10 @@ static int __bpf_trampoline_unlink_prog(struct bpf_tramp_node *node,
tgt_prog->aux->is_extended = false;
return err;
} else if (kind == BPF_TRAMP_FSESSION) {
- struct bpf_fsession_link *fslink =
- container_of(node, struct bpf_fsession_link, link.link.node);
+ struct bpf_tracing_link *tr_link =
+ container_of(node, struct bpf_tracing_link, link.node);
- hlist_del_init(&fslink->fexit.node.tramp_hlist);
+ hlist_del_init(&tr_link->fexit.tramp_hlist);
tr->progs_cnt[BPF_TRAMP_FEXIT]--;
kind = BPF_TRAMP_FENTRY;
}
--
2.53.0
* [PATCHv4 bpf-next 07/25] bpf: Add multi tracing attach types
From: Jiri Olsa @ 2026-03-24 8:18 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding new program attach types for multi tracing attachment:
BPF_TRACE_FENTRY_MULTI
BPF_TRACE_FEXIT_MULTI
and their base support in verifier code.
Programs with these attach types will use a dedicated link attachment
interface introduced in the following changes.
This was suggested by Andrii some (long) time ago and turned out
to be easier than having a special program flag for that.
BPF programs with these types have the 'bpf_multi_func' function set as
their attach_btf_id and keep a module reference when one is specified
by attach_prog_fd.
They are also accepted as sleepable programs during verification; the
real validation of the specific BTF IDs/functions will happen during
the multi link attachment in the following changes.
Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/bpf.h | 5 +++++
include/linux/btf_ids.h | 1 +
include/uapi/linux/bpf.h | 2 ++
kernel/bpf/btf.c | 2 ++
kernel/bpf/syscall.c | 33 ++++++++++++++++++++++++----
kernel/bpf/trampoline.c | 5 ++++-
kernel/bpf/verifier.c | 40 +++++++++++++++++++++++++++++++++-
net/bpf/test_run.c | 2 ++
tools/include/uapi/linux/bpf.h | 2 ++
tools/lib/bpf/libbpf.c | 2 ++
10 files changed, 88 insertions(+), 6 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index d536640aef41..4628f2bf3a5b 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2069,6 +2069,11 @@ static inline void bpf_prog_put_recursion_context(struct bpf_prog *prog)
#endif
}
+static inline bool is_tracing_multi(enum bpf_attach_type type)
+{
+ return type == BPF_TRACE_FENTRY_MULTI || type == BPF_TRACE_FEXIT_MULTI;
+}
+
#if defined(CONFIG_BPF_JIT) && defined(CONFIG_BPF_SYSCALL)
/* This macro helps developer to register a struct_ops type and generate
* type information correctly. Developers should use this macro to register
diff --git a/include/linux/btf_ids.h b/include/linux/btf_ids.h
index 139bdececdcf..eb2c4432856d 100644
--- a/include/linux/btf_ids.h
+++ b/include/linux/btf_ids.h
@@ -284,5 +284,6 @@ extern u32 bpf_cgroup_btf_id[];
extern u32 bpf_local_storage_map_btf_id[];
extern u32 btf_bpf_map_id[];
extern u32 bpf_kmem_cache_btf_id[];
+extern u32 bpf_multi_func_btf_id[];
#endif
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index c8d400b7680a..68600972a778 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1154,6 +1154,8 @@ enum bpf_attach_type {
BPF_TRACE_KPROBE_SESSION,
BPF_TRACE_UPROBE_SESSION,
BPF_TRACE_FSESSION,
+ BPF_TRACE_FENTRY_MULTI,
+ BPF_TRACE_FEXIT_MULTI,
__MAX_BPF_ATTACH_TYPE
};
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index c5876c91face..60bef23e8b06 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6230,6 +6230,8 @@ static int btf_validate_prog_ctx_type(struct bpf_verifier_log *log, const struct
case BPF_TRACE_FEXIT:
case BPF_MODIFY_RETURN:
case BPF_TRACE_FSESSION:
+ case BPF_TRACE_FENTRY_MULTI:
+ case BPF_TRACE_FEXIT_MULTI:
/* allow u64* as ctx */
if (btf_is_int(t) && t->size == 8)
return 0;
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 003ad95940c9..2680740e9c09 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -41,6 +41,7 @@
#include <linux/overflow.h>
#include <linux/cookie.h>
#include <linux/verification.h>
+#include <linux/btf_ids.h>
#include <net/netfilter/nf_bpf_link.h>
#include <net/netkit.h>
@@ -2653,7 +2654,8 @@ static int
bpf_prog_load_check_attach(enum bpf_prog_type prog_type,
enum bpf_attach_type expected_attach_type,
struct btf *attach_btf, u32 btf_id,
- struct bpf_prog *dst_prog)
+ struct bpf_prog *dst_prog,
+ bool multi_func)
{
if (btf_id) {
if (btf_id > BTF_MAX_TYPE)
@@ -2673,6 +2675,14 @@ bpf_prog_load_check_attach(enum bpf_prog_type prog_type,
}
}
+ if (multi_func) {
+ if (prog_type != BPF_PROG_TYPE_TRACING)
+ return -EINVAL;
+ if (!attach_btf || btf_id)
+ return -EINVAL;
+ return 0;
+ }
+
if (attach_btf && (!btf_id || dst_prog))
return -EINVAL;
@@ -2865,6 +2875,16 @@ static int bpf_prog_mark_insn_arrays_ready(struct bpf_prog *prog)
return 0;
}
+#define DEFINE_BPF_MULTI_FUNC(args...) \
+ extern int bpf_multi_func(args); \
+ int __init bpf_multi_func(args) { return 0; }
+
+DEFINE_BPF_MULTI_FUNC(unsigned long a1, unsigned long a2,
+ unsigned long a3, unsigned long a4,
+ unsigned long a5, unsigned long a6)
+
+BTF_ID_LIST_GLOBAL_SINGLE(bpf_multi_func_btf_id, func, bpf_multi_func)
+
/* last field in 'union bpf_attr' used by this command */
#define BPF_PROG_LOAD_LAST_FIELD keyring_id
@@ -2877,6 +2897,7 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
bool bpf_cap;
int err;
char license[128];
+ bool multi_func;
if (CHECK_ATTR(BPF_PROG_LOAD))
return -EINVAL;
@@ -2943,6 +2964,8 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
if (is_perfmon_prog_type(type) && !bpf_token_capable(token, CAP_PERFMON))
goto put_token;
+ multi_func = is_tracing_multi(attr->expected_attach_type);
+
/* attach_prog_fd/attach_btf_obj_fd can specify fd of either bpf_prog
* or btf, we need to check which one it is
*/
@@ -2964,7 +2987,7 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
goto put_token;
}
}
- } else if (attr->attach_btf_id) {
+ } else if (attr->attach_btf_id || multi_func) {
/* fall back to vmlinux BTF, if BTF type ID is specified */
attach_btf = bpf_get_btf_vmlinux();
if (IS_ERR(attach_btf)) {
@@ -2980,7 +3003,7 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
if (bpf_prog_load_check_attach(type, attr->expected_attach_type,
attach_btf, attr->attach_btf_id,
- dst_prog)) {
+ dst_prog, multi_func)) {
if (dst_prog)
bpf_prog_put(dst_prog);
if (attach_btf)
@@ -3003,7 +3026,7 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
prog->expected_attach_type = attr->expected_attach_type;
prog->sleepable = !!(attr->prog_flags & BPF_F_SLEEPABLE);
prog->aux->attach_btf = attach_btf;
- prog->aux->attach_btf_id = attr->attach_btf_id;
+ prog->aux->attach_btf_id = multi_func ? bpf_multi_func_btf_id[0] : attr->attach_btf_id;
prog->aux->dst_prog = dst_prog;
prog->aux->dev_bound = !!attr->prog_ifindex;
prog->aux->xdp_has_frags = attr->prog_flags & BPF_F_XDP_HAS_FRAGS;
@@ -4365,6 +4388,8 @@ attach_type_to_prog_type(enum bpf_attach_type attach_type)
case BPF_TRACE_FENTRY:
case BPF_TRACE_FEXIT:
case BPF_TRACE_FSESSION:
+ case BPF_TRACE_FENTRY_MULTI:
+ case BPF_TRACE_FEXIT_MULTI:
case BPF_MODIFY_RETURN:
return BPF_PROG_TYPE_TRACING;
case BPF_LSM_MAC:
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index de4598984e3e..a9e328c0a1b3 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -182,7 +182,8 @@ bool bpf_prog_has_trampoline(const struct bpf_prog *prog)
switch (ptype) {
case BPF_PROG_TYPE_TRACING:
if (eatype == BPF_TRACE_FENTRY || eatype == BPF_TRACE_FEXIT ||
- eatype == BPF_MODIFY_RETURN || eatype == BPF_TRACE_FSESSION)
+ eatype == BPF_MODIFY_RETURN || eatype == BPF_TRACE_FSESSION ||
+ eatype == BPF_TRACE_FENTRY_MULTI || eatype == BPF_TRACE_FEXIT_MULTI)
return true;
return false;
case BPF_PROG_TYPE_LSM:
@@ -771,10 +772,12 @@ static enum bpf_tramp_prog_type bpf_attach_type_to_tramp(struct bpf_prog *prog)
{
switch (prog->expected_attach_type) {
case BPF_TRACE_FENTRY:
+ case BPF_TRACE_FENTRY_MULTI:
return BPF_TRAMP_FENTRY;
case BPF_MODIFY_RETURN:
return BPF_TRAMP_MODIFY_RETURN;
case BPF_TRACE_FEXIT:
+ case BPF_TRACE_FEXIT_MULTI:
return BPF_TRAMP_FEXIT;
case BPF_TRACE_FSESSION:
return BPF_TRAMP_FSESSION;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index cd008b146ee5..498132da12e7 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -17981,6 +17981,8 @@ static bool return_retval_range(struct bpf_verifier_env *env, struct bpf_retval_
case BPF_TRACE_FENTRY:
case BPF_TRACE_FEXIT:
case BPF_TRACE_FSESSION:
+ case BPF_TRACE_FENTRY_MULTI:
+ case BPF_TRACE_FEXIT_MULTI:
*range = retval_range(0, 0);
break;
case BPF_TRACE_RAW_TP:
@@ -24174,6 +24176,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
insn->imm == BPF_FUNC_get_func_ret) {
if (eatype == BPF_TRACE_FEXIT ||
eatype == BPF_TRACE_FSESSION ||
+ eatype == BPF_TRACE_FEXIT_MULTI ||
eatype == BPF_MODIFY_RETURN) {
/* Load nr_args from ctx - 8 */
insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
@@ -25079,6 +25082,11 @@ static int check_attach_modify_return(unsigned long addr, const char *func_name)
#endif /* CONFIG_FUNCTION_ERROR_INJECTION */
+static bool is_tracing_multi_id(const struct bpf_prog *prog, u32 btf_id)
+{
+ return is_tracing_multi(prog->expected_attach_type) && bpf_multi_func_btf_id[0] == btf_id;
+}
+
int bpf_check_attach_target(struct bpf_verifier_log *log,
const struct bpf_prog *prog,
const struct bpf_prog *tgt_prog,
@@ -25201,6 +25209,8 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
prog_extension &&
(tgt_prog->expected_attach_type == BPF_TRACE_FENTRY ||
tgt_prog->expected_attach_type == BPF_TRACE_FEXIT ||
+ tgt_prog->expected_attach_type == BPF_TRACE_FENTRY_MULTI ||
+ tgt_prog->expected_attach_type == BPF_TRACE_FEXIT_MULTI ||
tgt_prog->expected_attach_type == BPF_TRACE_FSESSION)) {
/* Program extensions can extend all program types
* except fentry/fexit. The reason is the following.
@@ -25301,6 +25311,8 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
case BPF_TRACE_FENTRY:
case BPF_TRACE_FEXIT:
case BPF_TRACE_FSESSION:
+ case BPF_TRACE_FENTRY_MULTI:
+ case BPF_TRACE_FEXIT_MULTI:
if (prog->expected_attach_type == BPF_TRACE_FSESSION &&
!bpf_jit_supports_fsession()) {
bpf_log(log, "JIT does not support fsession\n");
@@ -25330,7 +25342,17 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
if (ret < 0)
return ret;
- if (tgt_prog) {
+ /* *.multi programs don't need an address during program
+ * verification, we just take the module ref if needed.
+ */
+ if (is_tracing_multi_id(prog, btf_id)) {
+ if (btf_is_module(btf)) {
+ mod = btf_try_get_module(btf);
+ if (!mod)
+ return -ENOENT;
+ }
+ addr = 0;
+ } else if (tgt_prog) {
if (subprog == 0)
addr = (long) tgt_prog->bpf_func;
else
@@ -25358,6 +25380,12 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
ret = -EINVAL;
switch (prog->type) {
case BPF_PROG_TYPE_TRACING:
+ /* *.multi sleepable programs will pass initial sleepable check,
+ * the actual attached btf ids are checked later during the link
+ * attachment.
+ */
+ if (is_tracing_multi_id(prog, btf_id))
+ ret = 0;
if (!check_attach_sleepable(btf_id, addr, tname))
ret = 0;
/* fentry/fexit/fmod_ret progs can also be sleepable if they are
@@ -25467,6 +25495,8 @@ static bool can_be_sleepable(struct bpf_prog *prog)
case BPF_MODIFY_RETURN:
case BPF_TRACE_ITER:
case BPF_TRACE_FSESSION:
+ case BPF_TRACE_FENTRY_MULTI:
+ case BPF_TRACE_FEXIT_MULTI:
return true;
default:
return false;
@@ -25556,6 +25586,14 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
return -EINVAL;
}
+ /*
+ * We don't get trampoline for tracing_multi programs at this point,
+ * it's done when tracing_multi link is created.
+ */
+ if (prog->type == BPF_PROG_TYPE_TRACING &&
+ is_tracing_multi(prog->expected_attach_type))
+ return 0;
+
key = bpf_trampoline_compute_key(tgt_prog, prog->aux->attach_btf, btf_id);
tr = bpf_trampoline_get(key, &tgt_info);
if (!tr)
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 56bc8dc1e281..df7ae2c28a3b 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -686,6 +686,8 @@ int bpf_prog_test_run_tracing(struct bpf_prog *prog,
case BPF_TRACE_FENTRY:
case BPF_TRACE_FEXIT:
case BPF_TRACE_FSESSION:
+ case BPF_TRACE_FENTRY_MULTI:
+ case BPF_TRACE_FEXIT_MULTI:
if (bpf_fentry_test1(1) != 2 ||
bpf_fentry_test2(2, 3) != 5 ||
bpf_fentry_test3(4, 5, 6) != 15 ||
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 5e38b4887de6..61f0fe5bc0aa 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1154,6 +1154,8 @@ enum bpf_attach_type {
BPF_TRACE_KPROBE_SESSION,
BPF_TRACE_UPROBE_SESSION,
BPF_TRACE_FSESSION,
+ BPF_TRACE_FENTRY_MULTI,
+ BPF_TRACE_FEXIT_MULTI,
__MAX_BPF_ATTACH_TYPE
};
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 1eaa7527d4da..0f035e0db2a0 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -136,6 +136,8 @@ static const char * const attach_type_name[] = {
[BPF_NETKIT_PEER] = "netkit_peer",
[BPF_TRACE_KPROBE_SESSION] = "trace_kprobe_session",
[BPF_TRACE_UPROBE_SESSION] = "trace_uprobe_session",
+ [BPF_TRACE_FENTRY_MULTI] = "trace_fentry_multi",
+ [BPF_TRACE_FEXIT_MULTI] = "trace_fexit_multi",
};
static const char * const link_type_name[] = {
--
2.53.0
^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCHv4 bpf-next 08/25] bpf: Move sleepable verification code to btf_id_allow_sleepable
2026-03-24 8:18 [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Jiri Olsa
` (6 preceding siblings ...)
2026-03-24 8:18 ` [PATCHv4 bpf-next 07/25] bpf: Add multi tracing attach types Jiri Olsa
@ 2026-03-24 8:18 ` Jiri Olsa
2026-03-24 8:18 ` [PATCHv4 bpf-next 09/25] bpf: Add bpf_trampoline_multi_attach/detach functions Jiri Olsa
` (17 subsequent siblings)
25 siblings, 0 replies; 41+ messages in thread
From: Jiri Olsa @ 2026-03-24 8:18 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Move the sleepable verification code into a new btf_id_allow_sleepable
function. It will be used in following changes.
Adding code to retrieve the type's name directly in the new function
instead of passing it in from bpf_check_attach_target, because the
function will be called from another place in following changes and
it's easier to retrieve the name in here.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/bpf_verifier.h | 3 ++
kernel/bpf/verifier.c | 79 +++++++++++++++++++++---------------
2 files changed, 50 insertions(+), 32 deletions(-)
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 090aa26d1c98..186726fcf52a 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -932,6 +932,9 @@ static inline void bpf_trampoline_unpack_key(u64 key, u32 *obj_id, u32 *btf_id)
*btf_id = key & 0x7FFFFFFF;
}
+int btf_id_allow_sleepable(u32 btf_id, unsigned long addr, const struct bpf_prog *prog,
+ const struct btf *btf);
+
int bpf_check_attach_target(struct bpf_verifier_log *log,
const struct bpf_prog *prog,
const struct bpf_prog *tgt_prog,
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 498132da12e7..0be54f500c66 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -25087,6 +25087,52 @@ static bool is_tracing_multi_id(const struct bpf_prog *prog, u32 btf_id)
return is_tracing_multi(prog->expected_attach_type) && bpf_multi_func_btf_id[0] == btf_id;
}
+int btf_id_allow_sleepable(u32 btf_id, unsigned long addr, const struct bpf_prog *prog,
+ const struct btf *btf)
+{
+ const struct btf_type *t;
+ const char *tname;
+
+ switch (prog->type) {
+ case BPF_PROG_TYPE_TRACING:
+ t = btf_type_by_id(btf, btf_id);
+ if (!t)
+ return -EINVAL;
+ tname = btf_name_by_offset(btf, t->name_off);
+ if (!tname)
+ return -EINVAL;
+
+ /* *.multi sleepable programs will pass initial sleepable check,
+ * the actual attached btf ids are checked later during the link
+ * attachment.
+ */
+ if (is_tracing_multi_id(prog, btf_id))
+ return 0;
+ if (!check_attach_sleepable(btf_id, addr, tname))
+ return 0;
+ /* fentry/fexit/fmod_ret progs can also be sleepable if they are
+ * in the fmodret id set with the KF_SLEEPABLE flag.
+ */
+ else {
+ u32 *flags = btf_kfunc_is_modify_return(btf, btf_id, prog);
+
+ if (flags && (*flags & KF_SLEEPABLE))
+ return 0;
+ }
+ break;
+ case BPF_PROG_TYPE_LSM:
+ /* LSM progs check that they are attached to bpf_lsm_*() funcs.
+ * Only some of them are sleepable.
+ */
+ if (bpf_lsm_is_sleepable_hook(btf_id))
+ return 0;
+ break;
+ default:
+ break;
+ }
+ return -EINVAL;
+}
+
int bpf_check_attach_target(struct bpf_verifier_log *log,
const struct bpf_prog *prog,
const struct bpf_prog *tgt_prog,
@@ -25377,38 +25423,7 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
}
if (prog->sleepable) {
- ret = -EINVAL;
- switch (prog->type) {
- case BPF_PROG_TYPE_TRACING:
- /* *.multi sleepable programs will pass initial sleepable check,
- * the actual attached btf ids are checked later during the link
- * attachment.
- */
- if (is_tracing_multi_id(prog, btf_id))
- ret = 0;
- if (!check_attach_sleepable(btf_id, addr, tname))
- ret = 0;
- /* fentry/fexit/fmod_ret progs can also be sleepable if they are
- * in the fmodret id set with the KF_SLEEPABLE flag.
- */
- else {
- u32 *flags = btf_kfunc_is_modify_return(btf, btf_id,
- prog);
-
- if (flags && (*flags & KF_SLEEPABLE))
- ret = 0;
- }
- break;
- case BPF_PROG_TYPE_LSM:
- /* LSM progs check that they are attached to bpf_lsm_*() funcs.
- * Only some of them are sleepable.
- */
- if (bpf_lsm_is_sleepable_hook(btf_id))
- ret = 0;
- break;
- default:
- break;
- }
+ ret = btf_id_allow_sleepable(btf_id, addr, prog, btf);
if (ret) {
module_put(mod);
bpf_log(log, "%s is not sleepable\n", tname);
--
2.53.0
* [PATCHv4 bpf-next 09/25] bpf: Add bpf_trampoline_multi_attach/detach functions
2026-03-24 8:18 [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Jiri Olsa
` (7 preceding siblings ...)
2026-03-24 8:18 ` [PATCHv4 bpf-next 08/25] bpf: Move sleepable verification code to btf_id_allow_sleepable Jiri Olsa
@ 2026-03-24 8:18 ` Jiri Olsa
2026-03-24 8:58 ` bot+bpf-ci
2026-03-27 4:18 ` kernel test robot
2026-03-24 8:18 ` [PATCHv4 bpf-next 10/25] bpf: Add support for tracing multi link Jiri Olsa
` (16 subsequent siblings)
25 siblings, 2 replies; 41+ messages in thread
From: Jiri Olsa @ 2026-03-24 8:18 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding bpf_trampoline_multi_attach/detach functions that allow
attaching/detaching a tracing program to/from multiple
functions/trampolines.
The attachment is defined with a bpf_program and an array of BTF ids
of the functions to attach the bpf program to.
Adding a bpf_tracing_multi_link object that holds all the attached
trampolines; it is initialized in attach and used in detach.
The attachment allocates a new or reuses an existing trampoline for
each attached function and links it with the bpf program.
The attach works as follows:
- we get all the needed trampolines
- lock them and add the bpf program to each (__bpf_trampoline_link_prog)
- the trampoline_multi_ops passed in __bpf_trampoline_link_prog gathers
ftrace_hash (ip -> trampoline) objects
- we call update_ftrace_direct_add/mod to update needed locations
- we unlock all the trampolines
The detach works as follows:
- we lock all the needed trampolines
- remove the program from each (__bpf_trampoline_unlink_prog)
- the trampoline_multi_ops passed in __bpf_trampoline_unlink_prog gathers
ftrace_hash (ip -> trampoline) objects
- we call update_ftrace_direct_del/mod to update needed locations
- we unlock and put all the trampolines
Adding trampoline_(un)lock_all functions to (un)lock all trampolines
to gate the tracing_multi attachment.
Note this is supported only on archs (x86_64) that have ftrace direct
and single ops support:
CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS &&
CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS
It also needs CONFIG_BPF_SYSCALL enabled.
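For reference, the dependency above corresponds to the following
.config fragment (assuming an x86_64 build; the first two options are
selected by the architecture, not set by hand):

```
CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y
CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS=y
CONFIG_BPF_SYSCALL=y
```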
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/bpf.h | 28 +++++
kernel/bpf/trampoline.c | 265 ++++++++++++++++++++++++++++++++++++++++
2 files changed, 293 insertions(+)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 4628f2bf3a5b..113c9eb7a207 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1452,6 +1452,8 @@ static inline int bpf_dynptr_check_off_len(const struct bpf_dynptr_kern *ptr, u6
return 0;
}
+struct bpf_tracing_multi_link;
+
#ifdef CONFIG_BPF_JIT
int bpf_trampoline_link_prog(struct bpf_tramp_node *node,
struct bpf_trampoline *tr,
@@ -1464,6 +1466,11 @@ struct bpf_trampoline *bpf_trampoline_get(u64 key,
void bpf_trampoline_put(struct bpf_trampoline *tr);
int arch_prepare_bpf_dispatcher(void *image, void *buf, s64 *funcs, int num_funcs);
+int bpf_trampoline_multi_attach(struct bpf_prog *prog, u32 *ids,
+ struct bpf_tracing_multi_link *link);
+int bpf_trampoline_multi_detach(struct bpf_prog *prog,
+ struct bpf_tracing_multi_link *link);
+
/*
* When the architecture supports STATIC_CALL replace the bpf_dispatcher_fn
* indirection with a direct call to the bpf program. If the architecture does
@@ -1573,6 +1580,16 @@ static inline bool bpf_prog_has_trampoline(const struct bpf_prog *prog)
{
return false;
}
+static inline int bpf_trampoline_multi_attach(struct bpf_prog *prog, u32 *ids,
+ struct bpf_tracing_multi_link *link)
+{
+ return -ENOTSUPP;
+}
+static inline int bpf_trampoline_multi_detach(struct bpf_prog *prog,
+ struct bpf_tracing_multi_link *link)
+{
+ return -ENOTSUPP;
+}
#endif
struct bpf_func_info_aux {
@@ -1888,6 +1905,17 @@ struct bpf_tracing_link {
struct bpf_prog *tgt_prog;
};
+struct bpf_tracing_multi_node {
+ struct bpf_tramp_node node;
+ struct bpf_trampoline *trampoline;
+};
+
+struct bpf_tracing_multi_link {
+ struct bpf_link link;
+ int nodes_cnt;
+ struct bpf_tracing_multi_node nodes[] __counted_by(nodes_cnt);
+};
+
struct bpf_raw_tp_link {
struct bpf_link link;
struct bpf_raw_event_map *btp;
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index a9e328c0a1b3..2986e5cac743 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -88,6 +88,22 @@ static struct bpf_trampoline *direct_ops_ip_lookup(struct ftrace_ops *ops, unsig
mutex_unlock(&trampoline_mutex);
return tr;
}
+
+static void trampoline_lock_all(void)
+{
+ int i;
+
+ for (i = 0; i < TRAMPOLINE_LOCKS_TABLE_SIZE; i++)
+ mutex_lock(&trampoline_locks[i].mutex);
+}
+
+static void trampoline_unlock_all(void)
+{
+ int i;
+
+ for (i = 0; i < TRAMPOLINE_LOCKS_TABLE_SIZE; i++)
+ mutex_unlock(&trampoline_locks[i].mutex);
+}
#else
static struct bpf_trampoline *direct_ops_ip_lookup(struct ftrace_ops *ops, unsigned long ip)
{
@@ -1423,6 +1439,255 @@ int __weak arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
return -ENOTSUPP;
}
+#if defined(CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS) && \
+ defined(CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS) && \
+ defined(CONFIG_BPF_SYSCALL)
+
+struct fentry_multi_data {
+ struct ftrace_hash *unreg;
+ struct ftrace_hash *modify;
+ struct ftrace_hash *reg;
+};
+
+static void free_fentry_multi_data(struct fentry_multi_data *data)
+{
+ free_ftrace_hash(data->reg);
+ free_ftrace_hash(data->unreg);
+ free_ftrace_hash(data->modify);
+}
+
+static int register_fentry_multi(struct bpf_trampoline *tr, void *new_addr, void *ptr)
+{
+ unsigned long addr = (unsigned long) new_addr;
+ unsigned long ip = ftrace_location(tr->ip);
+ struct fentry_multi_data *data = ptr;
+
+ if (bpf_trampoline_use_jmp(tr->flags))
+ addr = ftrace_jmp_set(addr);
+ return add_ftrace_hash_entry_direct(data->reg, ip, addr) ? 0 : -ENOMEM;
+}
+
+static int unregister_fentry_multi(struct bpf_trampoline *tr, u32 orig_flags, void *old_addr,
+ void *ptr)
+{
+ unsigned long addr = (unsigned long) old_addr;
+ unsigned long ip = ftrace_location(tr->ip);
+ struct fentry_multi_data *data = ptr;
+
+ if (bpf_trampoline_use_jmp(tr->flags))
+ addr = ftrace_jmp_set(addr);
+ return add_ftrace_hash_entry_direct(data->unreg, ip, addr) ? 0 : -ENOMEM;
+}
+
+static int modify_fentry_multi(struct bpf_trampoline *tr, u32 orig_flags, void *old_addr,
+ void *new_addr, bool lock_direct_mutex, void *ptr)
+{
+ unsigned long addr = (unsigned long) new_addr;
+ unsigned long ip = ftrace_location(tr->ip);
+ struct fentry_multi_data *data = ptr;
+
+ if (bpf_trampoline_use_jmp(tr->flags))
+ addr = ftrace_jmp_set(addr);
+ return add_ftrace_hash_entry_direct(data->modify, ip, addr) ? 0 : -ENOMEM;
+}
+
+static struct bpf_trampoline_ops trampoline_multi_ops = {
+ .register_fentry = register_fentry_multi,
+ .unregister_fentry = unregister_fentry_multi,
+ .modify_fentry = modify_fentry_multi,
+};
+
+static int bpf_get_btf_id_target(struct btf *btf, struct bpf_prog *prog, u32 btf_id,
+ struct bpf_attach_target_info *tgt_info)
+{
+ const struct btf_type *t;
+ unsigned long addr;
+ const char *tname;
+ int err;
+
+ if (!btf_id || !btf)
+ return -EINVAL;
+ t = btf_type_by_id(btf, btf_id);
+ if (!t)
+ return -EINVAL;
+ tname = btf_name_by_offset(btf, t->name_off);
+ if (!tname)
+ return -EINVAL;
+ if (!btf_type_is_func(t))
+ return -EINVAL;
+ t = btf_type_by_id(btf, t->type);
+ if (!btf_type_is_func_proto(t))
+ return -EINVAL;
+ err = btf_distill_func_proto(NULL, btf, t, tname, &tgt_info->fmodel);
+ if (err < 0)
+ return err;
+ if (btf_is_module(btf)) {
+ /* The bpf program already holds reference to module. */
+ if (WARN_ON_ONCE(!prog->aux->mod))
+ return -EINVAL;
+ addr = find_kallsyms_symbol_value(prog->aux->mod, tname);
+ } else {
+ addr = kallsyms_lookup_name(tname);
+ }
+ if (!addr || !ftrace_location(addr))
+ return -ENOENT;
+ tgt_info->tgt_addr = addr;
+ return 0;
+}
+
+int bpf_trampoline_multi_attach(struct bpf_prog *prog, u32 *ids,
+ struct bpf_tracing_multi_link *link)
+{
+ struct bpf_attach_target_info tgt_info = {};
+ struct btf *btf = prog->aux->attach_btf;
+ struct bpf_tracing_multi_node *mnode;
+ int j, i, err, cnt = link->nodes_cnt;
+ struct fentry_multi_data data;
+ struct bpf_trampoline *tr;
+ u32 btf_id;
+ u64 key;
+
+ data.reg = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
+ data.unreg = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
+ data.modify = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
+
+ if (!data.reg || !data.unreg || !data.modify) {
+ free_fentry_multi_data(&data);
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < cnt; i++) {
+ btf_id = ids[i];
+
+ err = bpf_get_btf_id_target(btf, prog, btf_id, &tgt_info);
+ if (err)
+ goto rollback_put;
+
+ if (prog->sleepable) {
+ err = btf_id_allow_sleepable(btf_id, tgt_info.tgt_addr, prog, btf);
+ if (err)
+ goto rollback_put;
+ }
+
+ key = bpf_trampoline_compute_key(NULL, btf, btf_id);
+
+ tr = bpf_trampoline_get(key, &tgt_info);
+ if (!tr) {
+ err = -ENOMEM;
+ goto rollback_put;
+ }
+
+ mnode = &link->nodes[i];
+ mnode->trampoline = tr;
+ mnode->node.link = &link->link;
+ }
+
+ trampoline_lock_all();
+
+ for (i = 0; i < cnt; i++) {
+ mnode = &link->nodes[i];
+ err = __bpf_trampoline_link_prog(&mnode->node, mnode->trampoline, NULL,
+ &trampoline_multi_ops, &data);
+ if (err)
+ goto rollback_unlink;
+ }
+
+ if (ftrace_hash_count(data.reg)) {
+ err = update_ftrace_direct_add(&direct_ops, data.reg);
+ if (err)
+ goto rollback_unlink;
+ }
+
+ if (ftrace_hash_count(data.modify)) {
+ err = update_ftrace_direct_mod(&direct_ops, data.modify, true);
+ if (WARN_ON_ONCE(err)) {
+ update_ftrace_direct_del(&direct_ops, data.reg);
+ goto rollback_unlink;
+ }
+ }
+
+ trampoline_unlock_all();
+
+ free_fentry_multi_data(&data);
+ return 0;
+
+rollback_unlink:
+ /*
+ * We can end up in here from 3 points from above code:
+ *
+ * - __bpf_trampoline_link_prog or update_ftrace_direct_add failed and
+ * we have some portion of linked trampolines without ftrace update
+ *
+ * - update_ftrace_direct_mod failed and we have all trampolines linked
+ * plus we already un-attached all new trampolines
+ *
+ * In both cases we need to unlink all trampolines from the new program
+ * and update modified (data.modify) sites, because those have previously
+ * some programs attached and the new trampoline needs to get attached.
+ */
+ ftrace_hash_clear(data.modify);
+
+ for (j = 0; j < i; j++) {
+ mnode = &link->nodes[j];
+ WARN_ON_ONCE(__bpf_trampoline_unlink_prog(&mnode->node, mnode->trampoline,
+ NULL, &trampoline_multi_ops, &data));
+ }
+
+ if (ftrace_hash_count(data.modify))
+ WARN_ON_ONCE(update_ftrace_direct_mod(&direct_ops, data.modify, true));
+
+ trampoline_unlock_all();
+
+ i = cnt;
+
+rollback_put:
+ for (j = 0; j < i; j++)
+ bpf_trampoline_put(link->nodes[j].trampoline);
+
+ free_fentry_multi_data(&data);
+ return err;
+}
+
+int bpf_trampoline_multi_detach(struct bpf_prog *prog, struct bpf_tracing_multi_link *link)
+{
+ struct bpf_tracing_multi_node *mnode;
+ struct fentry_multi_data data = {};
+ int i, cnt = link->nodes_cnt;
+
+ data.unreg = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
+ data.modify = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
+
+ if (!data.unreg || !data.modify) {
+ free_fentry_multi_data(&data);
+ return -ENOMEM;
+ }
+
+ trampoline_lock_all();
+
+ for (i = 0; i < cnt; i++) {
+ mnode = &link->nodes[i];
+ WARN_ON_ONCE(__bpf_trampoline_unlink_prog(&mnode->node, mnode->trampoline,
+ NULL, &trampoline_multi_ops, &data));
+ }
+
+ if (ftrace_hash_count(data.unreg))
+ WARN_ON_ONCE(update_ftrace_direct_del(&direct_ops, data.unreg));
+ if (ftrace_hash_count(data.modify))
+ WARN_ON_ONCE(update_ftrace_direct_mod(&direct_ops, data.modify, true));
+
+ trampoline_unlock_all();
+
+ for (i = 0; i < cnt; i++)
+ bpf_trampoline_put(link->nodes[i].trampoline);
+
+ free_fentry_multi_data(&data);
+ return 0;
+}
+
+#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS &&
+ CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS &&
+ CONFIG_BPF_SYSCALL */
+
static int __init init_trampolines(void)
{
int i;
--
2.53.0
* Re: [PATCHv4 bpf-next 09/25] bpf: Add bpf_trampoline_multi_attach/detach functions
2026-03-24 8:18 ` [PATCHv4 bpf-next 09/25] bpf: Add bpf_trampoline_multi_attach/detach functions Jiri Olsa
@ 2026-03-24 8:58 ` bot+bpf-ci
2026-03-24 14:29 ` Jiri Olsa
2026-03-27 4:18 ` kernel test robot
1 sibling, 1 reply; 41+ messages in thread
From: bot+bpf-ci @ 2026-03-24 8:58 UTC (permalink / raw)
To: jolsa, ast, daniel, andrii
Cc: bpf, linux-trace-kernel, kafai, eddyz87, songliubraving, yhs,
menglong8.dong, rostedt, ast, andrii, daniel, martin.lau, eddyz87,
yonghong.song, clm, ihor.solodrai
> diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> index a9e328c0a1b3..2986e5cac743 100644
> --- a/kernel/bpf/trampoline.c
> +++ b/kernel/bpf/trampoline.c
[ ... ]
> +int bpf_trampoline_multi_detach(struct bpf_prog *prog, struct bpf_tracing_multi_link *link)
> +{
> + struct bpf_tracing_multi_node *mnode;
> + struct fentry_multi_data data = {};
> + int i, cnt = link->nodes_cnt;
> +
> + data.unreg = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
> + data.modify = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
> +
> + if (!data.unreg || !data.modify) {
> + free_fentry_multi_data(&data);
> + return -ENOMEM;
> + }
If alloc_ftrace_hash() fails here under memory pressure, the function
returns -ENOMEM without detaching any trampolines or releasing their
refcounts.
The caller in the next patch (bpf_tracing_multi_link_release) does:
WARN_ON_ONCE(bpf_trampoline_multi_detach(link->prog, tr_link));
This drops the error on the floor. The link struct is then freed by
bpf_tracing_multi_link_dealloc(), but the trampolines remain attached
with elevated refcounts, permanently leaking them and their ftrace
hooks.
Would it make sense to pre-allocate these hashes during
bpf_trampoline_multi_attach() and store them in the link struct so
that the detach path cannot fail?
> +
> + trampoline_lock_all();
[ ... ]
> +rollback_unlink:
> + /*
> + * We can end up in here from 3 points from above code:
> + *
> + * - __bpf_trampoline_link_prog or update_ftrace_direct_add failed and
> + * we have some portion of linked trampolines without ftrace update
> + *
> + * - update_ftrace_direct_mod failed and we have all trampolines linked
> + * plus we already un-attached all new trampolines
> + *
> + * In both cases we need to unlink all trampolines from the new program
> + * and update modified (data.modify) sites, because those have previously
> + * some programs attached and the new trampoline needs to get attached.
> + */
This isn't a bug, but the comment says "3 points" and then describes
two scenarios in two bullet points. Maybe say "2 scenarios" or split
the first bullet into two to match?
> + ftrace_hash_clear(data.modify);
[ ... ]
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/23480161822
* Re: [PATCHv4 bpf-next 09/25] bpf: Add bpf_trampoline_multi_attach/detach functions
2026-03-24 8:58 ` bot+bpf-ci
@ 2026-03-24 14:29 ` Jiri Olsa
0 siblings, 0 replies; 41+ messages in thread
From: Jiri Olsa @ 2026-03-24 14:29 UTC (permalink / raw)
To: bot+bpf-ci
Cc: ast, daniel, andrii, bpf, linux-trace-kernel, kafai, eddyz87,
songliubraving, yhs, menglong8.dong, rostedt, martin.lau,
yonghong.song, clm, ihor.solodrai
On Tue, Mar 24, 2026 at 08:58:41AM +0000, bot+bpf-ci@kernel.org wrote:
> > diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> > index a9e328c0a1b3..2986e5cac743 100644
> > --- a/kernel/bpf/trampoline.c
> > +++ b/kernel/bpf/trampoline.c
>
> [ ... ]
>
> > +int bpf_trampoline_multi_detach(struct bpf_prog *prog, struct bpf_tracing_multi_link *link)
> > +{
> > + struct bpf_tracing_multi_node *mnode;
> > + struct fentry_multi_data data = {};
> > + int i, cnt = link->nodes_cnt;
> > +
> > + data.unreg = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
> > + data.modify = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
> > +
> > + if (!data.unreg || !data.modify) {
> > + free_fentry_multi_data(&data);
> > + return -ENOMEM;
> > + }
>
> If alloc_ftrace_hash() fails here under memory pressure, the function
> returns -ENOMEM without detaching any trampolines or releasing their
> refcounts.
>
> The caller in the next patch (bpf_tracing_multi_link_release) does:
>
> WARN_ON_ONCE(bpf_trampoline_multi_detach(link->prog, tr_link));
>
> This drops the error on the floor. The link struct is then freed by
> bpf_tracing_multi_link_dealloc(), but the trampolines remain attached
> with elevated refcounts, permanently leaking them and their ftrace
> hooks.
>
> Would it make sense to pre-allocate these hashes during
> bpf_trampoline_multi_attach() and store them in the link struct so
> that the detach path cannot fail?
it's not just the hash objects, we would also need to allocate the entries,
I can do a follow up with this if others think it's needed
>
> > +
> > + trampoline_lock_all();
>
> [ ... ]
>
> > +rollback_unlink:
> > + /*
> > + * We can end up in here from 3 points from above code:
> > + *
> > + * - __bpf_trampoline_link_prog or update_ftrace_direct_add failed and
> > + * we have some portion of linked trampolines without ftrace update
> > + *
> > + * - update_ftrace_direct_mod failed and we have all trampolines linked
> > + * plus we already un-attached all new trampolines
> > + *
> > + * In both cases we need to unlink all trampolines from the new program
> > + * and update modified (data.modify) sites, because those have previously
> > + * some programs attached and the new trampoline needs to get attached.
> > + */
>
> This isn't a bug, but the comment says "3 points" and then describes
> two scenarios in two bullet points. Maybe say "2 scenarios" or split
> the first bullet into two to match?
first two points are covered in the first bullets, I think it's ok
jirka
>
> > + ftrace_hash_clear(data.modify);
>
> [ ... ]
>
>
> ---
> AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
> See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
>
> CI run summary: https://github.com/kernel-patches/bpf/actions/runs/23480161822
* Re: [PATCHv4 bpf-next 09/25] bpf: Add bpf_trampoline_multi_attach/detach functions
2026-03-24 8:18 ` [PATCHv4 bpf-next 09/25] bpf: Add bpf_trampoline_multi_attach/detach functions Jiri Olsa
2026-03-24 8:58 ` bot+bpf-ci
@ 2026-03-27 4:18 ` kernel test robot
1 sibling, 0 replies; 41+ messages in thread
From: kernel test robot @ 2026-03-27 4:18 UTC (permalink / raw)
To: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: oe-kbuild-all, bpf, linux-trace-kernel, Martin KaFai Lau,
Eduard Zingerman, Song Liu, Yonghong Song, Menglong Dong,
Steven Rostedt
Hi Jiri,
kernel test robot noticed the following build warnings:
[auto build test WARNING on bpf-next/master]
url: https://github.com/intel-lab-lkp/linux/commits/Jiri-Olsa/ftrace-Add-ftrace_hash_count-function/20260326-101836
base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link: https://lore.kernel.org/r/20260324081846.2334094-10-jolsa%40kernel.org
patch subject: [PATCHv4 bpf-next 09/25] bpf: Add bpf_trampoline_multi_attach/detach functions
config: x86_64-randconfig-015-20260327 (https://download.01.org/0day-ci/archive/20260327/202603271242.rKGaiSYu-lkp@intel.com/config)
compiler: gcc-14 (Debian 14.2.0-19) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260327/202603271242.rKGaiSYu-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202603271242.rKGaiSYu-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> kernel/bpf/trampoline.c:100:13: warning: 'trampoline_unlock_all' defined but not used [-Wunused-function]
100 | static void trampoline_unlock_all(void)
| ^~~~~~~~~~~~~~~~~~~~~
>> kernel/bpf/trampoline.c:92:13: warning: 'trampoline_lock_all' defined but not used [-Wunused-function]
92 | static void trampoline_lock_all(void)
| ^~~~~~~~~~~~~~~~~~~
vim +/trampoline_unlock_all +100 kernel/bpf/trampoline.c
91
> 92 static void trampoline_lock_all(void)
93 {
94 int i;
95
96 for (i = 0; i < TRAMPOLINE_LOCKS_TABLE_SIZE; i++)
97 mutex_lock(&trampoline_locks[i].mutex);
98 }
99
> 100 static void trampoline_unlock_all(void)
101 {
102 int i;
103
104 for (i = 0; i < TRAMPOLINE_LOCKS_TABLE_SIZE; i++)
105 mutex_unlock(&trampoline_locks[i].mutex);
106 }
107 #else
108 static struct bpf_trampoline *direct_ops_ip_lookup(struct ftrace_ops *ops, unsigned long ip)
109 {
110 return ops->private;
111 }
112 #endif /* CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS */
113
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCHv4 bpf-next 10/25] bpf: Add support for tracing multi link
2026-03-24 8:18 [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Jiri Olsa
` (8 preceding siblings ...)
2026-03-24 8:18 ` [PATCHv4 bpf-next 09/25] bpf: Add bpf_trampoline_multi_attach/detach functions Jiri Olsa
@ 2026-03-24 8:18 ` Jiri Olsa
2026-03-24 8:18 ` [PATCHv4 bpf-next 11/25] bpf: Add support for tracing_multi link cookies Jiri Olsa
` (15 subsequent siblings)
25 siblings, 0 replies; 41+ messages in thread
From: Jiri Olsa @ 2026-03-24 8:18 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding a new link type to allow attaching a program to multiple
functions identified by BTF IDs. The link is represented by struct
bpf_tracing_multi_link.
To configure the link, new fields are added to bpf_attr::link_create
to pass an array of BTF IDs:
struct {
__aligned_u64 ids;
__u32 cnt;
} tracing_multi;
Each BTF ID identifies a function (BTF_KIND_FUNC) that the link will
attach the bpf program to.
We use the previously added bpf_trampoline_multi_attach/detach
functions to attach/detach the link.
The linkinfo/fdinfo callbacks will be implemented in the following
changes.
Note this is supported only on archs (x86_64) with ftrace direct
calls and single ops support:
CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS &&
CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/bpf_types.h | 1 +
include/linux/trace_events.h | 6 +++
include/uapi/linux/bpf.h | 5 ++
kernel/bpf/syscall.c | 2 +
kernel/trace/bpf_trace.c | 90 ++++++++++++++++++++++++++++++++++
tools/include/uapi/linux/bpf.h | 6 +++
tools/lib/bpf/libbpf.c | 1 +
7 files changed, 111 insertions(+)
diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
index b13de31e163f..96575b5b563e 100644
--- a/include/linux/bpf_types.h
+++ b/include/linux/bpf_types.h
@@ -155,3 +155,4 @@ BPF_LINK_TYPE(BPF_LINK_TYPE_PERF_EVENT, perf)
BPF_LINK_TYPE(BPF_LINK_TYPE_KPROBE_MULTI, kprobe_multi)
BPF_LINK_TYPE(BPF_LINK_TYPE_STRUCT_OPS, struct_ops)
BPF_LINK_TYPE(BPF_LINK_TYPE_UPROBE_MULTI, uprobe_multi)
+BPF_LINK_TYPE(BPF_LINK_TYPE_TRACING_MULTI, tracing_multi)
diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index 37eb2f0f3dd8..b6d4c745bdac 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -783,6 +783,7 @@ int bpf_get_perf_event_info(const struct perf_event *event, u32 *prog_id,
unsigned long *missed);
int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog);
int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog);
+int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr);
#else
static inline unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx)
{
@@ -835,6 +836,11 @@ bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
{
return -EOPNOTSUPP;
}
+static inline int
+bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
+{
+ return -EOPNOTSUPP;
+}
#endif
enum {
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 68600972a778..7f5c51f27a36 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1180,6 +1180,7 @@ enum bpf_link_type {
BPF_LINK_TYPE_UPROBE_MULTI = 12,
BPF_LINK_TYPE_NETKIT = 13,
BPF_LINK_TYPE_SOCKMAP = 14,
+ BPF_LINK_TYPE_TRACING_MULTI = 15,
__MAX_BPF_LINK_TYPE,
};
@@ -1863,6 +1864,10 @@ union bpf_attr {
};
__u64 expected_revision;
} cgroup;
+ struct {
+ __aligned_u64 ids;
+ __u32 cnt;
+ } tracing_multi;
};
} link_create;
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 2680740e9c09..94c6a9c81ef0 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -5749,6 +5749,8 @@ static int link_create(union bpf_attr *attr, bpfptr_t uattr)
ret = bpf_iter_link_attach(attr, uattr, prog);
else if (prog->expected_attach_type == BPF_LSM_CGROUP)
ret = cgroup_bpf_link_attach(attr, prog);
+ else if (is_tracing_multi(prog->expected_attach_type))
+ ret = bpf_tracing_multi_attach(prog, attr);
else
ret = bpf_tracing_prog_attach(prog,
attr->link_create.target_fd,
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 0b040a417442..4a92d47d1eaf 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -42,6 +42,7 @@
#define MAX_UPROBE_MULTI_CNT (1U << 20)
#define MAX_KPROBE_MULTI_CNT (1U << 20)
+#define MAX_TRACING_MULTI_CNT (1U << 20)
#ifdef CONFIG_MODULES
struct bpf_trace_module {
@@ -3594,3 +3595,92 @@ __bpf_kfunc int bpf_copy_from_user_task_str_dynptr(struct bpf_dynptr *dptr, u64
}
__bpf_kfunc_end_defs();
+
+#if defined(CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS) && \
+ defined(CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS)
+
+static void bpf_tracing_multi_link_release(struct bpf_link *link)
+{
+ struct bpf_tracing_multi_link *tr_link =
+ container_of(link, struct bpf_tracing_multi_link, link);
+
+ WARN_ON_ONCE(bpf_trampoline_multi_detach(link->prog, tr_link));
+}
+
+static void bpf_tracing_multi_link_dealloc(struct bpf_link *link)
+{
+ struct bpf_tracing_multi_link *tr_link =
+ container_of(link, struct bpf_tracing_multi_link, link);
+
+ kvfree(tr_link);
+}
+
+static const struct bpf_link_ops bpf_tracing_multi_link_lops = {
+ .release = bpf_tracing_multi_link_release,
+ .dealloc_deferred = bpf_tracing_multi_link_dealloc,
+};
+
+int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
+{
+ struct bpf_tracing_multi_link *link = NULL;
+ struct bpf_link_primer link_primer;
+ u32 cnt, *ids = NULL;
+ u32 __user *uids;
+ int err;
+
+ uids = u64_to_user_ptr(attr->link_create.tracing_multi.ids);
+ cnt = attr->link_create.tracing_multi.cnt;
+
+ if (!cnt || !uids)
+ return -EINVAL;
+ if (cnt > MAX_TRACING_MULTI_CNT)
+ return -E2BIG;
+ if (attr->link_create.flags)
+ return -EINVAL;
+
+ ids = kvmalloc_objs(*ids, cnt);
+ if (!ids)
+ return -ENOMEM;
+
+ if (copy_from_user(ids, uids, cnt * sizeof(*ids))) {
+ err = -EFAULT;
+ goto error;
+ }
+
+ link = kvzalloc_flex(*link, nodes, cnt);
+ if (!link) {
+ err = -ENOMEM;
+ goto error;
+ }
+
+ bpf_link_init(&link->link, BPF_LINK_TYPE_TRACING_MULTI,
+ &bpf_tracing_multi_link_lops, prog, prog->expected_attach_type);
+
+ err = bpf_link_prime(&link->link, &link_primer);
+ if (err)
+ goto error;
+
+ link->nodes_cnt = cnt;
+
+ err = bpf_trampoline_multi_attach(prog, ids, link);
+ kvfree(ids);
+ if (err) {
+ bpf_link_cleanup(&link_primer);
+ return err;
+ }
+ return bpf_link_settle(&link_primer);
+
+error:
+ kvfree(ids);
+ kvfree(link);
+ return err;
+}
+
+#else
+
+int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
+{
+ return -EOPNOTSUPP;
+}
+
+#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS && CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS */
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 61f0fe5bc0aa..7f5c51f27a36 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1180,6 +1180,7 @@ enum bpf_link_type {
BPF_LINK_TYPE_UPROBE_MULTI = 12,
BPF_LINK_TYPE_NETKIT = 13,
BPF_LINK_TYPE_SOCKMAP = 14,
+ BPF_LINK_TYPE_TRACING_MULTI = 15,
__MAX_BPF_LINK_TYPE,
};
@@ -1863,6 +1864,10 @@ union bpf_attr {
};
__u64 expected_revision;
} cgroup;
+ struct {
+ __aligned_u64 ids;
+ __u32 cnt;
+ } tracing_multi;
};
} link_create;
@@ -7236,6 +7241,7 @@ enum {
TCP_BPF_SOCK_OPS_CB_FLAGS = 1008, /* Get or Set TCP sock ops flags */
SK_BPF_CB_FLAGS = 1009, /* Get or set sock ops flags in socket */
SK_BPF_BYPASS_PROT_MEM = 1010, /* Get or Set sk->sk_bypass_prot_mem */
+
};
enum {
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 0f035e0db2a0..9ef3bfffeb07 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -156,6 +156,7 @@ static const char * const link_type_name[] = {
[BPF_LINK_TYPE_UPROBE_MULTI] = "uprobe_multi",
[BPF_LINK_TYPE_NETKIT] = "netkit",
[BPF_LINK_TYPE_SOCKMAP] = "sockmap",
+ [BPF_LINK_TYPE_TRACING_MULTI] = "tracing_multi",
};
static const char * const map_type_name[] = {
--
2.53.0
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCHv4 bpf-next 11/25] bpf: Add support for tracing_multi link cookies
2026-03-24 8:18 [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Jiri Olsa
` (9 preceding siblings ...)
2026-03-24 8:18 ` [PATCHv4 bpf-next 10/25] bpf: Add support for tracing multi link Jiri Olsa
@ 2026-03-24 8:18 ` Jiri Olsa
2026-03-24 8:18 ` [PATCHv4 bpf-next 12/25] bpf: Add support for tracing_multi link session Jiri Olsa
` (14 subsequent siblings)
25 siblings, 0 replies; 41+ messages in thread
From: Jiri Olsa @ 2026-03-24 8:18 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Add support to specify cookies for the tracing_multi link.
Cookies are provided in an array where each value is paired with the
BTF ID at the same array index.
Such a cookie can be retrieved by the bpf program with the
bpf_get_attach_cookie helper call.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/bpf.h | 1 +
include/uapi/linux/bpf.h | 1 +
kernel/bpf/trampoline.c | 1 +
kernel/trace/bpf_trace.c | 18 ++++++++++++++++++
tools/include/uapi/linux/bpf.h | 1 +
5 files changed, 22 insertions(+)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 113c9eb7a207..4a501eb12951 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1912,6 +1912,7 @@ struct bpf_tracing_multi_node {
struct bpf_tracing_multi_link {
struct bpf_link link;
+ u64 *cookies;
int nodes_cnt;
struct bpf_tracing_multi_node nodes[] __counted_by(nodes_cnt);
};
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 7f5c51f27a36..e28722ddeb5b 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1866,6 +1866,7 @@ union bpf_attr {
} cgroup;
struct {
__aligned_u64 ids;
+ __aligned_u64 cookies;
__u32 cnt;
} tracing_multi;
};
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 2986e5cac743..85a3b8c340e0 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -1580,6 +1580,7 @@ int bpf_trampoline_multi_attach(struct bpf_prog *prog, u32 *ids,
mnode = &link->nodes[i];
mnode->trampoline = tr;
mnode->node.link = &link->link;
+ mnode->node.cookie = link->cookies ? link->cookies[i] : 0;
}
trampoline_lock_all();
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 4a92d47d1eaf..5e3ff9ffc0ab 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -3612,6 +3612,7 @@ static void bpf_tracing_multi_link_dealloc(struct bpf_link *link)
struct bpf_tracing_multi_link *tr_link =
container_of(link, struct bpf_tracing_multi_link, link);
+ kvfree(tr_link->cookies);
kvfree(tr_link);
}
@@ -3625,6 +3626,8 @@ int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
struct bpf_tracing_multi_link *link = NULL;
struct bpf_link_primer link_primer;
u32 cnt, *ids = NULL;
+ u64 __user *ucookies;
+ u64 *cookies = NULL;
u32 __user *uids;
int err;
@@ -3647,6 +3650,19 @@ int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
goto error;
}
+ ucookies = u64_to_user_ptr(attr->link_create.tracing_multi.cookies);
+ if (ucookies) {
+ cookies = kvmalloc_objs(*cookies, cnt);
+ if (!cookies) {
+ err = -ENOMEM;
+ goto error;
+ }
+ if (copy_from_user(cookies, ucookies, cnt * sizeof(*cookies))) {
+ err = -EFAULT;
+ goto error;
+ }
+ }
+
link = kvzalloc_flex(*link, nodes, cnt);
if (!link) {
err = -ENOMEM;
@@ -3661,6 +3677,7 @@ int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
goto error;
link->nodes_cnt = cnt;
+ link->cookies = cookies;
err = bpf_trampoline_multi_attach(prog, ids, link);
kvfree(ids);
@@ -3671,6 +3688,7 @@ int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
return bpf_link_settle(&link_primer);
error:
+ kvfree(cookies);
kvfree(ids);
kvfree(link);
return err;
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 7f5c51f27a36..e28722ddeb5b 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1866,6 +1866,7 @@ union bpf_attr {
} cgroup;
struct {
__aligned_u64 ids;
+ __aligned_u64 cookies;
__u32 cnt;
} tracing_multi;
};
--
2.53.0
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCHv4 bpf-next 12/25] bpf: Add support for tracing_multi link session
2026-03-24 8:18 [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Jiri Olsa
` (10 preceding siblings ...)
2026-03-24 8:18 ` [PATCHv4 bpf-next 11/25] bpf: Add support for tracing_multi link cookies Jiri Olsa
@ 2026-03-24 8:18 ` Jiri Olsa
2026-03-24 8:18 ` [PATCHv4 bpf-next 13/25] bpf: Add support for tracing_multi link fdinfo Jiri Olsa
` (13 subsequent siblings)
25 siblings, 0 replies; 41+ messages in thread
From: Jiri Olsa @ 2026-03-24 8:18 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding support to use session attachment with the tracing_multi link.
Adding a new BPF_TRACE_FSESSION_MULTI program attach type that follows
the BPF_TRACE_FSESSION behaviour, but on the tracing_multi link.
Such a program is called on entry and exit of the attached function
and allows passing a cookie value from the entry to the exit execution.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/bpf.h | 6 ++++-
include/uapi/linux/bpf.h | 1 +
kernel/bpf/btf.c | 2 ++
kernel/bpf/syscall.c | 1 +
kernel/bpf/trampoline.c | 45 +++++++++++++++++++++++++++-------
kernel/bpf/verifier.c | 17 ++++++++++---
kernel/trace/bpf_trace.c | 15 +++++++++++-
net/bpf/test_run.c | 1 +
tools/include/uapi/linux/bpf.h | 1 +
tools/lib/bpf/libbpf.c | 1 +
10 files changed, 75 insertions(+), 15 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 4a501eb12951..151596df2d39 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1913,6 +1913,7 @@ struct bpf_tracing_multi_node {
struct bpf_tracing_multi_link {
struct bpf_link link;
u64 *cookies;
+ struct bpf_tramp_node *fexits;
int nodes_cnt;
struct bpf_tracing_multi_node nodes[] __counted_by(nodes_cnt);
};
@@ -2100,7 +2101,8 @@ static inline void bpf_prog_put_recursion_context(struct bpf_prog *prog)
static inline bool is_tracing_multi(enum bpf_attach_type type)
{
- return type == BPF_TRACE_FENTRY_MULTI || type == BPF_TRACE_FEXIT_MULTI;
+ return type == BPF_TRACE_FENTRY_MULTI || type == BPF_TRACE_FEXIT_MULTI ||
+ type == BPF_TRACE_FSESSION_MULTI;
}
#if defined(CONFIG_BPF_JIT) && defined(CONFIG_BPF_SYSCALL)
@@ -2224,6 +2226,8 @@ static inline int bpf_fsession_cnt(struct bpf_tramp_nodes *nodes)
for (int i = 0; i < nodes[BPF_TRAMP_FENTRY].nr_nodes; i++) {
if (fentries.nodes[i]->link->prog->expected_attach_type == BPF_TRACE_FSESSION)
cnt++;
+ if (fentries.nodes[i]->link->prog->expected_attach_type == BPF_TRACE_FSESSION_MULTI)
+ cnt++;
}
return cnt;
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index e28722ddeb5b..4520830fda06 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1156,6 +1156,7 @@ enum bpf_attach_type {
BPF_TRACE_FSESSION,
BPF_TRACE_FENTRY_MULTI,
BPF_TRACE_FEXIT_MULTI,
+ BPF_TRACE_FSESSION_MULTI,
__MAX_BPF_ATTACH_TYPE
};
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 60bef23e8b06..10165614db4f 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6230,6 +6230,7 @@ static int btf_validate_prog_ctx_type(struct bpf_verifier_log *log, const struct
case BPF_TRACE_FEXIT:
case BPF_MODIFY_RETURN:
case BPF_TRACE_FSESSION:
+ case BPF_TRACE_FSESSION_MULTI:
case BPF_TRACE_FENTRY_MULTI:
case BPF_TRACE_FEXIT_MULTI:
/* allow u64* as ctx */
@@ -6834,6 +6835,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
case BPF_LSM_CGROUP:
case BPF_TRACE_FEXIT:
case BPF_TRACE_FSESSION:
+ case BPF_TRACE_FSESSION_MULTI:
/* When LSM programs are attached to void LSM hooks
* they use FEXIT trampolines and when attached to
* int LSM hooks, they use MODIFY_RETURN trampolines.
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 94c6a9c81ef0..c13cb812a1d3 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -4388,6 +4388,7 @@ attach_type_to_prog_type(enum bpf_attach_type attach_type)
case BPF_TRACE_FENTRY:
case BPF_TRACE_FEXIT:
case BPF_TRACE_FSESSION:
+ case BPF_TRACE_FSESSION_MULTI:
case BPF_TRACE_FENTRY_MULTI:
case BPF_TRACE_FEXIT_MULTI:
case BPF_MODIFY_RETURN:
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 85a3b8c340e0..8405d0f92847 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -199,7 +199,8 @@ bool bpf_prog_has_trampoline(const struct bpf_prog *prog)
case BPF_PROG_TYPE_TRACING:
if (eatype == BPF_TRACE_FENTRY || eatype == BPF_TRACE_FEXIT ||
eatype == BPF_MODIFY_RETURN || eatype == BPF_TRACE_FSESSION ||
- eatype == BPF_TRACE_FENTRY_MULTI || eatype == BPF_TRACE_FEXIT_MULTI)
+ eatype == BPF_TRACE_FENTRY_MULTI || eatype == BPF_TRACE_FEXIT_MULTI ||
+ eatype == BPF_TRACE_FSESSION_MULTI)
return true;
return false;
case BPF_PROG_TYPE_LSM:
@@ -796,6 +797,7 @@ static enum bpf_tramp_prog_type bpf_attach_type_to_tramp(struct bpf_prog *prog)
case BPF_TRACE_FEXIT_MULTI:
return BPF_TRAMP_FEXIT;
case BPF_TRACE_FSESSION:
+ case BPF_TRACE_FSESSION_MULTI:
return BPF_TRAMP_FSESSION;
case BPF_LSM_MAC:
if (!prog->aux->attach_func_proto->type)
@@ -828,15 +830,32 @@ static int bpf_freplace_check_tgt_prog(struct bpf_prog *tgt_prog)
return 0;
}
+static struct bpf_tramp_node *fsession_exit(struct bpf_tramp_node *node)
+{
+ if (node->link->type == BPF_LINK_TYPE_TRACING) {
+ struct bpf_tracing_link *link;
+
+ link = container_of(node->link, struct bpf_tracing_link, link.link);
+ return &link->fexit;
+ } else if (node->link->type == BPF_LINK_TYPE_TRACING_MULTI) {
+ struct bpf_tracing_multi_link *link;
+ struct bpf_tracing_multi_node *mnode;
+
+ link = container_of(node->link, struct bpf_tracing_multi_link, link);
+ mnode = container_of(node, struct bpf_tracing_multi_node, node);
+ return &link->fexits[mnode - link->nodes];
+ }
+ return NULL;
+}
+
static int __bpf_trampoline_link_prog(struct bpf_tramp_node *node,
struct bpf_trampoline *tr,
struct bpf_prog *tgt_prog,
struct bpf_trampoline_ops *ops,
void *data)
{
- struct bpf_tracing_link *tr_link = NULL;
enum bpf_tramp_prog_type kind;
- struct bpf_tramp_node *node_existing;
+ struct bpf_tramp_node *node_existing, *fexit;
struct hlist_head *prog_list;
int err = 0;
int cnt = 0, i;
@@ -884,8 +903,10 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_node *node,
hlist_add_head(&node->tramp_hlist, prog_list);
if (kind == BPF_TRAMP_FSESSION) {
tr->progs_cnt[BPF_TRAMP_FENTRY]++;
- tr_link = container_of(node, struct bpf_tracing_link, link.node);
- hlist_add_head(&tr_link->fexit.tramp_hlist, &tr->progs_hlist[BPF_TRAMP_FEXIT]);
+ fexit = fsession_exit(node);
+ if (WARN_ON_ONCE(!fexit))
+ return -EINVAL;
+ hlist_add_head(&fexit->tramp_hlist, &tr->progs_hlist[BPF_TRAMP_FEXIT]);
tr->progs_cnt[BPF_TRAMP_FEXIT]++;
} else {
tr->progs_cnt[kind]++;
@@ -895,7 +916,7 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_node *node,
hlist_del_init(&node->tramp_hlist);
if (kind == BPF_TRAMP_FSESSION) {
tr->progs_cnt[BPF_TRAMP_FENTRY]--;
- hlist_del_init(&tr_link->fexit.tramp_hlist);
+ hlist_del_init(&fexit->tramp_hlist);
tr->progs_cnt[BPF_TRAMP_FEXIT]--;
} else {
tr->progs_cnt[kind]--;
@@ -936,10 +957,11 @@ static int __bpf_trampoline_unlink_prog(struct bpf_tramp_node *node,
tgt_prog->aux->is_extended = false;
return err;
} else if (kind == BPF_TRAMP_FSESSION) {
- struct bpf_tracing_link *tr_link =
- container_of(node, struct bpf_tracing_link, link.node);
+ struct bpf_tramp_node *fexit = fsession_exit(node);
- hlist_del_init(&tr_link->fexit.tramp_hlist);
+ if (WARN_ON_ONCE(!fexit))
+ return -EINVAL;
+ hlist_del_init(&fexit->tramp_hlist);
tr->progs_cnt[BPF_TRAMP_FEXIT]--;
kind = BPF_TRAMP_FENTRY;
}
@@ -1581,6 +1603,11 @@ int bpf_trampoline_multi_attach(struct bpf_prog *prog, u32 *ids,
mnode->trampoline = tr;
mnode->node.link = &link->link;
mnode->node.cookie = link->cookies ? link->cookies[i] : 0;
+
+ if (prog->expected_attach_type == BPF_TRACE_FSESSION_MULTI) {
+ link->fexits[i].link = &link->link;
+ link->fexits[i].cookie = link->cookies ? link->cookies[i] : 0;
+ }
}
trampoline_lock_all();
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 0be54f500c66..83cc3f832287 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -17983,6 +17983,7 @@ static bool return_retval_range(struct bpf_verifier_env *env, struct bpf_retval_
case BPF_TRACE_FSESSION:
case BPF_TRACE_FENTRY_MULTI:
case BPF_TRACE_FEXIT_MULTI:
+ case BPF_TRACE_FSESSION_MULTI:
*range = retval_range(0, 0);
break;
case BPF_TRACE_RAW_TP:
@@ -23376,7 +23377,8 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
insn_buf[0] = BPF_MOV64_REG(BPF_REG_0, BPF_REG_1);
*cnt = 1;
} else if (desc->func_id == special_kfunc_list[KF_bpf_session_is_return] &&
- env->prog->expected_attach_type == BPF_TRACE_FSESSION) {
+ (env->prog->expected_attach_type == BPF_TRACE_FSESSION ||
+ env->prog->expected_attach_type == BPF_TRACE_FSESSION_MULTI)) {
/*
* inline the bpf_session_is_return() for fsession:
* bool bpf_session_is_return(void *ctx)
@@ -23389,7 +23391,8 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
insn_buf[2] = BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1);
*cnt = 3;
} else if (desc->func_id == special_kfunc_list[KF_bpf_session_cookie] &&
- env->prog->expected_attach_type == BPF_TRACE_FSESSION) {
+ (env->prog->expected_attach_type == BPF_TRACE_FSESSION ||
+ env->prog->expected_attach_type == BPF_TRACE_FSESSION_MULTI)) {
/*
* inline bpf_session_cookie() for fsession:
* __u64 *bpf_session_cookie(void *ctx)
@@ -24177,6 +24180,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
if (eatype == BPF_TRACE_FEXIT ||
eatype == BPF_TRACE_FSESSION ||
eatype == BPF_TRACE_FEXIT_MULTI ||
+ eatype == BPF_TRACE_FSESSION_MULTI ||
eatype == BPF_MODIFY_RETURN) {
/* Load nr_args from ctx - 8 */
insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
@@ -25257,7 +25261,8 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
tgt_prog->expected_attach_type == BPF_TRACE_FEXIT ||
tgt_prog->expected_attach_type == BPF_TRACE_FENTRY_MULTI ||
tgt_prog->expected_attach_type == BPF_TRACE_FEXIT_MULTI ||
- tgt_prog->expected_attach_type == BPF_TRACE_FSESSION)) {
+ tgt_prog->expected_attach_type == BPF_TRACE_FSESSION ||
+ tgt_prog->expected_attach_type == BPF_TRACE_FSESSION_MULTI)) {
/* Program extensions can extend all program types
* except fentry/fexit. The reason is the following.
* The fentry/fexit programs are used for performance
@@ -25357,9 +25362,11 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
case BPF_TRACE_FENTRY:
case BPF_TRACE_FEXIT:
case BPF_TRACE_FSESSION:
+ case BPF_TRACE_FSESSION_MULTI:
case BPF_TRACE_FENTRY_MULTI:
case BPF_TRACE_FEXIT_MULTI:
- if (prog->expected_attach_type == BPF_TRACE_FSESSION &&
+ if ((prog->expected_attach_type == BPF_TRACE_FSESSION ||
+ prog->expected_attach_type == BPF_TRACE_FSESSION_MULTI) &&
!bpf_jit_supports_fsession()) {
bpf_log(log, "JIT does not support fsession\n");
return -EOPNOTSUPP;
@@ -25510,6 +25517,7 @@ static bool can_be_sleepable(struct bpf_prog *prog)
case BPF_MODIFY_RETURN:
case BPF_TRACE_ITER:
case BPF_TRACE_FSESSION:
+ case BPF_TRACE_FSESSION_MULTI:
case BPF_TRACE_FENTRY_MULTI:
case BPF_TRACE_FEXIT_MULTI:
return true;
@@ -25594,6 +25602,7 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
return -EINVAL;
} else if ((prog->expected_attach_type == BPF_TRACE_FEXIT ||
prog->expected_attach_type == BPF_TRACE_FSESSION ||
+ prog->expected_attach_type == BPF_TRACE_FSESSION_MULTI ||
prog->expected_attach_type == BPF_MODIFY_RETURN) &&
btf_id_set_contains(&noreturn_deny, btf_id)) {
verbose(env, "Attaching fexit/fsession/fmod_ret to __noreturn function '%s' is rejected.\n",
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 5e3ff9ffc0ab..761501ce3a5f 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1306,7 +1306,8 @@ static inline bool is_uprobe_session(const struct bpf_prog *prog)
static inline bool is_trace_fsession(const struct bpf_prog *prog)
{
return prog->type == BPF_PROG_TYPE_TRACING &&
- prog->expected_attach_type == BPF_TRACE_FSESSION;
+ (prog->expected_attach_type == BPF_TRACE_FSESSION ||
+ prog->expected_attach_type == BPF_TRACE_FSESSION_MULTI);
}
static const struct bpf_func_proto *
@@ -3612,6 +3613,7 @@ static void bpf_tracing_multi_link_dealloc(struct bpf_link *link)
struct bpf_tracing_multi_link *tr_link =
container_of(link, struct bpf_tracing_multi_link, link);
+ kvfree(tr_link->fexits);
kvfree(tr_link->cookies);
kvfree(tr_link);
}
@@ -3624,6 +3626,7 @@ static const struct bpf_link_ops bpf_tracing_multi_link_lops = {
int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
{
struct bpf_tracing_multi_link *link = NULL;
+ struct bpf_tramp_node *fexits = NULL;
struct bpf_link_primer link_primer;
u32 cnt, *ids = NULL;
u64 __user *ucookies;
@@ -3663,6 +3666,14 @@ int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
}
}
+ if (prog->expected_attach_type == BPF_TRACE_FSESSION_MULTI) {
+ fexits = kvmalloc_objs(*fexits, cnt);
+ if (!fexits) {
+ err = -ENOMEM;
+ goto error;
+ }
+ }
+
link = kvzalloc_flex(*link, nodes, cnt);
if (!link) {
err = -ENOMEM;
@@ -3678,6 +3689,7 @@ int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
link->nodes_cnt = cnt;
link->cookies = cookies;
+ link->fexits = fexits;
err = bpf_trampoline_multi_attach(prog, ids, link);
kvfree(ids);
@@ -3688,6 +3700,7 @@ int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
return bpf_link_settle(&link_primer);
error:
+ kvfree(fexits);
kvfree(cookies);
kvfree(ids);
kvfree(link);
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index df7ae2c28a3b..fad293f03e0c 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -688,6 +688,7 @@ int bpf_prog_test_run_tracing(struct bpf_prog *prog,
case BPF_TRACE_FSESSION:
case BPF_TRACE_FENTRY_MULTI:
case BPF_TRACE_FEXIT_MULTI:
+ case BPF_TRACE_FSESSION_MULTI:
if (bpf_fentry_test1(1) != 2 ||
bpf_fentry_test2(2, 3) != 5 ||
bpf_fentry_test3(4, 5, 6) != 15 ||
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index e28722ddeb5b..4520830fda06 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1156,6 +1156,7 @@ enum bpf_attach_type {
BPF_TRACE_FSESSION,
BPF_TRACE_FENTRY_MULTI,
BPF_TRACE_FEXIT_MULTI,
+ BPF_TRACE_FSESSION_MULTI,
__MAX_BPF_ATTACH_TYPE
};
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 9ef3bfffeb07..bbdfc9160199 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -138,6 +138,7 @@ static const char * const attach_type_name[] = {
[BPF_TRACE_UPROBE_SESSION] = "trace_uprobe_session",
[BPF_TRACE_FENTRY_MULTI] = "trace_fentry_multi",
[BPF_TRACE_FEXIT_MULTI] = "trace_fexit_multi",
+ [BPF_TRACE_FSESSION_MULTI] = "trace_fsession_multi",
};
static const char * const link_type_name[] = {
--
2.53.0
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCHv4 bpf-next 13/25] bpf: Add support for tracing_multi link fdinfo
2026-03-24 8:18 [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Jiri Olsa
` (11 preceding siblings ...)
2026-03-24 8:18 ` [PATCHv4 bpf-next 12/25] bpf: Add support for tracing_multi link session Jiri Olsa
@ 2026-03-24 8:18 ` Jiri Olsa
2026-03-25 6:43 ` Leon Hwang
2026-03-24 8:18 ` [PATCHv4 bpf-next 14/25] libbpf: Add bpf_object_cleanup_btf function Jiri Olsa
` (12 subsequent siblings)
25 siblings, 1 reply; 41+ messages in thread
From: Jiri Olsa @ 2026-03-24 8:18 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding tracing_multi link fdinfo support with the following output:
pos: 0
flags: 02000000
mnt_id: 19
ino: 3091
link_type: tracing_multi
link_id: 382
prog_tag: 62073a1123f07ef7
prog_id: 715
cnt: 10
cookie BTF-id func
8 91203 bpf_fentry_test1+0x4/0x10
9 91205 bpf_fentry_test2+0x4/0x10
7 91206 bpf_fentry_test3+0x4/0x20
5 91207 bpf_fentry_test4+0x4/0x20
4 91208 bpf_fentry_test5+0x4/0x20
2 91209 bpf_fentry_test6+0x4/0x20
3 91210 bpf_fentry_test7+0x4/0x10
1 91211 bpf_fentry_test8+0x4/0x10
10 91212 bpf_fentry_test9+0x4/0x10
6 91204 bpf_fentry_test10+0x4/0x10
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
kernel/trace/bpf_trace.c | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 761501ce3a5f..41b691e83dc4 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -3618,9 +3618,35 @@ static void bpf_tracing_multi_link_dealloc(struct bpf_link *link)
kvfree(tr_link);
}
+#ifdef CONFIG_PROC_FS
+static void bpf_tracing_multi_show_fdinfo(const struct bpf_link *link,
+ struct seq_file *seq)
+{
+ struct bpf_tracing_multi_link *tr_link =
+ container_of(link, struct bpf_tracing_multi_link, link);
+ bool has_cookies = !!tr_link->cookies;
+
+ seq_printf(seq, "cnt:\t%u\n", tr_link->nodes_cnt);
+
+ seq_printf(seq, "%s\t %s\t %s\n", "cookie", "BTF-id", "func");
+ for (int i = 0; i < tr_link->nodes_cnt; i++) {
+ struct bpf_tracing_multi_node *mnode = &tr_link->nodes[i];
+ u32 btf_id;
+
+ bpf_trampoline_unpack_key(mnode->trampoline->key, NULL, &btf_id);
+ seq_printf(seq, "%llu\t %u\t %pS\n",
+ has_cookies ? tr_link->cookies[i] : 0,
+ btf_id, (void *) mnode->trampoline->ip);
+ }
+}
+#endif
+
static const struct bpf_link_ops bpf_tracing_multi_link_lops = {
.release = bpf_tracing_multi_link_release,
.dealloc_deferred = bpf_tracing_multi_link_dealloc,
+#ifdef CONFIG_PROC_FS
+ .show_fdinfo = bpf_tracing_multi_show_fdinfo,
+#endif
};
int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
--
2.53.0
^ permalink raw reply related [flat|nested] 41+ messages in thread
* Re: [PATCHv4 bpf-next 13/25] bpf: Add support for tracing_multi link fdinfo
2026-03-24 8:18 ` [PATCHv4 bpf-next 13/25] bpf: Add support for tracing_multi link fdinfo Jiri Olsa
@ 2026-03-25 6:43 ` Leon Hwang
2026-03-25 21:49 ` Jiri Olsa
0 siblings, 1 reply; 41+ messages in thread
From: Leon Hwang @ 2026-03-25 6:43 UTC (permalink / raw)
To: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
On 24/3/26 16:18, Jiri Olsa wrote:
> Adding tracing_multi link fdinfo support with following output:
>
> pos: 0
> flags: 02000000
> mnt_id: 19
> ino: 3091
> link_type: tracing_multi
> link_id: 382
Would it be better to add attach_type?
attach_type: [fentry,fexit,fsession]_multi
Thanks,
Leon
> prog_tag: 62073a1123f07ef7
> prog_id: 715
> cnt: 10
> cookie BTF-id func
> 8 91203 bpf_fentry_test1+0x4/0x10
> 9 91205 bpf_fentry_test2+0x4/0x10
> 7 91206 bpf_fentry_test3+0x4/0x20
> 5 91207 bpf_fentry_test4+0x4/0x20
> 4 91208 bpf_fentry_test5+0x4/0x20
> 2 91209 bpf_fentry_test6+0x4/0x20
> 3 91210 bpf_fentry_test7+0x4/0x10
> 1 91211 bpf_fentry_test8+0x4/0x10
> 10 91212 bpf_fentry_test9+0x4/0x10
> 6 91204 bpf_fentry_test10+0x4/0x10
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
> kernel/trace/bpf_trace.c | 26 ++++++++++++++++++++++++++
> 1 file changed, 26 insertions(+)
>
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 761501ce3a5f..41b691e83dc4 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -3618,9 +3618,35 @@ static void bpf_tracing_multi_link_dealloc(struct bpf_link *link)
> kvfree(tr_link);
> }
>
> +#ifdef CONFIG_PROC_FS
> +static void bpf_tracing_multi_show_fdinfo(const struct bpf_link *link,
> + struct seq_file *seq)
> +{
> + struct bpf_tracing_multi_link *tr_link =
> + container_of(link, struct bpf_tracing_multi_link, link);
> + bool has_cookies = !!tr_link->cookies;
> +
> + seq_printf(seq, "cnt:\t%u\n", tr_link->nodes_cnt);
> +
> + seq_printf(seq, "%s\t %s\t %s\n", "cookie", "BTF-id", "func");
> + for (int i = 0; i < tr_link->nodes_cnt; i++) {
> + struct bpf_tracing_multi_node *mnode = &tr_link->nodes[i];
> + u32 btf_id;
> +
> + bpf_trampoline_unpack_key(mnode->trampoline->key, NULL, &btf_id);
> + seq_printf(seq, "%llu\t %u\t %pS\n",
> + has_cookies ? tr_link->cookies[i] : 0,
> + btf_id, (void *) mnode->trampoline->ip);
> + }
> +}
> +#endif
> +
> static const struct bpf_link_ops bpf_tracing_multi_link_lops = {
> .release = bpf_tracing_multi_link_release,
> .dealloc_deferred = bpf_tracing_multi_link_dealloc,
> +#ifdef CONFIG_PROC_FS
> + .show_fdinfo = bpf_tracing_multi_show_fdinfo,
> +#endif
> };
>
> int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
^ permalink raw reply [flat|nested] 41+ messages in thread

* Re: [PATCHv4 bpf-next 13/25] bpf: Add support for tracing_multi link fdinfo
2026-03-25 6:43 ` Leon Hwang
@ 2026-03-25 21:49 ` Jiri Olsa
0 siblings, 0 replies; 41+ messages in thread
From: Jiri Olsa @ 2026-03-25 21:49 UTC (permalink / raw)
To: Leon Hwang
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
Yonghong Song, Menglong Dong, Steven Rostedt
On Wed, Mar 25, 2026 at 02:43:19PM +0800, Leon Hwang wrote:
> On 24/3/26 16:18, Jiri Olsa wrote:
> > Adding tracing_multi link fdinfo support with following output:
> >
> > pos: 0
> > flags: 02000000
> > mnt_id: 19
> > ino: 3091
> > link_type: tracing_multi
> > link_id: 382
>
> Would better to add attach_type?
>
> attach_type: [fentry,fexit,fsession]_multi
that seems ok, will add
thanks,
jirka
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCHv4 bpf-next 14/25] libbpf: Add bpf_object_cleanup_btf function
2026-03-24 8:18 [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Jiri Olsa
` (12 preceding siblings ...)
2026-03-24 8:18 ` [PATCHv4 bpf-next 13/25] bpf: Add support for tracing_multi link fdinfo Jiri Olsa
@ 2026-03-24 8:18 ` Jiri Olsa
2026-03-24 8:18 ` [PATCHv4 bpf-next 15/25] libbpf: Add bpf_link_create support for tracing_multi link Jiri Olsa
` (11 subsequent siblings)
25 siblings, 0 replies; 41+ messages in thread
From: Jiri Olsa @ 2026-03-24 8:18 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding bpf_object_cleanup_btf function to clean up BTF objects.
It will be used in the following changes.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
tools/lib/bpf/libbpf.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index bbdfc9160199..c6cd6ccb870b 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -8885,13 +8885,10 @@ static void bpf_object_unpin(struct bpf_object *obj)
bpf_map__unpin(&obj->maps[i], NULL);
}
-static void bpf_object_post_load_cleanup(struct bpf_object *obj)
+static void bpf_object_cleanup_btf(struct bpf_object *obj)
{
int i;
- /* clean up fd_array */
- zfree(&obj->fd_array);
-
/* clean up module BTFs */
for (i = 0; i < obj->btf_module_cnt; i++) {
close(obj->btf_modules[i].fd);
@@ -8899,6 +8896,8 @@ static void bpf_object_post_load_cleanup(struct bpf_object *obj)
free(obj->btf_modules[i].name);
}
obj->btf_module_cnt = 0;
+ obj->btf_module_cap = 0;
+ obj->btf_modules_loaded = false;
zfree(&obj->btf_modules);
/* clean up vmlinux BTF */
@@ -8906,6 +8905,15 @@ static void bpf_object_post_load_cleanup(struct bpf_object *obj)
obj->btf_vmlinux = NULL;
}
+static void bpf_object_post_load_cleanup(struct bpf_object *obj)
+{
+ /* clean up fd_array */
+ zfree(&obj->fd_array);
+
+ /* clean up BTF */
+ bpf_object_cleanup_btf(obj);
+}
+
static int bpf_object_prepare(struct bpf_object *obj, const char *target_btf_path)
{
int err;
--
2.53.0
^ permalink raw reply related [flat|nested] 41+ messages in thread

* [PATCHv4 bpf-next 15/25] libbpf: Add bpf_link_create support for tracing_multi link
2026-03-24 8:18 [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Jiri Olsa
` (13 preceding siblings ...)
2026-03-24 8:18 ` [PATCHv4 bpf-next 14/25] libbpf: Add bpf_object_cleanup_btf function Jiri Olsa
@ 2026-03-24 8:18 ` Jiri Olsa
2026-03-24 8:18 ` [PATCHv4 bpf-next 16/25] libbpf: Add btf_type_is_traceable_func function Jiri Olsa
` (10 subsequent siblings)
25 siblings, 0 replies; 41+ messages in thread
From: Jiri Olsa @ 2026-03-24 8:18 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding bpf_link_create support for the tracing_multi link with
a new tracing_multi record in struct bpf_link_create_opts.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
tools/lib/bpf/bpf.c | 9 +++++++++
tools/lib/bpf/bpf.h | 5 +++++
2 files changed, 14 insertions(+)
diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index 5846de364209..ad4c94b6758d 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -790,6 +790,15 @@ int bpf_link_create(int prog_fd, int target_fd,
if (!OPTS_ZEROED(opts, uprobe_multi))
return libbpf_err(-EINVAL);
break;
+ case BPF_TRACE_FENTRY_MULTI:
+ case BPF_TRACE_FEXIT_MULTI:
+ case BPF_TRACE_FSESSION_MULTI:
+ attr.link_create.tracing_multi.ids = ptr_to_u64(OPTS_GET(opts, tracing_multi.ids, 0));
+ attr.link_create.tracing_multi.cookies = ptr_to_u64(OPTS_GET(opts, tracing_multi.cookies, 0));
+ attr.link_create.tracing_multi.cnt = OPTS_GET(opts, tracing_multi.cnt, 0);
+ if (!OPTS_ZEROED(opts, tracing_multi))
+ return libbpf_err(-EINVAL);
+ break;
case BPF_TRACE_RAW_TP:
case BPF_TRACE_FENTRY:
case BPF_TRACE_FEXIT:
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index 2c8e88ddb674..bc3b7bc5275e 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -454,6 +454,11 @@ struct bpf_link_create_opts {
__u32 relative_id;
__u64 expected_revision;
} cgroup;
+ struct {
+ const __u32 *ids;
+ const __u64 *cookies;
+ __u32 cnt;
+ } tracing_multi;
};
size_t :0;
};
--
2.53.0
^ permalink raw reply related [flat|nested] 41+ messages in thread

* [PATCHv4 bpf-next 16/25] libbpf: Add btf_type_is_traceable_func function
2026-03-24 8:18 [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Jiri Olsa
` (14 preceding siblings ...)
2026-03-24 8:18 ` [PATCHv4 bpf-next 15/25] libbpf: Add bpf_link_create support for tracing_multi link Jiri Olsa
@ 2026-03-24 8:18 ` Jiri Olsa
2026-03-24 8:58 ` bot+bpf-ci
2026-03-24 8:18 ` [PATCHv4 bpf-next 17/25] libbpf: Add support to create tracing multi link Jiri Olsa
` (9 subsequent siblings)
25 siblings, 1 reply; 41+ messages in thread
From: Jiri Olsa @ 2026-03-24 8:18 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding btf_type_is_traceable_func function to perform the same checks
as the kernel's btf_distill_func_proto function, to prevent attachment
to unsupported functions.
Exporting the function via libbpf_internal.h because it will be used
by a benchmark test in the following changes.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
tools/lib/bpf/libbpf.c | 54 +++++++++++++++++++++++++++++++++
tools/lib/bpf/libbpf_internal.h | 1 +
2 files changed, 55 insertions(+)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index c6cd6ccb870b..139df8484edb 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -12353,6 +12353,60 @@ static int attach_uprobe_multi(const struct bpf_program *prog, long cookie, stru
return ret;
}
+#define MAX_BPF_FUNC_ARGS 12
+
+static bool btf_type_is_modifier(const struct btf_type *t)
+{
+ switch (BTF_INFO_KIND(t->info)) {
+ case BTF_KIND_TYPEDEF:
+ case BTF_KIND_VOLATILE:
+ case BTF_KIND_CONST:
+ case BTF_KIND_RESTRICT:
+ case BTF_KIND_TYPE_TAG:
+ return true;
+ default:
+ return false;
+ }
+}
+
+bool btf_type_is_traceable_func(const struct btf *btf, const struct btf_type *t)
+{
+ const struct btf_type *proto;
+ const struct btf_param *args;
+ __u32 i, nargs;
+ __s64 ret;
+
+ proto = btf_type_by_id(btf, t->type);
+ if (BTF_INFO_KIND(proto->info) != BTF_KIND_FUNC_PROTO)
+ return false;
+
+ args = (const struct btf_param *)(proto + 1);
+ nargs = btf_vlen(proto);
+ if (nargs > MAX_BPF_FUNC_ARGS)
+ return false;
+
+ /* No support for struct/union return argument type. */
+ t = btf__type_by_id(btf, proto->type);
+ while (t && btf_type_is_modifier(t))
+ t = btf__type_by_id(btf, t->type);
+
+ if (btf_is_struct(t) || btf_is_union(t))
+ return false;
+
+ for (i = 0; i < nargs; i++) {
+ /* No support for variable args. */
+ if (i == nargs - 1 && args[i].type == 0)
+ return false;
+
+ /* No support of struct argument size greater than 16 bytes. */
+ ret = btf__resolve_size(btf, args[i].type);
+ if (ret < 0 || ret > 16)
+ return false;
+ }
+
+ return true;
+}
+
static inline int add_uprobe_event_legacy(const char *probe_name, bool retprobe,
const char *binary_path, size_t offset)
{
diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h
index 4bcb6ca69bb1..1b539c901898 100644
--- a/tools/lib/bpf/libbpf_internal.h
+++ b/tools/lib/bpf/libbpf_internal.h
@@ -250,6 +250,7 @@ const struct btf_type *skip_mods_and_typedefs(const struct btf *btf, __u32 id, _
const struct btf_header *btf_header(const struct btf *btf);
void btf_set_base_btf(struct btf *btf, const struct btf *base_btf);
int btf_relocate(struct btf *btf, const struct btf *base_btf, __u32 **id_map);
+bool btf_type_is_traceable_func(const struct btf *btf, const struct btf_type *t);
static inline enum btf_func_linkage btf_func_linkage(const struct btf_type *t)
{
--
2.53.0
^ permalink raw reply related [flat|nested] 41+ messages in thread

* Re: [PATCHv4 bpf-next 16/25] libbpf: Add btf_type_is_traceable_func function
2026-03-24 8:18 ` [PATCHv4 bpf-next 16/25] libbpf: Add btf_type_is_traceable_func function Jiri Olsa
@ 2026-03-24 8:58 ` bot+bpf-ci
2026-03-24 14:29 ` Jiri Olsa
0 siblings, 1 reply; 41+ messages in thread
From: bot+bpf-ci @ 2026-03-24 8:58 UTC (permalink / raw)
To: jolsa, ast, daniel, andrii
Cc: bpf, linux-trace-kernel, kafai, eddyz87, songliubraving, yhs,
menglong8.dong, rostedt, ast, andrii, daniel, martin.lau, eddyz87,
yonghong.song, clm, ihor.solodrai
[-- Attachment #1: Type: text/plain, Size: 1406 bytes --]
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index c6cd6ccb870b..139df8484edb 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
[ ... ]
> +bool btf_type_is_traceable_func(const struct btf *btf, const struct btf_type *t)
> +{
[ ... ]
> + /* No support for struct/union return argument type. */
> + t = btf__type_by_id(btf, proto->type);
> + while (t && btf_type_is_modifier(t))
> + t = btf__type_by_id(btf, t->type);
> +
> + if (btf_is_struct(t) || btf_is_union(t))
> + return false;
It looks like t can be NULL after the while loop exits, since
btf__type_by_id() returns NULL for out-of-range type ids.
btf_is_struct() and btf_is_union() would then dereference it.
The kernel's __get_type_size(), which btf_distill_func_proto()
calls for the same purpose, has this after an identical while loop:
kernel/bpf/btf.c:__get_type_size() {
...
while (t && btf_type_is_modifier(t))
t = btf_type_by_id(btf, t->type);
if (!t)
return -EINVAL;
...
}
Should there be a similar NULL check here before calling
btf_is_struct(t)?
> +
> + for (i = 0; i < nargs; i++) {
[ ... ]
> + return true;
> +}
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/23480161822
^ permalink raw reply [flat|nested] 41+ messages in thread

* Re: [PATCHv4 bpf-next 16/25] libbpf: Add btf_type_is_traceable_func function
2026-03-24 8:58 ` bot+bpf-ci
@ 2026-03-24 14:29 ` Jiri Olsa
0 siblings, 0 replies; 41+ messages in thread
From: Jiri Olsa @ 2026-03-24 14:29 UTC (permalink / raw)
To: bot+bpf-ci
Cc: ast, daniel, andrii, bpf, linux-trace-kernel, kafai, eddyz87,
songliubraving, yhs, menglong8.dong, rostedt, martin.lau,
yonghong.song, clm, ihor.solodrai
On Tue, Mar 24, 2026 at 08:58:39AM +0000, bot+bpf-ci@kernel.org wrote:
> > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > index c6cd6ccb870b..139df8484edb 100644
> > --- a/tools/lib/bpf/libbpf.c
> > +++ b/tools/lib/bpf/libbpf.c
>
> [ ... ]
>
> > +bool btf_type_is_traceable_func(const struct btf *btf, const struct btf_type *t)
> > +{
>
> [ ... ]
>
> > + /* No support for struct/union return argument type. */
> > + t = btf__type_by_id(btf, proto->type);
> > + while (t && btf_type_is_modifier(t))
> > + t = btf__type_by_id(btf, t->type);
> > +
> > + if (btf_is_struct(t) || btf_is_union(t))
> > + return false;
>
> It looks like t can be NULL after the while loop exits, since
> btf__type_by_id() returns NULL for out-of-range type ids.
> btf_is_struct() and btf_is_union() would then dereference it.
>
> The kernel's __get_type_size(), which btf_distill_func_proto()
> calls for the same purpose, has this after an identical while loop:
>
> kernel/bpf/btf.c:__get_type_size() {
> ...
> while (t && btf_type_is_modifier(t))
> t = btf_type_by_id(btf, t->type);
> if (!t)
> return -EINVAL;
> ...
> }
>
> Should there be a similar NULL check here before calling
> btf_is_struct(t)?
I don't think so, __get_type_size has btf_id as an argument, so it needs
to be cautious, but the while loop here takes the type from the proto->type
id, which must exist unless we have broken BTF data
jirka
>
> > +
> > + for (i = 0; i < nargs; i++) {
>
> [ ... ]
>
> > + return true;
> > +}
>
>
> ---
> AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
> See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
>
> CI run summary: https://github.com/kernel-patches/bpf/actions/runs/23480161822
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCHv4 bpf-next 17/25] libbpf: Add support to create tracing multi link
2026-03-24 8:18 [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Jiri Olsa
` (15 preceding siblings ...)
2026-03-24 8:18 ` [PATCHv4 bpf-next 16/25] libbpf: Add btf_type_is_traceable_func function Jiri Olsa
@ 2026-03-24 8:18 ` Jiri Olsa
2026-03-24 8:18 ` [PATCHv4 bpf-next 18/25] selftests/bpf: Add tracing multi skel/pattern/ids attach tests Jiri Olsa
` (8 subsequent siblings)
25 siblings, 0 replies; 41+ messages in thread
From: Jiri Olsa @ 2026-03-24 8:18 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding bpf_program__attach_tracing_multi function for attaching
a tracing program to multiple functions.
struct bpf_link *
bpf_program__attach_tracing_multi(const struct bpf_program *prog,
const char *pattern,
const struct bpf_tracing_multi_opts *opts);
The user can specify functions to attach to with the 'pattern' argument,
which allows wildcards ('*' and '?' supported), or provide BTF ids of
functions directly in an array via the opts argument. These options are
mutually exclusive.
When using BTF ids, the user can also provide a cookie value for each
id/function, which can be retrieved later in the bpf program with the
bpf_get_attach_cookie helper. Each cookie value is paired with the
BTF id at the same array index.
Adding support to auto-attach programs with the following sections:
fsession.multi/<pattern>
fsession.multi.s/<pattern>
fentry.multi/<pattern>
fexit.multi/<pattern>
fentry.multi.s/<pattern>
fexit.multi.s/<pattern>
The provided <pattern> is used as the 'pattern' argument in the
bpf_program__attach_tracing_multi function.
The <pattern> allows specifying an optional kernel module name with
the following syntax:
<module>:<function_pattern>
In order to attach a tracing_multi link to module functions:
- the program must be loaded with the module's btf fd
(in attr::attach_btf_obj_fd)
- bpf_program__attach_tracing_multi must either have a
pattern with a module spec or BTF ids from the module
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
tools/lib/bpf/libbpf.c | 263 +++++++++++++++++++++++++++++++++++++++
tools/lib/bpf/libbpf.h | 15 +++
tools/lib/bpf/libbpf.map | 1 +
3 files changed, 279 insertions(+)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 139df8484edb..0c40ad51a380 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -7716,6 +7716,69 @@ static int bpf_object__sanitize_prog(struct bpf_object *obj, struct bpf_program
static int libbpf_find_attach_btf_id(struct bpf_program *prog, const char *attach_name,
int *btf_obj_fd, int *btf_type_id);
+static inline bool is_tracing_multi(enum bpf_attach_type type)
+{
+ return type == BPF_TRACE_FENTRY_MULTI || type == BPF_TRACE_FEXIT_MULTI ||
+ type == BPF_TRACE_FSESSION_MULTI;
+}
+
+static const struct module_btf *find_attach_module(struct bpf_object *obj, const char *attach)
+{
+ const char *sep, *mod_name = NULL;
+ int i, mod_len, err;
+
+ /*
+ * We expect attach string in the form of either
+ * - function_pattern or
+ * - <module>:function_pattern
+ */
+ sep = strchr(attach, ':');
+ if (sep) {
+ mod_name = attach;
+ mod_len = sep - mod_name;
+ }
+ if (!mod_name)
+ return NULL;
+
+ err = load_module_btfs(obj);
+ if (err)
+ return NULL;
+
+ for (i = 0; i < obj->btf_module_cnt; i++) {
+ const struct module_btf *mod = &obj->btf_modules[i];
+
+ if (strncmp(mod->name, mod_name, mod_len) == 0 && mod->name[mod_len] == '\0')
+ return mod;
+ }
+ return NULL;
+}
+
+static int tracing_multi_mod_fd(struct bpf_program *prog, int *btf_obj_fd)
+{
+ const char *attach_name, *sep;
+ const struct module_btf *mod;
+
+ *btf_obj_fd = 0;
+ attach_name = strchr(prog->sec_name, '/');
+
+ /* Program with no details in spec, using kernel btf. */
+ if (!attach_name)
+ return 0;
+
+ /* Program with no module section, using kernel btf. */
+ sep = strchr(++attach_name, ':');
+ if (!sep)
+ return 0;
+
+ /* Program with module specified, get its btf fd. */
+ mod = find_attach_module(prog->obj, attach_name);
+ if (!mod)
+ return -EINVAL;
+
+ *btf_obj_fd = mod->fd;
+ return 0;
+}
+
/* this is called as prog->sec_def->prog_prepare_load_fn for libbpf-supported sec_defs */
static int libbpf_prepare_prog_load(struct bpf_program *prog,
struct bpf_prog_load_opts *opts, long cookie)
@@ -7779,6 +7842,18 @@ static int libbpf_prepare_prog_load(struct bpf_program *prog,
opts->attach_btf_obj_fd = btf_obj_fd;
opts->attach_btf_id = btf_type_id;
}
+
+ if (is_tracing_multi(prog->expected_attach_type)) {
+ int err, btf_obj_fd = 0;
+
+ err = tracing_multi_mod_fd(prog, &btf_obj_fd);
+ if (err < 0)
+ return err;
+
+ prog->attach_btf_obj_fd = btf_obj_fd;
+ opts->attach_btf_obj_fd = btf_obj_fd;
+ }
+
return 0;
}
@@ -9913,6 +9988,7 @@ static int attach_kprobe_session(const struct bpf_program *prog, long cookie, st
static int attach_uprobe_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link);
static int attach_lsm(const struct bpf_program *prog, long cookie, struct bpf_link **link);
static int attach_iter(const struct bpf_program *prog, long cookie, struct bpf_link **link);
+static int attach_tracing_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link);
static const struct bpf_sec_def section_defs[] = {
SEC_DEF("socket", SOCKET_FILTER, 0, SEC_NONE),
@@ -9961,6 +10037,12 @@ static const struct bpf_sec_def section_defs[] = {
SEC_DEF("fexit.s+", TRACING, BPF_TRACE_FEXIT, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_trace),
SEC_DEF("fsession+", TRACING, BPF_TRACE_FSESSION, SEC_ATTACH_BTF, attach_trace),
SEC_DEF("fsession.s+", TRACING, BPF_TRACE_FSESSION, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_trace),
+ SEC_DEF("fsession.multi+", TRACING, BPF_TRACE_FSESSION_MULTI, 0, attach_tracing_multi),
+ SEC_DEF("fsession.multi.s+", TRACING, BPF_TRACE_FSESSION_MULTI, SEC_SLEEPABLE, attach_tracing_multi),
+ SEC_DEF("fentry.multi+", TRACING, BPF_TRACE_FENTRY_MULTI, 0, attach_tracing_multi),
+ SEC_DEF("fexit.multi+", TRACING, BPF_TRACE_FEXIT_MULTI, 0, attach_tracing_multi),
+ SEC_DEF("fentry.multi.s+", TRACING, BPF_TRACE_FENTRY_MULTI, SEC_SLEEPABLE, attach_tracing_multi),
+ SEC_DEF("fexit.multi.s+", TRACING, BPF_TRACE_FEXIT_MULTI, SEC_SLEEPABLE, attach_tracing_multi),
SEC_DEF("freplace+", EXT, 0, SEC_ATTACH_BTF, attach_trace),
SEC_DEF("lsm+", LSM, BPF_LSM_MAC, SEC_ATTACH_BTF, attach_lsm),
SEC_DEF("lsm.s+", LSM, BPF_LSM_MAC, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_lsm),
@@ -12407,6 +12489,187 @@ bool btf_type_is_traceable_func(const struct btf *btf, const struct btf_type *t)
return true;
}
+static int
+collect_btf_func_ids_by_glob(const struct btf *btf, const char *pattern, __u32 **ids)
+{
+ __u32 type_id, nr_types = btf__type_cnt(btf);
+ size_t cap = 0, cnt = 0;
+
+ if (!pattern)
+ return -EINVAL;
+
+ for (type_id = 1; type_id < nr_types; type_id++) {
+ const struct btf_type *t = btf__type_by_id(btf, type_id);
+ const char *name;
+ int err;
+
+ if (btf_kind(t) != BTF_KIND_FUNC)
+ continue;
+ name = btf__name_by_offset(btf, t->name_off);
+ if (!name)
+ continue;
+
+ if (!glob_match(name, pattern))
+ continue;
+ if (!btf_type_is_traceable_func(btf, t))
+ continue;
+
+ err = libbpf_ensure_mem((void **) ids, &cap, sizeof(**ids), cnt + 1);
+ if (err) {
+ free(*ids);
+ return -ENOMEM;
+ }
+ (*ids)[cnt++] = type_id;
+ }
+
+ return cnt;
+}
+
+static int collect_func_ids_by_glob(struct bpf_object *obj, const char *pattern, __u32 **ids)
+{
+ const struct module_btf *mod;
+ struct btf *btf = NULL;
+ const char *sep;
+ int err;
+
+ err = bpf_object__load_vmlinux_btf(obj, true);
+ if (err)
+ return err;
+
+ /* In case we have module specified, we will find its btf and use that. */
+ sep = strchr(pattern, ':');
+ if (sep) {
+ mod = find_attach_module(obj, pattern);
+ if (!mod) {
+ err = -EINVAL;
+ goto cleanup;
+ }
+ btf = mod->btf;
+ pattern = sep + 1;
+ } else {
+ btf = obj->btf_vmlinux;
+ }
+
+ err = collect_btf_func_ids_by_glob(btf, pattern, ids);
+
+cleanup:
+ bpf_object_cleanup_btf(obj);
+ return err;
+}
+
+struct bpf_link *
+bpf_program__attach_tracing_multi(const struct bpf_program *prog, const char *pattern,
+ const struct bpf_tracing_multi_opts *opts)
+{
+ LIBBPF_OPTS(bpf_link_create_opts, lopts);
+ int prog_fd, link_fd, err, cnt;
+ __u32 *ids, *free_ids = NULL;
+ struct bpf_link *link;
+ __u64 *cookies;
+
+ if (!OPTS_VALID(opts, bpf_tracing_multi_opts))
+ return libbpf_err_ptr(-EINVAL);
+
+ cnt = OPTS_GET(opts, cnt, 0);
+ ids = OPTS_GET(opts, ids, NULL);
+ cookies = OPTS_GET(opts, cookies, NULL);
+
+ if (!!ids != !!cnt)
+ return libbpf_err_ptr(-EINVAL);
+ if (pattern && (ids || cookies))
+ return libbpf_err_ptr(-EINVAL);
+ if (!pattern && !ids)
+ return libbpf_err_ptr(-EINVAL);
+
+ if (pattern) {
+ cnt = collect_func_ids_by_glob(prog->obj, pattern, &ids);
+ if (cnt < 0)
+ return libbpf_err_ptr(cnt);
+ if (cnt == 0)
+ return libbpf_err_ptr(-EINVAL);
+ free_ids = ids;
+ }
+
+ lopts.tracing_multi.ids = ids;
+ lopts.tracing_multi.cookies = cookies;
+ lopts.tracing_multi.cnt = cnt;
+
+ link = calloc(1, sizeof(*link));
+ if (!link) {
+ err = -ENOMEM;
+ goto error;
+ }
+ link->detach = &bpf_link__detach_fd;
+
+ prog_fd = bpf_program__fd(prog);
+ link_fd = bpf_link_create(prog_fd, 0, prog->expected_attach_type, &lopts);
+ if (link_fd < 0) {
+ err = -errno;
+ pr_warn("prog '%s': failed to attach: %s\n", prog->name, errstr(err));
+ goto error;
+ }
+ link->fd = link_fd;
+ free(free_ids);
+ return link;
+
+error:
+ free(link);
+ free(free_ids);
+ return libbpf_err_ptr(err);
+}
+
+static int attach_tracing_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link)
+{
+ static const char *const prefixes[] = {
+ "fentry.multi",
+ "fexit.multi",
+ "fsession.multi",
+ "fentry.multi.s",
+ "fexit.multi.s",
+ "fsession.multi.s",
+ };
+ const char *spec = NULL;
+ char *pattern;
+ size_t i;
+ int n;
+
+ *link = NULL;
+
+ for (i = 0; i < ARRAY_SIZE(prefixes); i++) {
+ size_t pfx_len;
+
+ if (!str_has_pfx(prog->sec_name, prefixes[i]))
+ continue;
+
+ pfx_len = strlen(prefixes[i]);
+ /* no auto-attach case of, e.g., SEC("fentry.multi") */
+ if (prog->sec_name[pfx_len] == '\0')
+ return 0;
+
+ if (prog->sec_name[pfx_len] != '/')
+ continue;
+
+ spec = prog->sec_name + pfx_len + 1;
+ break;
+ }
+
+ if (!spec) {
+ pr_warn("prog '%s': invalid section name '%s'\n",
+ prog->name, prog->sec_name);
+ return -EINVAL;
+ }
+
+ n = sscanf(spec, "%m[a-zA-Z0-9_.*?:]", &pattern);
+ if (n < 1) {
+ pr_warn("tracing multi pattern is invalid: %s\n", spec);
+ return -EINVAL;
+ }
+
+ *link = bpf_program__attach_tracing_multi(prog, pattern, NULL);
+ free(pattern);
+ return libbpf_get_error(*link);
+}
+
static inline int add_uprobe_event_legacy(const char *probe_name, bool retprobe,
const char *binary_path, size_t offset)
{
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 0be34852350f..6b17cafd0709 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -701,6 +701,21 @@ bpf_program__attach_ksyscall(const struct bpf_program *prog,
const char *syscall_name,
const struct bpf_ksyscall_opts *opts);
+struct bpf_tracing_multi_opts {
+ /* size of this struct, for forward/backward compatibility */
+ size_t sz;
+ __u32 *ids;
+ __u64 *cookies;
+ size_t cnt;
+ size_t :0;
+};
+
+#define bpf_tracing_multi_opts__last_field cnt
+
+LIBBPF_API struct bpf_link *
+bpf_program__attach_tracing_multi(const struct bpf_program *prog, const char *pattern,
+ const struct bpf_tracing_multi_opts *opts);
+
struct bpf_uprobe_opts {
/* size of this struct, for forward/backward compatibility */
size_t sz;
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 5828040f178a..043973f28ec7 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -452,6 +452,7 @@ LIBBPF_1.7.0 {
bpf_map__set_exclusive_program;
bpf_map__exclusive_program;
bpf_prog_assoc_struct_ops;
+ bpf_program__attach_tracing_multi;
bpf_program__clone;
bpf_program__assoc_struct_ops;
btf__permute;
--
2.53.0
^ permalink raw reply related [flat|nested] 41+ messages in thread

* [PATCHv4 bpf-next 18/25] selftests/bpf: Add tracing multi skel/pattern/ids attach tests
2026-03-24 8:18 [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Jiri Olsa
` (16 preceding siblings ...)
2026-03-24 8:18 ` [PATCHv4 bpf-next 17/25] libbpf: Add support to create tracing multi link Jiri Olsa
@ 2026-03-24 8:18 ` Jiri Olsa
2026-03-24 8:18 ` [PATCHv4 bpf-next 19/25] selftests/bpf: Add tracing multi skel/pattern/ids module " Jiri Olsa
` (7 subsequent siblings)
25 siblings, 0 replies; 41+ messages in thread
From: Jiri Olsa @ 2026-03-24 8:18 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding tests for tracing_multi link attachment via all available
libbpf APIs - skeleton, function pattern and BTF ids.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
tools/testing/selftests/bpf/Makefile | 3 +-
.../selftests/bpf/prog_tests/tracing_multi.c | 252 ++++++++++++++++++
.../bpf/progs/tracing_multi_attach.c | 39 +++
.../selftests/bpf/progs/tracing_multi_check.c | 149 +++++++++++
4 files changed, 442 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/tracing_multi.c
create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_attach.c
create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_check.c
diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index f75c4f52c028..308c085bad08 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -493,7 +493,7 @@ SKEL_BLACKLIST := btf__% test_pinning_invalid.c test_sk_assign.c
LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h \
linked_vars.skel.h linked_maps.skel.h \
test_subskeleton.skel.h test_subskeleton_lib.skel.h \
- test_usdt.skel.h
+ test_usdt.skel.h tracing_multi.skel.h
LSKELS := fexit_sleep.c trace_printk.c trace_vprintk.c map_ptr_kern.c \
core_kern.c core_kern_overflow.c test_ringbuf.c \
@@ -519,6 +519,7 @@ test_usdt.skel.h-deps := test_usdt.bpf.o test_usdt_multispec.bpf.o
xsk_xdp_progs.skel.h-deps := xsk_xdp_progs.bpf.o
xdp_hw_metadata.skel.h-deps := xdp_hw_metadata.bpf.o
xdp_features.skel.h-deps := xdp_features.bpf.o
+tracing_multi.skel.h-deps := tracing_multi_attach.bpf.o tracing_multi_check.bpf.o
LINKED_BPF_OBJS := $(foreach skel,$(LINKED_SKELS),$($(skel)-deps))
LINKED_BPF_SRCS := $(patsubst %.bpf.o,%.c,$(LINKED_BPF_OBJS))
diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
new file mode 100644
index 000000000000..fc22a2cf8c13
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
@@ -0,0 +1,252 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <test_progs.h>
+#include <bpf/btf.h>
+#include <search.h>
+#include "bpf/libbpf_internal.h"
+#include "tracing_multi.skel.h"
+#include "trace_helpers.h"
+
+static const char * const bpf_fentry_test[] = {
+ "bpf_fentry_test1",
+ "bpf_fentry_test2",
+ "bpf_fentry_test3",
+ "bpf_fentry_test4",
+ "bpf_fentry_test5",
+ "bpf_fentry_test6",
+ "bpf_fentry_test7",
+ "bpf_fentry_test8",
+ "bpf_fentry_test9",
+ "bpf_fentry_test10",
+};
+
+#define FUNCS_CNT (ARRAY_SIZE(bpf_fentry_test))
+
+static int compare(const void *ppa, const void *ppb)
+{
+ const char *pa = *(const char **) ppa;
+ const char *pb = *(const char **) ppb;
+
+ return strcmp(pa, pb);
+}
+
+static void tdestroy_free_nop(void *ptr)
+{
+}
+
+static __u32 *get_ids(const char * const funcs[], int funcs_cnt, const char *mod)
+{
+ struct btf *btf, *vmlinux_btf;
+ __u32 nr, type_id, cnt = 0;
+ void *root = NULL;
+ __u32 *ids = NULL;
+ int i, err = 0;
+
+ btf = btf__load_vmlinux_btf();
+ if (!ASSERT_OK_PTR(btf, "btf__load_vmlinux_btf"))
+ return NULL;
+
+ if (mod) {
+ vmlinux_btf = btf;
+ btf = btf__load_module_btf(mod, vmlinux_btf);
+ if (!ASSERT_OK_PTR(btf, "btf__load_module_btf")) {
+ btf__free(vmlinux_btf);
+ goto out;
+ }
+ }
+
+ ids = calloc(funcs_cnt, sizeof(ids[0]));
+ if (!ids)
+ goto out;
+
+ /*
+ * We sort function names by name and search them
+ * below for each function.
+ */
+ for (i = 0; i < funcs_cnt; i++)
+ tsearch(&funcs[i], &root, compare);
+
+ nr = btf__type_cnt(btf);
+ for (type_id = 1; type_id < nr && cnt < funcs_cnt; type_id++) {
+ const struct btf_type *type;
+ const char *str, ***val;
+ unsigned int idx;
+
+ type = btf__type_by_id(btf, type_id);
+ if (!type) {
+ err = -1;
+ break;
+ }
+
+ if (BTF_INFO_KIND(type->info) != BTF_KIND_FUNC)
+ continue;
+
+ str = btf__name_by_offset(btf, type->name_off);
+ if (!str) {
+ err = -1;
+ break;
+ }
+
+ val = tfind(&str, &root, compare);
+ if (!val)
+ continue;
+
+		/*
+		 * We keep a pointer to each function name so we can recover
+		 * the original array index and have the resulting ids array
+		 * match the original function array.
+		 *
+		 * Doing it this way allows us to easily test the cookies
+		 * support, because each cookie is attached to a particular
+		 * function/id.
+		 */
+ idx = *val - funcs;
+ ids[idx] = type_id;
+ cnt++;
+ }
+
+ if (err) {
+ free(ids);
+ ids = NULL;
+ }
+
+out:
+ tdestroy(root, tdestroy_free_nop);
+	/* this also releases the base btf (vmlinux_btf) */
+ btf__free(btf);
+ return ids;
+}
+
+static void tracing_multi_test_run(struct tracing_multi *skel)
+{
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+ int err, prog_fd;
+
+ prog_fd = bpf_program__fd(skel->progs.test_fentry);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
+
+ /* extra +1 count for sleepable programs */
+ ASSERT_EQ(skel->bss->test_result_fentry, FUNCS_CNT + 1, "test_result_fentry");
+ ASSERT_EQ(skel->bss->test_result_fexit, FUNCS_CNT + 1, "test_result_fexit");
+}
+
+static void test_skel_api(void)
+{
+ struct tracing_multi *skel;
+ int err;
+
+ skel = tracing_multi__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "tracing_multi__open_and_load"))
+ return;
+
+ skel->bss->pid = getpid();
+
+ err = tracing_multi__attach(skel);
+ if (!ASSERT_OK(err, "tracing_multi__attach"))
+ goto cleanup;
+
+ tracing_multi_test_run(skel);
+
+cleanup:
+ tracing_multi__destroy(skel);
+}
+
+static void test_link_api_pattern(void)
+{
+ struct tracing_multi *skel;
+
+ skel = tracing_multi__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "tracing_multi__open_and_load"))
+ return;
+
+ skel->bss->pid = getpid();
+
+ skel->links.test_fentry = bpf_program__attach_tracing_multi(skel->progs.test_fentry,
+ "bpf_fentry_test*", NULL);
+ if (!ASSERT_OK_PTR(skel->links.test_fentry, "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ skel->links.test_fexit = bpf_program__attach_tracing_multi(skel->progs.test_fexit,
+ "bpf_fentry_test*", NULL);
+ if (!ASSERT_OK_PTR(skel->links.test_fexit, "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ skel->links.test_fentry_s = bpf_program__attach_tracing_multi(skel->progs.test_fentry_s,
+ "bpf_fentry_test1", NULL);
+ if (!ASSERT_OK_PTR(skel->links.test_fentry_s, "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ skel->links.test_fexit_s = bpf_program__attach_tracing_multi(skel->progs.test_fexit_s,
+ "bpf_fentry_test1", NULL);
+ if (!ASSERT_OK_PTR(skel->links.test_fexit_s, "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ tracing_multi_test_run(skel);
+
+cleanup:
+ tracing_multi__destroy(skel);
+}
+
+static void test_link_api_ids(void)
+{
+ LIBBPF_OPTS(bpf_tracing_multi_opts, opts);
+ struct tracing_multi *skel;
+ size_t cnt = FUNCS_CNT;
+ __u32 *ids;
+
+ skel = tracing_multi__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "tracing_multi__open_and_load"))
+ return;
+
+ skel->bss->pid = getpid();
+
+ ids = get_ids(bpf_fentry_test, cnt, NULL);
+ if (!ASSERT_OK_PTR(ids, "get_ids"))
+ goto cleanup;
+
+ opts.ids = ids;
+ opts.cnt = cnt;
+
+ skel->links.test_fentry = bpf_program__attach_tracing_multi(skel->progs.test_fentry,
+ NULL, &opts);
+ if (!ASSERT_OK_PTR(skel->links.test_fentry, "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ skel->links.test_fexit = bpf_program__attach_tracing_multi(skel->progs.test_fexit,
+ NULL, &opts);
+ if (!ASSERT_OK_PTR(skel->links.test_fexit, "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ /* Only bpf_fentry_test1 is allowed for sleepable programs. */
+ opts.cnt = 1;
+ skel->links.test_fentry_s = bpf_program__attach_tracing_multi(skel->progs.test_fentry_s,
+ NULL, &opts);
+ if (!ASSERT_OK_PTR(skel->links.test_fentry_s, "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ skel->links.test_fexit_s = bpf_program__attach_tracing_multi(skel->progs.test_fexit_s,
+ NULL, &opts);
+ if (!ASSERT_OK_PTR(skel->links.test_fexit_s, "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ tracing_multi_test_run(skel);
+
+cleanup:
+ tracing_multi__destroy(skel);
+ free(ids);
+}
+
+void test_tracing_multi_test(void)
+{
+#ifndef __x86_64__
+ test__skip();
+ return;
+#endif
+
+ if (test__start_subtest("skel_api"))
+ test_skel_api();
+ if (test__start_subtest("link_api_pattern"))
+ test_link_api_pattern();
+ if (test__start_subtest("link_api_ids"))
+ test_link_api_ids();
+}
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_attach.c b/tools/testing/selftests/bpf/progs/tracing_multi_attach.c
new file mode 100644
index 000000000000..ae5e044b6997
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_attach.c
@@ -0,0 +1,39 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+__hidden extern void tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return);
+
+__u64 test_result_fentry = 0;
+__u64 test_result_fexit = 0;
+
+SEC("fentry.multi/bpf_fentry_test*")
+int BPF_PROG(test_fentry)
+{
+ tracing_multi_arg_check(ctx, &test_result_fentry, false);
+ return 0;
+}
+
+SEC("fexit.multi/bpf_fentry_test*")
+int BPF_PROG(test_fexit)
+{
+ tracing_multi_arg_check(ctx, &test_result_fexit, true);
+ return 0;
+}
+
+SEC("fentry.multi.s/bpf_fentry_test1")
+int BPF_PROG(test_fentry_s)
+{
+ tracing_multi_arg_check(ctx, &test_result_fentry, false);
+ return 0;
+}
+
+SEC("fexit.multi.s/bpf_fentry_test1")
+int BPF_PROG(test_fexit_s)
+{
+ tracing_multi_arg_check(ctx, &test_result_fexit, true);
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_check.c b/tools/testing/selftests/bpf/progs/tracing_multi_check.c
new file mode 100644
index 000000000000..580195729506
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_check.c
@@ -0,0 +1,149 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+int pid = 0;
+
+/* bpf_fentry_test1 is exported as kfunc via vmlinux.h */
+extern const void bpf_fentry_test2 __ksym;
+extern const void bpf_fentry_test3 __ksym;
+extern const void bpf_fentry_test4 __ksym;
+extern const void bpf_fentry_test5 __ksym;
+extern const void bpf_fentry_test6 __ksym;
+extern const void bpf_fentry_test7 __ksym;
+extern const void bpf_fentry_test8 __ksym;
+extern const void bpf_fentry_test9 __ksym;
+extern const void bpf_fentry_test10 __ksym;
+
+void tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return)
+{
+ void *ip = (void *) bpf_get_func_ip(ctx);
+ __u64 value = 0, ret = 0;
+ long err = 0;
+
+ if (bpf_get_current_pid_tgid() >> 32 != pid)
+ return;
+
+ if (is_return)
+ err |= bpf_get_func_ret(ctx, &ret);
+
+ if (ip == &bpf_fentry_test1) {
+ int a;
+
+ err |= bpf_get_func_arg(ctx, 0, &value);
+ a = (int) value;
+
+ err |= is_return ? ret != 2 : 0;
+
+ *test_result += err == 0 && a == 1;
+ } else if (ip == &bpf_fentry_test2) {
+ __u64 b;
+ int a;
+
+ err |= bpf_get_func_arg(ctx, 0, &value);
+ a = (int) value;
+ err |= bpf_get_func_arg(ctx, 1, &value);
+ b = value;
+
+ err |= is_return ? ret != 5 : 0;
+
+ *test_result += err == 0 && a == 2 && b == 3;
+ } else if (ip == &bpf_fentry_test3) {
+ __u64 c;
+ char a;
+ int b;
+
+ err |= bpf_get_func_arg(ctx, 0, &value);
+ a = (char) value;
+ err |= bpf_get_func_arg(ctx, 1, &value);
+ b = (int) value;
+ err |= bpf_get_func_arg(ctx, 2, &value);
+ c = value;
+
+ err |= is_return ? ret != 15 : 0;
+
+ *test_result += err == 0 && a == 4 && b == 5 && c == 6;
+ } else if (ip == &bpf_fentry_test4) {
+ void *a;
+ char b;
+ int c;
+ __u64 d;
+
+ err |= bpf_get_func_arg(ctx, 0, &value);
+ a = (void *) value;
+ err |= bpf_get_func_arg(ctx, 1, &value);
+ b = (char) value;
+ err |= bpf_get_func_arg(ctx, 2, &value);
+ c = (int) value;
+ err |= bpf_get_func_arg(ctx, 3, &value);
+ d = value;
+
+ err |= is_return ? ret != 34 : 0;
+
+ *test_result += err == 0 && a == (void *) 7 && b == 8 && c == 9 && d == 10;
+ } else if (ip == &bpf_fentry_test5) {
+ __u64 a;
+ void *b;
+ short c;
+ int d;
+ __u64 e;
+
+ err |= bpf_get_func_arg(ctx, 0, &value);
+ a = value;
+ err |= bpf_get_func_arg(ctx, 1, &value);
+ b = (void *) value;
+ err |= bpf_get_func_arg(ctx, 2, &value);
+ c = (short) value;
+ err |= bpf_get_func_arg(ctx, 3, &value);
+ d = (int) value;
+ err |= bpf_get_func_arg(ctx, 4, &value);
+ e = value;
+
+ err |= is_return ? ret != 65 : 0;
+
+ *test_result += err == 0 && a == 11 && b == (void *) 12 && c == 13 && d == 14 && e == 15;
+ } else if (ip == &bpf_fentry_test6) {
+ __u64 a;
+ void *b;
+ short c;
+ int d;
+ void *e;
+ __u64 f;
+
+ err |= bpf_get_func_arg(ctx, 0, &value);
+ a = value;
+ err |= bpf_get_func_arg(ctx, 1, &value);
+ b = (void *) value;
+ err |= bpf_get_func_arg(ctx, 2, &value);
+ c = (short) value;
+ err |= bpf_get_func_arg(ctx, 3, &value);
+ d = (int) value;
+ err |= bpf_get_func_arg(ctx, 4, &value);
+ e = (void *) value;
+ err |= bpf_get_func_arg(ctx, 5, &value);
+ f = value;
+
+ err |= is_return ? ret != 111 : 0;
+
+ *test_result += err == 0 && a == 16 && b == (void *) 17 && c == 18 && d == 19 && e == (void *) 20 && f == 21;
+ } else if (ip == &bpf_fentry_test7) {
+ err |= is_return ? ret != 0 : 0;
+
+ *test_result += err == 0 ? 1 : 0;
+ } else if (ip == &bpf_fentry_test8) {
+ err |= is_return ? ret != 0 : 0;
+
+ *test_result += err == 0 ? 1 : 0;
+ } else if (ip == &bpf_fentry_test9) {
+ err |= is_return ? ret != 0 : 0;
+
+ *test_result += err == 0 ? 1 : 0;
+ } else if (ip == &bpf_fentry_test10) {
+ err |= is_return ? ret != 0 : 0;
+
+ *test_result += err == 0 ? 1 : 0;
+ }
+}
--
2.53.0
* [PATCHv4 bpf-next 19/25] selftests/bpf: Add tracing multi skel/pattern/ids module attach tests
2026-03-24 8:18 [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Jiri Olsa
` (17 preceding siblings ...)
2026-03-24 8:18 ` [PATCHv4 bpf-next 18/25] selftests/bpf: Add tracing multi skel/pattern/ids attach tests Jiri Olsa
@ 2026-03-24 8:18 ` Jiri Olsa
2026-03-24 8:18 ` [PATCHv4 bpf-next 20/25] selftests/bpf: Add tracing multi intersect tests Jiri Olsa
` (6 subsequent siblings)
25 siblings, 0 replies; 41+ messages in thread
From: Jiri Olsa @ 2026-03-24 8:18 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Add tests for tracing_multi link attachment via all available
libbpf APIs (skeleton, function pattern and BTF ids) on top of
the bpf_testmod kernel module.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
tools/testing/selftests/bpf/Makefile | 4 +-
.../selftests/bpf/prog_tests/tracing_multi.c | 105 ++++++++++++++++++
.../bpf/progs/tracing_multi_attach_module.c | 25 +++++
.../selftests/bpf/progs/tracing_multi_check.c | 50 +++++++++
4 files changed, 183 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_attach_module.c
diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 308c085bad08..59e2d1f8f5cc 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -493,7 +493,8 @@ SKEL_BLACKLIST := btf__% test_pinning_invalid.c test_sk_assign.c
LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h \
linked_vars.skel.h linked_maps.skel.h \
test_subskeleton.skel.h test_subskeleton_lib.skel.h \
- test_usdt.skel.h tracing_multi.skel.h
+ test_usdt.skel.h tracing_multi.skel.h \
+ tracing_multi_module.skel.h
LSKELS := fexit_sleep.c trace_printk.c trace_vprintk.c map_ptr_kern.c \
core_kern.c core_kern_overflow.c test_ringbuf.c \
@@ -520,6 +521,7 @@ xsk_xdp_progs.skel.h-deps := xsk_xdp_progs.bpf.o
xdp_hw_metadata.skel.h-deps := xdp_hw_metadata.bpf.o
xdp_features.skel.h-deps := xdp_features.bpf.o
tracing_multi.skel.h-deps := tracing_multi_attach.bpf.o tracing_multi_check.bpf.o
+tracing_multi_module.skel.h-deps := tracing_multi_attach_module.bpf.o tracing_multi_check.bpf.o
LINKED_BPF_OBJS := $(foreach skel,$(LINKED_SKELS),$($(skel)-deps))
LINKED_BPF_SRCS := $(patsubst %.bpf.o,%.c,$(LINKED_BPF_OBJS))
diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
index fc22a2cf8c13..c533c1671d58 100644
--- a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
+++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
@@ -5,6 +5,7 @@
#include <search.h>
#include "bpf/libbpf_internal.h"
#include "tracing_multi.skel.h"
+#include "tracing_multi_module.skel.h"
#include "trace_helpers.h"
static const char * const bpf_fentry_test[] = {
@@ -20,6 +21,14 @@ static const char * const bpf_fentry_test[] = {
"bpf_fentry_test10",
};
+static const char * const bpf_testmod_fentry_test[] = {
+ "bpf_testmod_fentry_test1",
+ "bpf_testmod_fentry_test2",
+ "bpf_testmod_fentry_test3",
+ "bpf_testmod_fentry_test7",
+ "bpf_testmod_fentry_test11",
+};
+
#define FUNCS_CNT (ARRAY_SIZE(bpf_fentry_test))
static int compare(const void *ppa, const void *ppb)
@@ -236,6 +245,96 @@ static void test_link_api_ids(void)
free(ids);
}
+static void test_module_skel_api(void)
+{
+ struct tracing_multi_module *skel = NULL;
+ int err;
+
+ skel = tracing_multi_module__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "tracing_multi__open_and_load"))
+ return;
+
+ skel->bss->pid = getpid();
+
+ err = tracing_multi_module__attach(skel);
+ if (!ASSERT_OK(err, "tracing_multi__attach"))
+ goto cleanup;
+
+ ASSERT_OK(trigger_module_test_read(1), "trigger_read");
+ ASSERT_EQ(skel->bss->test_result_fentry, 5, "test_result_fentry");
+ ASSERT_EQ(skel->bss->test_result_fexit, 5, "test_result_fexit");
+
+cleanup:
+ tracing_multi_module__destroy(skel);
+}
+
+static void test_module_link_api_pattern(void)
+{
+ struct tracing_multi_module *skel = NULL;
+
+ skel = tracing_multi_module__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "tracing_multi_module__open_and_load"))
+ return;
+
+ skel->bss->pid = getpid();
+
+ skel->links.test_fentry = bpf_program__attach_tracing_multi(skel->progs.test_fentry,
+ "bpf_testmod:bpf_testmod_fentry_test*", NULL);
+ if (!ASSERT_OK_PTR(skel->links.test_fentry, "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ skel->links.test_fexit = bpf_program__attach_tracing_multi(skel->progs.test_fexit,
+ "bpf_testmod:bpf_testmod_fentry_test*", NULL);
+ if (!ASSERT_OK_PTR(skel->links.test_fexit, "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ ASSERT_OK(trigger_module_test_read(1), "trigger_read");
+ ASSERT_EQ(skel->bss->test_result_fentry, 5, "test_result_fentry");
+ ASSERT_EQ(skel->bss->test_result_fexit, 5, "test_result_fexit");
+
+cleanup:
+ tracing_multi_module__destroy(skel);
+}
+
+static void test_module_link_api_ids(void)
+{
+ size_t cnt = ARRAY_SIZE(bpf_testmod_fentry_test);
+ LIBBPF_OPTS(bpf_tracing_multi_opts, opts);
+ struct tracing_multi_module *skel = NULL;
+ __u32 *ids;
+
+ skel = tracing_multi_module__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "tracing_multi_module__open_and_load"))
+ return;
+
+ skel->bss->pid = getpid();
+
+ ids = get_ids(bpf_testmod_fentry_test, cnt, "bpf_testmod");
+ if (!ASSERT_OK_PTR(ids, "get_ids"))
+ goto cleanup;
+
+ opts.ids = ids;
+ opts.cnt = cnt;
+
+ skel->links.test_fentry = bpf_program__attach_tracing_multi(skel->progs.test_fentry,
+ NULL, &opts);
+ if (!ASSERT_OK_PTR(skel->links.test_fentry, "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ skel->links.test_fexit = bpf_program__attach_tracing_multi(skel->progs.test_fexit,
+ NULL, &opts);
+ if (!ASSERT_OK_PTR(skel->links.test_fexit, "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ ASSERT_OK(trigger_module_test_read(1), "trigger_read");
+ ASSERT_EQ(skel->bss->test_result_fentry, 5, "test_result_fentry");
+ ASSERT_EQ(skel->bss->test_result_fexit, 5, "test_result_fexit");
+
+cleanup:
+ tracing_multi_module__destroy(skel);
+ free(ids);
+}
+
void test_tracing_multi_test(void)
{
#ifndef __x86_64__
@@ -249,4 +348,10 @@ void test_tracing_multi_test(void)
test_link_api_pattern();
if (test__start_subtest("link_api_ids"))
test_link_api_ids();
+ if (test__start_subtest("module_skel_api"))
+ test_module_skel_api();
+ if (test__start_subtest("module_link_api_pattern"))
+ test_module_link_api_pattern();
+ if (test__start_subtest("module_link_api_ids"))
+ test_module_link_api_ids();
}
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_attach_module.c b/tools/testing/selftests/bpf/progs/tracing_multi_attach_module.c
new file mode 100644
index 000000000000..ad9e0a5fda4e
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_attach_module.c
@@ -0,0 +1,25 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+__hidden extern void tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return);
+
+__u64 test_result_fentry = 0;
+__u64 test_result_fexit = 0;
+
+SEC("fentry.multi/bpf_testmod:bpf_testmod_fentry_test*")
+int BPF_PROG(test_fentry)
+{
+ tracing_multi_arg_check(ctx, &test_result_fentry, false);
+ return 0;
+}
+
+SEC("fexit.multi/bpf_testmod:bpf_testmod_fentry_test*")
+int BPF_PROG(test_fexit)
+{
+ tracing_multi_arg_check(ctx, &test_result_fexit, true);
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_check.c b/tools/testing/selftests/bpf/progs/tracing_multi_check.c
index 580195729506..631fa76ead6a 100644
--- a/tools/testing/selftests/bpf/progs/tracing_multi_check.c
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_check.c
@@ -18,6 +18,12 @@ extern const void bpf_fentry_test8 __ksym;
extern const void bpf_fentry_test9 __ksym;
extern const void bpf_fentry_test10 __ksym;
+extern const void bpf_testmod_fentry_test1 __ksym;
+extern const void bpf_testmod_fentry_test2 __ksym;
+extern const void bpf_testmod_fentry_test3 __ksym;
+extern const void bpf_testmod_fentry_test7 __ksym;
+extern const void bpf_testmod_fentry_test11 __ksym;
+
void tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return)
{
void *ip = (void *) bpf_get_func_ip(ctx);
@@ -145,5 +151,49 @@ void tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return)
err |= is_return ? ret != 0 : 0;
*test_result += err == 0 ? 1 : 0;
+ } else if (ip == &bpf_testmod_fentry_test1) {
+ int a;
+
+ err |= bpf_get_func_arg(ctx, 0, &value);
+ a = (int) value;
+
+ err |= is_return ? ret != 2 : 0;
+
+ *test_result += err == 0 && a == 1;
+ } else if (ip == &bpf_testmod_fentry_test2) {
+ int a;
+ __u64 b;
+
+ err |= bpf_get_func_arg(ctx, 0, &value);
+ a = (int) value;
+ err |= bpf_get_func_arg(ctx, 1, &value);
+ b = (__u64) value;
+
+ err |= is_return ? ret != 5 : 0;
+
+ *test_result += err == 0 && a == 2 && b == 3;
+ } else if (ip == &bpf_testmod_fentry_test3) {
+ char a;
+ int b;
+ __u64 c;
+
+ err |= bpf_get_func_arg(ctx, 0, &value);
+ a = (char) value;
+ err |= bpf_get_func_arg(ctx, 1, &value);
+ b = (int) value;
+ err |= bpf_get_func_arg(ctx, 2, &value);
+ c = (__u64) value;
+
+ err |= is_return ? ret != 15 : 0;
+
+ *test_result += err == 0 && a == 4 && b == 5 && c == 6;
+ } else if (ip == &bpf_testmod_fentry_test7) {
+ err |= is_return ? ret != 133 : 0;
+
+ *test_result += err == 0;
+ } else if (ip == &bpf_testmod_fentry_test11) {
+ err |= is_return ? ret != 231 : 0;
+
+ *test_result += err == 0;
}
}
--
2.53.0
* [PATCHv4 bpf-next 20/25] selftests/bpf: Add tracing multi intersect tests
2026-03-24 8:18 [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Jiri Olsa
` (18 preceding siblings ...)
2026-03-24 8:18 ` [PATCHv4 bpf-next 19/25] selftests/bpf: Add tracing multi skel/pattern/ids module " Jiri Olsa
@ 2026-03-24 8:18 ` Jiri Olsa
2026-03-24 8:18 ` [PATCHv4 bpf-next 21/25] selftests/bpf: Add tracing multi cookies test Jiri Olsa
` (5 subsequent siblings)
25 siblings, 0 replies; 41+ messages in thread
From: Jiri Olsa @ 2026-03-24 8:18 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Add tracing multi tests for intersecting attached functions.
A bitmask (iterating over values 1 through 15) selects which of the
4 test programs get attached, and the bpf_fentry_test* functions
each program attaches to are chosen randomly.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
tools/testing/selftests/bpf/Makefile | 4 +-
.../selftests/bpf/prog_tests/tracing_multi.c | 99 +++++++++++++++++++
.../progs/tracing_multi_intersect_attach.c | 41 ++++++++
3 files changed, 143 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_intersect_attach.c
diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 59e2d1f8f5cc..d33ad5ae0aff 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -494,7 +494,8 @@ LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h \
linked_vars.skel.h linked_maps.skel.h \
test_subskeleton.skel.h test_subskeleton_lib.skel.h \
test_usdt.skel.h tracing_multi.skel.h \
- tracing_multi_module.skel.h
+ tracing_multi_module.skel.h \
+ tracing_multi_intersect.skel.h
LSKELS := fexit_sleep.c trace_printk.c trace_vprintk.c map_ptr_kern.c \
core_kern.c core_kern_overflow.c test_ringbuf.c \
@@ -522,6 +523,7 @@ xdp_hw_metadata.skel.h-deps := xdp_hw_metadata.bpf.o
xdp_features.skel.h-deps := xdp_features.bpf.o
tracing_multi.skel.h-deps := tracing_multi_attach.bpf.o tracing_multi_check.bpf.o
tracing_multi_module.skel.h-deps := tracing_multi_attach_module.bpf.o tracing_multi_check.bpf.o
+tracing_multi_intersect.skel.h-deps := tracing_multi_intersect_attach.bpf.o tracing_multi_check.bpf.o
LINKED_BPF_OBJS := $(foreach skel,$(LINKED_SKELS),$($(skel)-deps))
LINKED_BPF_SRCS := $(patsubst %.bpf.o,%.c,$(LINKED_BPF_OBJS))
diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
index c533c1671d58..44c6f3fbc82d 100644
--- a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
+++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
@@ -6,6 +6,7 @@
#include "bpf/libbpf_internal.h"
#include "tracing_multi.skel.h"
#include "tracing_multi_module.skel.h"
+#include "tracing_multi_intersect.skel.h"
#include "trace_helpers.h"
static const char * const bpf_fentry_test[] = {
@@ -31,6 +32,20 @@ static const char * const bpf_testmod_fentry_test[] = {
#define FUNCS_CNT (ARRAY_SIZE(bpf_fentry_test))
+static int get_random_funcs(const char **funcs)
+{
+ int i, cnt = 0;
+
+ for (i = 0; i < FUNCS_CNT; i++) {
+ if (rand() % 2)
+ funcs[cnt++] = bpf_fentry_test[i];
+ }
+	/* we always need at least one function */
+ if (!cnt)
+ funcs[cnt++] = bpf_fentry_test[rand() % FUNCS_CNT];
+ return cnt;
+}
+
static int compare(const void *ppa, const void *ppb)
{
const char *pa = *(const char **) ppa;
@@ -335,6 +350,88 @@ static void test_module_link_api_ids(void)
free(ids);
}
+static bool is_set(__u32 mask, __u32 bit)
+{
+ return (1 << bit) & mask;
+}
+
+static void __test_intersect(__u32 mask, const struct bpf_program *progs[4], __u64 *test_results[4])
+{
+ LIBBPF_OPTS(bpf_tracing_multi_opts, opts);
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+ struct bpf_link *links[4] = { NULL };
+ const char *funcs[FUNCS_CNT];
+ __u64 expected[4];
+ __u32 *ids, i;
+ int err, cnt;
+
+ /*
+ * We have 4 programs in progs and the mask bits pick which
+ * of them gets attached to randomly chosen functions.
+ */
+ for (i = 0; i < 4; i++) {
+ if (!is_set(mask, i))
+ continue;
+
+ cnt = get_random_funcs(funcs);
+ ids = get_ids(funcs, cnt, NULL);
+ if (!ASSERT_OK_PTR(ids, "get_ids"))
+ goto cleanup;
+
+ opts.ids = ids;
+ opts.cnt = cnt;
+ links[i] = bpf_program__attach_tracing_multi(progs[i], NULL, &opts);
+ free(ids);
+
+ if (!ASSERT_OK_PTR(links[i], "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ expected[i] = *test_results[i] + cnt;
+ }
+
+ err = bpf_prog_test_run_opts(bpf_program__fd(progs[0]), &topts);
+ ASSERT_OK(err, "test_run");
+
+ for (i = 0; i < 4; i++) {
+ if (!is_set(mask, i))
+ continue;
+ ASSERT_EQ(*test_results[i], expected[i], "test_results");
+ }
+
+cleanup:
+ for (i = 0; i < 4; i++)
+ bpf_link__destroy(links[i]);
+}
+
+static void test_intersect(void)
+{
+ struct tracing_multi_intersect *skel;
+ const struct bpf_program *progs[4];
+ __u64 *test_results[4];
+ __u32 i;
+
+ skel = tracing_multi_intersect__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "tracing_multi_intersect__open_and_load"))
+ return;
+
+ skel->bss->pid = getpid();
+
+ progs[0] = skel->progs.fentry_1;
+ progs[1] = skel->progs.fexit_1;
+ progs[2] = skel->progs.fentry_2;
+ progs[3] = skel->progs.fexit_2;
+
+ test_results[0] = &skel->bss->test_result_fentry_1;
+ test_results[1] = &skel->bss->test_result_fexit_1;
+ test_results[2] = &skel->bss->test_result_fentry_2;
+ test_results[3] = &skel->bss->test_result_fexit_2;
+
+ for (i = 1; i < 16; i++)
+ __test_intersect(i, progs, test_results);
+
+ tracing_multi_intersect__destroy(skel);
+}
+
void test_tracing_multi_test(void)
{
#ifndef __x86_64__
@@ -354,4 +451,6 @@ void test_tracing_multi_test(void)
test_module_link_api_pattern();
if (test__start_subtest("module_link_api_ids"))
test_module_link_api_ids();
+ if (test__start_subtest("intersect"))
+ test_intersect();
}
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_intersect_attach.c b/tools/testing/selftests/bpf/progs/tracing_multi_intersect_attach.c
new file mode 100644
index 000000000000..76511bd7661d
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_intersect_attach.c
@@ -0,0 +1,41 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+__hidden extern void tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return);
+
+__u64 test_result_fentry_1 = 0;
+__u64 test_result_fentry_2 = 0;
+__u64 test_result_fexit_1 = 0;
+__u64 test_result_fexit_2 = 0;
+
+SEC("fentry.multi")
+int BPF_PROG(fentry_1)
+{
+ tracing_multi_arg_check(ctx, &test_result_fentry_1, false);
+ return 0;
+}
+
+SEC("fentry.multi")
+int BPF_PROG(fentry_2)
+{
+ tracing_multi_arg_check(ctx, &test_result_fentry_2, false);
+ return 0;
+}
+
+SEC("fexit.multi")
+int BPF_PROG(fexit_1)
+{
+ tracing_multi_arg_check(ctx, &test_result_fexit_1, true);
+ return 0;
+}
+
+SEC("fexit.multi")
+int BPF_PROG(fexit_2)
+{
+ tracing_multi_arg_check(ctx, &test_result_fexit_2, true);
+ return 0;
+}
--
2.53.0
* [PATCHv4 bpf-next 21/25] selftests/bpf: Add tracing multi cookies test
2026-03-24 8:18 [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Jiri Olsa
` (19 preceding siblings ...)
2026-03-24 8:18 ` [PATCHv4 bpf-next 20/25] selftests/bpf: Add tracing multi intersect tests Jiri Olsa
@ 2026-03-24 8:18 ` Jiri Olsa
2026-03-24 8:18 ` [PATCHv4 bpf-next 22/25] selftests/bpf: Add tracing multi session test Jiri Olsa
` (4 subsequent siblings)
25 siblings, 0 replies; 41+ messages in thread
From: Jiri Olsa @ 2026-03-24 8:18 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Add a test for using cookies on the tracing multi link.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
.../selftests/bpf/prog_tests/tracing_multi.c | 23 +++++++++++++++++--
.../selftests/bpf/progs/tracing_multi_check.c | 15 +++++++++++-
2 files changed, 35 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
index 44c6f3fbc82d..c452bf574f22 100644
--- a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
+++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
@@ -9,6 +9,19 @@
#include "tracing_multi_intersect.skel.h"
#include "trace_helpers.h"
+static __u64 bpf_fentry_test_cookies[] = {
+ 8, /* bpf_fentry_test1 */
+ 9, /* bpf_fentry_test2 */
+ 7, /* bpf_fentry_test3 */
+ 5, /* bpf_fentry_test4 */
+ 4, /* bpf_fentry_test5 */
+ 2, /* bpf_fentry_test6 */
+ 3, /* bpf_fentry_test7 */
+ 1, /* bpf_fentry_test8 */
+ 10, /* bpf_fentry_test9 */
+ 6, /* bpf_fentry_test10 */
+};
+
static const char * const bpf_fentry_test[] = {
"bpf_fentry_test1",
"bpf_fentry_test2",
@@ -211,7 +224,7 @@ static void test_link_api_pattern(void)
tracing_multi__destroy(skel);
}
-static void test_link_api_ids(void)
+static void test_link_api_ids(bool test_cookies)
{
LIBBPF_OPTS(bpf_tracing_multi_opts, opts);
struct tracing_multi *skel;
@@ -223,6 +236,7 @@ static void test_link_api_ids(void)
return;
skel->bss->pid = getpid();
+ skel->bss->test_cookies = test_cookies;
ids = get_ids(bpf_fentry_test, cnt, NULL);
if (!ASSERT_OK_PTR(ids, "get_ids"))
@@ -231,6 +245,9 @@ static void test_link_api_ids(void)
opts.ids = ids;
opts.cnt = cnt;
+ if (test_cookies)
+ opts.cookies = bpf_fentry_test_cookies;
+
skel->links.test_fentry = bpf_program__attach_tracing_multi(skel->progs.test_fentry,
NULL, &opts);
if (!ASSERT_OK_PTR(skel->links.test_fentry, "bpf_program__attach_tracing_multi"))
@@ -444,7 +461,7 @@ void test_tracing_multi_test(void)
if (test__start_subtest("link_api_pattern"))
test_link_api_pattern();
if (test__start_subtest("link_api_ids"))
- test_link_api_ids();
+ test_link_api_ids(false);
if (test__start_subtest("module_skel_api"))
test_module_skel_api();
if (test__start_subtest("module_link_api_pattern"))
@@ -453,4 +470,6 @@ void test_tracing_multi_test(void)
test_module_link_api_ids();
if (test__start_subtest("intersect"))
test_intersect();
+ if (test__start_subtest("cookies"))
+ test_link_api_ids(true);
}
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_check.c b/tools/testing/selftests/bpf/progs/tracing_multi_check.c
index 631fa76ead6a..e2f49393aee9 100644
--- a/tools/testing/selftests/bpf/progs/tracing_multi_check.c
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_check.c
@@ -6,6 +6,7 @@
char _license[] SEC("license") = "GPL";
int pid = 0;
+bool test_cookies = false;
/* bpf_fentry_test1 is exported as kfunc via vmlinux.h */
extern const void bpf_fentry_test2 __ksym;
@@ -27,7 +28,7 @@ extern const void bpf_testmod_fentry_test11 __ksym;
void tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return)
{
void *ip = (void *) bpf_get_func_ip(ctx);
- __u64 value = 0, ret = 0;
+ __u64 value = 0, ret = 0, cookie = 0;
long err = 0;
if (bpf_get_current_pid_tgid() >> 32 != pid)
@@ -35,6 +36,8 @@ void tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return)
if (is_return)
err |= bpf_get_func_ret(ctx, &ret);
+ if (test_cookies)
+ cookie = bpf_get_attach_cookie(ctx);
if (ip == &bpf_fentry_test1) {
int a;
@@ -43,6 +46,7 @@ void tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return)
a = (int) value;
err |= is_return ? ret != 2 : 0;
+ err |= test_cookies ? cookie != 8 : 0;
*test_result += err == 0 && a == 1;
} else if (ip == &bpf_fentry_test2) {
@@ -55,6 +59,7 @@ void tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return)
b = value;
err |= is_return ? ret != 5 : 0;
+ err |= test_cookies ? cookie != 9 : 0;
*test_result += err == 0 && a == 2 && b == 3;
} else if (ip == &bpf_fentry_test3) {
@@ -70,6 +75,7 @@ void tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return)
c = value;
err |= is_return ? ret != 15 : 0;
+ err |= test_cookies ? cookie != 7 : 0;
*test_result += err == 0 && a == 4 && b == 5 && c == 6;
} else if (ip == &bpf_fentry_test4) {
@@ -88,6 +94,7 @@ void tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return)
d = value;
err |= is_return ? ret != 34 : 0;
+ err |= test_cookies ? cookie != 5 : 0;
*test_result += err == 0 && a == (void *) 7 && b == 8 && c == 9 && d == 10;
} else if (ip == &bpf_fentry_test5) {
@@ -109,6 +116,7 @@ void tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return)
e = value;
err |= is_return ? ret != 65 : 0;
+ err |= test_cookies ? cookie != 4 : 0;
*test_result += err == 0 && a == 11 && b == (void *) 12 && c == 13 && d == 14 && e == 15;
} else if (ip == &bpf_fentry_test6) {
@@ -133,22 +141,27 @@ void tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return)
f = value;
err |= is_return ? ret != 111 : 0;
+ err |= test_cookies ? cookie != 2 : 0;
*test_result += err == 0 && a == 16 && b == (void *) 17 && c == 18 && d == 19 && e == (void *) 20 && f == 21;
} else if (ip == &bpf_fentry_test7) {
err |= is_return ? ret != 0 : 0;
+ err |= test_cookies ? cookie != 3 : 0;
*test_result += err == 0 ? 1 : 0;
} else if (ip == &bpf_fentry_test8) {
err |= is_return ? ret != 0 : 0;
+ err |= test_cookies ? cookie != 1 : 0;
*test_result += err == 0 ? 1 : 0;
} else if (ip == &bpf_fentry_test9) {
err |= is_return ? ret != 0 : 0;
+ err |= test_cookies ? cookie != 10 : 0;
*test_result += err == 0 ? 1 : 0;
} else if (ip == &bpf_fentry_test10) {
err |= is_return ? ret != 0 : 0;
+ err |= test_cookies ? cookie != 6 : 0;
*test_result += err == 0 ? 1 : 0;
} else if (ip == &bpf_testmod_fentry_test1) {
--
2.53.0
* [PATCHv4 bpf-next 22/25] selftests/bpf: Add tracing multi session test
2026-03-24 8:18 [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Jiri Olsa
` (20 preceding siblings ...)
2026-03-24 8:18 ` [PATCHv4 bpf-next 21/25] selftests/bpf: Add tracing multi cookies test Jiri Olsa
@ 2026-03-24 8:18 ` Jiri Olsa
2026-03-24 8:18 ` [PATCHv4 bpf-next 23/25] selftests/bpf: Add tracing multi attach fails test Jiri Olsa
` (3 subsequent siblings)
25 siblings, 0 replies; 41+ messages in thread
From: Jiri Olsa @ 2026-03-24 8:18 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding tests for tracing multi link session.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
tools/testing/selftests/bpf/Makefile | 4 +-
.../selftests/bpf/prog_tests/tracing_multi.c | 40 +++++++++++++++++
.../bpf/progs/tracing_multi_session_attach.c | 43 +++++++++++++++++++
3 files changed, 86 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_session_attach.c
diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index d33ad5ae0aff..1262a40e3fba 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -495,7 +495,8 @@ LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h \
test_subskeleton.skel.h test_subskeleton_lib.skel.h \
test_usdt.skel.h tracing_multi.skel.h \
tracing_multi_module.skel.h \
- tracing_multi_intersect.skel.h
+ tracing_multi_intersect.skel.h \
+ tracing_multi_session.skel.h
LSKELS := fexit_sleep.c trace_printk.c trace_vprintk.c map_ptr_kern.c \
core_kern.c core_kern_overflow.c test_ringbuf.c \
@@ -524,6 +525,7 @@ xdp_features.skel.h-deps := xdp_features.bpf.o
tracing_multi.skel.h-deps := tracing_multi_attach.bpf.o tracing_multi_check.bpf.o
tracing_multi_module.skel.h-deps := tracing_multi_attach_module.bpf.o tracing_multi_check.bpf.o
tracing_multi_intersect.skel.h-deps := tracing_multi_intersect_attach.bpf.o tracing_multi_check.bpf.o
+tracing_multi_session.skel.h-deps := tracing_multi_session_attach.bpf.o tracing_multi_check.bpf.o
LINKED_BPF_OBJS := $(foreach skel,$(LINKED_SKELS),$($(skel)-deps))
LINKED_BPF_SRCS := $(patsubst %.bpf.o,%.c,$(LINKED_BPF_OBJS))
diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
index c452bf574f22..2ed43e4719cd 100644
--- a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
+++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
@@ -7,6 +7,7 @@
#include "tracing_multi.skel.h"
#include "tracing_multi_module.skel.h"
#include "tracing_multi_intersect.skel.h"
+#include "tracing_multi_session.skel.h"
#include "trace_helpers.h"
static __u64 bpf_fentry_test_cookies[] = {
@@ -449,6 +450,43 @@ static void test_intersect(void)
tracing_multi_intersect__destroy(skel);
}
+static void test_session(void)
+{
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+ struct tracing_multi_session *skel;
+ int err, prog_fd;
+
+ skel = tracing_multi_session__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "tracing_multi_session__open_and_load"))
+ return;
+
+ skel->bss->pid = getpid();
+
+ err = tracing_multi_session__attach(skel);
+ if (!ASSERT_OK(err, "tracing_multi_session__attach"))
+ goto cleanup;
+
+ /* execute kernel session */
+ prog_fd = bpf_program__fd(skel->progs.test_session_1);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
+
+ ASSERT_EQ(skel->bss->test_result_fentry, 10, "test_result_fentry");
+ /* extra count (+1 for each fexit execution) for test_result_fexit cookie */
+ ASSERT_EQ(skel->bss->test_result_fexit, 20, "test_result_fexit");
+
+ /* execute bpf_testmod.ko session */
+ ASSERT_OK(trigger_module_test_read(1), "trigger_read");
+
+ ASSERT_EQ(skel->bss->test_result_fentry, 15, "test_result_fentry");
+ /* extra count (+1 for each fexit execution) for test_result_fexit cookie */
+ ASSERT_EQ(skel->bss->test_result_fexit, 30, "test_result_fexit");
+
+
+cleanup:
+ tracing_multi_session__destroy(skel);
+}
+
void test_tracing_multi_test(void)
{
#ifndef __x86_64__
@@ -472,4 +510,6 @@ void test_tracing_multi_test(void)
test_intersect();
if (test__start_subtest("cookies"))
test_link_api_ids(true);
+ if (test__start_subtest("session"))
+ test_session();
}
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_session_attach.c b/tools/testing/selftests/bpf/progs/tracing_multi_session_attach.c
new file mode 100644
index 000000000000..c9e005939d74
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_session_attach.c
@@ -0,0 +1,43 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+__hidden extern void tracing_multi_arg_check(__u64 *ctx, __u64 *test_result, bool is_return);
+
+__u64 test_result_fentry = 0;
+__u64 test_result_fexit = 0;
+
+SEC("fsession.multi/bpf_fentry_test*")
+int BPF_PROG(test_session_1)
+{
+ volatile __u64 *cookie = bpf_session_cookie(ctx);
+
+ if (bpf_session_is_return(ctx)) {
+ tracing_multi_arg_check(ctx, &test_result_fexit, true);
+ /* extra count for test_result_fexit cookie */
+ test_result_fexit += *cookie == 0xbeafbeafbeafbeaf;
+ } else {
+ tracing_multi_arg_check(ctx, &test_result_fentry, false);
+ *cookie = 0xbeafbeafbeafbeaf;
+ }
+ return 0;
+}
+
+SEC("fsession.multi/bpf_testmod:bpf_testmod_fentry_test*")
+int BPF_PROG(test_session_2)
+{
+ volatile __u64 *cookie = bpf_session_cookie(ctx);
+
+ if (bpf_session_is_return(ctx)) {
+ tracing_multi_arg_check(ctx, &test_result_fexit, true);
+ /* extra count for test_result_fexit cookie */
+ test_result_fexit += *cookie == 0xbeafbeafbeafbeaf;
+ } else {
+ tracing_multi_arg_check(ctx, &test_result_fentry, false);
+ *cookie = 0xbeafbeafbeafbeaf;
+ }
+ return 0;
+}
--
2.53.0
* [PATCHv4 bpf-next 23/25] selftests/bpf: Add tracing multi attach fails test
2026-03-24 8:18 [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Jiri Olsa
` (21 preceding siblings ...)
2026-03-24 8:18 ` [PATCHv4 bpf-next 22/25] selftests/bpf: Add tracing multi session test Jiri Olsa
@ 2026-03-24 8:18 ` Jiri Olsa
2026-03-24 8:18 ` [PATCHv4 bpf-next 24/25] selftests/bpf: Add tracing multi attach benchmark test Jiri Olsa
` (2 subsequent siblings)
25 siblings, 0 replies; 41+ messages in thread
From: Jiri Olsa @ 2026-03-24 8:18 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding tests for attach failures on the tracing multi link.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
.../selftests/bpf/prog_tests/tracing_multi.c | 86 +++++++++++++++++++
.../selftests/bpf/progs/tracing_multi_fail.c | 18 ++++
2 files changed, 104 insertions(+)
create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_fail.c
diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
index 2ed43e4719cd..dece45d8fb5e 100644
--- a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
+++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
@@ -8,6 +8,7 @@
#include "tracing_multi_module.skel.h"
#include "tracing_multi_intersect.skel.h"
#include "tracing_multi_session.skel.h"
+#include "tracing_multi_fail.skel.h"
#include "trace_helpers.h"
static __u64 bpf_fentry_test_cookies[] = {
@@ -487,6 +488,89 @@ static void test_session(void)
tracing_multi_session__destroy(skel);
}
+static void test_attach_api_fails(void)
+{
+ LIBBPF_OPTS(bpf_tracing_multi_opts, opts);
+ static const char * const func[] = {
+ "bpf_fentry_test2",
+ };
+ struct tracing_multi_fail *skel = NULL;
+ __u32 ids[2], *ids2 = NULL;
+ __u64 cookies[2];
+
+ skel = tracing_multi_fail__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "tracing_multi_fail__open_and_load"))
+ return;
+
+ /* fail#1 pattern and opts NULL */
+ skel->links.test_fentry = bpf_program__attach_tracing_multi(skel->progs.test_fentry,
+ NULL, NULL);
+ if (!ASSERT_EQ(libbpf_get_error(skel->links.test_fentry), -EINVAL, "fail_1"))
+ goto cleanup;
+
+ /* fail#2 pattern and ids */
+ opts.ids = ids;
+ opts.cnt = 2;
+
+ skel->links.test_fentry = bpf_program__attach_tracing_multi(skel->progs.test_fentry,
+ "bpf_fentry_test*", &opts);
+ if (!ASSERT_EQ(libbpf_get_error(skel->links.test_fentry), -EINVAL, "fail_2"))
+ goto cleanup;
+
+ /* fail#3 pattern and cookies */
+ opts.ids = NULL;
+ opts.cnt = 2;
+ opts.cookies = cookies;
+
+ skel->links.test_fentry = bpf_program__attach_tracing_multi(skel->progs.test_fentry,
+ "bpf_fentry_test*", &opts);
+ if (!ASSERT_EQ(libbpf_get_error(skel->links.test_fentry), -EINVAL, "fail_3"))
+ goto cleanup;
+
+ /* fail#4 bogus pattern */
+ skel->links.test_fentry = bpf_program__attach_tracing_multi(skel->progs.test_fentry,
+ "bpf_not_really_a_function*", NULL);
+ if (!ASSERT_EQ(libbpf_get_error(skel->links.test_fentry), -EINVAL, "fail_4"))
+ goto cleanup;
+
+ /* fail#5 abnormal cnt */
+ opts.ids = ids;
+ opts.cnt = INT_MAX;
+
+ skel->links.test_fentry = bpf_program__attach_tracing_multi(skel->progs.test_fentry,
+ NULL, &opts);
+ if (!ASSERT_EQ(libbpf_get_error(skel->links.test_fentry), -E2BIG, "fail_5"))
+ goto cleanup;
+
+ /* fail#6 attach sleepable program to not-allowed function */
+ ids2 = get_ids(func, 1, NULL);
+ if (!ASSERT_OK_PTR(ids2, "get_ids"))
+ goto cleanup;
+
+ opts.ids = ids2;
+ opts.cnt = 1;
+
+ skel->links.test_fentry_s = bpf_program__attach_tracing_multi(skel->progs.test_fentry_s,
+ NULL, &opts);
+ if (!ASSERT_EQ(libbpf_get_error(skel->links.test_fentry_s), -EINVAL, "fail_6"))
+ goto cleanup;
+
+ /* fail#7 attach with duplicate id */
+ ids[0] = ids2[0];
+ ids[1] = ids2[0];
+
+ opts.ids = ids;
+ opts.cnt = 2;
+
+ skel->links.test_fentry = bpf_program__attach_tracing_multi(skel->progs.test_fentry,
+ NULL, &opts);
+ ASSERT_EQ(libbpf_get_error(skel->links.test_fentry), -EBUSY, "fail_7");
+
+cleanup:
+ tracing_multi_fail__destroy(skel);
+ free(ids2);
+}
+
void test_tracing_multi_test(void)
{
#ifndef __x86_64__
@@ -512,4 +596,6 @@ void test_tracing_multi_test(void)
test_link_api_ids(true);
if (test__start_subtest("session"))
test_session();
+ if (test__start_subtest("attach_api_fails"))
+ test_attach_api_fails();
}
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_fail.c b/tools/testing/selftests/bpf/progs/tracing_multi_fail.c
new file mode 100644
index 000000000000..7f0375f4213d
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_fail.c
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+SEC("fentry.multi")
+int BPF_PROG(test_fentry)
+{
+ return 0;
+}
+
+SEC("fentry.multi.s")
+int BPF_PROG(test_fentry_s)
+{
+ return 0;
+}
--
2.53.0
* [PATCHv4 bpf-next 24/25] selftests/bpf: Add tracing multi attach benchmark test
2026-03-24 8:18 [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Jiri Olsa
` (22 preceding siblings ...)
2026-03-24 8:18 ` [PATCHv4 bpf-next 23/25] selftests/bpf: Add tracing multi attach fails test Jiri Olsa
@ 2026-03-24 8:18 ` Jiri Olsa
2026-03-25 6:45 ` Leon Hwang
2026-03-24 8:18 ` [PATCHv4 bpf-next 25/25] selftests/bpf: Add tracing multi attach rollback tests Jiri Olsa
2026-03-25 6:42 ` [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Leon Hwang
25 siblings, 1 reply; 41+ messages in thread
From: Jiri Olsa @ 2026-03-24 8:18 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding benchmark test that attaches to (almost) all allowed tracing
functions and displays attach/detach times.
# ./test_progs -t tracing_multi_bench_attach -v
bpf_testmod.ko is already unloaded.
Loading bpf_testmod.ko...
Successfully loaded bpf_testmod.ko.
serial_test_tracing_multi_bench_attach:PASS:btf__load_vmlinux_btf 0 nsec
serial_test_tracing_multi_bench_attach:PASS:tracing_multi_bench__open_and_load 0 nsec
serial_test_tracing_multi_bench_attach:PASS:get_syms 0 nsec
serial_test_tracing_multi_bench_attach:PASS:bpf_program__attach_tracing_multi 0 nsec
serial_test_tracing_multi_bench_attach: found 51186 functions
serial_test_tracing_multi_bench_attach: attached in 1.295s
serial_test_tracing_multi_bench_attach: detached in 0.243s
#507 tracing_multi_bench_attach:OK
Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED
Successfully unloaded bpf_testmod.ko.
Exporting skip_entry as is_unsafe_function and using it in the test.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
.../selftests/bpf/prog_tests/tracing_multi.c | 98 +++++++++++++++++++
.../selftests/bpf/progs/tracing_multi_bench.c | 12 +++
tools/testing/selftests/bpf/trace_helpers.c | 6 +-
tools/testing/selftests/bpf/trace_helpers.h | 1 +
4 files changed, 114 insertions(+), 3 deletions(-)
create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_bench.c
diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
index dece45d8fb5e..6917471e329c 100644
--- a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
+++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
@@ -9,6 +9,7 @@
#include "tracing_multi_intersect.skel.h"
#include "tracing_multi_session.skel.h"
#include "tracing_multi_fail.skel.h"
+#include "tracing_multi_bench.skel.h"
#include "trace_helpers.h"
static __u64 bpf_fentry_test_cookies[] = {
@@ -571,6 +572,103 @@ static void test_attach_api_fails(void)
free(ids2);
}
+void serial_test_tracing_multi_bench_attach(void)
+{
+ LIBBPF_OPTS(bpf_tracing_multi_opts, opts);
+ struct tracing_multi_bench *skel = NULL;
+ long attach_start_ns, attach_end_ns;
+ long detach_start_ns, detach_end_ns;
+ double attach_delta, detach_delta;
+ struct bpf_link *link = NULL;
+ size_t i, cap = 0, cnt = 0;
+ struct ksyms *ksyms = NULL;
+ void *root = NULL;
+ __u32 *ids = NULL;
+ __u32 nr, type_id;
+ struct btf *btf;
+ int err;
+
+#ifndef __x86_64__
+ test__skip();
+ return;
+#endif
+
+ btf = btf__load_vmlinux_btf();
+ if (!ASSERT_OK_PTR(btf, "btf__load_vmlinux_btf"))
+ return;
+
+ skel = tracing_multi_bench__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "tracing_multi_bench__open_and_load"))
+ goto cleanup;
+
+ if (!ASSERT_OK(bpf_get_ksyms(&ksyms, true), "get_syms"))
+ goto cleanup;
+
+ /* Get all ftrace 'safe' symbols.. */
+ for (i = 0; i < ksyms->filtered_cnt; i++) {
+ if (is_unsafe_function(ksyms->filtered_syms[i]))
+ continue;
+ tsearch(&ksyms->filtered_syms[i], &root, compare);
+ }
+
+ /* ..and filter them through BTF and btf_type_is_traceable_func. */
+ nr = btf__type_cnt(btf);
+ for (type_id = 1; type_id < nr; type_id++) {
+ const struct btf_type *type;
+ const char *str;
+
+ type = btf__type_by_id(btf, type_id);
+ if (!type)
+ break;
+
+ if (BTF_INFO_KIND(type->info) != BTF_KIND_FUNC)
+ continue;
+
+ str = btf__name_by_offset(btf, type->name_off);
+ if (!str)
+ break;
+
+ if (!tfind(&str, &root, compare))
+ continue;
+
+ if (!btf_type_is_traceable_func(btf, type))
+ continue;
+
+ err = libbpf_ensure_mem((void **) &ids, &cap, sizeof(*ids), cnt + 1);
+ if (err)
+ goto cleanup;
+
+ ids[cnt++] = type_id;
+ }
+
+ opts.ids = ids;
+ opts.cnt = cnt;
+
+ attach_start_ns = get_time_ns();
+ link = bpf_program__attach_tracing_multi(skel->progs.bench, NULL, &opts);
+ attach_end_ns = get_time_ns();
+
+ if (!ASSERT_OK_PTR(link, "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ detach_start_ns = get_time_ns();
+ bpf_link__destroy(link);
+ detach_end_ns = get_time_ns();
+
+ attach_delta = (attach_end_ns - attach_start_ns) / 1000000000.0;
+ detach_delta = (detach_end_ns - detach_start_ns) / 1000000000.0;
+
+ printf("%s: found %lu functions\n", __func__, cnt);
+ printf("%s: attached in %7.3lfs\n", __func__, attach_delta);
+ printf("%s: detached in %7.3lfs\n", __func__, detach_delta);
+
+cleanup:
+ tracing_multi_bench__destroy(skel);
+ tdestroy(root, tdestroy_free_nop);
+ free_kallsyms_local(ksyms);
+ free(ids);
+}
+
void test_tracing_multi_test(void)
{
#ifndef __x86_64__
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_bench.c b/tools/testing/selftests/bpf/progs/tracing_multi_bench.c
new file mode 100644
index 000000000000..beae946cb8c4
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_bench.c
@@ -0,0 +1,12 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+SEC("fentry.multi")
+int BPF_PROG(bench)
+{
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/trace_helpers.c b/tools/testing/selftests/bpf/trace_helpers.c
index 0e63daf83ed5..8de0b60766de 100644
--- a/tools/testing/selftests/bpf/trace_helpers.c
+++ b/tools/testing/selftests/bpf/trace_helpers.c
@@ -548,7 +548,7 @@ static const char * const trace_blacklist[] = {
"bpf_get_numa_node_id",
};
-static bool skip_entry(char *name)
+bool is_unsafe_function(const char *name)
{
int i;
@@ -651,7 +651,7 @@ int bpf_get_ksyms(struct ksyms **ksymsp, bool kernel)
free(name);
if (sscanf(buf, "%ms$*[^\n]\n", &name) != 1)
continue;
- if (skip_entry(name))
+ if (is_unsafe_function(name))
continue;
ks = search_kallsyms_custom_local(ksyms, name, search_kallsyms_compare);
@@ -728,7 +728,7 @@ int bpf_get_addrs(unsigned long **addrsp, size_t *cntp, bool kernel)
free(name);
if (sscanf(buf, "%p %ms$*[^\n]\n", &addr, &name) != 2)
continue;
- if (skip_entry(name))
+ if (is_unsafe_function(name))
continue;
if (cnt == max_cnt) {
diff --git a/tools/testing/selftests/bpf/trace_helpers.h b/tools/testing/selftests/bpf/trace_helpers.h
index d5bf1433675d..01c8ecc45627 100644
--- a/tools/testing/selftests/bpf/trace_helpers.h
+++ b/tools/testing/selftests/bpf/trace_helpers.h
@@ -63,4 +63,5 @@ int read_build_id(const char *path, char *build_id, size_t size);
int bpf_get_ksyms(struct ksyms **ksymsp, bool kernel);
int bpf_get_addrs(unsigned long **addrsp, size_t *cntp, bool kernel);
+bool is_unsafe_function(const char *name);
#endif
--
2.53.0
* Re: [PATCHv4 bpf-next 24/25] selftests/bpf: Add tracing multi attach benchmark test
2026-03-24 8:18 ` [PATCHv4 bpf-next 24/25] selftests/bpf: Add tracing multi attach benchmark test Jiri Olsa
@ 2026-03-25 6:45 ` Leon Hwang
2026-03-25 15:11 ` Alexei Starovoitov
2026-03-25 21:48 ` Jiri Olsa
0 siblings, 2 replies; 41+ messages in thread
From: Leon Hwang @ 2026-03-25 6:45 UTC (permalink / raw)
To: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
On 24/3/26 16:18, Jiri Olsa wrote:
> Adding benchmark test that attaches to (almost) all allowed tracing
> functions and display attach/detach times.
>
> # ./test_progs -t tracing_multi_bench_attach -v
> bpf_testmod.ko is already unloaded.
> Loading bpf_testmod.ko...
> Successfully loaded bpf_testmod.ko.
> serial_test_tracing_multi_bench_attach:PASS:btf__load_vmlinux_btf 0 nsec
> serial_test_tracing_multi_bench_attach:PASS:tracing_multi_bench__open_and_load 0 nsec
> serial_test_tracing_multi_bench_attach:PASS:get_syms 0 nsec
> serial_test_tracing_multi_bench_attach:PASS:bpf_program__attach_tracing_multi 0 nsec
> serial_test_tracing_multi_bench_attach: found 51186 functions
> serial_test_tracing_multi_bench_attach: attached in 1.295s
> serial_test_tracing_multi_bench_attach: detached in 0.243s
> #507 tracing_multi_bench_attach:OK
> Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED
> Successfully unloaded bpf_testmod.ko.
>
> Exporting skip_entry as is_unsafe_function and using it in the test.
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
> .../selftests/bpf/prog_tests/tracing_multi.c | 98 +++++++++++++++++++
> .../selftests/bpf/progs/tracing_multi_bench.c | 12 +++
> tools/testing/selftests/bpf/trace_helpers.c | 6 +-
> tools/testing/selftests/bpf/trace_helpers.h | 1 +
> 4 files changed, 114 insertions(+), 3 deletions(-)
> create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_bench.c
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
> index dece45d8fb5e..6917471e329c 100644
> --- a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
> +++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
> @@ -9,6 +9,7 @@
> #include "tracing_multi_intersect.skel.h"
> #include "tracing_multi_session.skel.h"
> #include "tracing_multi_fail.skel.h"
> +#include "tracing_multi_bench.skel.h"
> #include "trace_helpers.h"
>
> static __u64 bpf_fentry_test_cookies[] = {
> @@ -571,6 +572,103 @@ static void test_attach_api_fails(void)
> free(ids2);
> }
>
> +void serial_test_tracing_multi_bench_attach(void)
> +{
> + LIBBPF_OPTS(bpf_tracing_multi_opts, opts);
> + struct tracing_multi_bench *skel = NULL;
> + long attach_start_ns, attach_end_ns;
> + long detach_start_ns, detach_end_ns;
> + double attach_delta, detach_delta;
> + struct bpf_link *link = NULL;
> + size_t i, cap = 0, cnt = 0;
> + struct ksyms *ksyms = NULL;
> + void *root = NULL;
> + __u32 *ids = NULL;
> + __u32 nr, type_id;
> + struct btf *btf;
> + int err;
> +
> +#ifndef __x86_64__
> + test__skip();
> + return;
> +#endif
> +
> + btf = btf__load_vmlinux_btf();
> + if (!ASSERT_OK_PTR(btf, "btf__load_vmlinux_btf"))
> > + return;
> > +
> + skel = tracing_multi_bench__open_and_load();
> + if (!ASSERT_OK_PTR(skel, "tracing_multi_bench__open_and_load"))
> + goto cleanup;
> +
> + if (!ASSERT_OK(bpf_get_ksyms(&ksyms, true), "get_syms"))
> + goto cleanup;
> +
> + /* Get all ftrace 'safe' symbols.. */
> + for (i = 0; i < ksyms->filtered_cnt; i++) {
> + if (is_unsafe_function(ksyms->filtered_syms[i]))
> + continue;
> + tsearch(&ksyms->filtered_syms[i], &root, compare);
> + }
> +
> + /* ..and filter them through BTF and btf_type_is_traceable_func. */
> + nr = btf__type_cnt(btf);
> + for (type_id = 1; type_id < nr; type_id++) {
> + const struct btf_type *type;
> + const char *str;
> +
> + type = btf__type_by_id(btf, type_id);
> + if (!type)
> + break;
> +
> + if (BTF_INFO_KIND(type->info) != BTF_KIND_FUNC)
> + continue;
> +
> + str = btf__name_by_offset(btf, type->name_off);
> + if (!str)
> + break;
> +
> + if (!tfind(&str, &root, compare))
> + continue;
> +
> + if (!btf_type_is_traceable_func(btf, type))
> + continue;
> +
> + err = libbpf_ensure_mem((void **) &ids, &cap, sizeof(*ids), cnt + 1);
> + if (err)
> + goto cleanup;
> +
> + ids[cnt++] = type_id;
> + }
> +
> + opts.ids = ids;
> + opts.cnt = cnt;
> +
> + attach_start_ns = get_time_ns();
> + link = bpf_program__attach_tracing_multi(skel->progs.bench, NULL, &opts);
> + attach_end_ns = get_time_ns();
> +
> + if (!ASSERT_OK_PTR(link, "bpf_program__attach_tracing_multi"))
> + goto cleanup;
> +
> + detach_start_ns = get_time_ns();
> + bpf_link__destroy(link);
> + detach_end_ns = get_time_ns();
> +
> + attach_delta = (attach_end_ns - attach_start_ns) / 1000000000.0;
> + detach_delta = (detach_end_ns - detach_start_ns) / 1000000000.0;
> +
> + printf("%s: found %lu functions\n", __func__, cnt);
> + printf("%s: attached in %7.3lfs\n", __func__, attach_delta);
> + printf("%s: detached in %7.3lfs\n", __func__, detach_delta);
> +
> +cleanup:
> + tracing_multi_bench__destroy(skel);
> + tdestroy(root, tdestroy_free_nop);
> + free_kallsyms_local(ksyms);
> + free(ids);
Is btf__free(btf) missing here? Since 'btf' was calloc inner
btf__load_vmlinux_btf().
Thanks,
Leon
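Something along these lines on top of the cleanup label should plug the
leak (untested sketch against this patch):

 cleanup:
 	tracing_multi_bench__destroy(skel);
 	tdestroy(root, tdestroy_free_nop);
 	free_kallsyms_local(ksyms);
 	free(ids);
+	btf__free(btf);

btf__free() is a no-op on NULL, so it is safe on the early-exit paths too.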
> +}
> +
> void test_tracing_multi_test(void)
> {
> #ifndef __x86_64__
> diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_bench.c b/tools/testing/selftests/bpf/progs/tracing_multi_bench.c
> new file mode 100644
> index 000000000000..beae946cb8c4
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/tracing_multi_bench.c
> @@ -0,0 +1,12 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#include <vmlinux.h>
> +#include <bpf/bpf_helpers.h>
> +#include <bpf/bpf_tracing.h>
> +
> +char _license[] SEC("license") = "GPL";
> +
> +SEC("fentry.multi")
> +int BPF_PROG(bench)
> +{
> + return 0;
> +}
> diff --git a/tools/testing/selftests/bpf/trace_helpers.c b/tools/testing/selftests/bpf/trace_helpers.c
> index 0e63daf83ed5..8de0b60766de 100644
> --- a/tools/testing/selftests/bpf/trace_helpers.c
> +++ b/tools/testing/selftests/bpf/trace_helpers.c
> @@ -548,7 +548,7 @@ static const char * const trace_blacklist[] = {
> "bpf_get_numa_node_id",
> };
>
> -static bool skip_entry(char *name)
> +bool is_unsafe_function(const char *name)
> {
> int i;
>
> @@ -651,7 +651,7 @@ int bpf_get_ksyms(struct ksyms **ksymsp, bool kernel)
> free(name);
> if (sscanf(buf, "%ms$*[^\n]\n", &name) != 1)
> continue;
> - if (skip_entry(name))
> + if (is_unsafe_function(name))
> continue;
>
> ks = search_kallsyms_custom_local(ksyms, name, search_kallsyms_compare);
> @@ -728,7 +728,7 @@ int bpf_get_addrs(unsigned long **addrsp, size_t *cntp, bool kernel)
> free(name);
> if (sscanf(buf, "%p %ms$*[^\n]\n", &addr, &name) != 2)
> continue;
> - if (skip_entry(name))
> + if (is_unsafe_function(name))
> continue;
>
> if (cnt == max_cnt) {
> diff --git a/tools/testing/selftests/bpf/trace_helpers.h b/tools/testing/selftests/bpf/trace_helpers.h
> index d5bf1433675d..01c8ecc45627 100644
> --- a/tools/testing/selftests/bpf/trace_helpers.h
> +++ b/tools/testing/selftests/bpf/trace_helpers.h
> @@ -63,4 +63,5 @@ int read_build_id(const char *path, char *build_id, size_t size);
> int bpf_get_ksyms(struct ksyms **ksymsp, bool kernel);
> int bpf_get_addrs(unsigned long **addrsp, size_t *cntp, bool kernel);
>
> +bool is_unsafe_function(const char *name);
> #endif
* Re: [PATCHv4 bpf-next 24/25] selftests/bpf: Add tracing multi attach benchmark test
2026-03-25 6:45 ` Leon Hwang
@ 2026-03-25 15:11 ` Alexei Starovoitov
2026-03-25 21:48 ` Jiri Olsa
2026-03-25 21:48 ` Jiri Olsa
1 sibling, 1 reply; 41+ messages in thread
From: Alexei Starovoitov @ 2026-03-25 15:11 UTC (permalink / raw)
To: Leon Hwang
Cc: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
On Tue, Mar 24, 2026 at 11:45 PM Leon Hwang <leon.hwang@linux.dev> wrote:
>
> > +
> > + btf = btf__load_vmlinux_btf();
> > + if (!ASSERT_OK_PTR(btf, "btf__load_vmlinux_btf"))
> > + return;> +
> > + skel = tracing_multi_bench__open_and_load();
> > + if (!ASSERT_OK_PTR(skel, "tracing_multi_bench__open_and_load"))
> > + goto cleanup;
> > +
> > + if (!ASSERT_OK(bpf_get_ksyms(&ksyms, true), "get_syms"))
> > + goto cleanup;
> > +
> > + /* Get all ftrace 'safe' symbols.. */
> > + for (i = 0; i < ksyms->filtered_cnt; i++) {
> > + if (is_unsafe_function(ksyms->filtered_syms[i]))
> > + continue;
> > + tsearch(&ksyms->filtered_syms[i], &root, compare);
> > + }
> > +
> > + /* ..and filter them through BTF and btf_type_is_traceable_func. */
> > + nr = btf__type_cnt(btf);
> > + for (type_id = 1; type_id < nr; type_id++) {
> > + const struct btf_type *type;
> > + const char *str;
> > +
> > + type = btf__type_by_id(btf, type_id);
> > + if (!type)
> > + break;
> > +
> > + if (BTF_INFO_KIND(type->info) != BTF_KIND_FUNC)
> > + continue;
> > +
> > + str = btf__name_by_offset(btf, type->name_off);
> > + if (!str)
> > + break;
> > +
> > + if (!tfind(&str, &root, compare))
> > + continue;
> > +
> > + if (!btf_type_is_traceable_func(btf, type))
> > + continue;
> > +
> > + err = libbpf_ensure_mem((void **) &ids, &cap, sizeof(*ids), cnt + 1);
> > + if (err)
> > + goto cleanup;
> > +
> > + ids[cnt++] = type_id;
> > + }
> > +
> > + opts.ids = ids;
> > + opts.cnt = cnt;
> > +
> > + attach_start_ns = get_time_ns();
> > + link = bpf_program__attach_tracing_multi(skel->progs.bench, NULL, &opts);
> > + attach_end_ns = get_time_ns();
> > +
> > + if (!ASSERT_OK_PTR(link, "bpf_program__attach_tracing_multi"))
> > + goto cleanup;
> > +
> > + detach_start_ns = get_time_ns();
> > + bpf_link__destroy(link);
> > + detach_end_ns = get_time_ns();
> > +
> > + attach_delta = (attach_end_ns - attach_start_ns) / 1000000000.0;
> > + detach_delta = (detach_end_ns - detach_start_ns) / 1000000000.0;
> > +
> > + printf("%s: found %lu functions\n", __func__, cnt);
> > + printf("%s: attached in %7.3lfs\n", __func__, attach_delta);
> > + printf("%s: detached in %7.3lfs\n", __func__, detach_delta);
> > +
> > +cleanup:
> > + tracing_multi_bench__destroy(skel);
> > + tdestroy(root, tdestroy_free_nop);
> > + free_kallsyms_local(ksyms);
> > + free(ids);
>
> Is btf__free(btf) missing here? Since 'btf' was calloc inner
> btf__load_vmlinux_btf().
Good point.
Leon, please trim your replies. No need to quote the whole patch.
btw sashiko caught it too:
https://sashiko.dev/#/patchset/20260324081846.2334094-1-jolsa%40kernel.org
and many other bugs beyond what bpf CI could find.
Jiri, please address them all.
* Re: [PATCHv4 bpf-next 24/25] selftests/bpf: Add tracing multi attach benchmark test
2026-03-25 15:11 ` Alexei Starovoitov
@ 2026-03-25 21:48 ` Jiri Olsa
0 siblings, 0 replies; 41+ messages in thread
From: Jiri Olsa @ 2026-03-25 21:48 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Leon Hwang, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
On Wed, Mar 25, 2026 at 08:11:00AM -0700, Alexei Starovoitov wrote:
SNIP
> > > + attach_start_ns = get_time_ns();
> > > + link = bpf_program__attach_tracing_multi(skel->progs.bench, NULL, &opts);
> > > + attach_end_ns = get_time_ns();
> > > +
> > > + if (!ASSERT_OK_PTR(link, "bpf_program__attach_tracing_multi"))
> > > + goto cleanup;
> > > +
> > > + detach_start_ns = get_time_ns();
> > > + bpf_link__destroy(link);
> > > + detach_end_ns = get_time_ns();
> > > +
> > > + attach_delta = (attach_end_ns - attach_start_ns) / 1000000000.0;
> > > + detach_delta = (detach_end_ns - detach_start_ns) / 1000000000.0;
> > > +
> > > + printf("%s: found %lu functions\n", __func__, cnt);
> > > + printf("%s: attached in %7.3lfs\n", __func__, attach_delta);
> > > + printf("%s: detached in %7.3lfs\n", __func__, detach_delta);
> > > +
> > > +cleanup:
> > > + tracing_multi_bench__destroy(skel);
> > > + tdestroy(root, tdestroy_free_nop);
> > > + free_kallsyms_local(ksyms);
> > > + free(ids);
> >
> > Is btf__free(btf) missing here? 'btf' was allocated inside
> > btf__load_vmlinux_btf().
>
> Good point.
> Leon, please trim your replies. No need to quote the whole patch.
>
> btw sashiko caught it too:
> https://sashiko.dev/#/patchset/20260324081846.2334094-1-jolsa%40kernel.org
> and many other bugs beyond what bpf CI could find.
>
> Jiri, please address them all.
ok, will check
jirka
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCHv4 bpf-next 24/25] selftests/bpf: Add tracing multi attach benchmark test
2026-03-25 6:45 ` Leon Hwang
2026-03-25 15:11 ` Alexei Starovoitov
@ 2026-03-25 21:48 ` Jiri Olsa
1 sibling, 0 replies; 41+ messages in thread
From: Jiri Olsa @ 2026-03-25 21:48 UTC (permalink / raw)
To: Leon Hwang
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
Yonghong Song, Menglong Dong, Steven Rostedt
On Wed, Mar 25, 2026 at 02:45:31PM +0800, Leon Hwang wrote:
SNIP
> > +
> > + attach_delta = (attach_end_ns - attach_start_ns) / 1000000000.0;
> > + detach_delta = (detach_end_ns - detach_start_ns) / 1000000000.0;
> > +
> > + printf("%s: found %lu functions\n", __func__, cnt);
> > + printf("%s: attached in %7.3lfs\n", __func__, attach_delta);
> > + printf("%s: detached in %7.3lfs\n", __func__, detach_delta);
> > +
> > +cleanup:
> > + tracing_multi_bench__destroy(skel);
> > + tdestroy(root, tdestroy_free_nop);
> > + free_kallsyms_local(ksyms);
> > + free(ids);
>
> Is btf__free(btf) missing here? 'btf' was allocated inside
> btf__load_vmlinux_btf().
ah yea, will add, thanks
jirka
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCHv4 bpf-next 25/25] selftests/bpf: Add tracing multi attach rollback tests
2026-03-24 8:18 [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Jiri Olsa
` (23 preceding siblings ...)
2026-03-24 8:18 ` [PATCHv4 bpf-next 24/25] selftests/bpf: Add tracing multi attach benchmark test Jiri Olsa
@ 2026-03-24 8:18 ` Jiri Olsa
2026-03-25 6:45 ` Leon Hwang
2026-03-25 6:42 ` [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Leon Hwang
25 siblings, 1 reply; 41+ messages in thread
From: Jiri Olsa @ 2026-03-24 8:18 UTC (permalink / raw)
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
Adding tests for the rollback code when the tracing_multi
link fails to attach, covering 2 reasons:
- wrong btf id passed by user, where all previously allocated
trampolines will be released
- trampoline for the requested function is fully attached (it
already has the maximum number of programs attached) and the
link fails; the rollback code needs to unlink all previously
linked trampolines and release them
We need the bpf_fentry_test* functions unattached for the tests
to pass, so the rollback tests run serially.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
.../selftests/bpf/prog_tests/tracing_multi.c | 213 ++++++++++++++++++
.../bpf/progs/tracing_multi_rollback.c | 43 ++++
2 files changed, 256 insertions(+)
create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_rollback.c
diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
index 6917471e329c..6ff0f72f8c46 100644
--- a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
+++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
@@ -10,6 +10,7 @@
#include "tracing_multi_session.skel.h"
#include "tracing_multi_fail.skel.h"
#include "tracing_multi_bench.skel.h"
+#include "tracing_multi_rollback.skel.h"
#include "trace_helpers.h"
static __u64 bpf_fentry_test_cookies[] = {
@@ -669,6 +670,218 @@ void serial_test_tracing_multi_bench_attach(void)
free(ids);
}
+static void tracing_multi_rollback_run(struct tracing_multi_rollback *skel)
+{
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+ int err, prog_fd;
+
+ prog_fd = bpf_program__fd(skel->progs.test_fentry);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
+
+ /* make sure the rollback code did not leave any program attached */
+ ASSERT_EQ(skel->bss->test_result_fentry, 0, "test_result_fentry");
+ ASSERT_EQ(skel->bss->test_result_fexit, 0, "test_result_fexit");
+}
+
+static void test_rollback_put(void)
+{
+ LIBBPF_OPTS(bpf_tracing_multi_opts, opts);
+ struct tracing_multi_rollback *skel = NULL;
+ size_t cnt = FUNCS_CNT;
+ __u32 *ids = NULL;
+ int err;
+
+ skel = tracing_multi_rollback__open();
+ if (!ASSERT_OK_PTR(skel, "tracing_multi_rollback__open"))
+ return;
+
+ bpf_program__set_autoload(skel->progs.test_fentry, true);
+ bpf_program__set_autoload(skel->progs.test_fexit, true);
+
+ err = tracing_multi_rollback__load(skel);
+ if (!ASSERT_OK(err, "tracing_multi_rollback__load"))
+ goto cleanup;
+
+ ids = get_ids(bpf_fentry_test, cnt, NULL);
+ if (!ASSERT_OK_PTR(ids, "get_ids"))
+ goto cleanup;
+
+ /*
+ * Mangle last id to trigger rollback, which needs to do put
+ * on get-ed trampolines.
+ */
+ ids[9] = 0;
+
+ opts.ids = ids;
+ opts.cnt = cnt;
+
+ skel->bss->pid = getpid();
+
+ skel->links.test_fentry = bpf_program__attach_tracing_multi(skel->progs.test_fentry,
+ NULL, &opts);
+ if (!ASSERT_ERR_PTR(skel->links.test_fentry, "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ skel->links.test_fexit = bpf_program__attach_tracing_multi(skel->progs.test_fexit,
+ NULL, &opts);
+ if (!ASSERT_ERR_PTR(skel->links.test_fexit, "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ /* We don't really attach any program, but let's make sure. */
+ tracing_multi_rollback_run(skel);
+
+cleanup:
+ tracing_multi_rollback__destroy(skel);
+ free(ids);
+}
+
+
+static void fillers_cleanup(struct tracing_multi_rollback **skels, int cnt)
+{
+ int i;
+
+ for (i = 0; i < cnt; i++)
+ tracing_multi_rollback__destroy(skels[i]);
+
+ free(skels);
+}
+
+static struct tracing_multi_rollback *extra_load_and_link(void)
+{
+ struct tracing_multi_rollback *skel;
+ int err;
+
+ skel = tracing_multi_rollback__open();
+ if (!ASSERT_OK_PTR(skel, "tracing_multi_rollback__open"))
+ goto cleanup;
+
+ bpf_program__set_autoload(skel->progs.extra, true);
+
+ err = tracing_multi_rollback__load(skel);
+ if (!ASSERT_OK(err, "tracing_multi_rollback__load"))
+ goto cleanup;
+
+ skel->links.extra = bpf_program__attach_trace(skel->progs.extra);
+ if (!ASSERT_OK_PTR(skel->links.extra, "bpf_program__attach_trace"))
+ goto cleanup;
+
+ return skel;
+
+cleanup:
+ tracing_multi_rollback__destroy(skel);
+ return NULL;
+}
+
+static struct tracing_multi_rollback **fillers_load_and_link(int max)
+{
+ struct tracing_multi_rollback **skels, *skel;
+ int i, err;
+
+ skels = calloc(max + 1, sizeof(*skels));
+ if (!ASSERT_OK_PTR(skels, "calloc"))
+ return NULL;
+
+ for (i = 0; i < max; i++) {
+ skel = skels[i] = tracing_multi_rollback__open();
+ if (!ASSERT_OK_PTR(skels[i], "tracing_multi_rollback__open"))
+ goto cleanup;
+
+ bpf_program__set_autoload(skel->progs.filler, true);
+
+ err = tracing_multi_rollback__load(skel);
+ if (!ASSERT_OK(err, "tracing_multi_rollback__load"))
+ goto cleanup;
+
+ skel->links.filler = bpf_program__attach_trace(skel->progs.filler);
+ if (!ASSERT_OK_PTR(skels[i]->links.filler, "bpf_program__attach_trace"))
+ goto cleanup;
+ }
+
+ return skels;
+
+cleanup:
+ fillers_cleanup(skels, i);
+ return NULL;
+}
+
+static void test_rollback_unlink(void)
+{
+ struct tracing_multi_rollback *skel, *extra;
+ LIBBPF_OPTS(bpf_tracing_multi_opts, opts);
+ struct tracing_multi_rollback **fillers;
+ size_t cnt = FUNCS_CNT;
+ __u32 *ids = NULL;
+ int err, max;
+
+ max = get_bpf_max_tramp_links();
+ if (!ASSERT_GE(max, 1, "bpf_max_tramp_links"))
+ return;
+
+ /* Attach maximum allowed programs to bpf_fentry_test10 */
+ fillers = fillers_load_and_link(max);
+ if (!ASSERT_OK_PTR(fillers, "fillers_load_and_link"))
+ return;
+
+ extra = extra_load_and_link();
+ if (!ASSERT_OK_PTR(extra, "extra_load_and_link"))
+ return;
+
+ skel = tracing_multi_rollback__open();
+ if (!ASSERT_OK_PTR(skel, "tracing_multi_rollback__open"))
+ goto cleanup;
+
+ bpf_program__set_autoload(skel->progs.test_fentry, true);
+ bpf_program__set_autoload(skel->progs.test_fexit, true);
+
+ /*
+ * Attach tracing_multi link on bpf_fentry_test1-10, which will
+ * fail on bpf_fentry_test10 function, because it already has
+ * maximum allowed programs attached.
+ *
+ * The rollback needs to unlink already link-ed trampolines and
+ * put all of them.
+ */
+ err = tracing_multi_rollback__load(skel);
+ if (!ASSERT_OK(err, "tracing_multi_rollback__load"))
+ goto cleanup;
+
+ ids = get_ids(bpf_fentry_test, cnt, NULL);
+ if (!ASSERT_OK_PTR(ids, "get_ids"))
+ goto cleanup;
+
+ opts.ids = ids;
+ opts.cnt = cnt;
+
+ skel->bss->pid = getpid();
+
+ skel->links.test_fentry = bpf_program__attach_tracing_multi(skel->progs.test_fentry,
+ NULL, &opts);
+ if (!ASSERT_ERR_PTR(skel->links.test_fentry, "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ skel->links.test_fexit = bpf_program__attach_tracing_multi(skel->progs.test_fexit,
+ NULL, &opts);
+ if (!ASSERT_ERR_PTR(skel->links.test_fexit, "bpf_program__attach_tracing_multi"))
+ goto cleanup;
+
+ tracing_multi_rollback_run(skel);
+
+cleanup:
+ fillers_cleanup(fillers, max);
+ tracing_multi_rollback__destroy(extra);
+ tracing_multi_rollback__destroy(skel);
+ free(ids);
+}
+
+void serial_test_tracing_multi_attach_rollback(void)
+{
+ if (test__start_subtest("put"))
+ test_rollback_put();
+ if (test__start_subtest("unlink"))
+ test_rollback_unlink();
+}
+
void test_tracing_multi_test(void)
{
#ifndef __x86_64__
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_rollback.c b/tools/testing/selftests/bpf/progs/tracing_multi_rollback.c
new file mode 100644
index 000000000000..a49d1d841f3a
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_rollback.c
@@ -0,0 +1,43 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+int pid = 0;
+
+__u64 test_result_fentry = 0;
+__u64 test_result_fexit = 0;
+
+SEC("?fentry.multi")
+int BPF_PROG(test_fentry)
+{
+ if (bpf_get_current_pid_tgid() >> 32 != pid)
+ return 0;
+
+ test_result_fentry++;
+ return 0;
+}
+
+SEC("?fexit.multi")
+int BPF_PROG(test_fexit)
+{
+ if (bpf_get_current_pid_tgid() >> 32 != pid)
+ return 0;
+
+ test_result_fexit++;
+ return 0;
+}
+
+SEC("?fentry/bpf_fentry_test1")
+int BPF_PROG(extra)
+{
+ return 0;
+}
+
+SEC("?fentry/bpf_fentry_test10")
+int BPF_PROG(filler)
+{
+ return 0;
+}
--
2.53.0
^ permalink raw reply related [flat|nested] 41+ messages in thread
* Re: [PATCHv4 bpf-next 25/25] selftests/bpf: Add tracing multi attach rollback tests
2026-03-24 8:18 ` [PATCHv4 bpf-next 25/25] selftests/bpf: Add tracing multi attach rollback tests Jiri Olsa
@ 2026-03-25 6:45 ` Leon Hwang
2026-03-25 21:49 ` Jiri Olsa
0 siblings, 1 reply; 41+ messages in thread
From: Leon Hwang @ 2026-03-25 6:45 UTC (permalink / raw)
To: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt
On 24/3/26 16:18, Jiri Olsa wrote:
> Adding tests for the rollback code when the tracing_multi
> link fails to attach, covering 2 reasons:
>
> - wrong btf id passed by user, where all previously allocated
> trampolines will be released
> - trampoline for the requested function is fully attached (it
> already has the maximum number of programs attached) and the
> link fails; the rollback code needs to unlink all previously
> linked trampolines and release them
>
> We need the bpf_fentry_test* functions unattached for the tests
> to pass, so the rollback tests run serially.
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
> .../selftests/bpf/prog_tests/tracing_multi.c | 213 ++++++++++++++++++
> .../bpf/progs/tracing_multi_rollback.c | 43 ++++
> 2 files changed, 256 insertions(+)
> create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_rollback.c
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
> index 6917471e329c..6ff0f72f8c46 100644
> --- a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
> +++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
> @@ -10,6 +10,7 @@
> #include "tracing_multi_session.skel.h"
> #include "tracing_multi_fail.skel.h"
> #include "tracing_multi_bench.skel.h"
> +#include "tracing_multi_rollback.skel.h"
> #include "trace_helpers.h"
>
> static __u64 bpf_fentry_test_cookies[] = {
> @@ -669,6 +670,218 @@ void serial_test_tracing_multi_bench_attach(void)
> free(ids);
> }
>
> +static void tracing_multi_rollback_run(struct tracing_multi_rollback *skel)
> +{
> + LIBBPF_OPTS(bpf_test_run_opts, topts);
> + int err, prog_fd;
> +
> + prog_fd = bpf_program__fd(skel->progs.test_fentry);
> + err = bpf_prog_test_run_opts(prog_fd, &topts);
> + ASSERT_OK(err, "test_run");
> +
> + /* make sure the rollback code did not leave any program attached */
> + ASSERT_EQ(skel->bss->test_result_fentry, 0, "test_result_fentry");
> + ASSERT_EQ(skel->bss->test_result_fexit, 0, "test_result_fexit");
> +}
> +
> +static void test_rollback_put(void)
> +{
> + LIBBPF_OPTS(bpf_tracing_multi_opts, opts);
> + struct tracing_multi_rollback *skel = NULL;
> + size_t cnt = FUNCS_CNT;
> + __u32 *ids = NULL;
> + int err;
> +
> + skel = tracing_multi_rollback__open();
> + if (!ASSERT_OK_PTR(skel, "tracing_multi_rollback__open"))
> + return;
> +
> + bpf_program__set_autoload(skel->progs.test_fentry, true);
> + bpf_program__set_autoload(skel->progs.test_fexit, true);
> +
> + err = tracing_multi_rollback__load(skel);
> + if (!ASSERT_OK(err, "tracing_multi_rollback__load"))
> + goto cleanup;
> +
> + ids = get_ids(bpf_fentry_test, cnt, NULL);
> + if (!ASSERT_OK_PTR(ids, "get_ids"))
> + goto cleanup;
> +
> + /*
> + * Mangle last id to trigger rollback, which needs to do put
> + * on get-ed trampolines.
> + */
> + ids[9] = 0;
> +
> + opts.ids = ids;
> + opts.cnt = cnt;
> +
> + skel->bss->pid = getpid();
> +
> + skel->links.test_fentry = bpf_program__attach_tracing_multi(skel->progs.test_fentry,
> + NULL, &opts);
> + if (!ASSERT_ERR_PTR(skel->links.test_fentry, "bpf_program__attach_tracing_multi"))
> + goto cleanup;
> +
> + skel->links.test_fexit = bpf_program__attach_tracing_multi(skel->progs.test_fexit,
> + NULL, &opts);
> + if (!ASSERT_ERR_PTR(skel->links.test_fexit, "bpf_program__attach_tracing_multi"))
> + goto cleanup;
> +
> + /* We don't really attach any program, but let's make sure. */
> + tracing_multi_rollback_run(skel);
> +
> +cleanup:
> + tracing_multi_rollback__destroy(skel);
> + free(ids);
> +}
> +
> +
NIT: keep one blank line here.
> +static void fillers_cleanup(struct tracing_multi_rollback **skels, int cnt)
> +{
> + int i;
> +
> + for (i = 0; i < cnt; i++)
> + tracing_multi_rollback__destroy(skels[i]);
> +
> + free(skels);
> +}
> +
> +static struct tracing_multi_rollback *extra_load_and_link(void)
> +{
> + struct tracing_multi_rollback *skel;
> + int err;
> +
> + skel = tracing_multi_rollback__open();
> + if (!ASSERT_OK_PTR(skel, "tracing_multi_rollback__open"))
> + goto cleanup;
> +
> + bpf_program__set_autoload(skel->progs.extra, true);
> +
> + err = tracing_multi_rollback__load(skel);
> + if (!ASSERT_OK(err, "tracing_multi_rollback__load"))
> + goto cleanup;
> +
> + skel->links.extra = bpf_program__attach_trace(skel->progs.extra);
> + if (!ASSERT_OK_PTR(skel->links.extra, "bpf_program__attach_trace"))
> + goto cleanup;
> +
> + return skel;
> +
> +cleanup:
> + tracing_multi_rollback__destroy(skel);
> + return NULL;
> +}
> +
> +static struct tracing_multi_rollback **fillers_load_and_link(int max)
> +{
> + struct tracing_multi_rollback **skels, *skel;
> + int i, err;
> +
> + skels = calloc(max + 1, sizeof(*skels));
> + if (!ASSERT_OK_PTR(skels, "calloc"))
> + return NULL;
> +
> + for (i = 0; i < max; i++) {
> + skel = skels[i] = tracing_multi_rollback__open();
> + if (!ASSERT_OK_PTR(skels[i], "tracing_multi_rollback__open"))
> + goto cleanup;
> +
> + bpf_program__set_autoload(skel->progs.filler, true);
> +
> + err = tracing_multi_rollback__load(skel);
> + if (!ASSERT_OK(err, "tracing_multi_rollback__load"))
> + goto cleanup;
> +
> + skel->links.filler = bpf_program__attach_trace(skel->progs.filler);
> + if (!ASSERT_OK_PTR(skels[i]->links.filler, "bpf_program__attach_trace"))
> + goto cleanup;
> + }
> +
> + return skels;
> +
> +cleanup:
> + fillers_cleanup(skels, i);
> + return NULL;
> +}
> +
> +static void test_rollback_unlink(void)
> +{
> + struct tracing_multi_rollback *skel, *extra;
> + LIBBPF_OPTS(bpf_tracing_multi_opts, opts);
> + struct tracing_multi_rollback **fillers;
> + size_t cnt = FUNCS_CNT;
> + __u32 *ids = NULL;
> + int err, max;
> +
> + max = get_bpf_max_tramp_links();
> + if (!ASSERT_GE(max, 1, "bpf_max_tramp_links"))
> + return;
> +
> + /* Attach maximum allowed programs to bpf_fentry_test10 */
> + fillers = fillers_load_and_link(max);
> + if (!ASSERT_OK_PTR(fillers, "fillers_load_and_link"))
> + return;
> +
> + extra = extra_load_and_link();
> + if (!ASSERT_OK_PTR(extra, "extra_load_and_link"))
Should cleanup fillers here?
Thanks,
Leon
> + return;
> +
> + skel = tracing_multi_rollback__open();
> + if (!ASSERT_OK_PTR(skel, "tracing_multi_rollback__open"))
> + goto cleanup;
> +
[...]
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCHv4 bpf-next 25/25] selftests/bpf: Add tracing multi attach rollback tests
2026-03-25 6:45 ` Leon Hwang
@ 2026-03-25 21:49 ` Jiri Olsa
0 siblings, 0 replies; 41+ messages in thread
From: Jiri Olsa @ 2026-03-25 21:49 UTC (permalink / raw)
To: Leon Hwang
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
Yonghong Song, Menglong Dong, Steven Rostedt
On Wed, Mar 25, 2026 at 02:45:59PM +0800, Leon Hwang wrote:
SNIP
> > +static void test_rollback_unlink(void)
> > +{
> > + struct tracing_multi_rollback *skel, *extra;
> > + LIBBPF_OPTS(bpf_tracing_multi_opts, opts);
> > + struct tracing_multi_rollback **fillers;
> > + size_t cnt = FUNCS_CNT;
> > + __u32 *ids = NULL;
> > + int err, max;
> > +
> > + max = get_bpf_max_tramp_links();
> > + if (!ASSERT_GE(max, 1, "bpf_max_tramp_links"))
> > + return;
> > +
> > + /* Attach maximum allowed programs to bpf_fentry_test10 */
> > + fillers = fillers_load_and_link(max);
> > + if (!ASSERT_OK_PTR(fillers, "fillers_load_and_link"))
> > + return;
> > +
> > + extra = extra_load_and_link();
> > + if (!ASSERT_OK_PTR(extra, "extra_load_and_link"))
>
> Should cleanup fillers here?
yep, should jump to cleanup, thanks
jirka
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCHv4 bpf-next 00/25] bpf: tracing_multi link
2026-03-24 8:18 [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Jiri Olsa
` (24 preceding siblings ...)
2026-03-24 8:18 ` [PATCHv4 bpf-next 25/25] selftests/bpf: Add tracing multi attach rollback tests Jiri Olsa
@ 2026-03-25 6:42 ` Leon Hwang
2026-03-25 14:58 ` Leon Hwang
25 siblings, 1 reply; 41+ messages in thread
From: Leon Hwang @ 2026-03-25 6:42 UTC (permalink / raw)
To: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: Hengqi Chen, bpf, linux-trace-kernel, Martin KaFai Lau,
Eduard Zingerman, Song Liu, Yonghong Song, Menglong Dong,
Steven Rostedt
Hi Jiri,
Nice version for tracing_multi link.
I hope I have time to add tracing_multi link support to bpfsnoop, and
test this new tracing feature.
I left comments on patches #13, #24, and #25.
Hope this series lands in bpf-next soon.
Thanks,
Leon
On 24/3/26 16:18, Jiri Olsa wrote:
> hi,
> adding tracing_multi link support that allows fast attachment
> of tracing program to many functions.
>
> RFC: https://lore.kernel.org/bpf/20260203093819.2105105-1-jolsa@kernel.org/
> v1: https://lore.kernel.org/bpf/20260220100649.628307-1-jolsa@kernel.org/
> v2: https://lore.kernel.org/bpf/20260304222141.497203-1-jolsa@kernel.org/
> v3: https://lore.kernel.org/bpf/20260316075138.465430-1-jolsa@kernel.org/
>
> v4 changes:
> - unlink rollback fix (added ftrace_hash_count) [bot]
> - use const for some bpf_link_create_opts tracing_multi members [bot]
> - adding missing comment for lockdep keys [bot]
> - selftest error path fixes (leaks) and other assorted test fixes [Leon Hwang]
> - several compile fixes wrt CONFIG_BPF_SYSCALL and CONFIG_BPF_JIT [kernel test robot]
> - make ftrace_hash_clear global, because it's needed in rollback
>
> v3 changes:
> - fix module parsing [Leon Hwang]
> - use function traceable check from libbpf [Leon Hwang]
> - use ptr_to_u64 and fix/updated few comments [ci]
> - display cookies as decimal numbers [ci]
> - added link_create.flags check [ci]
> - fix error path in bpf_trampoline_multi_detach [ci]
> - make fentry/fexit.multi not extendable [ci]
> - add missing OPTS_VALID to bpf_program__attach_tracing_multi [ci]
>
> v2 changes:
> - allocate data.unreg in bpf_trampoline_multi_attach for rollback path [ci]
> and fixed link count setup in rollback path [ci]
> - several small assorted fixes [ci]
> - added loongarch and powerpc changes for struct bpf_tramp_node change
> - added support to attach functions from modules
> - added tests for sleepable programs
> - added rollback tests
>
> v1 changes:
> - added ftrace_hash_count as wrapper for hash_count [Steven]
> - added trampoline mutex pool [Andrii]
> - reworked 'struct bpf_tramp_node' separation [Andrii]
> - the 'struct bpf_tramp_node' now holds pointer to bpf_link,
> which is similar to what we do for uprobe_multi;
> I understand it's not a fundamental change compared to previous
> version which used bpf_prog pointer instead, but I don't see better
> way of doing this.. I'm happy to discuss this further if there's
> better idea
> - reworked 'struct bpf_fsession_link' based on bpf_tramp_node
> - made btf__find_by_glob_kind function internal helper [Andrii]
> - many small assorted fixes [Andrii,CI]
> - added session support [Leon Hwang]
> - added cookies support
> - added more tests
>
>
> Note I plan to send linkinfo support separately, the patchset is big enough.
>
> thanks,
> jirka
>
>
> Cc: Hengqi Chen <hengqi.chen@gmail.com>
> ---
> Jiri Olsa (25):
> ftrace: Add ftrace_hash_count function
> ftrace: Make ftrace_hash_clear global
> bpf: Use mutex lock pool for bpf trampolines
> bpf: Add struct bpf_trampoline_ops object
> bpf: Add struct bpf_tramp_node object
> bpf: Factor fsession link to use struct bpf_tramp_node
> bpf: Add multi tracing attach types
> bpf: Move sleepable verification code to btf_id_allow_sleepable
> bpf: Add bpf_trampoline_multi_attach/detach functions
> bpf: Add support for tracing multi link
> bpf: Add support for tracing_multi link cookies
> bpf: Add support for tracing_multi link session
> bpf: Add support for tracing_multi link fdinfo
> libbpf: Add bpf_object_cleanup_btf function
> libbpf: Add bpf_link_create support for tracing_multi link
> libbpf: Add btf_type_is_traceable_func function
> libbpf: Add support to create tracing multi link
> selftests/bpf: Add tracing multi skel/pattern/ids attach tests
> selftests/bpf: Add tracing multi skel/pattern/ids module attach tests
> selftests/bpf: Add tracing multi intersect tests
> selftests/bpf: Add tracing multi cookies test
> selftests/bpf: Add tracing multi session test
> selftests/bpf: Add tracing multi attach fails test
> selftests/bpf: Add tracing multi attach benchmark test
> selftests/bpf: Add tracing multi attach rollback tests
>
> arch/arm64/net/bpf_jit_comp.c | 58 +++---
> arch/loongarch/net/bpf_jit.c | 44 ++---
> arch/powerpc/net/bpf_jit_comp.c | 46 ++---
> arch/riscv/net/bpf_jit_comp64.c | 52 ++---
> arch/s390/net/bpf_jit_comp.c | 44 ++---
> arch/x86/net/bpf_jit_comp.c | 54 ++---
> include/linux/bpf.h | 102 +++++++---
> include/linux/bpf_types.h | 1 +
> include/linux/bpf_verifier.h | 3 +
> include/linux/btf_ids.h | 1 +
> include/linux/ftrace.h | 2 +
> include/linux/trace_events.h | 6 +
> include/uapi/linux/bpf.h | 9 +
> kernel/bpf/bpf_struct_ops.c | 27 +--
> kernel/bpf/btf.c | 4 +
> kernel/bpf/syscall.c | 88 ++++++---
> kernel/bpf/trampoline.c | 536 ++++++++++++++++++++++++++++++++++++++++---------
> kernel/bpf/verifier.c | 124 +++++++++---
> kernel/trace/bpf_trace.c | 149 +++++++++++++-
> kernel/trace/ftrace.c | 9 +-
> net/bpf/bpf_dummy_struct_ops.c | 14 +-
> net/bpf/test_run.c | 3 +
> tools/include/uapi/linux/bpf.h | 10 +
> tools/lib/bpf/bpf.c | 9 +
> tools/lib/bpf/bpf.h | 5 +
> tools/lib/bpf/libbpf.c | 337 ++++++++++++++++++++++++++++++-
> tools/lib/bpf/libbpf.h | 15 ++
> tools/lib/bpf/libbpf.map | 1 +
> tools/lib/bpf/libbpf_internal.h | 1 +
> tools/testing/selftests/bpf/Makefile | 9 +-
> tools/testing/selftests/bpf/prog_tests/tracing_multi.c | 912 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> tools/testing/selftests/bpf/progs/tracing_multi_attach.c | 39 ++++
> tools/testing/selftests/bpf/progs/tracing_multi_attach_module.c | 25 +++
> tools/testing/selftests/bpf/progs/tracing_multi_bench.c | 12 ++
> tools/testing/selftests/bpf/progs/tracing_multi_check.c | 212 ++++++++++++++++++++
> tools/testing/selftests/bpf/progs/tracing_multi_fail.c | 18 ++
> tools/testing/selftests/bpf/progs/tracing_multi_intersect_attach.c | 41 ++++
> tools/testing/selftests/bpf/progs/tracing_multi_rollback.c | 43 ++++
> tools/testing/selftests/bpf/progs/tracing_multi_session_attach.c | 43 ++++
> tools/testing/selftests/bpf/trace_helpers.c | 6 +-
> tools/testing/selftests/bpf/trace_helpers.h | 1 +
> 41 files changed, 2749 insertions(+), 366 deletions(-)
> create mode 100644 tools/testing/selftests/bpf/prog_tests/tracing_multi.c
> create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_attach.c
> create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_attach_module.c
> create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_bench.c
> create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_check.c
> create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_fail.c
> create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_intersect_attach.c
> create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_rollback.c
> create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_session_attach.c
>
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCHv4 bpf-next 00/25] bpf: tracing_multi link
2026-03-25 6:42 ` [PATCHv4 bpf-next 00/25] bpf: tracing_multi link Leon Hwang
@ 2026-03-25 14:58 ` Leon Hwang
0 siblings, 0 replies; 41+ messages in thread
From: Leon Hwang @ 2026-03-25 14:58 UTC (permalink / raw)
To: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: Hengqi Chen, bpf, linux-trace-kernel, Martin KaFai Lau,
Eduard Zingerman, Song Liu, Yonghong Song, Menglong Dong,
Steven Rostedt
On 2026/3/25 14:42, Leon Hwang wrote:
> Hi Jiri,
>
> Nice version for tracing_multi link.
>
> I hope I have time to add tracing_multi link support to bpfsnoop, and
> test this new tracing feature.
>
> I left comments on patches #13, #24, and #25.
>
Hmm, sashiko's reviews [1] cover my comments on patches #24 and #25. I
should check them first.
[1]
https://sashiko.dev/#/patchset/20260324081846.2334094-1-jolsa%40kernel.org
Thanks,
Leon
[...]
^ permalink raw reply [flat|nested] 41+ messages in thread