public inbox for bpf@vger.kernel.org
* [RFC bpf-next 00/12] bpf: tracing_multi link
@ 2026-02-03  9:38 Jiri Olsa
  2026-02-03  9:38 ` [RFC bpf-next 01/12] ftrace: Add ftrace_hash_count function Jiri Olsa
                   ` (12 more replies)
  0 siblings, 13 replies; 54+ messages in thread
From: Jiri Olsa @ 2026-02-03  9:38 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
	Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt

hi,
as an option to Menglong's change [1] I'm sending a proposal for a
tracing_multi link that does not add a static trampoline, but instead
attaches the program to all the needed trampolines.

This approach keeps the same performance but has some drawbacks:

 - when attaching to 20k functions we allocate and attach 20k trampolines
 - during attachment we hold each trampoline's mutex, so for the above
   20k functions we hold 20k mutexes at once during the attachment;
   this should be very prone to deadlock, but I haven't hit one yet

I was hoping we'd find a common solution, but it looks like it's either
a static trampoline with a performance penalty, or keeping the current
trampoline performance with the troubles described above.

It looks like the trampoline allocation/generation might not be a big
problem, and I'll try to find a solution for holding that many mutexes.
If there's no better solution, I think having one read/write mutex for
tracing multi link attach/detach should work.

We'd like to use trampolines instead of kprobes for the performance gains,
so naturally we want to keep the same performance even when a program is
attached through the tracing multi link.

thoughts? thanks,
jirka


[1] https://lore.kernel.org/bpf/20250703121521.1874196-1-dongml2@chinatelecom.cn/
---
Jiri Olsa (12):
      ftrace: Add ftrace_hash_count function
      bpf: Add struct bpf_trampoline_ops object
      bpf: Add struct bpf_struct_ops_tramp_link object
      bpf: Add struct bpf_tramp_node object
      bpf: Add multi tracing attach types
      bpf: Add bpf_trampoline_multi_attach/detach functions
      bpf: Add support to create tracing multi link
      libbpf: Add btf__find_by_glob_kind function
      libbpf: Add support to create tracing multi link
      selftests/bpf: Add fentry tracing multi func test
      selftests/bpf: Add fentry intersected tracing multi func test
      selftests/bpf: Add tracing multi benchmark test

 arch/arm64/net/bpf_jit_comp.c                            |  58 +++++++--------
 arch/s390/net/bpf_jit_comp.c                             |  42 +++++------
 arch/x86/net/bpf_jit_comp.c                              |  54 +++++++-------
 include/linux/bpf.h                                      |  74 +++++++++++++------
 include/linux/ftrace.h                                   |   1 +
 include/linux/trace_events.h                             |   6 ++
 include/uapi/linux/bpf.h                                 |   7 ++
 kernel/bpf/bpf_struct_ops.c                              |  39 +++++-----
 kernel/bpf/syscall.c                                     |  62 +++++++++++-----
 kernel/bpf/trampoline.c                                  | 340 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-----------------
 kernel/bpf/verifier.c                                    |   8 ++-
 kernel/trace/bpf_trace.c                                 | 105 +++++++++++++++++++++++++++
 kernel/trace/ftrace.c                                    |  14 ++--
 net/bpf/bpf_dummy_struct_ops.c                           |  23 +++---
 net/bpf/test_run.c                                       |   2 +
 tools/include/uapi/linux/bpf.h                           |   7 ++
 tools/lib/bpf/bpf.c                                      |   7 ++
 tools/lib/bpf/bpf.h                                      |   4 ++
 tools/lib/bpf/btf.c                                      |  41 +++++++++++
 tools/lib/bpf/btf.h                                      |   3 +
 tools/lib/bpf/libbpf.c                                   |  87 +++++++++++++++++++++++
 tools/lib/bpf/libbpf.h                                   |  14 ++++
 tools/lib/bpf/libbpf.map                                 |   1 +
 tools/testing/selftests/bpf/Makefile                     |   3 +-
 tools/testing/selftests/bpf/prog_tests/tracing_multi.c   | 363 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 tools/testing/selftests/bpf/progs/tracing_multi_check.c  | 132 ++++++++++++++++++++++++++++++++++
 tools/testing/selftests/bpf/progs/tracing_multi_fentry.c |  39 ++++++++++
 27 files changed, 1319 insertions(+), 217 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/tracing_multi.c
 create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_check.c
 create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_fentry.c

^ permalink raw reply	[flat|nested] 54+ messages in thread

* [RFC bpf-next 01/12] ftrace: Add ftrace_hash_count function
  2026-02-03  9:38 [RFC bpf-next 00/12] bpf: tracing_multi link Jiri Olsa
@ 2026-02-03  9:38 ` Jiri Olsa
  2026-02-03 15:40   ` Steven Rostedt
  2026-02-03  9:38 ` [RFC bpf-next 02/12] bpf: Add struct bpf_trampoline_ops object Jiri Olsa
                   ` (11 subsequent siblings)
  12 siblings, 1 reply; 54+ messages in thread
From: Jiri Olsa @ 2026-02-03  9:38 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
	Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt

Adding an external ftrace_hash_count function that replaces the static
hash_count function, so we can get the hash count outside of the ftrace code.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/linux/ftrace.h |  1 +
 kernel/trace/ftrace.c  | 14 +++++++-------
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 705db0a6d995..6dade0eaee46 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -413,6 +413,7 @@ struct ftrace_hash *alloc_ftrace_hash(int size_bits);
 void free_ftrace_hash(struct ftrace_hash *hash);
 struct ftrace_func_entry *add_ftrace_hash_entry_direct(struct ftrace_hash *hash,
 						       unsigned long ip, unsigned long direct);
+unsigned long ftrace_hash_count(struct ftrace_hash *hash);
 
 /* The hash used to know what functions callbacks trace */
 struct ftrace_ops_hash {
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index b12dbd93ae1c..be9e0ac1fd95 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -6284,7 +6284,7 @@ int modify_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
 }
 EXPORT_SYMBOL_GPL(modify_ftrace_direct);
 
-static unsigned long hash_count(struct ftrace_hash *hash)
+unsigned long ftrace_hash_count(struct ftrace_hash *hash)
 {
 	return hash ? hash->count : 0;
 }
@@ -6302,7 +6302,7 @@ static struct ftrace_hash *hash_add(struct ftrace_hash *a, struct ftrace_hash *b
 	struct ftrace_hash *add;
 	int size;
 
-	size = hash_count(a) + hash_count(b);
+	size = ftrace_hash_count(a) + ftrace_hash_count(b);
 	if (size > 32)
 		size = 32;
 
@@ -6345,7 +6345,7 @@ int update_ftrace_direct_add(struct ftrace_ops *ops, struct ftrace_hash *hash)
 	int size;
 	bool reg;
 
-	if (!hash_count(hash))
+	if (!ftrace_hash_count(hash))
 		return -EINVAL;
 
 	mutex_lock(&direct_mutex);
@@ -6362,7 +6362,7 @@ int update_ftrace_direct_add(struct ftrace_ops *ops, struct ftrace_hash *hash)
 	old_filter_hash = ops->func_hash ? ops->func_hash->filter_hash : NULL;
 
 	/* If there's nothing in filter_hash we need to register the ops. */
-	reg = hash_count(old_filter_hash) == 0;
+	reg = ftrace_hash_count(old_filter_hash) == 0;
 	if (reg) {
 		if (ops->func || ops->trampoline)
 			goto out_unlock;
@@ -6480,7 +6480,7 @@ int update_ftrace_direct_del(struct ftrace_ops *ops, struct ftrace_hash *hash)
 	unsigned long size;
 	int err = -EINVAL;
 
-	if (!hash_count(hash))
+	if (!ftrace_hash_count(hash))
 		return -EINVAL;
 	if (check_direct_multi(ops))
 		return -EINVAL;
@@ -6493,7 +6493,7 @@ int update_ftrace_direct_del(struct ftrace_ops *ops, struct ftrace_hash *hash)
 
 	old_filter_hash = ops->func_hash ? ops->func_hash->filter_hash : NULL;
 
-	if (!hash_count(old_filter_hash))
+	if (!ftrace_hash_count(old_filter_hash))
 		goto out_unlock;
 
 	/* Make sure requested entries are already registered. */
@@ -6580,7 +6580,7 @@ int update_ftrace_direct_mod(struct ftrace_ops *ops, struct ftrace_hash *hash, b
 	unsigned long size, i;
 	int err = -EINVAL;
 
-	if (!hash_count(hash))
+	if (!ftrace_hash_count(hash))
 		return -EINVAL;
 	if (check_direct_multi(ops))
 		return -EINVAL;
-- 
2.52.0



* [RFC bpf-next 02/12] bpf: Add struct bpf_trampoline_ops object
  2026-02-03  9:38 [RFC bpf-next 00/12] bpf: tracing_multi link Jiri Olsa
  2026-02-03  9:38 ` [RFC bpf-next 01/12] ftrace: Add ftrace_hash_count function Jiri Olsa
@ 2026-02-03  9:38 ` Jiri Olsa
  2026-02-03  9:38 ` [RFC bpf-next 03/12] bpf: Add struct bpf_struct_ops_tramp_link object Jiri Olsa
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 54+ messages in thread
From: Jiri Olsa @ 2026-02-03  9:38 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
	Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt

In the following changes we will need to change the ftrace direct
attachment logic. In order to do that, adding a struct bpf_trampoline_ops
object that defines 3 callbacks matching the ftrace attachment functions:

   register_fentry
   unregister_fentry
   modify_fentry

The new struct bpf_trampoline_ops object is passed as an argument to
the __bpf_trampoline_link_prog function.

At the moment the default trampoline_ops is set to the current ftrace
direct attachment functions, so there's no functional change for the
current code.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/bpf/trampoline.c | 53 +++++++++++++++++++++++++++++------------
 1 file changed, 38 insertions(+), 15 deletions(-)

diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 952cd7932461..ec9c1db78f47 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -30,6 +30,13 @@ static struct hlist_head trampoline_ip_table[TRAMPOLINE_TABLE_SIZE];
 /* serializes access to trampoline tables */
 static DEFINE_MUTEX(trampoline_mutex);
 
+struct bpf_trampoline_ops {
+	int (*register_fentry)(struct bpf_trampoline *tr, void *new_addr, void *data);
+	int (*unregister_fentry)(struct bpf_trampoline *tr, u32 orig_flags, void *old_addr, void *data);
+	int (*modify_fentry)(struct bpf_trampoline *tr, u32 orig_flags, void *old_addr, void *new_addr,
+			     bool lock_direct_mutex, void *data);
+};
+
 #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
 static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mutex);
 
@@ -387,7 +394,7 @@ static int bpf_trampoline_update_fentry(struct bpf_trampoline *tr, u32 orig_flag
 }
 
 static int unregister_fentry(struct bpf_trampoline *tr, u32 orig_flags,
-			     void *old_addr)
+			     void *old_addr, void *data)
 {
 	int ret;
 
@@ -401,7 +408,7 @@ static int unregister_fentry(struct bpf_trampoline *tr, u32 orig_flags,
 
 static int modify_fentry(struct bpf_trampoline *tr, u32 orig_flags,
 			 void *old_addr, void *new_addr,
-			 bool lock_direct_mutex)
+			 bool lock_direct_mutex, void *data __maybe_unused)
 {
 	int ret;
 
@@ -415,7 +422,7 @@ static int modify_fentry(struct bpf_trampoline *tr, u32 orig_flags,
 }
 
 /* first time registering */
-static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
+static int register_fentry(struct bpf_trampoline *tr, void *new_addr, void *data __maybe_unused)
 {
 	void *ip = tr->func.addr;
 	unsigned long faddr;
@@ -437,6 +444,12 @@ static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
 	return ret;
 }
 
+static struct bpf_trampoline_ops trampoline_ops = {
+	.register_fentry   = register_fentry,
+	.unregister_fentry = unregister_fentry,
+	.modify_fentry     = modify_fentry,
+};
+
 static struct bpf_tramp_links *
 bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total, bool *ip_arg)
 {
@@ -604,7 +617,8 @@ static struct bpf_tramp_image *bpf_tramp_image_alloc(u64 key, int size)
 	return ERR_PTR(err);
 }
 
-static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mutex)
+static int bpf_trampoline_update_ops(struct bpf_trampoline *tr, bool lock_direct_mutex,
+				     struct bpf_trampoline_ops *ops, void *data)
 {
 	struct bpf_tramp_image *im;
 	struct bpf_tramp_links *tlinks;
@@ -617,7 +631,7 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
 		return PTR_ERR(tlinks);
 
 	if (total == 0) {
-		err = unregister_fentry(tr, orig_flags, tr->cur_image->image);
+		err = ops->unregister_fentry(tr, orig_flags, tr->cur_image->image, data);
 		bpf_tramp_image_put(tr->cur_image);
 		tr->cur_image = NULL;
 		goto out;
@@ -688,11 +702,11 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
 	WARN_ON(tr->cur_image && total == 0);
 	if (tr->cur_image)
 		/* progs already running at this address */
-		err = modify_fentry(tr, orig_flags, tr->cur_image->image,
-				    im->image, lock_direct_mutex);
+		err = ops->modify_fentry(tr, orig_flags, tr->cur_image->image,
+					 im->image, lock_direct_mutex, data);
 	else
 		/* first time registering */
-		err = register_fentry(tr, im->image);
+		err = ops->register_fentry(tr, im->image, data);
 
 #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
 	if (err == -EAGAIN) {
@@ -722,6 +736,11 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
 	goto out;
 }
 
+static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mutex)
+{
+	return bpf_trampoline_update_ops(tr, lock_direct_mutex, &trampoline_ops, NULL);
+}
+
 static enum bpf_tramp_prog_type bpf_attach_type_to_tramp(struct bpf_prog *prog)
 {
 	switch (prog->expected_attach_type) {
@@ -766,7 +785,9 @@ static int bpf_freplace_check_tgt_prog(struct bpf_prog *tgt_prog)
 
 static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link,
 				      struct bpf_trampoline *tr,
-				      struct bpf_prog *tgt_prog)
+				      struct bpf_prog *tgt_prog,
+				      struct bpf_trampoline_ops *ops,
+				      void *data)
 {
 	struct bpf_fsession_link *fslink = NULL;
 	enum bpf_tramp_prog_type kind;
@@ -824,7 +845,7 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link,
 	} else {
 		tr->progs_cnt[kind]++;
 	}
-	err = bpf_trampoline_update(tr, true /* lock_direct_mutex */);
+	err = bpf_trampoline_update_ops(tr, true /* lock_direct_mutex */, ops, data);
 	if (err) {
 		hlist_del_init(&link->tramp_hlist);
 		if (kind == BPF_TRAMP_FSESSION) {
@@ -845,14 +866,16 @@ int bpf_trampoline_link_prog(struct bpf_tramp_link *link,
 	int err;
 
 	mutex_lock(&tr->mutex);
-	err = __bpf_trampoline_link_prog(link, tr, tgt_prog);
+	err = __bpf_trampoline_link_prog(link, tr, tgt_prog, &trampoline_ops, NULL);
 	mutex_unlock(&tr->mutex);
 	return err;
 }
 
 static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
 					struct bpf_trampoline *tr,
-					struct bpf_prog *tgt_prog)
+					struct bpf_prog *tgt_prog,
+					struct bpf_trampoline_ops *ops,
+					void *data)
 {
 	enum bpf_tramp_prog_type kind;
 	int err;
@@ -877,7 +900,7 @@ static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
 	}
 	hlist_del_init(&link->tramp_hlist);
 	tr->progs_cnt[kind]--;
-	return bpf_trampoline_update(tr, true /* lock_direct_mutex */);
+	return bpf_trampoline_update_ops(tr, true /* lock_direct_mutex */, ops, data);
 }
 
 /* bpf_trampoline_unlink_prog() should never fail. */
@@ -888,7 +911,7 @@ int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
 	int err;
 
 	mutex_lock(&tr->mutex);
-	err = __bpf_trampoline_unlink_prog(link, tr, tgt_prog);
+	err = __bpf_trampoline_unlink_prog(link, tr, tgt_prog, &trampoline_ops, NULL);
 	mutex_unlock(&tr->mutex);
 	return err;
 }
@@ -1019,7 +1042,7 @@ int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog,
 		goto err;
 	}
 
-	err = __bpf_trampoline_link_prog(&shim_link->link, tr, NULL);
+	err = __bpf_trampoline_link_prog(&shim_link->link, tr, NULL, &trampoline_ops, NULL);
 	if (err)
 		goto err;
 
-- 
2.52.0



* [RFC bpf-next 03/12] bpf: Add struct bpf_struct_ops_tramp_link object
  2026-02-03  9:38 [RFC bpf-next 00/12] bpf: tracing_multi link Jiri Olsa
  2026-02-03  9:38 ` [RFC bpf-next 01/12] ftrace: Add ftrace_hash_count function Jiri Olsa
  2026-02-03  9:38 ` [RFC bpf-next 02/12] bpf: Add struct bpf_trampoline_ops object Jiri Olsa
@ 2026-02-03  9:38 ` Jiri Olsa
  2026-02-03  9:38 ` [RFC bpf-next 04/12] bpf: Add struct bpf_tramp_node object Jiri Olsa
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 54+ messages in thread
From: Jiri Olsa @ 2026-02-03  9:38 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
	Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt

Add struct bpf_struct_ops_tramp_link for the struct_ops link, to follow
the pattern of all the other users of the bpf_tramp_link object.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/linux/bpf.h            |  4 ++++
 kernel/bpf/bpf_struct_ops.c    | 17 +++++++++--------
 net/bpf/bpf_dummy_struct_ops.c | 14 +++++++-------
 3 files changed, 20 insertions(+), 15 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index cd9b96434904..512d75094be0 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1878,6 +1878,10 @@ struct bpf_shim_tramp_link {
 	struct bpf_trampoline *trampoline;
 };
 
+struct bpf_struct_ops_tramp_link {
+	struct bpf_tramp_link link;
+};
+
 struct bpf_tracing_link {
 	struct bpf_tramp_link link;
 	struct bpf_trampoline *trampoline;
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index c43346cb3d76..ecca0a6be6af 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -585,9 +585,10 @@ static void bpf_struct_ops_link_release(struct bpf_link *link)
 
 static void bpf_struct_ops_link_dealloc(struct bpf_link *link)
 {
-	struct bpf_tramp_link *tlink = container_of(link, struct bpf_tramp_link, link);
+	struct bpf_struct_ops_tramp_link *st_link =
+		container_of(link, struct bpf_struct_ops_tramp_link, link.link);
 
-	kfree(tlink);
+	kfree(st_link);
 }
 
 const struct bpf_link_ops bpf_struct_ops_link_lops = {
@@ -747,7 +748,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 	for_each_member(i, t, member) {
 		const struct btf_type *mtype, *ptype;
 		struct bpf_prog *prog;
-		struct bpf_tramp_link *link;
+		struct bpf_struct_ops_tramp_link *st_link;
 		struct bpf_ksym *ksym;
 		u32 moff;
 
@@ -815,15 +816,15 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 		/* Poison pointer on error instead of return for backward compatibility */
 		bpf_prog_assoc_struct_ops(prog, &st_map->map);
 
-		link = kzalloc(sizeof(*link), GFP_USER);
-		if (!link) {
+		st_link = kzalloc(sizeof(*st_link), GFP_USER);
+		if (!st_link) {
 			bpf_prog_put(prog);
 			err = -ENOMEM;
 			goto reset_unlock;
 		}
-		bpf_link_init(&link->link, BPF_LINK_TYPE_STRUCT_OPS,
+		bpf_link_init(&st_link->link.link, BPF_LINK_TYPE_STRUCT_OPS,
 			      &bpf_struct_ops_link_lops, prog, prog->expected_attach_type);
-		*plink++ = &link->link;
+		*plink++ = &st_link->link.link;
 
 		ksym = kzalloc(sizeof(*ksym), GFP_USER);
 		if (!ksym) {
@@ -833,7 +834,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 		*pksym++ = ksym;
 
 		trampoline_start = image_off;
-		err = bpf_struct_ops_prepare_trampoline(tlinks, link,
+		err = bpf_struct_ops_prepare_trampoline(tlinks, &st_link->link,
 						&st_ops->func_models[i],
 						*(void **)(st_ops->cfi_stubs + moff),
 						&image, &image_off,
diff --git a/net/bpf/bpf_dummy_struct_ops.c b/net/bpf/bpf_dummy_struct_ops.c
index 812457819b5a..4029931a4fce 100644
--- a/net/bpf/bpf_dummy_struct_ops.c
+++ b/net/bpf/bpf_dummy_struct_ops.c
@@ -130,10 +130,10 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
 			    union bpf_attr __user *uattr)
 {
 	const struct bpf_struct_ops *st_ops = &bpf_bpf_dummy_ops;
+	struct bpf_struct_ops_tramp_link *st_link = NULL;
 	const struct btf_type *func_proto;
 	struct bpf_dummy_ops_test_args *args;
 	struct bpf_tramp_links *tlinks = NULL;
-	struct bpf_tramp_link *link = NULL;
 	void *image = NULL;
 	unsigned int op_idx;
 	u32 image_off = 0;
@@ -164,18 +164,18 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
 		goto out;
 	}
 
-	link = kzalloc(sizeof(*link), GFP_USER);
-	if (!link) {
+	st_link = kzalloc(sizeof(*st_link), GFP_USER);
+	if (!st_link) {
 		err = -ENOMEM;
 		goto out;
 	}
 	/* prog doesn't take the ownership of the reference from caller */
 	bpf_prog_inc(prog);
-	bpf_link_init(&link->link, BPF_LINK_TYPE_STRUCT_OPS, &bpf_struct_ops_link_lops, prog,
+	bpf_link_init(&st_link->link.link, BPF_LINK_TYPE_STRUCT_OPS, &bpf_struct_ops_link_lops, prog,
 		      prog->expected_attach_type);
 
 	op_idx = prog->expected_attach_type;
-	err = bpf_struct_ops_prepare_trampoline(tlinks, link,
+	err = bpf_struct_ops_prepare_trampoline(tlinks, &st_link->link,
 						&st_ops->func_models[op_idx],
 						&dummy_ops_test_ret_function,
 						&image, &image_off,
@@ -196,8 +196,8 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
 out:
 	kfree(args);
 	bpf_struct_ops_image_free(image);
-	if (link)
-		bpf_link_put(&link->link);
+	if (st_link)
+		bpf_link_put(&st_link->link.link);
 	kfree(tlinks);
 	return err;
 }
-- 
2.52.0



* [RFC bpf-next 04/12] bpf: Add struct bpf_tramp_node object
  2026-02-03  9:38 [RFC bpf-next 00/12] bpf: tracing_multi link Jiri Olsa
                   ` (2 preceding siblings ...)
  2026-02-03  9:38 ` [RFC bpf-next 03/12] bpf: Add struct bpf_struct_ops_tramp_link object Jiri Olsa
@ 2026-02-03  9:38 ` Jiri Olsa
  2026-02-04 19:00   ` Andrii Nakryiko
  2026-02-03  9:38 ` [RFC bpf-next 05/12] bpf: Add multi tracing attach types Jiri Olsa
                   ` (8 subsequent siblings)
  12 siblings, 1 reply; 54+ messages in thread
From: Jiri Olsa @ 2026-02-03  9:38 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
	Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt

Adding struct bpf_tramp_node to decouple the link from the trampoline
attachment info.

At the moment the object for attaching a bpf program to a trampoline is
'struct bpf_tramp_link':

  struct bpf_tramp_link {
       struct bpf_link link;
       struct hlist_node tramp_hlist;
       u64 cookie;
  }

The link holds the bpf_prog pointer and forces a one link - one program
binding. In the following changes we want to attach a program to multiple
trampolines while having just one bpf_link object.

Splitting struct bpf_tramp_link into:

  struct bpf_tramp_link {
       struct bpf_link link;
       struct bpf_tramp_node node;
  };

  struct bpf_tramp_node {
       struct hlist_node tramp_hlist;
       struct bpf_prog *prog;
       u64 cookie;
  };

where 'struct bpf_tramp_link' defines the standard single-trampoline link,
and 'struct bpf_tramp_node' is the per-trampoline attachment object. This
will allow us to define a link for multiple trampolines, like:

  struct bpf_tracing_multi_link {
       struct bpf_link link;
       ...
       int nodes_cnt;
       struct bpf_tracing_multi_node nodes[] __counted_by(nodes_cnt);
  };

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 arch/arm64/net/bpf_jit_comp.c  |  58 +++++++++----------
 arch/s390/net/bpf_jit_comp.c   |  42 +++++++-------
 arch/x86/net/bpf_jit_comp.c    |  54 ++++++++---------
 include/linux/bpf.h            |  47 ++++++++-------
 kernel/bpf/bpf_struct_ops.c    |  24 ++++----
 kernel/bpf/syscall.c           |  25 ++++----
 kernel/bpf/trampoline.c        | 102 ++++++++++++++++-----------------
 net/bpf/bpf_dummy_struct_ops.c |  11 ++--
 8 files changed, 185 insertions(+), 178 deletions(-)

diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 2dc5037694ba..ca4de9dbb96a 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -2295,24 +2295,24 @@ bool bpf_jit_supports_subprog_tailcalls(void)
 	return true;
 }
 
-static void invoke_bpf_prog(struct jit_ctx *ctx, struct bpf_tramp_link *l,
+static void invoke_bpf_prog(struct jit_ctx *ctx, struct bpf_tramp_node *node,
 			    int bargs_off, int retval_off, int run_ctx_off,
 			    bool save_ret)
 {
 	__le32 *branch;
 	u64 enter_prog;
 	u64 exit_prog;
-	struct bpf_prog *p = l->link.prog;
+	struct bpf_prog *p = node->prog;
 	int cookie_off = offsetof(struct bpf_tramp_run_ctx, bpf_cookie);
 
 	enter_prog = (u64)bpf_trampoline_enter(p);
 	exit_prog = (u64)bpf_trampoline_exit(p);
 
-	if (l->cookie == 0) {
+	if (node->cookie == 0) {
 		/* if cookie is zero, one instruction is enough to store it */
 		emit(A64_STR64I(A64_ZR, A64_SP, run_ctx_off + cookie_off), ctx);
 	} else {
-		emit_a64_mov_i64(A64_R(10), l->cookie, ctx);
+		emit_a64_mov_i64(A64_R(10), node->cookie, ctx);
 		emit(A64_STR64I(A64_R(10), A64_SP, run_ctx_off + cookie_off),
 		     ctx);
 	}
@@ -2362,7 +2362,7 @@ static void invoke_bpf_prog(struct jit_ctx *ctx, struct bpf_tramp_link *l,
 	emit_call(exit_prog, ctx);
 }
 
-static void invoke_bpf_mod_ret(struct jit_ctx *ctx, struct bpf_tramp_links *tl,
+static void invoke_bpf_mod_ret(struct jit_ctx *ctx, struct bpf_tramp_nodes *tn,
 			       int bargs_off, int retval_off, int run_ctx_off,
 			       __le32 **branches)
 {
@@ -2372,8 +2372,8 @@ static void invoke_bpf_mod_ret(struct jit_ctx *ctx, struct bpf_tramp_links *tl,
 	 * Set this to 0 to avoid confusing the program.
 	 */
 	emit(A64_STR64I(A64_ZR, A64_SP, retval_off), ctx);
-	for (i = 0; i < tl->nr_links; i++) {
-		invoke_bpf_prog(ctx, tl->links[i], bargs_off, retval_off,
+	for (i = 0; i < tn->nr_nodes; i++) {
+		invoke_bpf_prog(ctx, tn->nodes[i], bargs_off, retval_off,
 				run_ctx_off, true);
 		/* if (*(u64 *)(sp + retval_off) !=  0)
 		 *	goto do_fexit;
@@ -2504,10 +2504,10 @@ static void restore_args(struct jit_ctx *ctx, int bargs_off, int nregs)
 	}
 }
 
-static bool is_struct_ops_tramp(const struct bpf_tramp_links *fentry_links)
+static bool is_struct_ops_tramp(const struct bpf_tramp_nodes *fentry_nodes)
 {
-	return fentry_links->nr_links == 1 &&
-		fentry_links->links[0]->link.type == BPF_LINK_TYPE_STRUCT_OPS;
+	return fentry_nodes->nr_nodes == 1 &&
+		fentry_nodes->nodes[0]->prog->type == BPF_PROG_TYPE_STRUCT_OPS;
 }
 
 static void store_func_meta(struct jit_ctx *ctx, u64 func_meta, int func_meta_off)
@@ -2528,7 +2528,7 @@ static void store_func_meta(struct jit_ctx *ctx, u64 func_meta, int func_meta_of
  *
  */
 static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
-			      struct bpf_tramp_links *tlinks, void *func_addr,
+			      struct bpf_tramp_nodes *tnodes, void *func_addr,
 			      const struct btf_func_model *m,
 			      const struct arg_aux *a,
 			      u32 flags)
@@ -2544,14 +2544,14 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
 	int run_ctx_off;
 	int oargs_off;
 	int nfuncargs;
-	struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
-	struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
-	struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
+	struct bpf_tramp_nodes *fentry = &tnodes[BPF_TRAMP_FENTRY];
+	struct bpf_tramp_nodes *fexit = &tnodes[BPF_TRAMP_FEXIT];
+	struct bpf_tramp_nodes *fmod_ret = &tnodes[BPF_TRAMP_MODIFY_RETURN];
 	bool save_ret;
 	__le32 **branches = NULL;
 	bool is_struct_ops = is_struct_ops_tramp(fentry);
 	int cookie_off, cookie_cnt, cookie_bargs_off;
-	int fsession_cnt = bpf_fsession_cnt(tlinks);
+	int fsession_cnt = bpf_fsession_cnt(tnodes);
 	u64 func_meta;
 
 	/* trampoline stack layout:
@@ -2597,7 +2597,7 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
 
 	cookie_off = stack_size;
 	/* room for session cookies */
-	cookie_cnt = bpf_fsession_cookie_cnt(tlinks);
+	cookie_cnt = bpf_fsession_cookie_cnt(tnodes);
 	stack_size += cookie_cnt * 8;
 
 	ip_off = stack_size;
@@ -2694,20 +2694,20 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
 	}
 
 	cookie_bargs_off = (bargs_off - cookie_off) / 8;
-	for (i = 0; i < fentry->nr_links; i++) {
-		if (bpf_prog_calls_session_cookie(fentry->links[i])) {
+	for (i = 0; i < fentry->nr_nodes; i++) {
+		if (bpf_prog_calls_session_cookie(fentry->nodes[i])) {
 			u64 meta = func_meta | (cookie_bargs_off << BPF_TRAMP_COOKIE_INDEX_SHIFT);
 
 			store_func_meta(ctx, meta, func_meta_off);
 			cookie_bargs_off--;
 		}
-		invoke_bpf_prog(ctx, fentry->links[i], bargs_off,
+		invoke_bpf_prog(ctx, fentry->nodes[i], bargs_off,
 				retval_off, run_ctx_off,
 				flags & BPF_TRAMP_F_RET_FENTRY_RET);
 	}
 
-	if (fmod_ret->nr_links) {
-		branches = kcalloc(fmod_ret->nr_links, sizeof(__le32 *),
+	if (fmod_ret->nr_nodes) {
+		branches = kcalloc(fmod_ret->nr_nodes, sizeof(__le32 *),
 				   GFP_KERNEL);
 		if (!branches)
 			return -ENOMEM;
@@ -2731,7 +2731,7 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
 	}
 
 	/* update the branches saved in invoke_bpf_mod_ret with cbnz */
-	for (i = 0; i < fmod_ret->nr_links && ctx->image != NULL; i++) {
+	for (i = 0; i < fmod_ret->nr_nodes && ctx->image != NULL; i++) {
 		int offset = &ctx->image[ctx->idx] - branches[i];
 		*branches[i] = cpu_to_le32(A64_CBNZ(1, A64_R(10), offset));
 	}
@@ -2742,14 +2742,14 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
 		store_func_meta(ctx, func_meta, func_meta_off);
 
 	cookie_bargs_off = (bargs_off - cookie_off) / 8;
-	for (i = 0; i < fexit->nr_links; i++) {
-		if (bpf_prog_calls_session_cookie(fexit->links[i])) {
+	for (i = 0; i < fexit->nr_nodes; i++) {
+		if (bpf_prog_calls_session_cookie(fexit->nodes[i])) {
 			u64 meta = func_meta | (cookie_bargs_off << BPF_TRAMP_COOKIE_INDEX_SHIFT);
 
 			store_func_meta(ctx, meta, func_meta_off);
 			cookie_bargs_off--;
 		}
-		invoke_bpf_prog(ctx, fexit->links[i], bargs_off, retval_off,
+		invoke_bpf_prog(ctx, fexit->nodes[i], bargs_off, retval_off,
 				run_ctx_off, false);
 	}
 
@@ -2807,7 +2807,7 @@ bool bpf_jit_supports_fsession(void)
 }
 
 int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
-			     struct bpf_tramp_links *tlinks, void *func_addr)
+			     struct bpf_tramp_nodes *tnodes, void *func_addr)
 {
 	struct jit_ctx ctx = {
 		.image = NULL,
@@ -2821,7 +2821,7 @@ int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
 	if (ret < 0)
 		return ret;
 
-	ret = prepare_trampoline(&ctx, &im, tlinks, func_addr, m, &aaux, flags);
+	ret = prepare_trampoline(&ctx, &im, tnodes, func_addr, m, &aaux, flags);
 	if (ret < 0)
 		return ret;
 
@@ -2845,7 +2845,7 @@ int arch_protect_bpf_trampoline(void *image, unsigned int size)
 
 int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *ro_image,
 				void *ro_image_end, const struct btf_func_model *m,
-				u32 flags, struct bpf_tramp_links *tlinks,
+				u32 flags, struct bpf_tramp_nodes *tnodes,
 				void *func_addr)
 {
 	u32 size = ro_image_end - ro_image;
@@ -2872,7 +2872,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *ro_image,
 	ret = calc_arg_aux(m, &aaux);
 	if (ret)
 		goto out;
-	ret = prepare_trampoline(&ctx, im, tlinks, func_addr, m, &aaux, flags);
+	ret = prepare_trampoline(&ctx, im, tnodes, func_addr, m, &aaux, flags);
 
 	if (ret > 0 && validate_code(&ctx) < 0) {
 		ret = -EINVAL;
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index 579461d471bb..2d673d96de2f 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -2508,20 +2508,20 @@ static void load_imm64(struct bpf_jit *jit, int dst_reg, u64 val)
 
 static int invoke_bpf_prog(struct bpf_tramp_jit *tjit,
 			   const struct btf_func_model *m,
-			   struct bpf_tramp_link *tlink, bool save_ret)
+			   struct bpf_tramp_node *node, bool save_ret)
 {
 	struct bpf_jit *jit = &tjit->common;
 	int cookie_off = tjit->run_ctx_off +
 			 offsetof(struct bpf_tramp_run_ctx, bpf_cookie);
-	struct bpf_prog *p = tlink->link.prog;
+	struct bpf_prog *p = node->prog;
 	int patch;
 
 	/*
-	 * run_ctx.cookie = tlink->cookie;
+	 * run_ctx.cookie = node->cookie;
 	 */
 
-	/* %r0 = tlink->cookie */
-	load_imm64(jit, REG_W0, tlink->cookie);
+	/* %r0 = node->cookie */
+	load_imm64(jit, REG_W0, node->cookie);
 	/* stg %r0,cookie_off(%r15) */
 	EMIT6_DISP_LH(0xe3000000, 0x0024, REG_W0, REG_0, REG_15, cookie_off);
 
@@ -2603,12 +2603,12 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
 					 struct bpf_tramp_jit *tjit,
 					 const struct btf_func_model *m,
 					 u32 flags,
-					 struct bpf_tramp_links *tlinks,
+					 struct bpf_tramp_nodes *nodes,
 					 void *func_addr)
 {
-	struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
-	struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
-	struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
+	struct bpf_tramp_nodes *fmod_ret = &nodes[BPF_TRAMP_MODIFY_RETURN];
+	struct bpf_tramp_nodes *fentry = &nodes[BPF_TRAMP_FENTRY];
+	struct bpf_tramp_nodes *fexit = &nodes[BPF_TRAMP_FEXIT];
 	int nr_bpf_args, nr_reg_args, nr_stack_args;
 	struct bpf_jit *jit = &tjit->common;
 	int arg, bpf_arg_off;
@@ -2767,12 +2767,12 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
 		EMIT6_PCREL_RILB_PTR(0xc0050000, REG_14, __bpf_tramp_enter);
 	}
 
-	for (i = 0; i < fentry->nr_links; i++)
-		if (invoke_bpf_prog(tjit, m, fentry->links[i],
+	for (i = 0; i < fentry->nr_nodes; i++)
+		if (invoke_bpf_prog(tjit, m, fentry->nodes[i],
 				    flags & BPF_TRAMP_F_RET_FENTRY_RET))
 			return -EINVAL;
 
-	if (fmod_ret->nr_links) {
+	if (fmod_ret->nr_nodes) {
 		/*
 		 * retval = 0;
 		 */
@@ -2781,8 +2781,8 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
 		_EMIT6(0xd707f000 | tjit->retval_off,
 		       0xf000 | tjit->retval_off);
 
-		for (i = 0; i < fmod_ret->nr_links; i++) {
-			if (invoke_bpf_prog(tjit, m, fmod_ret->links[i], true))
+		for (i = 0; i < fmod_ret->nr_nodes; i++) {
+			if (invoke_bpf_prog(tjit, m, fmod_ret->nodes[i], true))
 				return -EINVAL;
 
 			/*
@@ -2849,8 +2849,8 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
 
 	/* do_fexit: */
 	tjit->do_fexit = jit->prg;
-	for (i = 0; i < fexit->nr_links; i++)
-		if (invoke_bpf_prog(tjit, m, fexit->links[i], false))
+	for (i = 0; i < fexit->nr_nodes; i++)
+		if (invoke_bpf_prog(tjit, m, fexit->nodes[i], false))
 			return -EINVAL;
 
 	if (flags & BPF_TRAMP_F_CALL_ORIG) {
@@ -2902,7 +2902,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
 }
 
 int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
-			     struct bpf_tramp_links *tlinks, void *orig_call)
+			     struct bpf_tramp_nodes *tnodes, void *orig_call)
 {
 	struct bpf_tramp_image im;
 	struct bpf_tramp_jit tjit;
@@ -2911,14 +2911,14 @@ int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
 	memset(&tjit, 0, sizeof(tjit));
 
 	ret = __arch_prepare_bpf_trampoline(&im, &tjit, m, flags,
-					    tlinks, orig_call);
+					    tnodes, orig_call);
 
 	return ret < 0 ? ret : tjit.common.prg;
 }
 
 int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image,
 				void *image_end, const struct btf_func_model *m,
-				u32 flags, struct bpf_tramp_links *tlinks,
+				u32 flags, struct bpf_tramp_nodes *tnodes,
 				void *func_addr)
 {
 	struct bpf_tramp_jit tjit;
@@ -2927,7 +2927,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image,
 	/* Compute offsets, check whether the code fits. */
 	memset(&tjit, 0, sizeof(tjit));
 	ret = __arch_prepare_bpf_trampoline(im, &tjit, m, flags,
-					    tlinks, func_addr);
+					    tnodes, func_addr);
 
 	if (ret < 0)
 		return ret;
@@ -2941,7 +2941,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image,
 	tjit.common.prg = 0;
 	tjit.common.prg_buf = image;
 	ret = __arch_prepare_bpf_trampoline(im, &tjit, m, flags,
-					    tlinks, func_addr);
+					    tnodes, func_addr);
 
 	return ret < 0 ? ret : tjit.common.prg;
 }
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 070ba80e39d7..e1d496311008 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -2978,15 +2978,15 @@ static void restore_regs(const struct btf_func_model *m, u8 **prog,
 }
 
 static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
-			   struct bpf_tramp_link *l, int stack_size,
+			   struct bpf_tramp_node *node, int stack_size,
 			   int run_ctx_off, bool save_ret,
 			   void *image, void *rw_image)
 {
 	u8 *prog = *pprog;
 	u8 *jmp_insn;
 	int ctx_cookie_off = offsetof(struct bpf_tramp_run_ctx, bpf_cookie);
-	struct bpf_prog *p = l->link.prog;
-	u64 cookie = l->cookie;
+	struct bpf_prog *p = node->prog;
+	u64 cookie = node->cookie;
 
 	/* mov rdi, cookie */
 	emit_mov_imm64(&prog, BPF_REG_1, (long) cookie >> 32, (u32) (long) cookie);
@@ -3093,7 +3093,7 @@ static int emit_cond_near_jump(u8 **pprog, void *func, void *ip, u8 jmp_cond)
 }
 
 static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
-		      struct bpf_tramp_links *tl, int stack_size,
+		      struct bpf_tramp_nodes *tl, int stack_size,
 		      int run_ctx_off, int func_meta_off, bool save_ret,
 		      void *image, void *rw_image, u64 func_meta,
 		      int cookie_off)
@@ -3101,13 +3101,13 @@ static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
 	int i, cur_cookie = (cookie_off - stack_size) / 8;
 	u8 *prog = *pprog;
 
-	for (i = 0; i < tl->nr_links; i++) {
-		if (tl->links[i]->link.prog->call_session_cookie) {
+	for (i = 0; i < tl->nr_nodes; i++) {
+		if (tl->nodes[i]->prog->call_session_cookie) {
 			emit_store_stack_imm64(&prog, BPF_REG_0, -func_meta_off,
 				func_meta | (cur_cookie << BPF_TRAMP_COOKIE_INDEX_SHIFT));
 			cur_cookie--;
 		}
-		if (invoke_bpf_prog(m, &prog, tl->links[i], stack_size,
+		if (invoke_bpf_prog(m, &prog, tl->nodes[i], stack_size,
 				    run_ctx_off, save_ret, image, rw_image))
 			return -EINVAL;
 	}
@@ -3116,7 +3116,7 @@ static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
 }
 
 static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
-			      struct bpf_tramp_links *tl, int stack_size,
+			      struct bpf_tramp_nodes *tl, int stack_size,
 			      int run_ctx_off, u8 **branches,
 			      void *image, void *rw_image)
 {
@@ -3128,8 +3128,8 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
 	 */
 	emit_mov_imm32(&prog, false, BPF_REG_0, 0);
 	emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
-	for (i = 0; i < tl->nr_links; i++) {
-		if (invoke_bpf_prog(m, &prog, tl->links[i], stack_size, run_ctx_off, true,
+	for (i = 0; i < tl->nr_nodes; i++) {
+		if (invoke_bpf_prog(m, &prog, tl->nodes[i], stack_size, run_ctx_off, true,
 				    image, rw_image))
 			return -EINVAL;
 
@@ -3220,14 +3220,14 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
 static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_image,
 					 void *rw_image_end, void *image,
 					 const struct btf_func_model *m, u32 flags,
-					 struct bpf_tramp_links *tlinks,
+					 struct bpf_tramp_nodes *tnodes,
 					 void *func_addr)
 {
 	int i, ret, nr_regs = m->nr_args, stack_size = 0;
 	int regs_off, func_meta_off, ip_off, run_ctx_off, arg_stack_off, rbx_off;
-	struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
-	struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
-	struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
+	struct bpf_tramp_nodes *fentry = &tnodes[BPF_TRAMP_FENTRY];
+	struct bpf_tramp_nodes *fexit = &tnodes[BPF_TRAMP_FEXIT];
+	struct bpf_tramp_nodes *fmod_ret = &tnodes[BPF_TRAMP_MODIFY_RETURN];
 	void *orig_call = func_addr;
 	int cookie_off, cookie_cnt;
 	u8 **branches = NULL;
@@ -3299,7 +3299,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
 
 	ip_off = stack_size;
 
-	cookie_cnt = bpf_fsession_cookie_cnt(tlinks);
+	cookie_cnt = bpf_fsession_cookie_cnt(tnodes);
 	/* room for session cookies */
 	stack_size += cookie_cnt * 8;
 	cookie_off = stack_size;
@@ -3392,7 +3392,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
 		}
 	}
 
-	if (bpf_fsession_cnt(tlinks)) {
+	if (bpf_fsession_cnt(tnodes)) {
 		/* clear all the session cookies' value */
 		for (int i = 0; i < cookie_cnt; i++)
 			emit_store_stack_imm64(&prog, BPF_REG_0, -cookie_off + 8 * i, 0);
@@ -3400,15 +3400,15 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
 		emit_store_stack_imm64(&prog, BPF_REG_0, -8, 0);
 	}
 
-	if (fentry->nr_links) {
+	if (fentry->nr_nodes) {
 		if (invoke_bpf(m, &prog, fentry, regs_off, run_ctx_off, func_meta_off,
 			       flags & BPF_TRAMP_F_RET_FENTRY_RET, image, rw_image,
 			       func_meta, cookie_off))
 			return -EINVAL;
 	}
 
-	if (fmod_ret->nr_links) {
-		branches = kcalloc(fmod_ret->nr_links, sizeof(u8 *),
+	if (fmod_ret->nr_nodes) {
+		branches = kcalloc(fmod_ret->nr_nodes, sizeof(u8 *),
 				   GFP_KERNEL);
 		if (!branches)
 			return -ENOMEM;
@@ -3447,7 +3447,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
 		emit_nops(&prog, X86_PATCH_SIZE);
 	}
 
-	if (fmod_ret->nr_links) {
+	if (fmod_ret->nr_nodes) {
 		/* From Intel 64 and IA-32 Architectures Optimization
 		 * Reference Manual, 3.4.1.4 Code Alignment, Assembly/Compiler
 		 * Coding Rule 11: All branch targets should be 16-byte
@@ -3457,7 +3457,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
 		/* Update the branches saved in invoke_bpf_mod_ret with the
 		 * aligned address of do_fexit.
 		 */
-		for (i = 0; i < fmod_ret->nr_links; i++) {
+		for (i = 0; i < fmod_ret->nr_nodes; i++) {
 			emit_cond_near_jump(&branches[i], image + (prog - (u8 *)rw_image),
 					    image + (branches[i] - (u8 *)rw_image), X86_JNE);
 		}
@@ -3465,10 +3465,10 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
 
 	/* set the "is_return" flag for fsession */
 	func_meta |= (1ULL << BPF_TRAMP_IS_RETURN_SHIFT);
-	if (bpf_fsession_cnt(tlinks))
+	if (bpf_fsession_cnt(tnodes))
 		emit_store_stack_imm64(&prog, BPF_REG_0, -func_meta_off, func_meta);
 
-	if (fexit->nr_links) {
+	if (fexit->nr_nodes) {
 		if (invoke_bpf(m, &prog, fexit, regs_off, run_ctx_off, func_meta_off,
 			       false, image, rw_image, func_meta, cookie_off)) {
 			ret = -EINVAL;
@@ -3542,7 +3542,7 @@ int arch_protect_bpf_trampoline(void *image, unsigned int size)
 
 int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *image_end,
 				const struct btf_func_model *m, u32 flags,
-				struct bpf_tramp_links *tlinks,
+				struct bpf_tramp_nodes *tnodes,
 				void *func_addr)
 {
 	void *rw_image, *tmp;
@@ -3557,7 +3557,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 		return -ENOMEM;
 
 	ret = __arch_prepare_bpf_trampoline(im, rw_image, rw_image + size, image, m,
-					    flags, tlinks, func_addr);
+					    flags, tnodes, func_addr);
 	if (ret < 0)
 		goto out;
 
@@ -3570,7 +3570,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 }
 
 int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
-			     struct bpf_tramp_links *tlinks, void *func_addr)
+			     struct bpf_tramp_nodes *tnodes, void *func_addr)
 {
 	struct bpf_tramp_image im;
 	void *image;
@@ -3588,7 +3588,7 @@ int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
 		return -ENOMEM;
 
 	ret = __arch_prepare_bpf_trampoline(&im, image, image + PAGE_SIZE, image,
-					    m, flags, tlinks, func_addr);
+					    m, flags, tnodes, func_addr);
 	bpf_jit_free_exec(image);
 	return ret;
 }
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 512d75094be0..4aee54e6a8ca 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1233,9 +1233,9 @@ enum {
 #define BPF_TRAMP_COOKIE_INDEX_SHIFT	8
 #define BPF_TRAMP_IS_RETURN_SHIFT	63
 
-struct bpf_tramp_links {
-	struct bpf_tramp_link *links[BPF_MAX_TRAMP_LINKS];
-	int nr_links;
+struct bpf_tramp_nodes {
+	struct bpf_tramp_node *nodes[BPF_MAX_TRAMP_LINKS];
+	int nr_nodes;
 };
 
 struct bpf_tramp_run_ctx;
@@ -1263,13 +1263,13 @@ struct bpf_tramp_run_ctx;
 struct bpf_tramp_image;
 int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *image_end,
 				const struct btf_func_model *m, u32 flags,
-				struct bpf_tramp_links *tlinks,
+				struct bpf_tramp_nodes *tnodes,
 				void *func_addr);
 void *arch_alloc_bpf_trampoline(unsigned int size);
 void arch_free_bpf_trampoline(void *image, unsigned int size);
 int __must_check arch_protect_bpf_trampoline(void *image, unsigned int size);
 int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
-			     struct bpf_tramp_links *tlinks, void *func_addr);
+			     struct bpf_tramp_nodes *tnodes, void *func_addr);
 
 u64 notrace __bpf_prog_enter_sleepable_recur(struct bpf_prog *prog,
 					     struct bpf_tramp_run_ctx *run_ctx);
@@ -1455,10 +1455,10 @@ static inline int bpf_dynptr_check_off_len(const struct bpf_dynptr_kern *ptr, u6
 }
 
 #ifdef CONFIG_BPF_JIT
-int bpf_trampoline_link_prog(struct bpf_tramp_link *link,
+int bpf_trampoline_link_prog(struct bpf_tramp_node *node,
 			     struct bpf_trampoline *tr,
 			     struct bpf_prog *tgt_prog);
-int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
+int bpf_trampoline_unlink_prog(struct bpf_tramp_node *node,
 			       struct bpf_trampoline *tr,
 			       struct bpf_prog *tgt_prog);
 struct bpf_trampoline *bpf_trampoline_get(u64 key,
@@ -1867,12 +1867,17 @@ struct bpf_link_ops {
 	__poll_t (*poll)(struct file *file, struct poll_table_struct *pts);
 };
 
-struct bpf_tramp_link {
-	struct bpf_link link;
+struct bpf_tramp_node {
 	struct hlist_node tramp_hlist;
+	struct bpf_prog *prog;
 	u64 cookie;
 };
 
+struct bpf_tramp_link {
+	struct bpf_link link;
+	struct bpf_tramp_node node;
+};
+
 struct bpf_shim_tramp_link {
 	struct bpf_tramp_link link;
 	struct bpf_trampoline *trampoline;
@@ -2094,8 +2099,8 @@ void bpf_struct_ops_put(const void *kdata);
 int bpf_struct_ops_supported(const struct bpf_struct_ops *st_ops, u32 moff);
 int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, void *key,
 				       void *value);
-int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_links *tlinks,
-				      struct bpf_tramp_link *link,
+int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_nodes *tnodes,
+				      struct bpf_tramp_node *node,
 				      const struct btf_func_model *model,
 				      void *stub_func,
 				      void **image, u32 *image_off,
@@ -2187,31 +2192,31 @@ static inline void bpf_struct_ops_desc_release(struct bpf_struct_ops_desc *st_op
 
 #endif
 
-static inline int bpf_fsession_cnt(struct bpf_tramp_links *links)
+static inline int bpf_fsession_cnt(struct bpf_tramp_nodes *nodes)
 {
-	struct bpf_tramp_links fentries = links[BPF_TRAMP_FENTRY];
+	struct bpf_tramp_nodes fentries = nodes[BPF_TRAMP_FENTRY];
 	int cnt = 0;
 
-	for (int i = 0; i < links[BPF_TRAMP_FENTRY].nr_links; i++) {
-		if (fentries.links[i]->link.prog->expected_attach_type == BPF_TRACE_FSESSION)
+	for (int i = 0; i < nodes[BPF_TRAMP_FENTRY].nr_nodes; i++) {
+		if (fentries.nodes[i]->prog->expected_attach_type == BPF_TRACE_FSESSION)
 			cnt++;
 	}
 
 	return cnt;
 }
 
-static inline bool bpf_prog_calls_session_cookie(struct bpf_tramp_link *link)
+static inline bool bpf_prog_calls_session_cookie(struct bpf_tramp_node *node)
 {
-	return link->link.prog->call_session_cookie;
+	return node->prog->call_session_cookie;
 }
 
-static inline int bpf_fsession_cookie_cnt(struct bpf_tramp_links *links)
+static inline int bpf_fsession_cookie_cnt(struct bpf_tramp_nodes *nodes)
 {
-	struct bpf_tramp_links fentries = links[BPF_TRAMP_FENTRY];
+	struct bpf_tramp_nodes fentries = nodes[BPF_TRAMP_FENTRY];
 	int cnt = 0;
 
-	for (int i = 0; i < links[BPF_TRAMP_FENTRY].nr_links; i++) {
-		if (bpf_prog_calls_session_cookie(fentries.links[i]))
+	for (int i = 0; i < nodes[BPF_TRAMP_FENTRY].nr_nodes; i++) {
+		if (bpf_prog_calls_session_cookie(fentries.nodes[i]))
 			cnt++;
 	}
 
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index ecca0a6be6af..7f26918f181e 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -596,8 +596,8 @@ const struct bpf_link_ops bpf_struct_ops_link_lops = {
 	.dealloc = bpf_struct_ops_link_dealloc,
 };
 
-int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_links *tlinks,
-				      struct bpf_tramp_link *link,
+int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_nodes *tnodes,
+				      struct bpf_tramp_node *node,
 				      const struct btf_func_model *model,
 				      void *stub_func,
 				      void **_image, u32 *_image_off,
@@ -607,13 +607,13 @@ int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_links *tlinks,
 	void *image = *_image;
 	int size;
 
-	tlinks[BPF_TRAMP_FENTRY].links[0] = link;
-	tlinks[BPF_TRAMP_FENTRY].nr_links = 1;
+	tnodes[BPF_TRAMP_FENTRY].nodes[0] = node;
+	tnodes[BPF_TRAMP_FENTRY].nr_nodes = 1;
 
 	if (model->ret_size > 0)
 		flags |= BPF_TRAMP_F_RET_FENTRY_RET;
 
-	size = arch_bpf_trampoline_size(model, flags, tlinks, stub_func);
+	size = arch_bpf_trampoline_size(model, flags, tnodes, stub_func);
 	if (size <= 0)
 		return size ? : -EFAULT;
 
@@ -630,7 +630,7 @@ int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_links *tlinks,
 
 	size = arch_prepare_bpf_trampoline(NULL, image + image_off,
 					   image + image_off + size,
-					   model, flags, tlinks, stub_func);
+					   model, flags, tnodes, stub_func);
 	if (size <= 0) {
 		if (image != *_image)
 			bpf_struct_ops_image_free(image);
@@ -695,7 +695,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 	const struct btf_type *module_type;
 	const struct btf_member *member;
 	const struct btf_type *t = st_ops_desc->type;
-	struct bpf_tramp_links *tlinks;
+	struct bpf_tramp_nodes *tnodes;
 	void *udata, *kdata;
 	int prog_fd, err;
 	u32 i, trampoline_start, image_off = 0;
@@ -722,8 +722,8 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 	if (uvalue->common.state || refcount_read(&uvalue->common.refcnt))
 		return -EINVAL;
 
-	tlinks = kcalloc(BPF_TRAMP_MAX, sizeof(*tlinks), GFP_KERNEL);
-	if (!tlinks)
+	tnodes = kcalloc(BPF_TRAMP_MAX, sizeof(*tnodes), GFP_KERNEL);
+	if (!tnodes)
 		return -ENOMEM;
 
 	uvalue = (struct bpf_struct_ops_value *)st_map->uvalue;
@@ -824,6 +824,8 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 		}
 		bpf_link_init(&st_link->link.link, BPF_LINK_TYPE_STRUCT_OPS,
 			      &bpf_struct_ops_link_lops, prog, prog->expected_attach_type);
+		st_link->link.node.prog = prog;
+
 		*plink++ = &st_link->link.link;
 
 		ksym = kzalloc(sizeof(*ksym), GFP_USER);
@@ -834,7 +836,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 		*pksym++ = ksym;
 
 		trampoline_start = image_off;
-		err = bpf_struct_ops_prepare_trampoline(tlinks, &st_link->link,
+		err = bpf_struct_ops_prepare_trampoline(tnodes, &st_link->link.node,
 						&st_ops->func_models[i],
 						*(void **)(st_ops->cfi_stubs + moff),
 						&image, &image_off,
@@ -912,7 +914,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 	memset(uvalue, 0, map->value_size);
 	memset(kvalue, 0, map->value_size);
 unlock:
-	kfree(tlinks);
+	kfree(tnodes);
 	mutex_unlock(&st_map->lock);
 	if (!err)
 		bpf_struct_ops_map_add_ksyms(st_map);
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 5f59dd47a5b1..ec10d6d1997f 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -3494,7 +3494,7 @@ static void bpf_tracing_link_release(struct bpf_link *link)
 	struct bpf_tracing_link *tr_link =
 		container_of(link, struct bpf_tracing_link, link.link);
 
-	WARN_ON_ONCE(bpf_trampoline_unlink_prog(&tr_link->link,
+	WARN_ON_ONCE(bpf_trampoline_unlink_prog(&tr_link->link.node,
 						tr_link->trampoline,
 						tr_link->tgt_prog));
 
@@ -3507,8 +3507,7 @@ static void bpf_tracing_link_release(struct bpf_link *link)
 
 static void bpf_tracing_link_dealloc(struct bpf_link *link)
 {
-	struct bpf_tracing_link *tr_link =
-		container_of(link, struct bpf_tracing_link, link.link);
+	struct bpf_tracing_link *tr_link = container_of(link, struct bpf_tracing_link, link.link);
 
 	kfree(tr_link);
 }
@@ -3516,8 +3515,8 @@ static void bpf_tracing_link_dealloc(struct bpf_link *link)
 static void bpf_tracing_link_show_fdinfo(const struct bpf_link *link,
 					 struct seq_file *seq)
 {
-	struct bpf_tracing_link *tr_link =
-		container_of(link, struct bpf_tracing_link, link.link);
+	struct bpf_tracing_link *tr_link = container_of(link, struct bpf_tracing_link, link.link);
+
 	u32 target_btf_id, target_obj_id;
 
 	bpf_trampoline_unpack_key(tr_link->trampoline->key,
@@ -3530,17 +3529,16 @@ static void bpf_tracing_link_show_fdinfo(const struct bpf_link *link,
 		   link->attach_type,
 		   target_obj_id,
 		   target_btf_id,
-		   tr_link->link.cookie);
+		   tr_link->link.node.cookie);
 }
 
 static int bpf_tracing_link_fill_link_info(const struct bpf_link *link,
 					   struct bpf_link_info *info)
 {
-	struct bpf_tracing_link *tr_link =
-		container_of(link, struct bpf_tracing_link, link.link);
+	struct bpf_tracing_link *tr_link = container_of(link, struct bpf_tracing_link, link.link);
 
 	info->tracing.attach_type = link->attach_type;
-	info->tracing.cookie = tr_link->link.cookie;
+	info->tracing.cookie = tr_link->link.node.cookie;
 	bpf_trampoline_unpack_key(tr_link->trampoline->key,
 				  &info->tracing.target_obj_id,
 				  &info->tracing.target_btf_id);
@@ -3629,7 +3627,8 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
 		if (fslink) {
 			bpf_link_init(&fslink->fexit.link, BPF_LINK_TYPE_TRACING,
 				      &bpf_tracing_link_lops, prog, attach_type);
-			fslink->fexit.cookie = bpf_cookie;
+			fslink->fexit.node.cookie = bpf_cookie;
+			fslink->fexit.node.prog = prog;
 			link = &fslink->link;
 		} else {
 			link = NULL;
@@ -3643,8 +3642,8 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
 	}
 	bpf_link_init(&link->link.link, BPF_LINK_TYPE_TRACING,
 		      &bpf_tracing_link_lops, prog, attach_type);
-
-	link->link.cookie = bpf_cookie;
+	link->link.node.cookie = bpf_cookie;
+	link->link.node.prog = prog;
 
 	mutex_lock(&prog->aux->dst_mutex);
 
@@ -3730,7 +3729,7 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
 	if (err)
 		goto out_unlock;
 
-	err = bpf_trampoline_link_prog(&link->link, tr, tgt_prog);
+	err = bpf_trampoline_link_prog(&link->link.node, tr, tgt_prog);
 	if (err) {
 		bpf_link_cleanup(&link_primer);
 		link = NULL;
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index ec9c1db78f47..9b8e036a3b2d 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -450,30 +450,29 @@ static struct bpf_trampoline_ops trampoline_ops = {
 	.modify_fentry     = modify_fentry,
 };
 
-static struct bpf_tramp_links *
+static struct bpf_tramp_nodes *
 bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total, bool *ip_arg)
 {
-	struct bpf_tramp_link *link;
-	struct bpf_tramp_links *tlinks;
-	struct bpf_tramp_link **links;
+	struct bpf_tramp_node *node, **nodes;
+	struct bpf_tramp_nodes *tnodes;
 	int kind;
 
 	*total = 0;
-	tlinks = kcalloc(BPF_TRAMP_MAX, sizeof(*tlinks), GFP_KERNEL);
-	if (!tlinks)
+	tnodes = kcalloc(BPF_TRAMP_MAX, sizeof(*tnodes), GFP_KERNEL);
+	if (!tnodes)
 		return ERR_PTR(-ENOMEM);
 
 	for (kind = 0; kind < BPF_TRAMP_MAX; kind++) {
-		tlinks[kind].nr_links = tr->progs_cnt[kind];
+		tnodes[kind].nr_nodes = tr->progs_cnt[kind];
 		*total += tr->progs_cnt[kind];
-		links = tlinks[kind].links;
+		nodes = tnodes[kind].nodes;
 
-		hlist_for_each_entry(link, &tr->progs_hlist[kind], tramp_hlist) {
-			*ip_arg |= link->link.prog->call_get_func_ip;
-			*links++ = link;
+		hlist_for_each_entry(node, &tr->progs_hlist[kind], tramp_hlist) {
+			*ip_arg |= node->prog->call_get_func_ip;
+			*nodes++ = node;
 		}
 	}
-	return tlinks;
+	return tnodes;
 }
 
 static void bpf_tramp_image_free(struct bpf_tramp_image *im)
@@ -621,14 +620,14 @@ static int bpf_trampoline_update_ops(struct bpf_trampoline *tr, bool lock_direct
 				     struct bpf_trampoline_ops *ops, void *data)
 {
 	struct bpf_tramp_image *im;
-	struct bpf_tramp_links *tlinks;
+	struct bpf_tramp_nodes *tnodes;
 	u32 orig_flags = tr->flags;
 	bool ip_arg = false;
 	int err, total, size;
 
-	tlinks = bpf_trampoline_get_progs(tr, &total, &ip_arg);
-	if (IS_ERR(tlinks))
-		return PTR_ERR(tlinks);
+	tnodes = bpf_trampoline_get_progs(tr, &total, &ip_arg);
+	if (IS_ERR(tnodes))
+		return PTR_ERR(tnodes);
 
 	if (total == 0) {
 		err = ops->unregister_fentry(tr, orig_flags, tr->cur_image->image, data);
@@ -640,8 +639,8 @@ static int bpf_trampoline_update_ops(struct bpf_trampoline *tr, bool lock_direct
 	/* clear all bits except SHARE_IPMODIFY and TAIL_CALL_CTX */
 	tr->flags &= (BPF_TRAMP_F_SHARE_IPMODIFY | BPF_TRAMP_F_TAIL_CALL_CTX);
 
-	if (tlinks[BPF_TRAMP_FEXIT].nr_links ||
-	    tlinks[BPF_TRAMP_MODIFY_RETURN].nr_links) {
+	if (tnodes[BPF_TRAMP_FEXIT].nr_nodes ||
+	    tnodes[BPF_TRAMP_MODIFY_RETURN].nr_nodes) {
 		/* NOTE: BPF_TRAMP_F_RESTORE_REGS and BPF_TRAMP_F_SKIP_FRAME
 		 * should not be set together.
 		 */
@@ -672,7 +671,7 @@ static int bpf_trampoline_update_ops(struct bpf_trampoline *tr, bool lock_direct
 #endif
 
 	size = arch_bpf_trampoline_size(&tr->func.model, tr->flags,
-					tlinks, tr->func.addr);
+					tnodes, tr->func.addr);
 	if (size < 0) {
 		err = size;
 		goto out;
@@ -690,7 +689,7 @@ static int bpf_trampoline_update_ops(struct bpf_trampoline *tr, bool lock_direct
 	}
 
 	err = arch_prepare_bpf_trampoline(im, im->image, im->image + size,
-					  &tr->func.model, tr->flags, tlinks,
+					  &tr->func.model, tr->flags, tnodes,
 					  tr->func.addr);
 	if (err < 0)
 		goto out_free;
@@ -728,7 +727,7 @@ static int bpf_trampoline_update_ops(struct bpf_trampoline *tr, bool lock_direct
 	/* If any error happens, restore previous flags */
 	if (err)
 		tr->flags = orig_flags;
-	kfree(tlinks);
+	kfree(tnodes);
 	return err;
 
 out_free:
@@ -783,7 +782,7 @@ static int bpf_freplace_check_tgt_prog(struct bpf_prog *tgt_prog)
 	return 0;
 }
 
-static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link,
+static int __bpf_trampoline_link_prog(struct bpf_tramp_node *node,
 				      struct bpf_trampoline *tr,
 				      struct bpf_prog *tgt_prog,
 				      struct bpf_trampoline_ops *ops,
@@ -791,12 +790,12 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link,
 {
 	struct bpf_fsession_link *fslink = NULL;
 	enum bpf_tramp_prog_type kind;
-	struct bpf_tramp_link *link_exiting;
+	struct bpf_tramp_node *node_existing;
 	struct hlist_head *prog_list;
 	int err = 0;
 	int cnt = 0, i;
 
-	kind = bpf_attach_type_to_tramp(link->link.prog);
+	kind = bpf_attach_type_to_tramp(node->prog);
 	if (tr->extension_prog)
 		/* cannot attach fentry/fexit if extension prog is attached.
 		 * cannot overwrite extension prog either.
@@ -813,10 +812,10 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link,
 		err = bpf_freplace_check_tgt_prog(tgt_prog);
 		if (err)
 			return err;
-		tr->extension_prog = link->link.prog;
+		tr->extension_prog = node->prog;
 		return bpf_arch_text_poke(tr->func.addr, BPF_MOD_NOP,
 					  BPF_MOD_JUMP, NULL,
-					  link->link.prog->bpf_func);
+					  node->prog->bpf_func);
 	}
 	if (kind == BPF_TRAMP_FSESSION) {
 		prog_list = &tr->progs_hlist[BPF_TRAMP_FENTRY];
@@ -826,31 +825,31 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link,
 	}
 	if (cnt >= BPF_MAX_TRAMP_LINKS)
 		return -E2BIG;
-	if (!hlist_unhashed(&link->tramp_hlist))
+	if (!hlist_unhashed(&node->tramp_hlist))
 		/* prog already linked */
 		return -EBUSY;
-	hlist_for_each_entry(link_exiting, prog_list, tramp_hlist) {
-		if (link_exiting->link.prog != link->link.prog)
+	hlist_for_each_entry(node_existing, prog_list, tramp_hlist) {
+		if (node_existing->prog != node->prog)
 			continue;
 		/* prog already linked */
 		return -EBUSY;
 	}
 
-	hlist_add_head(&link->tramp_hlist, prog_list);
+	hlist_add_head(&node->tramp_hlist, prog_list);
 	if (kind == BPF_TRAMP_FSESSION) {
 		tr->progs_cnt[BPF_TRAMP_FENTRY]++;
-		fslink = container_of(link, struct bpf_fsession_link, link.link);
-		hlist_add_head(&fslink->fexit.tramp_hlist, &tr->progs_hlist[BPF_TRAMP_FEXIT]);
+		fslink = container_of(node, struct bpf_fsession_link, link.link.node);
+		hlist_add_head(&fslink->fexit.node.tramp_hlist, &tr->progs_hlist[BPF_TRAMP_FEXIT]);
 		tr->progs_cnt[BPF_TRAMP_FEXIT]++;
 	} else {
 		tr->progs_cnt[kind]++;
 	}
 	err = bpf_trampoline_update_ops(tr, true /* lock_direct_mutex */, ops, data);
 	if (err) {
-		hlist_del_init(&link->tramp_hlist);
+		hlist_del_init(&node->tramp_hlist);
 		if (kind == BPF_TRAMP_FSESSION) {
 			tr->progs_cnt[BPF_TRAMP_FENTRY]--;
-			hlist_del_init(&fslink->fexit.tramp_hlist);
+			hlist_del_init(&fslink->fexit.node.tramp_hlist);
 			tr->progs_cnt[BPF_TRAMP_FEXIT]--;
 		} else {
 			tr->progs_cnt[kind]--;
@@ -859,19 +858,19 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link,
 	return err;
 }
 
-int bpf_trampoline_link_prog(struct bpf_tramp_link *link,
+int bpf_trampoline_link_prog(struct bpf_tramp_node *node,
 			     struct bpf_trampoline *tr,
 			     struct bpf_prog *tgt_prog)
 {
 	int err;
 
 	mutex_lock(&tr->mutex);
-	err = __bpf_trampoline_link_prog(link, tr, tgt_prog, &trampoline_ops, NULL);
+	err = __bpf_trampoline_link_prog(node, tr, tgt_prog, &trampoline_ops, NULL);
 	mutex_unlock(&tr->mutex);
 	return err;
 }
 
-static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
+static int __bpf_trampoline_unlink_prog(struct bpf_tramp_node *node,
 					struct bpf_trampoline *tr,
 					struct bpf_prog *tgt_prog,
 					struct bpf_trampoline_ops *ops,
@@ -880,7 +879,7 @@ static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
 	enum bpf_tramp_prog_type kind;
 	int err;
 
-	kind = bpf_attach_type_to_tramp(link->link.prog);
+	kind = bpf_attach_type_to_tramp(node->prog);
 	if (kind == BPF_TRAMP_REPLACE) {
 		WARN_ON_ONCE(!tr->extension_prog);
 		err = bpf_arch_text_poke(tr->func.addr, BPF_MOD_JUMP,
@@ -892,26 +891,26 @@ static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
 		return err;
 	} else if (kind == BPF_TRAMP_FSESSION) {
 		struct bpf_fsession_link *fslink =
-			container_of(link, struct bpf_fsession_link, link.link);
+			container_of(node, struct bpf_fsession_link, link.link.node);
 
-		hlist_del_init(&fslink->fexit.tramp_hlist);
+		hlist_del_init(&fslink->fexit.node.tramp_hlist);
 		tr->progs_cnt[BPF_TRAMP_FEXIT]--;
 		kind = BPF_TRAMP_FENTRY;
 	}
-	hlist_del_init(&link->tramp_hlist);
+	hlist_del_init(&node->tramp_hlist);
 	tr->progs_cnt[kind]--;
 	return bpf_trampoline_update_ops(tr, true /* lock_direct_mutex */, ops, data);
 }
 
 /* bpf_trampoline_unlink_prog() should never fail. */
-int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
+int bpf_trampoline_unlink_prog(struct bpf_tramp_node *node,
 			       struct bpf_trampoline *tr,
 			       struct bpf_prog *tgt_prog)
 {
 	int err;
 
 	mutex_lock(&tr->mutex);
-	err = __bpf_trampoline_unlink_prog(link, tr, tgt_prog, &trampoline_ops, NULL);
+	err = __bpf_trampoline_unlink_prog(node, tr, tgt_prog, &trampoline_ops, NULL);
 	mutex_unlock(&tr->mutex);
 	return err;
 }
@@ -926,7 +925,7 @@ static void bpf_shim_tramp_link_release(struct bpf_link *link)
 	if (!shim_link->trampoline)
 		return;
 
-	WARN_ON_ONCE(bpf_trampoline_unlink_prog(&shim_link->link, shim_link->trampoline, NULL));
+	WARN_ON_ONCE(bpf_trampoline_unlink_prog(&shim_link->link.node, shim_link->trampoline, NULL));
 	bpf_trampoline_put(shim_link->trampoline);
 }
 
@@ -974,6 +973,7 @@ static struct bpf_shim_tramp_link *cgroup_shim_alloc(const struct bpf_prog *prog
 	bpf_prog_inc(p);
 	bpf_link_init(&shim_link->link.link, BPF_LINK_TYPE_UNSPEC,
 		      &bpf_shim_tramp_link_lops, p, attach_type);
+	shim_link->link.node.prog = p;
 	bpf_cgroup_atype_get(p->aux->attach_btf_id, cgroup_atype);
 
 	return shim_link;
@@ -982,15 +982,15 @@ static struct bpf_shim_tramp_link *cgroup_shim_alloc(const struct bpf_prog *prog
 static struct bpf_shim_tramp_link *cgroup_shim_find(struct bpf_trampoline *tr,
 						    bpf_func_t bpf_func)
 {
-	struct bpf_tramp_link *link;
+	struct bpf_tramp_node *node;
 	int kind;
 
 	for (kind = 0; kind < BPF_TRAMP_MAX; kind++) {
-		hlist_for_each_entry(link, &tr->progs_hlist[kind], tramp_hlist) {
-			struct bpf_prog *p = link->link.prog;
+		hlist_for_each_entry(node, &tr->progs_hlist[kind], tramp_hlist) {
+			struct bpf_prog *p = node->prog;
 
 			if (p->bpf_func == bpf_func)
-				return container_of(link, struct bpf_shim_tramp_link, link);
+				return container_of(node, struct bpf_shim_tramp_link, link.node);
 		}
 	}
 
@@ -1042,7 +1042,7 @@ int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog,
 		goto err;
 	}
 
-	err = __bpf_trampoline_link_prog(&shim_link->link, tr, NULL, &trampoline_ops, NULL);
+	err = __bpf_trampoline_link_prog(&shim_link->link.node, tr, NULL, &trampoline_ops, NULL);
 	if (err)
 		goto err;
 
@@ -1358,7 +1358,7 @@ bpf_trampoline_exit_t bpf_trampoline_exit(const struct bpf_prog *prog)
 int __weak
 arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *image_end,
 			    const struct btf_func_model *m, u32 flags,
-			    struct bpf_tramp_links *tlinks,
+			    struct bpf_tramp_nodes *tnodes,
 			    void *func_addr)
 {
 	return -ENOTSUPP;
@@ -1392,7 +1392,7 @@ int __weak arch_protect_bpf_trampoline(void *image, unsigned int size)
 }
 
 int __weak arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
-				    struct bpf_tramp_links *tlinks, void *func_addr)
+				    struct bpf_tramp_nodes *tnodes, void *func_addr)
 {
 	return -ENOTSUPP;
 }
diff --git a/net/bpf/bpf_dummy_struct_ops.c b/net/bpf/bpf_dummy_struct_ops.c
index 4029931a4fce..738a9d64fa2a 100644
--- a/net/bpf/bpf_dummy_struct_ops.c
+++ b/net/bpf/bpf_dummy_struct_ops.c
@@ -133,7 +133,7 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
 	struct bpf_struct_ops_tramp_link *st_link = NULL;
 	const struct btf_type *func_proto;
 	struct bpf_dummy_ops_test_args *args;
-	struct bpf_tramp_links *tlinks = NULL;
+	struct bpf_tramp_nodes *tnodes = NULL;
 	void *image = NULL;
 	unsigned int op_idx;
 	u32 image_off = 0;
@@ -158,8 +158,8 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
 	if (err)
 		goto out;
 
-	tlinks = kcalloc(BPF_TRAMP_MAX, sizeof(*tlinks), GFP_KERNEL);
-	if (!tlinks) {
+	tnodes = kcalloc(BPF_TRAMP_MAX, sizeof(*tnodes), GFP_KERNEL);
+	if (!tnodes) {
 		err = -ENOMEM;
 		goto out;
 	}
@@ -173,9 +173,10 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
 	bpf_prog_inc(prog);
 	bpf_link_init(&st_link->link.link, BPF_LINK_TYPE_STRUCT_OPS, &bpf_struct_ops_link_lops, prog,
 		      prog->expected_attach_type);
+	st_link->link.node.prog = prog;
 
 	op_idx = prog->expected_attach_type;
-	err = bpf_struct_ops_prepare_trampoline(tlinks, &st_link->link,
+	err = bpf_struct_ops_prepare_trampoline(tnodes, &st_link->link.node,
 						&st_ops->func_models[op_idx],
 						&dummy_ops_test_ret_function,
 						&image, &image_off,
@@ -198,7 +199,7 @@ int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
 	bpf_struct_ops_image_free(image);
 	if (st_link)
 		bpf_link_put(&st_link->link.link);
-	kfree(tlinks);
+	kfree(tnodes);
 	return err;
 }
 
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [RFC bpf-next 05/12] bpf: Add multi tracing attach types
  2026-02-03  9:38 [RFC bpf-next 00/12] bpf: tracing_multi link Jiri Olsa
                   ` (3 preceding siblings ...)
  2026-02-03  9:38 ` [RFC bpf-next 04/12] bpf: Add struct bpf_tramp_node object Jiri Olsa
@ 2026-02-03  9:38 ` Jiri Olsa
  2026-02-03 10:13   ` bot+bpf-ci
  2026-02-04  2:20   ` Leon Hwang
  2026-02-03  9:38 ` [RFC bpf-next 06/12] bpf: Add bpf_trampoline_multi_attach/detach functions Jiri Olsa
                   ` (7 subsequent siblings)
  12 siblings, 2 replies; 54+ messages in thread
From: Jiri Olsa @ 2026-02-03  9:38 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
	Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt

Adding new attach types to identify multi tracing attachment:
  BPF_TRACE_FENTRY_MULTI
  BPF_TRACE_FEXIT_MULTI

Programs with these attach types will use the dedicated link attachment
interface introduced in the following changes.

This was suggested by Andrii some (long) time ago and turned out to be
easier than having a special program flag for it.

Bpf programs with these attach types get the BTF ID of the
'bpf_multi_func' function set as their attach_btf_id.

Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/linux/bpf.h            |  5 +++++
 include/uapi/linux/bpf.h       |  2 ++
 kernel/bpf/syscall.c           | 35 ++++++++++++++++++++++++++++++----
 kernel/bpf/trampoline.c        |  5 ++++-
 kernel/bpf/verifier.c          |  8 +++++++-
 net/bpf/test_run.c             |  2 ++
 tools/include/uapi/linux/bpf.h |  2 ++
 7 files changed, 53 insertions(+), 6 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 4aee54e6a8ca..f06f0a11ccb7 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2126,6 +2126,11 @@ void bpf_prog_disassoc_struct_ops(struct bpf_prog *prog);
 void *bpf_prog_get_assoc_struct_ops(const struct bpf_prog_aux *aux);
 u32 bpf_struct_ops_id(const void *kdata);
 
+static inline bool is_tracing_multi(enum bpf_attach_type type)
+{
+	return type == BPF_TRACE_FENTRY_MULTI || type == BPF_TRACE_FEXIT_MULTI;
+}
+
 #ifdef CONFIG_NET
 /* Define it here to avoid the use of forward declaration */
 struct bpf_dummy_ops_state {
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index c8d400b7680a..68600972a778 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1154,6 +1154,8 @@ enum bpf_attach_type {
 	BPF_TRACE_KPROBE_SESSION,
 	BPF_TRACE_UPROBE_SESSION,
 	BPF_TRACE_FSESSION,
+	BPF_TRACE_FENTRY_MULTI,
+	BPF_TRACE_FEXIT_MULTI,
 	__MAX_BPF_ATTACH_TYPE
 };
 
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index ec10d6d1997f..2f8932addf96 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -40,6 +40,7 @@
 #include <linux/overflow.h>
 #include <linux/cookie.h>
 #include <linux/verification.h>
+#include <linux/btf_ids.h>
 
 #include <net/netfilter/nf_bpf_link.h>
 #include <net/netkit.h>
@@ -2652,7 +2653,8 @@ static int
 bpf_prog_load_check_attach(enum bpf_prog_type prog_type,
 			   enum bpf_attach_type expected_attach_type,
 			   struct btf *attach_btf, u32 btf_id,
-			   struct bpf_prog *dst_prog)
+			   struct bpf_prog *dst_prog,
+			   bool multi_func)
 {
 	if (btf_id) {
 		if (btf_id > BTF_MAX_TYPE)
@@ -2672,6 +2674,14 @@ bpf_prog_load_check_attach(enum bpf_prog_type prog_type,
 		}
 	}
 
+	if (multi_func) {
+		if (prog_type != BPF_PROG_TYPE_TRACING)
+			return -EINVAL;
+		if (!attach_btf || btf_id)
+			return -EINVAL;
+		return 0;
+	}
+
 	if (attach_btf && (!btf_id || dst_prog))
 		return -EINVAL;
 
@@ -2857,6 +2867,16 @@ static int bpf_prog_mark_insn_arrays_ready(struct bpf_prog *prog)
 	return 0;
 }
 
+#define DEFINE_BPF_MULTI_FUNC(args...)			\
+	extern int bpf_multi_func(args);		\
+	int __init bpf_multi_func(args) { return 0; }
+
+DEFINE_BPF_MULTI_FUNC(unsigned long a1, unsigned long a2,
+		      unsigned long a3, unsigned long a4,
+		      unsigned long a5, unsigned long a6)
+
+BTF_ID_LIST_SINGLE(bpf_multi_func_btf_id, func, bpf_multi_func)
+
 /* last field in 'union bpf_attr' used by this command */
 #define BPF_PROG_LOAD_LAST_FIELD keyring_id
 
@@ -2869,6 +2889,7 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
 	bool bpf_cap;
 	int err;
 	char license[128];
+	bool multi_func;
 
 	if (CHECK_ATTR(BPF_PROG_LOAD))
 		return -EINVAL;
@@ -2935,6 +2956,8 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
 	if (is_perfmon_prog_type(type) && !bpf_token_capable(token, CAP_PERFMON))
 		goto put_token;
 
+	multi_func = is_tracing_multi(attr->expected_attach_type);
+
 	/* attach_prog_fd/attach_btf_obj_fd can specify fd of either bpf_prog
 	 * or btf, we need to check which one it is
 	 */
@@ -2956,7 +2979,7 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
 				goto put_token;
 			}
 		}
-	} else if (attr->attach_btf_id) {
+	} else if (attr->attach_btf_id || multi_func) {
 		/* fall back to vmlinux BTF, if BTF type ID is specified */
 		attach_btf = bpf_get_btf_vmlinux();
 		if (IS_ERR(attach_btf)) {
@@ -2972,7 +2995,7 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
 
 	if (bpf_prog_load_check_attach(type, attr->expected_attach_type,
 				       attach_btf, attr->attach_btf_id,
-				       dst_prog)) {
+				       dst_prog, multi_func)) {
 		if (dst_prog)
 			bpf_prog_put(dst_prog);
 		if (attach_btf)
@@ -2995,7 +3018,7 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
 	prog->expected_attach_type = attr->expected_attach_type;
 	prog->sleepable = !!(attr->prog_flags & BPF_F_SLEEPABLE);
 	prog->aux->attach_btf = attach_btf;
-	prog->aux->attach_btf_id = attr->attach_btf_id;
+	prog->aux->attach_btf_id = multi_func ? bpf_multi_func_btf_id[0] : attr->attach_btf_id;
 	prog->aux->dst_prog = dst_prog;
 	prog->aux->dev_bound = !!attr->prog_ifindex;
 	prog->aux->xdp_has_frags = attr->prog_flags & BPF_F_XDP_HAS_FRAGS;
@@ -3571,6 +3594,8 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
 		if (prog->expected_attach_type != BPF_TRACE_FENTRY &&
 		    prog->expected_attach_type != BPF_TRACE_FEXIT &&
 		    prog->expected_attach_type != BPF_TRACE_FSESSION &&
+		    prog->expected_attach_type != BPF_TRACE_FENTRY_MULTI &&
+		    prog->expected_attach_type != BPF_TRACE_FEXIT_MULTI &&
 		    prog->expected_attach_type != BPF_MODIFY_RETURN) {
 			err = -EINVAL;
 			goto out_put_prog;
@@ -4360,6 +4385,8 @@ attach_type_to_prog_type(enum bpf_attach_type attach_type)
 	case BPF_TRACE_FENTRY:
 	case BPF_TRACE_FEXIT:
 	case BPF_TRACE_FSESSION:
+	case BPF_TRACE_FENTRY_MULTI:
+	case BPF_TRACE_FEXIT_MULTI:
 	case BPF_MODIFY_RETURN:
 		return BPF_PROG_TYPE_TRACING;
 	case BPF_LSM_MAC:
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 9b8e036a3b2d..2be2f1d0b7d7 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -149,7 +149,8 @@ bool bpf_prog_has_trampoline(const struct bpf_prog *prog)
 	switch (ptype) {
 	case BPF_PROG_TYPE_TRACING:
 		if (eatype == BPF_TRACE_FENTRY || eatype == BPF_TRACE_FEXIT ||
-		    eatype == BPF_MODIFY_RETURN || eatype == BPF_TRACE_FSESSION)
+		    eatype == BPF_MODIFY_RETURN || eatype == BPF_TRACE_FSESSION ||
+		    eatype == BPF_TRACE_FENTRY_MULTI || eatype == BPF_TRACE_FEXIT_MULTI)
 			return true;
 		return false;
 	case BPF_PROG_TYPE_LSM:
@@ -744,10 +745,12 @@ static enum bpf_tramp_prog_type bpf_attach_type_to_tramp(struct bpf_prog *prog)
 {
 	switch (prog->expected_attach_type) {
 	case BPF_TRACE_FENTRY:
+	case BPF_TRACE_FENTRY_MULTI:
 		return BPF_TRAMP_FENTRY;
 	case BPF_MODIFY_RETURN:
 		return BPF_TRAMP_MODIFY_RETURN;
 	case BPF_TRACE_FEXIT:
+	case BPF_TRACE_FEXIT_MULTI:
 		return BPF_TRAMP_FEXIT;
 	case BPF_TRACE_FSESSION:
 		return BPF_TRAMP_FSESSION;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 6b62b6d57175..fb52ba2f7f7a 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -17809,6 +17809,8 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char
 		case BPF_TRACE_FENTRY:
 		case BPF_TRACE_FEXIT:
 		case BPF_TRACE_FSESSION:
+		case BPF_TRACE_FENTRY_MULTI:
+		case BPF_TRACE_FEXIT_MULTI:
 			range = retval_range(0, 0);
 			break;
 		case BPF_TRACE_RAW_TP:
@@ -23771,6 +23773,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 		    insn->imm == BPF_FUNC_get_func_ret) {
 			if (eatype == BPF_TRACE_FEXIT ||
 			    eatype == BPF_TRACE_FSESSION ||
+			    eatype == BPF_TRACE_FEXIT_MULTI ||
 			    eatype == BPF_MODIFY_RETURN) {
 				/* Load nr_args from ctx - 8 */
 				insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
@@ -24828,6 +24831,8 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
 	case BPF_TRACE_FENTRY:
 	case BPF_TRACE_FEXIT:
 	case BPF_TRACE_FSESSION:
+	case BPF_TRACE_FENTRY_MULTI:
+	case BPF_TRACE_FEXIT_MULTI:
 		if (prog->expected_attach_type == BPF_TRACE_FSESSION &&
 		    !bpf_jit_supports_fsession()) {
 			bpf_log(log, "JIT does not support fsession\n");
@@ -25069,7 +25074,8 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
 		return 0;
 	} else if (prog->expected_attach_type == BPF_TRACE_ITER) {
 		return bpf_iter_prog_supported(prog);
-	}
+	} else if (is_tracing_multi(prog->expected_attach_type))
+		return prog->type == BPF_PROG_TYPE_TRACING ? 0 : -EINVAL;
 
 	if (prog->type == BPF_PROG_TYPE_LSM) {
 		ret = bpf_lsm_verify_prog(&env->log, prog);
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 178c4738e63b..3373450132f0 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -686,6 +686,8 @@ int bpf_prog_test_run_tracing(struct bpf_prog *prog,
 	case BPF_TRACE_FENTRY:
 	case BPF_TRACE_FEXIT:
 	case BPF_TRACE_FSESSION:
+	case BPF_TRACE_FENTRY_MULTI:
+	case BPF_TRACE_FEXIT_MULTI:
 		if (bpf_fentry_test1(1) != 2 ||
 		    bpf_fentry_test2(2, 3) != 5 ||
 		    bpf_fentry_test3(4, 5, 6) != 15 ||
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 5e38b4887de6..61f0fe5bc0aa 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1154,6 +1154,8 @@ enum bpf_attach_type {
 	BPF_TRACE_KPROBE_SESSION,
 	BPF_TRACE_UPROBE_SESSION,
 	BPF_TRACE_FSESSION,
+	BPF_TRACE_FENTRY_MULTI,
+	BPF_TRACE_FEXIT_MULTI,
 	__MAX_BPF_ATTACH_TYPE
 };
 
-- 
2.52.0



* [RFC bpf-next 06/12] bpf: Add bpf_trampoline_multi_attach/detach functions
  2026-02-03  9:38 [RFC bpf-next 00/12] bpf: tracing_multi link Jiri Olsa
                   ` (4 preceding siblings ...)
  2026-02-03  9:38 ` [RFC bpf-next 05/12] bpf: Add multi tracing attach types Jiri Olsa
@ 2026-02-03  9:38 ` Jiri Olsa
  2026-02-03 10:14   ` bot+bpf-ci
  2026-02-05  9:16   ` Menglong Dong
  2026-02-03  9:38 ` [RFC bpf-next 07/12] bpf: Add support to create tracing multi link Jiri Olsa
                   ` (6 subsequent siblings)
  12 siblings, 2 replies; 54+ messages in thread
From: Jiri Olsa @ 2026-02-03  9:38 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
	Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt

Adding bpf_trampoline_multi_attach/detach functions that allow
attaching/detaching a bpf program to/from multiple tracing trampolines.

The attachment is defined by the bpf program and an array of BTF IDs
of the functions to attach the program to.

For each function the attach either allocates a new trampoline or
reuses an existing one, and links it with the bpf program.

The attach works as follows:
- we get all the needed trampolines
- lock them and add the bpf program to each (__bpf_trampoline_link_prog)
- the trampoline_multi_ops passed to __bpf_trampoline_link_prog gathers
  the needed ip->trampoline entries into ftrace_hash objects
- we call update_ftrace_direct_add/mod to update the needed locations
- we unlock all the trampolines

The detach works as follows:
- we lock all the needed trampolines
- remove the program from each (__bpf_trampoline_unlink_prog)
- the trampoline_multi_ops passed to __bpf_trampoline_unlink_prog gathers
  the needed ip->trampoline entries into ftrace_hash objects
- we call update_ftrace_direct_del/mod to update the needed locations
- we unlock and put all the trampolines

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/linux/bpf.h     |  18 ++++
 kernel/bpf/trampoline.c | 186 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 204 insertions(+)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index f06f0a11ccb7..5591660da6e1 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1466,6 +1466,12 @@ struct bpf_trampoline *bpf_trampoline_get(u64 key,
 void bpf_trampoline_put(struct bpf_trampoline *tr);
 int arch_prepare_bpf_dispatcher(void *image, void *buf, s64 *funcs, int num_funcs);
 
+struct bpf_tracing_multi_link;
+int bpf_trampoline_multi_attach(struct bpf_prog *prog, u32 *ids,
+				struct bpf_tracing_multi_link *link);
+int bpf_trampoline_multi_detach(struct bpf_prog *prog,
+				struct bpf_tracing_multi_link *link);
+
 /*
  * When the architecture supports STATIC_CALL replace the bpf_dispatcher_fn
  * indirection with a direct call to the bpf program. If the architecture does
@@ -1898,6 +1904,18 @@ struct bpf_fsession_link {
 	struct bpf_tramp_link fexit;
 };
 
+struct bpf_tracing_multi_node {
+	struct bpf_tramp_node node;
+	struct bpf_trampoline *trampoline;
+};
+
+struct bpf_tracing_multi_link {
+	struct bpf_link link;
+	enum bpf_attach_type attach_type;
+	int nodes_cnt;
+	struct bpf_tracing_multi_node nodes[] __counted_by(nodes_cnt);
+};
+
 struct bpf_raw_tp_link {
 	struct bpf_link link;
 	struct bpf_raw_event_map *btp;
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 2be2f1d0b7d7..b76bb545077b 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -367,7 +367,11 @@ static struct bpf_trampoline *bpf_trampoline_lookup(u64 key, unsigned long ip)
 	head = &trampoline_ip_table[hash_64(tr->ip, TRAMPOLINE_HASH_BITS)];
 	hlist_add_head(&tr->hlist_ip, head);
 	refcount_set(&tr->refcnt, 1);
+#ifdef CONFIG_LOCKDEP
+	mutex_init_with_key(&tr->mutex, &__lockdep_no_track__);
+#else
 	mutex_init(&tr->mutex);
+#endif
 	for (i = 0; i < BPF_TRAMP_MAX; i++)
 		INIT_HLIST_HEAD(&tr->progs_hlist[i]);
 out:
@@ -1400,6 +1404,188 @@ int __weak arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
 	return -ENOTSUPP;
 }
 
+#if defined(CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS) && defined(CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS)
+
+struct fentry_multi_data {
+	struct ftrace_hash *unreg;
+	struct ftrace_hash *modify;
+	struct ftrace_hash *reg;
+};
+
+static void free_fentry_multi_data(struct fentry_multi_data *data)
+{
+	free_ftrace_hash(data->reg);
+	free_ftrace_hash(data->unreg);
+	free_ftrace_hash(data->modify);
+}
+
+static int register_fentry_multi(struct bpf_trampoline *tr, void *new_addr, void *ptr)
+{
+	struct fentry_multi_data *data = ptr;
+	unsigned long ip = ftrace_location(tr->ip);
+
+	return add_ftrace_hash_entry_direct(data->reg, ip,
+					    (unsigned long) new_addr) ? 0 : -ENOMEM;
+}
+
+static int unregister_fentry_multi(struct bpf_trampoline *tr, u32 orig_flags, void *old_addr, void *ptr)
+{
+	struct fentry_multi_data *data = ptr;
+	unsigned long ip = ftrace_location(tr->ip);
+
+	return add_ftrace_hash_entry_direct(data->unreg, ip,
+					    (unsigned long) old_addr) ? 0 : -ENOMEM;
+}
+
+static int modify_fentry_multi(struct bpf_trampoline *tr, u32 orig_flags, void *old_addr, void *new_addr,
+			       bool lock_direct_mutex, void *ptr)
+{
+	struct fentry_multi_data *data = ptr;
+	unsigned long ip = ftrace_location(tr->ip);
+
+	return add_ftrace_hash_entry_direct(data->modify, ip,
+					    (unsigned long) new_addr) ? 0 : -ENOMEM;
+}
+
+static struct bpf_trampoline_ops trampoline_multi_ops = {
+	.register_fentry   = register_fentry_multi,
+	.unregister_fentry = unregister_fentry_multi,
+	.modify_fentry     = modify_fentry_multi,
+};
+
+int bpf_trampoline_multi_attach(struct bpf_prog *prog, u32 *ids,
+				struct bpf_tracing_multi_link *link)
+{
+	struct bpf_attach_target_info tgt_info = {};
+	struct bpf_tracing_multi_node *mnode;
+	int j, i, err, cnt = link->nodes_cnt;
+	struct fentry_multi_data data = {};
+	struct bpf_trampoline *tr;
+	u64 key;
+
+	data.reg = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
+	if (!data.reg)
+		return -ENOMEM;
+
+	data.modify = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
+	if (!data.modify) {
+		free_ftrace_hash(data.reg);
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < cnt; i++) {
+		mnode = &link->nodes[i];
+		err = bpf_check_attach_target(NULL, prog, NULL, ids[i], &tgt_info);
+		if (err)
+			goto rollback_put;
+
+		key = bpf_trampoline_compute_key(NULL, prog->aux->attach_btf, ids[i]);
+
+		tr = bpf_trampoline_get(key, &tgt_info);
+		if (!tr)
+			goto rollback_put;
+
+		mnode->trampoline = tr;
+		mnode->node.prog = prog;
+	}
+
+	for (i = 0; i < cnt; i++) {
+		mnode = &link->nodes[i];
+		tr = mnode->trampoline;
+
+		mutex_lock(&tr->mutex);
+
+		err = __bpf_trampoline_link_prog(&mnode->node, tr, NULL, &trampoline_multi_ops, &data);
+		if (err) {
+			mutex_unlock(&tr->mutex);
+			goto rollback_unlink;
+		}
+	}
+
+	if (ftrace_hash_count(data.reg)) {
+		err = update_ftrace_direct_add(&direct_ops, data.reg);
+		if (err)
+			goto rollback_unlink;
+	}
+
+	if (ftrace_hash_count(data.modify)) {
+		err = update_ftrace_direct_mod(&direct_ops, data.modify, true);
+		if (err) {
+			WARN_ON_ONCE(update_ftrace_direct_del(&direct_ops, data.reg));
+			goto rollback_unlink;
+		}
+	}
+
+	for (i = 0; i < cnt; i++) {
+		tr = link->nodes[i].trampoline;
+		mutex_unlock(&tr->mutex);
+	}
+
+	free_fentry_multi_data(&data);
+	return 0;
+
+rollback_unlink:
+	for (j = 0; j < i; j++) {
+		mnode = &link->nodes[j];
+		tr = mnode->trampoline;
+		WARN_ON_ONCE(__bpf_trampoline_unlink_prog(&mnode->node, tr, NULL,
+			     &trampoline_multi_ops, &data));
+		mutex_unlock(&tr->mutex);
+	}
+
+rollback_put:
+	for (j = 0; j < i; j++) {
+		mnode = &link->nodes[j];
+		bpf_trampoline_put(mnode->trampoline);
+	}
+
+	free_fentry_multi_data(&data);
+	return err;
+}
+
+int bpf_trampoline_multi_detach(struct bpf_prog *prog, struct bpf_tracing_multi_link *link)
+{
+	struct bpf_tracing_multi_node *mnode;
+	struct fentry_multi_data data = {};
+	int i, cnt = link->nodes_cnt;
+	struct bpf_trampoline *tr;
+
+	data.unreg = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
+	if (!data.unreg)
+		return -ENOMEM;
+
+	data.modify = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
+	if (!data.modify) {
+		free_ftrace_hash(data.unreg);
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < cnt; i++) {
+		mnode = &link->nodes[i];
+		tr = link->nodes[i].trampoline;
+
+		mutex_lock(&tr->mutex);
+		WARN_ON_ONCE(__bpf_trampoline_unlink_prog(&mnode->node, tr, NULL,
+							  &trampoline_multi_ops, &data));
+	}
+
+	if (ftrace_hash_count(data.unreg))
+		WARN_ON_ONCE(update_ftrace_direct_del(&direct_ops, data.unreg));
+	if (ftrace_hash_count(data.modify))
+		WARN_ON_ONCE(update_ftrace_direct_mod(&direct_ops, data.modify, true));
+
+	for (i = 0; i < cnt; i++) {
+		tr = link->nodes[i].trampoline;
+		mutex_unlock(&tr->mutex);
+		bpf_trampoline_put(tr);
+	}
+
+	free_fentry_multi_data(&data);
+	return 0;
+}
+
+#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS && CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS */
+
 static int __init init_trampolines(void)
 {
 	int i;
-- 
2.52.0



* [RFC bpf-next 07/12] bpf: Add support to create tracing multi link
  2026-02-03  9:38 [RFC bpf-next 00/12] bpf: tracing_multi link Jiri Olsa
                   ` (5 preceding siblings ...)
  2026-02-03  9:38 ` [RFC bpf-next 06/12] bpf: Add bpf_trampoline_multi_attach/detach functions Jiri Olsa
@ 2026-02-03  9:38 ` Jiri Olsa
  2026-02-03 10:13   ` bot+bpf-ci
  2026-02-04 19:05   ` Andrii Nakryiko
  2026-02-03  9:38 ` [RFC bpf-next 08/12] libbpf: Add btf__find_by_glob_kind function Jiri Olsa
                   ` (5 subsequent siblings)
  12 siblings, 2 replies; 54+ messages in thread
From: Jiri Olsa @ 2026-02-03  9:38 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
	Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt

Adding a new link type that allows attaching a program to multiple
functions by their BTF IDs. The link is represented by struct
bpf_tracing_multi_link.

To configure the link, new fields are added to bpf_attr::link_create
to pass the array of BTF IDs:

  struct {
      __aligned_u64   btf_ids;        /* addresses to attach */
      __u32           btf_ids_cnt;    /* addresses count */
  } tracing_multi;

Each BTF ID identifies a function (BTF_KIND_FUNC) that the link will
attach the bpf program to.

We use the previously added bpf_trampoline_multi_attach/detach
functions to attach/detach the link.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/linux/trace_events.h   |   6 ++
 include/uapi/linux/bpf.h       |   5 ++
 kernel/bpf/syscall.c           |   2 +
 kernel/trace/bpf_trace.c       | 105 +++++++++++++++++++++++++++++++++
 tools/include/uapi/linux/bpf.h |   5 ++
 5 files changed, 123 insertions(+)

diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index 3690221ba3d8..6ea2f30728de 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -778,6 +778,7 @@ int bpf_get_perf_event_info(const struct perf_event *event, u32 *prog_id,
 			    unsigned long *missed);
 int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog);
 int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog);
+int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr);
 #else
 static inline unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx)
 {
@@ -830,6 +831,11 @@ bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
 {
 	return -EOPNOTSUPP;
 }
+static inline int
+bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
+{
+	return -EOPNOTSUPP;
+}
 #endif
 
 enum {
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 68600972a778..010785246576 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1180,6 +1180,7 @@ enum bpf_link_type {
 	BPF_LINK_TYPE_UPROBE_MULTI = 12,
 	BPF_LINK_TYPE_NETKIT = 13,
 	BPF_LINK_TYPE_SOCKMAP = 14,
+	BPF_LINK_TYPE_TRACING_MULTI = 15,
 	__MAX_BPF_LINK_TYPE,
 };
 
@@ -1863,6 +1864,10 @@ union bpf_attr {
 				};
 				__u64		expected_revision;
 			} cgroup;
+			struct {
+				__aligned_u64	btf_ids;	/* addresses to attach */
+				__u32		btf_ids_cnt;	/* addresses count */
+			} tracing_multi;
 		};
 	} link_create;
 
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 2f8932addf96..39217c96e1df 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -5743,6 +5743,8 @@ static int link_create(union bpf_attr *attr, bpfptr_t uattr)
 			ret = bpf_iter_link_attach(attr, uattr, prog);
 		else if (prog->expected_attach_type == BPF_LSM_CGROUP)
 			ret = cgroup_bpf_link_attach(attr, prog);
+		else if (is_tracing_multi(prog->expected_attach_type))
+			ret = bpf_tracing_multi_attach(prog, attr);
 		else
 			ret = bpf_tracing_prog_attach(prog,
 						      attr->link_create.target_fd,
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index f7baeb8278ca..82e625aa04e8 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -3593,3 +3593,108 @@ __bpf_kfunc int bpf_copy_from_user_task_str_dynptr(struct bpf_dynptr *dptr, u64
 }
 
 __bpf_kfunc_end_defs();
+
+#if defined(CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS) && defined(CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS)
+
+static void bpf_tracing_multi_link_release(struct bpf_link *link)
+{
+	struct bpf_tracing_multi_link *tr_link =
+		container_of(link, struct bpf_tracing_multi_link, link);
+
+	bpf_trampoline_multi_detach(link->prog, tr_link);
+}
+
+static void bpf_tracing_multi_link_dealloc(struct bpf_link *link)
+{
+	struct bpf_tracing_multi_link *tr_link =
+		container_of(link, struct bpf_tracing_multi_link, link);
+
+	kfree(tr_link);
+}
+
+static void bpf_tracing_multi_link_show_fdinfo(const struct bpf_link *link,
+					       struct seq_file *seq)
+{
+	struct bpf_tracing_multi_link *tr_link =
+		container_of(link, struct bpf_tracing_multi_link, link);
+
+	seq_printf(seq, "attach_type:\t%d\n", tr_link->attach_type);
+}
+
+static int bpf_tracing_multi_link_fill_link_info(const struct bpf_link *link,
+						 struct bpf_link_info *info)
+{
+	struct bpf_tracing_multi_link *tr_link =
+		container_of(link, struct bpf_tracing_multi_link, link);
+
+	info->tracing.attach_type = tr_link->attach_type;
+	return 0;
+}
+
+static const struct bpf_link_ops bpf_tracing_multi_link_lops = {
+	.release = bpf_tracing_multi_link_release,
+	.dealloc = bpf_tracing_multi_link_dealloc,
+	.show_fdinfo = bpf_tracing_multi_link_show_fdinfo,
+	.fill_link_info = bpf_tracing_multi_link_fill_link_info,
+};
+
+int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
+{
+	struct bpf_tracing_multi_link *link = NULL;
+	struct bpf_link_primer link_primer;
+	u32 cnt, *ids = NULL;
+	u32 __user *uids;
+	int err;
+
+	uids = u64_to_user_ptr(attr->link_create.tracing_multi.btf_ids);
+	cnt = attr->link_create.tracing_multi.btf_ids_cnt;
+
+	if (!cnt || !uids)
+		return -EINVAL;
+
+	ids = kvmalloc_array(cnt, sizeof(*ids), GFP_KERNEL);
+	if (!ids)
+		return -ENOMEM;
+
+	if (copy_from_user(ids, uids, cnt * sizeof(*ids))) {
+		err = -EFAULT;
+		goto error;
+	}
+
+	link = kzalloc(struct_size(link, nodes, cnt), GFP_KERNEL);
+	if (!link) {
+		err = -ENOMEM;
+		goto error;
+	}
+
+	link->nodes_cnt = cnt;
+
+	bpf_link_init(&link->link, BPF_LINK_TYPE_TRACING_MULTI,
+		      &bpf_tracing_multi_link_lops, prog, prog->expected_attach_type);
+
+	err = bpf_link_prime(&link->link, &link_primer);
+	if (err)
+		goto error;
+
+	err = bpf_trampoline_multi_attach(prog, ids, link);
+	kvfree(ids);
+	if (err) {
+		bpf_link_cleanup(&link_primer);
+		return err;
+	}
+	return bpf_link_settle(&link_primer);
+
+error:
+	kvfree(ids);
+	kfree(link);
+	return err;
+}
+
+#else
+
+int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
+{
+	return -EOPNOTSUPP;
+}
+
+#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS && CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS */
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 61f0fe5bc0aa..f54e830d9aae 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1180,6 +1180,7 @@ enum bpf_link_type {
 	BPF_LINK_TYPE_UPROBE_MULTI = 12,
 	BPF_LINK_TYPE_NETKIT = 13,
 	BPF_LINK_TYPE_SOCKMAP = 14,
+	BPF_LINK_TYPE_TRACING_MULTI = 15,
 	__MAX_BPF_LINK_TYPE,
 };
 
@@ -1863,6 +1864,10 @@ union bpf_attr {
 				};
 				__u64		expected_revision;
 			} cgroup;
+			struct {
+				__aligned_u64	btf_ids;	/* addresses to attach */
+				__u32		btf_ids_cnt;	/* addresses count */
+			} tracing_multi;
 		};
 	} link_create;
 
-- 
2.52.0



* [RFC bpf-next 08/12] libbpf: Add btf__find_by_glob_kind function
  2026-02-03  9:38 [RFC bpf-next 00/12] bpf: tracing_multi link Jiri Olsa
                   ` (6 preceding siblings ...)
  2026-02-03  9:38 ` [RFC bpf-next 07/12] bpf: Add support to create tracing multi link Jiri Olsa
@ 2026-02-03  9:38 ` Jiri Olsa
  2026-02-03 10:14   ` bot+bpf-ci
  2026-02-04 19:04   ` Andrii Nakryiko
  2026-02-03  9:38 ` [RFC bpf-next 09/12] libbpf: Add support to create tracing multi link Jiri Olsa
                   ` (4 subsequent siblings)
  12 siblings, 2 replies; 54+ messages in thread
From: Jiri Olsa @ 2026-02-03  9:38 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
	Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt

Adding btf__find_by_glob_kind function that returns an array of
BTF ids that match the given kind and allow/deny patterns.

int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
                           const char *allow_pattern,
                           const char *deny_pattern,
                           __u32 **__ids);

The __ids array is allocated and needs to be manually freed.

The pattern check is done by the glob_match function.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 tools/lib/bpf/btf.c | 41 +++++++++++++++++++++++++++++++++++++++++
 tools/lib/bpf/btf.h |  3 +++
 2 files changed, 44 insertions(+)

diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
index 83fe79ffcb8f..64502b3ef38a 100644
--- a/tools/lib/bpf/btf.c
+++ b/tools/lib/bpf/btf.c
@@ -1010,6 +1010,47 @@ __s32 btf__find_by_name_kind(const struct btf *btf, const char *type_name,
 	return btf_find_by_name_kind(btf, 1, type_name, kind);
 }
 
+int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
+			   const char *allow_pattern, const char *deny_pattern,
+			   __u32 **__ids)
+{
+	__u32 i, nr_types = btf__type_cnt(btf);
+	int cnt = 0, alloc = 0;
+	__u32 *ids = NULL;
+
+	for (i = 1; i < nr_types; i++) {
+		const struct btf_type *t = btf__type_by_id(btf, i);
+		const char *name;
+		__u32 *p;
+
+		if (btf_kind(t) != kind)
+			continue;
+		name = btf__name_by_offset(btf, t->name_off);
+		if (!name)
+			continue;
+
+		if (deny_pattern && glob_match(name, deny_pattern))
+			continue;
+		if (allow_pattern && !glob_match(name, allow_pattern))
+			continue;
+
+		if (cnt == alloc) {
+			alloc = max(16, alloc * 3 / 2);
+			p = libbpf_reallocarray(ids, alloc, sizeof(__u32));
+			if (!p) {
+				free(ids);
+				return -ENOMEM;
+			}
+			ids = p;
+		}
+		ids[cnt] = i;
+		cnt++;
+	}
+
+	*__ids = ids;
+	return cnt;
+}
+
 static bool btf_is_modifiable(const struct btf *btf)
 {
 	return (void *)btf->hdr != btf->raw_data;
diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
index b30008c267c0..d7b47bb0ba99 100644
--- a/tools/lib/bpf/btf.h
+++ b/tools/lib/bpf/btf.h
@@ -661,6 +661,9 @@ static inline struct btf_decl_tag *btf_decl_tag(const struct btf_type *t)
 	return (struct btf_decl_tag *)(t + 1);
 }
 
+int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
+			   const char *allow_pattern, const char *deny_pattern,
+			   __u32 **__ids);
 #ifdef __cplusplus
 } /* extern "C" */
 #endif
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [RFC bpf-next 09/12] libbpf: Add support to create tracing multi link
  2026-02-03  9:38 [RFC bpf-next 00/12] bpf: tracing_multi link Jiri Olsa
                   ` (7 preceding siblings ...)
  2026-02-03  9:38 ` [RFC bpf-next 08/12] libbpf: Add btf__find_by_glob_kind function Jiri Olsa
@ 2026-02-03  9:38 ` Jiri Olsa
  2026-02-03 10:14   ` bot+bpf-ci
  2026-02-04 19:05   ` Andrii Nakryiko
  2026-02-03  9:38 ` [RFC bpf-next 10/12] selftests/bpf: Add fentry tracing multi func test Jiri Olsa
                   ` (3 subsequent siblings)
  12 siblings, 2 replies; 54+ messages in thread
From: Jiri Olsa @ 2026-02-03  9:38 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
	Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt

Adding new interface function to attach programs with tracing
multi link:

  bpf_program__attach_tracing_multi(const struct bpf_program *prog,
                                    const char *pattern,
                                    const struct bpf_tracing_multi_opts *opts);

The program is attached to functions matching the given pattern or
to the BTF IDs specified in the bpf_tracing_multi_opts object.

Adding support for new section names to auto-attach programs with
the above function:

   fentry.multi/pattern
   fexit.multi/pattern

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 tools/lib/bpf/bpf.c      |  7 ++++
 tools/lib/bpf/bpf.h      |  4 ++
 tools/lib/bpf/libbpf.c   | 87 ++++++++++++++++++++++++++++++++++++++++
 tools/lib/bpf/libbpf.h   | 14 +++++++
 tools/lib/bpf/libbpf.map |  1 +
 5 files changed, 113 insertions(+)

diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index 5846de364209..cee1fc6bbfd6 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -790,6 +790,13 @@ int bpf_link_create(int prog_fd, int target_fd,
 		if (!OPTS_ZEROED(opts, uprobe_multi))
 			return libbpf_err(-EINVAL);
 		break;
+	case BPF_TRACE_FENTRY_MULTI:
+	case BPF_TRACE_FEXIT_MULTI:
+		attr.link_create.tracing_multi.btf_ids = ptr_to_u64(OPTS_GET(opts, tracing_multi.btf_ids, 0));
+		attr.link_create.tracing_multi.btf_ids_cnt = OPTS_GET(opts, tracing_multi.btf_ids_cnt, 0);
+		if (!OPTS_ZEROED(opts, tracing_multi))
+			return libbpf_err(-EINVAL);
+		break;
 	case BPF_TRACE_RAW_TP:
 	case BPF_TRACE_FENTRY:
 	case BPF_TRACE_FEXIT:
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index 2c8e88ddb674..005f884f9a0c 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -454,6 +454,10 @@ struct bpf_link_create_opts {
 			__u32 relative_id;
 			__u64 expected_revision;
 		} cgroup;
+		struct {
+			__u32 *btf_ids;
+			__u32  btf_ids_cnt;
+		} tracing_multi;
 	};
 	size_t :0;
 };
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 0c8bf0b5cce4..a16243300083 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -136,6 +136,8 @@ static const char * const attach_type_name[] = {
 	[BPF_NETKIT_PEER]		= "netkit_peer",
 	[BPF_TRACE_KPROBE_SESSION]	= "trace_kprobe_session",
 	[BPF_TRACE_UPROBE_SESSION]	= "trace_uprobe_session",
+	[BPF_TRACE_FENTRY_MULTI]	= "trace_fentry_multi",
+	[BPF_TRACE_FEXIT_MULTI]		= "trace_fexit_multi",
 };
 
 static const char * const link_type_name[] = {
@@ -154,6 +156,7 @@ static const char * const link_type_name[] = {
 	[BPF_LINK_TYPE_UPROBE_MULTI]		= "uprobe_multi",
 	[BPF_LINK_TYPE_NETKIT]			= "netkit",
 	[BPF_LINK_TYPE_SOCKMAP]			= "sockmap",
+	[BPF_LINK_TYPE_TRACING_MULTI]		= "tracing_multi",
 };
 
 static const char * const map_type_name[] = {
@@ -9814,6 +9817,7 @@ static int attach_kprobe_session(const struct bpf_program *prog, long cookie, st
 static int attach_uprobe_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link);
 static int attach_lsm(const struct bpf_program *prog, long cookie, struct bpf_link **link);
 static int attach_iter(const struct bpf_program *prog, long cookie, struct bpf_link **link);
+static int attach_tracing_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link);
 
 static const struct bpf_sec_def section_defs[] = {
 	SEC_DEF("socket",		SOCKET_FILTER, 0, SEC_NONE),
@@ -9862,6 +9866,8 @@ static const struct bpf_sec_def section_defs[] = {
 	SEC_DEF("fexit.s+",		TRACING, BPF_TRACE_FEXIT, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_trace),
 	SEC_DEF("fsession+",		TRACING, BPF_TRACE_FSESSION, SEC_ATTACH_BTF, attach_trace),
 	SEC_DEF("fsession.s+",		TRACING, BPF_TRACE_FSESSION, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_trace),
+	SEC_DEF("fentry.multi+",	TRACING, BPF_TRACE_FENTRY_MULTI, 0, attach_tracing_multi),
+	SEC_DEF("fexit.multi+",		TRACING, BPF_TRACE_FEXIT_MULTI, 0, attach_tracing_multi),
 	SEC_DEF("freplace+",		EXT, 0, SEC_ATTACH_BTF, attach_trace),
 	SEC_DEF("lsm+",			LSM, BPF_LSM_MAC, SEC_ATTACH_BTF, attach_lsm),
 	SEC_DEF("lsm.s+",		LSM, BPF_LSM_MAC, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_lsm),
@@ -12237,6 +12243,87 @@ static int attach_uprobe_multi(const struct bpf_program *prog, long cookie, stru
 	return ret;
 }
 
+struct bpf_link *
+bpf_program__attach_tracing_multi(const struct bpf_program *prog, const char *pattern,
+				  const struct bpf_tracing_multi_opts *opts)
+{
+	LIBBPF_OPTS(bpf_link_create_opts, lopts);
+	__u32 *btf_ids, cnt, *free_ids = NULL;
+	int prog_fd, link_fd, err;
+	struct bpf_link *link;
+
+	btf_ids = OPTS_GET(opts, btf_ids, NULL);
+	cnt = OPTS_GET(opts, cnt, 0);
+
+	if (!pattern && !btf_ids && !cnt)
+		return libbpf_err_ptr(-EINVAL);
+	if (pattern && (btf_ids || cnt))
+		return libbpf_err_ptr(-EINVAL);
+
+	if (pattern) {
+		err = bpf_object__load_vmlinux_btf(prog->obj, true);
+		if (err)
+			return libbpf_err_ptr(err);
+
+		err = btf__find_by_glob_kind(prog->obj->btf_vmlinux, BTF_KIND_FUNC,
+					     pattern, NULL, &btf_ids);
+		if (err <= 0)
+			return libbpf_err_ptr(err < 0 ? err : -EINVAL);
+		cnt = err;
+		free_ids = btf_ids;
+	}
+
+	lopts.tracing_multi.btf_ids = btf_ids;
+	lopts.tracing_multi.btf_ids_cnt = cnt;
+
+	link = calloc(1, sizeof(*link));
+	if (!link)
+		return libbpf_err_ptr(-ENOMEM);
+	link->detach = &bpf_link__detach_fd;
+
+	prog_fd = bpf_program__fd(prog);
+	link_fd = bpf_link_create(prog_fd, 0, prog->expected_attach_type, &lopts);
+	if (link_fd < 0) {
+		err = -errno;
+		pr_warn("prog '%s': failed to attach: %s\n", prog->name, errstr(err));
+		goto error;
+	}
+	link->fd = link_fd;
+	free(free_ids);
+	return link;
+error:
+	free(link);
+	free(free_ids);
+	return libbpf_err_ptr(err);
+}
+
+static int attach_tracing_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link)
+{
+	const char *spec;
+	char *pattern;
+	bool is_fexit;
+	int n;
+
+	/* no auto-attach for SEC("fentry.multi") and SEC("fexit.multi") */
+	if (strcmp(prog->sec_name, "fentry.multi") == 0 ||
+	    strcmp(prog->sec_name, "fexit.multi") == 0)
+		return 0;
+
+	is_fexit = str_has_pfx(prog->sec_name, "fexit.multi/");
+	if (is_fexit)
+		spec = prog->sec_name + sizeof("fexit.multi/") - 1;
+	else
+		spec = prog->sec_name + sizeof("fentry.multi/") - 1;
+
+	n = sscanf(spec, "%m[a-zA-Z0-9_.*?]", &pattern);
+	if (n < 1) {
+		pr_warn("tracing multi pattern is invalid: %s\n", spec);
+		return -EINVAL;
+	}
+
+	*link = bpf_program__attach_tracing_multi(prog, pattern, NULL);
+	free(pattern);
+	return libbpf_get_error(*link);
+}
+
 static inline int add_uprobe_event_legacy(const char *probe_name, bool retprobe,
 					  const char *binary_path, size_t offset)
 {
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index dfc37a615578..fa74a88f6c4a 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -701,6 +701,20 @@ bpf_program__attach_ksyscall(const struct bpf_program *prog,
 			     const char *syscall_name,
 			     const struct bpf_ksyscall_opts *opts);
 
+struct bpf_tracing_multi_opts {
+	/* size of this struct, for forward/backward compatibility */
+	size_t sz;
+	__u32 *btf_ids;
+	size_t cnt;
+	size_t :0;
+};
+
+#define bpf_tracing_multi_opts__last_field cnt
+
+LIBBPF_API struct bpf_link *
+bpf_program__attach_tracing_multi(const struct bpf_program *prog, const char *pattern,
+				  const struct bpf_tracing_multi_opts *opts);
+
 struct bpf_uprobe_opts {
 	/* size of this struct, for forward/backward compatibility */
 	size_t sz;
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index d18fbcea7578..a3ffb21270e9 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -358,6 +358,7 @@ LIBBPF_1.0.0 {
 		bpf_program__attach_ksyscall;
+		bpf_program__attach_tracing_multi;
 		bpf_program__autoattach;
 		bpf_program__set_autoattach;
 		btf__add_enum64;
 		btf__add_enum64_value;
 		libbpf_bpf_attach_type_str;
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [RFC bpf-next 10/12] selftests/bpf: Add fentry tracing multi func test
  2026-02-03  9:38 [RFC bpf-next 00/12] bpf: tracing_multi link Jiri Olsa
                   ` (8 preceding siblings ...)
  2026-02-03  9:38 ` [RFC bpf-next 09/12] libbpf: Add support to create tracing multi link Jiri Olsa
@ 2026-02-03  9:38 ` Jiri Olsa
  2026-02-03 10:13   ` bot+bpf-ci
  2026-02-03  9:38 ` [RFC bpf-next 11/12] selftests/bpf: Add fentry intersected " Jiri Olsa
                   ` (2 subsequent siblings)
  12 siblings, 1 reply; 54+ messages in thread
From: Jiri Olsa @ 2026-02-03  9:38 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
	Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt

Adding selftest for the fentry tracing multi link that attaches to
bpf_fentry_test* functions and checks argument values based on the
traced function.

We need to cast to the real argument types in multi_arg_check,
because the checked value can be shorter than u64.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 tools/testing/selftests/bpf/Makefile          |   3 +-
 .../selftests/bpf/prog_tests/tracing_multi.c  |  48 +++++++
 .../selftests/bpf/progs/tracing_multi_check.c | 132 ++++++++++++++++++
 .../bpf/progs/tracing_multi_fentry.c          |  17 +++
 4 files changed, 199 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/tracing_multi.c
 create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_check.c
 create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_fentry.c

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index c6bf4dfb1495..b12d7b0e7828 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -481,7 +481,7 @@ SKEL_BLACKLIST := btf__% test_pinning_invalid.c test_sk_assign.c
 LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h		\
 		linked_vars.skel.h linked_maps.skel.h 			\
 		test_subskeleton.skel.h test_subskeleton_lib.skel.h	\
-		test_usdt.skel.h
+		test_usdt.skel.h tracing_multi_fentry_test.skel.h
 
 LSKELS := fexit_sleep.c trace_printk.c trace_vprintk.c map_ptr_kern.c 	\
 	core_kern.c core_kern_overflow.c test_ringbuf.c			\
@@ -507,6 +507,7 @@ test_usdt.skel.h-deps := test_usdt.bpf.o test_usdt_multispec.bpf.o
 xsk_xdp_progs.skel.h-deps := xsk_xdp_progs.bpf.o
 xdp_hw_metadata.skel.h-deps := xdp_hw_metadata.bpf.o
 xdp_features.skel.h-deps := xdp_features.bpf.o
+tracing_multi_fentry_test.skel.h-deps := tracing_multi_fentry.bpf.o tracing_multi_check.bpf.o
 
 LINKED_BPF_OBJS := $(foreach skel,$(LINKED_SKELS),$($(skel)-deps))
 LINKED_BPF_SRCS := $(patsubst %.bpf.o,%.c,$(LINKED_BPF_OBJS))
diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
new file mode 100644
index 000000000000..6d45147f0730
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
@@ -0,0 +1,48 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <test_progs.h>
+
+#ifdef __x86_64__
+#include "tracing_multi_fentry_test.skel.h"
+#include "trace_helpers.h"
+
+static void multi_fentry_test(void)
+{
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+	struct tracing_multi_fentry_test *skel = NULL;
+	int err, prog_fd;
+
+	skel = tracing_multi_fentry_test__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "fentry_multi_skel_load"))
+		goto cleanup;
+
+	err = tracing_multi_fentry_test__attach(skel);
+	if (!ASSERT_OK(err, "fentry_attach"))
+		goto cleanup;
+
+	prog_fd = bpf_program__fd(skel->progs.test);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "test_run");
+
+	ASSERT_EQ(skel->bss->test_result_1, 8, "test_result");
+
+cleanup:
+	tracing_multi_fentry_test__destroy(skel);
+}
+
+void __test_tracing_multi_test(void)
+{
+	if (test__start_subtest("fentry/simple"))
+		multi_fentry_test();
+}
+#else
+void __test_tracing_multi_test(void)
+{
+	test__skip();
+}
+#endif /* __x86_64__ */
+
+void test_tracing_multi_test(void)
+{
+	__test_tracing_multi_test();
+}
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_check.c b/tools/testing/selftests/bpf/progs/tracing_multi_check.c
new file mode 100644
index 000000000000..e5efa9884dfd
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_check.c
@@ -0,0 +1,132 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+extern const void bpf_fentry_test1 __ksym;
+extern const void bpf_fentry_test2 __ksym;
+extern const void bpf_fentry_test3 __ksym;
+extern const void bpf_fentry_test4 __ksym;
+extern const void bpf_fentry_test5 __ksym;
+extern const void bpf_fentry_test6 __ksym;
+extern const void bpf_fentry_test7 __ksym;
+extern const void bpf_fentry_test8 __ksym;
+
+void multi_arg_check(__u64 *ctx, __u64 *test_result)
+{
+	void *ip = (void *) bpf_get_func_ip(ctx);
+	__u64 value = 0;
+
+	if (ip == &bpf_fentry_test1) {
+		int a;
+
+		if (bpf_get_func_arg(ctx, 0, &value))
+			return;
+		a = (int) value;
+
+		*test_result += a == 1;
+	} else if (ip == &bpf_fentry_test2) {
+		__u64 b;
+		int a;
+
+		if (bpf_get_func_arg(ctx, 0, &value))
+			return;
+		a = (int) value;
+		if (bpf_get_func_arg(ctx, 1, &value))
+			return;
+		b = value;
+
+		*test_result += a == 2 && b == 3;
+	} else if (ip == &bpf_fentry_test3) {
+		char a, b;
+		__u64 c;
+
+		if (bpf_get_func_arg(ctx, 0, &value))
+			return;
+		a = (char) value;
+		if (bpf_get_func_arg(ctx, 1, &value))
+			return;
+		b = (char) value;
+		if (bpf_get_func_arg(ctx, 2, &value))
+			return;
+		c = value;
+
+		*test_result += a == 4 && b == 5 && c == 6;
+	} else if (ip == &bpf_fentry_test4) {
+		void *a;
+		char b;
+		int c;
+		__u64 d;
+
+		if (bpf_get_func_arg(ctx, 0, &value))
+			return;
+		a = (void*) value;
+		if (bpf_get_func_arg(ctx, 1, &value))
+			return;
+		b = (char) value;
+		if (bpf_get_func_arg(ctx, 2, &value))
+			return;
+		c = (int) value;
+		if (bpf_get_func_arg(ctx, 3, &value))
+			return;
+		d = value;
+
+		*test_result += a == (void *) 7 && b == 8 && c == 9 && d == 10;
+	} else if (ip == &bpf_fentry_test5) {
+		__u64 a;
+		void *b;
+		short c;
+		int d;
+		__u64 e;
+
+		if (bpf_get_func_arg(ctx, 0, &value))
+			return;
+		a = value;
+		if (bpf_get_func_arg(ctx, 1, &value))
+			return;
+		b = (void*) value;
+		if (bpf_get_func_arg(ctx, 2, &value))
+			return;
+		c = (short) value;
+		if (bpf_get_func_arg(ctx, 3, &value))
+			return;
+		d = (int) value;
+		if (bpf_get_func_arg(ctx, 4, &value))
+			return;
+		e = value;
+
+		*test_result += a == 11 && b == (void *) 12 && c == 13 && d == 14 && e == 15;
+	} else if (ip == &bpf_fentry_test6) {
+		__u64 a;
+		void *b;
+		short c;
+		int d;
+		void *e;
+		__u64 f;
+
+		if (bpf_get_func_arg(ctx, 0, &value))
+			return;
+		a = value;
+		if (bpf_get_func_arg(ctx, 1, &value))
+			return;
+		b = (void*) value;
+		if (bpf_get_func_arg(ctx, 2, &value))
+			return;
+		c = (short) value;
+		if (bpf_get_func_arg(ctx, 3, &value))
+			return;
+		d = (int) value;
+		if (bpf_get_func_arg(ctx, 4, &value))
+			return;
+		e = (void *) value;
+		if (bpf_get_func_arg(ctx, 5, &value))
+			return;
+		f = value;
+
+		*test_result += a == 16 && b == (void *) 17 && c == 18 && d == 19 && e == (void *) 20 && f == 21;
+	} else if (ip == &bpf_fentry_test7) {
+		*test_result += 1;
+	} else if (ip == &bpf_fentry_test8) {
+		*test_result += 1;
+	}
+}
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_fentry.c b/tools/testing/selftests/bpf/progs/tracing_multi_fentry.c
new file mode 100644
index 000000000000..628734596114
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_fentry.c
@@ -0,0 +1,17 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+__u64 test_result_1 = 0;
+
+__hidden extern void multi_arg_check(__u64 *ctx, __u64 *test_result);
+
+SEC("fentry.multi/bpf_fentry_test*")
+int BPF_PROG(test, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f)
+{
+	multi_arg_check(ctx, &test_result_1);
+	return 0;
+}
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [RFC bpf-next 11/12] selftests/bpf: Add fentry intersected tracing multi func test
  2026-02-03  9:38 [RFC bpf-next 00/12] bpf: tracing_multi link Jiri Olsa
                   ` (9 preceding siblings ...)
  2026-02-03  9:38 ` [RFC bpf-next 10/12] selftests/bpf: Add fentry tracing multi func test Jiri Olsa
@ 2026-02-03  9:38 ` Jiri Olsa
  2026-02-03  9:38 ` [RFC bpf-next 12/12] selftests/bpf: Add tracing multi benchmark test Jiri Olsa
  2026-02-03 23:17 ` [RFC bpf-next 00/12] bpf: tracing_multi link Alexei Starovoitov
  12 siblings, 0 replies; 54+ messages in thread
From: Jiri Olsa @ 2026-02-03  9:38 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
	Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt

Adding test that attaches 2 tracing multi links to intersecting
sets of functions and makes sure both programs are called with
proper arguments.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 .../selftests/bpf/prog_tests/tracing_multi.c  | 129 ++++++++++++++++++
 .../bpf/progs/tracing_multi_fentry.c          |  16 +++
 2 files changed, 145 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
index 6d45147f0730..3ccf0d4ed1af 100644
--- a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
+++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
@@ -3,8 +3,12 @@
 #include <test_progs.h>
 
 #ifdef __x86_64__
+#include <bpf/btf.h>
+#include <linux/btf.h>
+#include <search.h>
 #include "tracing_multi_fentry_test.skel.h"
 #include "trace_helpers.h"
+#include "bpf/libbpf_internal.h"
 
 static void multi_fentry_test(void)
 {
@@ -30,10 +34,135 @@ static void multi_fentry_test(void)
 	tracing_multi_fentry_test__destroy(skel);
 }
 
+static int compare(const void *pa, const void *pb)
+{
+	return strcmp((char *) pa, (char *) pb);
+}
+
+static __u32 *get_ids(const char *funcs[], int funcs_cnt)
+{
+	size_t cap = 0, cnt = 0;
+	__u32 nr, type_id;
+	void *root = NULL;
+	__u32 *ids = NULL;
+	struct btf *btf;
+	int i, err = -1;
+
+	btf = btf__load_vmlinux_btf();
+	if (!ASSERT_OK_PTR(btf, "btf__load_vmlinux_btf"))
+		return NULL;
+
+	for (i = 0; i < funcs_cnt; i++)
+		tsearch(funcs[i], &root, compare);
+
+	nr = btf__type_cnt(btf);
+	for (type_id = 1; type_id < nr; type_id++) {
+		const struct btf_type *type;
+		const char *str;
+
+		type = btf__type_by_id(btf, type_id);
+		if (!type) {
+			err = -1;
+			break;
+		}
+
+		if (BTF_INFO_KIND(type->info) != BTF_KIND_FUNC)
+			continue;
+
+		str = btf__name_by_offset(btf, type->name_off);
+		if (!str) {
+			err = -1;
+			break;
+		}
+
+		if (!tfind(str, &root, compare))
+			continue;
+
+		err = libbpf_ensure_mem((void **) &ids, &cap, sizeof(*ids), cnt + 1);
+		if (err)
+			break;
+
+		ids[cnt++] = type_id;
+	}
+
+	if (err)
+		free(ids);
+	btf__free(btf);
+	return ids;
+}
+
+static void multi_fentry_intersected_test(void)
+{
+	struct tracing_multi_fentry_test *skel = NULL;
+	LIBBPF_OPTS(bpf_tracing_multi_opts, opts);
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+	const char *funcs_1[] = {
+		"bpf_fentry_test1",
+		"bpf_fentry_test2",
+		"bpf_fentry_test3",
+		"bpf_fentry_test4",
+		"bpf_fentry_test5",
+	};
+	const char *funcs_2[] = {
+		"bpf_fentry_test4",
+		"bpf_fentry_test5",
+		"bpf_fentry_test6",
+		"bpf_fentry_test7",
+		"bpf_fentry_test8",
+	};
+	__u32 *ids_1 = NULL, *ids_2 = NULL;
+	size_t cnt_1 = ARRAY_SIZE(funcs_1);
+	size_t cnt_2 = ARRAY_SIZE(funcs_2);
+	struct bpf_link *link_1 = NULL;
+	struct bpf_link *link_2 = NULL;
+	int err, prog_fd;
+
+	skel = tracing_multi_fentry_test__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "fentry_multi_skel_load"))
+		goto cleanup;
+
+	ids_1 = get_ids(funcs_1, cnt_1);
+	if (!ASSERT_OK_PTR(ids_1, "get_ids"))
+		goto cleanup;
+	ids_2 = get_ids(funcs_2, cnt_2);
+	if (!ASSERT_OK_PTR(ids_2, "get_ids"))
+		goto cleanup;
+
+	opts.btf_ids = ids_1;
+	opts.cnt = cnt_1;
+
+	link_1 = bpf_program__attach_tracing_multi(skel->progs.test_1, NULL, &opts);
+	if (!ASSERT_OK_PTR(link_1, "bpf_program__attach_tracing_multi"))
+		goto cleanup;
+
+	opts.btf_ids = ids_2;
+	opts.cnt = cnt_2;
+
+	link_2 = bpf_program__attach_tracing_multi(skel->progs.test_2, NULL, &opts);
+	if (!ASSERT_OK_PTR(link_2, "bpf_program__attach_tracing_multi"))
+		goto cleanup;
+
+	prog_fd = bpf_program__fd(skel->progs.test);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "test_run");
+
+	ASSERT_EQ(skel->bss->test_result_2, 5, "test_result");
+	ASSERT_EQ(skel->bss->test_result_3, 5, "test_result");
+
+cleanup:
+	free(ids_1);
+	free(ids_2);
+	bpf_link__destroy(link_1);
+	bpf_link__destroy(link_2);
+	tracing_multi_fentry_test__destroy(skel);
+}
+
 void __test_tracing_multi_test(void)
 {
 	if (test__start_subtest("fentry/simple"))
 		multi_fentry_test();
+	if (test__start_subtest("fentry/intersected"))
+		multi_fentry_intersected_test();
 }
 #else
 void __test_tracing_multi_test(void)
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_fentry.c b/tools/testing/selftests/bpf/progs/tracing_multi_fentry.c
index 628734596114..47857209bf9f 100644
--- a/tools/testing/selftests/bpf/progs/tracing_multi_fentry.c
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_fentry.c
@@ -6,6 +6,8 @@
 char _license[] SEC("license") = "GPL";
 
 __u64 test_result_1 = 0;
+__u64 test_result_2 = 0;
+__u64 test_result_3 = 0;
 
 __hidden extern void multi_arg_check(__u64 *ctx, __u64 *test_result);
 
@@ -15,3 +17,17 @@ int BPF_PROG(test, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f)
 	multi_arg_check(ctx, &test_result_1);
 	return 0;
 }
+
+SEC("fentry.multi")
+int BPF_PROG(test_1, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f)
+{
+	multi_arg_check(ctx, &test_result_2);
+	return 0;
+}
+
+SEC("fentry.multi")
+int BPF_PROG(test_2, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f)
+{
+	multi_arg_check(ctx, &test_result_3);
+	return 0;
+}
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [RFC bpf-next 12/12] selftests/bpf: Add tracing multi benchmark test
  2026-02-03  9:38 [RFC bpf-next 00/12] bpf: tracing_multi link Jiri Olsa
                   ` (10 preceding siblings ...)
  2026-02-03  9:38 ` [RFC bpf-next 11/12] selftests/bpf: Add fentry intersected " Jiri Olsa
@ 2026-02-03  9:38 ` Jiri Olsa
  2026-02-03 10:13   ` bot+bpf-ci
  2026-02-03 23:17 ` [RFC bpf-next 00/12] bpf: tracing_multi link Alexei Starovoitov
  12 siblings, 1 reply; 54+ messages in thread
From: Jiri Olsa @ 2026-02-03  9:38 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
	Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt

Adding benchmark for attaching 20k functions.

  # ./test_progs -t tracing_multi/fentry/bench -v
  bpf_testmod.ko is already unloaded.
  Loading bpf_testmod.ko...
  Successfully loaded bpf_testmod.ko.
  multi_fentry_bench_test:PASS:btf__load_vmlinux_btf 0 nsec
  multi_fentry_bench_test:PASS:fentry_multi_skel_load 0 nsec
  multi_fentry_bench_test:PASS:get_syms 0 nsec
  multi_fentry_bench_test:PASS:bpf_program__attach_tracing_multi 0 nsec
  multi_fentry_bench_test: found 20000 functions
  multi_fentry_bench_test: attached in   0.466s
  multi_fentry_bench_test: detached in   0.066s
  #500/3   tracing_multi_test/fentry/bench:OK
  #500     tracing_multi_test:OK
  Summary: 1/1 PASSED, 0 SKIPPED, 0 FAILED
  Successfully unloaded bpf_testmod.ko.

I also tried with 40k:

  multi_fentry_bench_test: found 40000 functions
  multi_fentry_bench_test: attached in   0.964s
  multi_fentry_bench_test: detached in   0.170s

and with 60k (50995 attachable functions):

  multi_fentry_bench_test: found 50995 functions
  multi_fentry_bench_test: attached in   1.256s
  multi_fentry_bench_test: detached in   0.241s

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 .../selftests/bpf/prog_tests/tracing_multi.c  | 186 ++++++++++++++++++
 .../bpf/progs/tracing_multi_fentry.c          |   6 +
 2 files changed, 192 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
index 3ccf0d4ed1af..575454e31bf6 100644
--- a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
+++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
@@ -6,6 +6,9 @@
 #include <bpf/btf.h>
 #include <linux/btf.h>
 #include <search.h>
+#include <bpf/btf.h>
+#include <linux/btf.h>
+#include <search.h>
 #include "tracing_multi_fentry_test.skel.h"
 #include "trace_helpers.h"
 #include "bpf/libbpf_internal.h"
@@ -157,12 +160,195 @@ static void multi_fentry_intersected_test(void)
 	tracing_multi_fentry_test__destroy(skel);
 }
 
+static bool skip_entry(char *name)
+{
+	/*
+	 * We attach to almost all kernel functions and some of them
+	 * will cause 'suspicious RCU usage' when fprobe is attached
+	 * to them. Filter out the current culprits - arch_cpu_idle
+	 * default_idle and rcu_* functions.
+	 */
+	if (!strcmp(name, "arch_cpu_idle"))
+		return true;
+	if (!strcmp(name, "default_idle"))
+		return true;
+	if (!strncmp(name, "rcu_", 4))
+		return true;
+	if (!strcmp(name, "bpf_dispatcher_xdp_func"))
+		return true;
+	if (!strncmp(name, "__ftrace_invalid_address__",
+		     sizeof("__ftrace_invalid_address__") - 1))
+		return true;
+	return false;
+}
+
+#define MAX_BPF_FUNC_ARGS 12
+
+static bool btf_type_is_modifier(const struct btf_type *t)
+{
+	switch (BTF_INFO_KIND(t->info)) {
+	case BTF_KIND_TYPEDEF:
+	case BTF_KIND_VOLATILE:
+	case BTF_KIND_CONST:
+	case BTF_KIND_RESTRICT:
+	case BTF_KIND_TYPE_TAG:
+		return true;
+	}
+	return false;
+}
+
+static bool is_allowed_func(const struct btf *btf, const struct btf_type *t)
+{
+	const struct btf_type *proto;
+	const struct btf_param *args;
+	__u32 i, nargs;
+	__s64 ret;
+
+	proto = btf_type_by_id(btf, t->type);
+	if (BTF_INFO_KIND(proto->info) != BTF_KIND_FUNC_PROTO)
+		return false;
+
+	args = (const struct btf_param *)(proto + 1);
+	nargs = btf_vlen(proto);
+	if (nargs > MAX_BPF_FUNC_ARGS)
+		return false;
+
+	t = btf__type_by_id(btf, proto->type);
+	while (t && btf_type_is_modifier(t))
+		t = btf__type_by_id(btf, t->type);
+
+	if (btf_is_struct(t))
+		return false;
+
+	for (i = 0; i < nargs; i++) {
+		/* No support for variable args */
+		if (i == nargs - 1 && args[i].type == 0)
+			return false;
+
+		/* No support of struct argument size greater than 16 bytes */
+		ret = btf__resolve_size(btf, args[i].type);
+		if (ret < 0 || ret > 16)
+			return false;
+	}
+
+	return true;
+}
+
+static void multi_fentry_bench_test(void)
+{
+	struct tracing_multi_fentry_test *skel = NULL;
+	long attach_start_ns, attach_end_ns;
+	long detach_start_ns, detach_end_ns;
+	double attach_delta, detach_delta;
+	struct bpf_link *link = NULL;
+	size_t i, syms_cnt;
+	char **syms;
+	void *root = NULL;
+	__u32 nr, type_id;
+	struct btf *btf;
+	__u32 *ids = NULL;
+	size_t cap = 0, cnt = 0;
+	int err;
+	LIBBPF_OPTS(bpf_tracing_multi_opts, opts);
+
+	btf = btf__load_vmlinux_btf();
+	if (!ASSERT_OK_PTR(btf, "btf__load_vmlinux_btf"))
+		return;
+
+	skel = tracing_multi_fentry_test__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "fentry_multi_skel_load"))
+		goto cleanup;
+
+	if (!ASSERT_OK(bpf_get_ksyms(&syms, &syms_cnt, true), "get_syms"))
+		goto cleanup;
+
+	for (i = 0; i < syms_cnt; i++) {
+		if (strstr(syms[i], "rcu"))
+			continue;
+		if (strstr(syms[i], "trace"))
+			continue;
+		if (strstr(syms[i], "irq"))
+			continue;
+		if (strstr(syms[i], "bpf_lsm_"))
+			continue;
+		if (!strcmp("migrate_enable", syms[i]))
+			continue;
+		if (!strcmp("migrate_disable", syms[i]))
+			continue;
+		if (!strcmp("__bpf_prog_enter_recur", syms[i]))
+			continue;
+		if (!strcmp("__bpf_prog_exit_recur", syms[i]))
+			continue;
+		if (!strcmp("preempt_count_sub", syms[i]))
+			continue;
+		if (!strcmp("preempt_count_add", syms[i]))
+			continue;
+		if (skip_entry(syms[i]))
+			continue;
+		tsearch(syms[i], &root, compare);
+	}
+
+	nr = btf__type_cnt(btf);
+	for (type_id = 1; type_id < nr && cnt < 20000; type_id++) {
+		const struct btf_type *type;
+		const char *str;
+
+		type = btf__type_by_id(btf, type_id);
+		if (!type)
+			break;
+
+		if (BTF_INFO_KIND(type->info) != BTF_KIND_FUNC)
+			continue;
+
+		str = btf__name_by_offset(btf, type->name_off);
+		if (!str)
+			break;
+
+		if (!tfind(str, &root, compare))
+			continue;
+
+		if (!is_allowed_func(btf, type))
+			continue;
+
+		err = libbpf_ensure_mem((void **) &ids, &cap, sizeof(*ids), cnt + 1);
+		if (err)
+			break;
+
+		ids[cnt++] = type_id;
+	}
+
+	opts.btf_ids = ids;
+	opts.cnt = cnt;
+
+	attach_start_ns = get_time_ns();
+	link = bpf_program__attach_tracing_multi(skel->progs.bench, NULL, &opts);
+	attach_end_ns = get_time_ns();
+	if (!ASSERT_OK_PTR(link, "bpf_program__attach_tracing_multi"))
+		goto cleanup;
+
+	detach_start_ns = get_time_ns();
+	bpf_link__destroy(link);
+	detach_end_ns = get_time_ns();
+
+	attach_delta = (attach_end_ns - attach_start_ns) / 1000000000.0;
+	detach_delta = (detach_end_ns - detach_start_ns) / 1000000000.0;
+
+	printf("%s: found %lu functions\n", __func__, cnt);
+	printf("%s: attached in %7.3lfs\n", __func__, attach_delta);
+	printf("%s: detached in %7.3lfs\n", __func__, detach_delta);
+
+cleanup:
+	tracing_multi_fentry_test__destroy(skel);
+}
+
 void __test_tracing_multi_test(void)
 {
 	if (test__start_subtest("fentry/simple"))
 		multi_fentry_test();
 	if (test__start_subtest("fentry/intersected"))
 		multi_fentry_intersected_test();
+	if (test__start_subtest("fentry/bench"))
+		multi_fentry_bench_test();
 }
 #else
 void __test_tracing_multi_test(void)
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_fentry.c b/tools/testing/selftests/bpf/progs/tracing_multi_fentry.c
index 47857209bf9f..0f39c446792d 100644
--- a/tools/testing/selftests/bpf/progs/tracing_multi_fentry.c
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_fentry.c
@@ -31,3 +31,9 @@ int BPF_PROG(test_2, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f)
 	multi_arg_check(ctx, &test_result_3);
 	return 0;
 }
+
+SEC("fentry.multi")
+int BPF_PROG(bench, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f)
+{
+	return 0;
+}
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* Re: [RFC bpf-next 05/12] bpf: Add multi tracing attach types
  2026-02-03  9:38 ` [RFC bpf-next 05/12] bpf: Add multi tracing attach types Jiri Olsa
@ 2026-02-03 10:13   ` bot+bpf-ci
  2026-02-17 22:05     ` Jiri Olsa
  2026-02-04  2:20   ` Leon Hwang
  1 sibling, 1 reply; 54+ messages in thread
From: bot+bpf-ci @ 2026-02-03 10:13 UTC (permalink / raw)
  To: jolsa, ast, daniel, andrii
  Cc: bpf, linux-trace-kernel, kafai, eddyz87, songliubraving, yhs,
	menglong8.dong, rostedt, ast, andrii, daniel, martin.lau, eddyz87,
	yonghong.song, clm, ihor.solodrai

[-- Attachment #1: Type: text/plain, Size: 1386 bytes --]

> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 6b62b6d57..fb52ba2f7 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c

[ ... ]

> @@ -25069,7 +25074,8 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
>  		return 0;
>  	} else if (prog->expected_attach_type == BPF_TRACE_ITER) {
>  		return bpf_iter_prog_supported(prog);
> -	}
> +	} else if (is_tracing_multi(prog->expected_attach_type))
> +		return prog->type == BPF_PROG_TYPE_TRACING ? 0 : -EINVAL;

This isn't a bug, but should this branch have braces for consistency with the
preceding branches in the same if/else chain? The BPF_TRACE_RAW_TP and
BPF_TRACE_ITER branches both use braces.


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/21625440628

AI-authorship-score: low
AI-authorship-explanation: The commit message contains natural colloquial language and personal attribution typical of human kernel developers, and the code follows established BPF patterns with typical human oversights in style consistency.
issues-found: 1
issue-severity-score: low
issue-severity-explanation: Minor coding style inconsistency with missing braces in an else-if branch that does not affect functionality.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC bpf-next 10/12] selftests/bpf: Add fentry tracing multi func test
  2026-02-03  9:38 ` [RFC bpf-next 10/12] selftests/bpf: Add fentry tracing multi func test Jiri Olsa
@ 2026-02-03 10:13   ` bot+bpf-ci
  2026-02-17 22:06     ` Jiri Olsa
  0 siblings, 1 reply; 54+ messages in thread
From: bot+bpf-ci @ 2026-02-03 10:13 UTC (permalink / raw)
  To: jolsa, ast, daniel, andrii
  Cc: bpf, linux-trace-kernel, kafai, eddyz87, songliubraving, yhs,
	menglong8.dong, rostedt, ast, andrii, daniel, martin.lau, eddyz87,
	yonghong.song, clm, ihor.solodrai

[-- Attachment #1: Type: text/plain, Size: 1732 bytes --]

> diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_check.c b/tools/testing/selftests/bpf/progs/tracing_multi_check.c
> new file mode 100644
> index 000000000..e5efa9884
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/tracing_multi_check.c

[ ... ]

> +	} else if (ip == &bpf_fentry_test3) {
> +		char a, b;
                      ^

The commit message states "We need to cast to real arguments types in
multi_arg_check, because the checked value can be shorter than u64."

Should variable b be declared as int instead of char?

The kernel function signature is:

    noinline int bpf_fentry_test3(char a, int b, u64 c)

The second parameter is int, not char. While the test passes with the
current test value of 5, this appears inconsistent with the stated goal
of matching argument types.

> +		__u64 c;

[ ... ]

> +	} else if (ip == &bpf_fentry_test6) {

[ ... ]

> +		e = (void*) value;;
> +		if (bpf_get_func_arg(ctx, 5, &value))
> +			return;
> +		f = value;;
                      ^^

This isn't a bug, but there are extra semicolons on these two lines.


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/21625440628

AI-authorship-score: low
AI-authorship-explanation: The commit follows standard kernel conventions with minor human-typical inconsistencies, and the author is a known BPF subsystem developer.
issues-found: 2
issue-severity-score: low
issue-severity-explanation: Minor code quality issues in test code: a type mismatch in variable declaration and extraneous semicolons, neither affecting test functionality.


* Re: [RFC bpf-next 12/12] selftests/bpf: Add tracing multi benchmark test
  2026-02-03  9:38 ` [RFC bpf-next 12/12] selftests/bpf: Add tracing multi benchmark test Jiri Olsa
@ 2026-02-03 10:13   ` bot+bpf-ci
  2026-02-17 22:06     ` Jiri Olsa
  0 siblings, 1 reply; 54+ messages in thread
From: bot+bpf-ci @ 2026-02-03 10:13 UTC (permalink / raw)
  To: jolsa, ast, daniel, andrii
  Cc: bpf, linux-trace-kernel, kafai, eddyz87, songliubraving, yhs,
	menglong8.dong, rostedt, ast, andrii, daniel, martin.lau, eddyz87,
	yonghong.song, clm, ihor.solodrai

[-- Attachment #1: Type: text/plain, Size: 1485 bytes --]

> diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
> index 3ccf0d4ed..575454e31 100644
> --- a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
> +++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
> @@ -6,6 +6,9 @@
>  #include <bpf/btf.h>
>  #include <linux/btf.h>
>  #include <search.h>
> +#include <bpf/btf.h>
> +#include <linux/btf.h>
> +#include <search.h>

These three includes are duplicates of the lines immediately above them.
Were they added by mistake?

[ ... ]

> +	t = btf__type_by_id(btf, proto->type);
> +        while (t && btf_type_is_modifier(t))
> +		t = btf__type_by_id(btf, t->type);

The while line uses spaces for indentation instead of a tab.


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/21625440628

AI-authorship-score: low
AI-authorship-explanation: The commit shows characteristics typical of human-written code including copy-paste errors (duplicate includes), inconsistent whitespace, informal commit message language, and practical iterative testing with real measured results.
issues-found: 2
issue-severity-score: low
issue-severity-explanation: Both issues are minor code quality problems in test code: duplicate header includes and inconsistent whitespace (spaces vs tabs).


* Re: [RFC bpf-next 07/12] bpf: Add support to create tracing multi link
  2026-02-03  9:38 ` [RFC bpf-next 07/12] bpf: Add support to create tracing multi link Jiri Olsa
@ 2026-02-03 10:13   ` bot+bpf-ci
  2026-02-17 22:05     ` Jiri Olsa
  2026-02-04 19:05   ` Andrii Nakryiko
  1 sibling, 1 reply; 54+ messages in thread
From: bot+bpf-ci @ 2026-02-03 10:13 UTC (permalink / raw)
  To: jolsa, ast, daniel, andrii
  Cc: bpf, linux-trace-kernel, kafai, eddyz87, songliubraving, yhs,
	menglong8.dong, rostedt, ast, andrii, daniel, martin.lau, eddyz87,
	yonghong.song, clm, ihor.solodrai

[-- Attachment #1: Type: text/plain, Size: 3355 bytes --]

> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 68600972a..010785246 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h

[ ... ]

> @@ -1863,6 +1864,10 @@ union bpf_attr {
> 				};
> 				__u64		expected_revision;
> 			} cgroup;
> +			struct {
> +				__aligned_u64	btf_ids;	/* addresses to attach */
> +				__u32		btf_ids_cnt;	/* addresses count */
> +			} tracing_multi;

The comments say "addresses to attach" and "addresses count", but the field is
named btf_ids and contains BTF type IDs, not addresses. Should these comments
say "BTF IDs to attach" and "BTF IDs count" instead?

> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index f7baeb827..82e625aa0 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c

[ ... ]

> +static void bpf_tracing_multi_link_show_fdinfo(const struct bpf_link *link,
> +					       struct seq_file *seq)
> +{
> +	struct bpf_tracing_multi_link *tr_link =
> +		container_of(link, struct bpf_tracing_multi_link, link);
> +
> +	seq_printf(seq, "attach_type:\t%d\n", tr_link->attach_type);
> +}
> +
> +static int bpf_tracing_multi_link_fill_link_info(const struct bpf_link *link,
> +						 struct bpf_link_info *info)
> +{
> +	struct bpf_tracing_multi_link *tr_link =
> +		container_of(link, struct bpf_tracing_multi_link, link);
> +
> +	info->tracing.attach_type = tr_link->attach_type;
> +	return 0;
> +}

Both functions above use tr_link->attach_type, but looking at
bpf_tracing_multi_attach() below, this field is never initialized:

> +int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
> +{

[ ... ]

> +	link = kzalloc(struct_size(link, nodes, cnt), GFP_KERNEL);
> +	if (!link) {
> +		err = -ENOMEM;
> +		goto error;
> +	}
> +
> +	link->nodes_cnt = cnt;
> +
> +	bpf_link_init(&link->link, BPF_LINK_TYPE_TRACING_MULTI,
> +		      &bpf_tracing_multi_link_lops, prog, prog->expected_attach_type);

bpf_link_init() sets link->link.attach_type (inside the embedded struct bpf_link),
but struct bpf_tracing_multi_link has a separate attach_type field that remains
zero-initialized from kzalloc.

The show_fdinfo and fill_link_info callbacks read tr_link->attach_type which
will always be 0 (BPF_CGROUP_INET_INGRESS) instead of the actual attach type
(BPF_TRACE_FENTRY_MULTI or BPF_TRACE_FEXIT_MULTI).

Looking at the existing bpf_tracing_link_show_fdinfo() and
bpf_tracing_link_fill_link_info() in kernel/bpf/syscall.c, they use
link->attach_type (the inner struct's field). Should tr_link->attach_type
either be removed and link->attach_type used instead, or should it be
explicitly initialized to prog->expected_attach_type?


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/21625440628

AI-authorship-score: low
AI-authorship-explanation: Code style and structure are consistent with established kernel BPF patterns, with typical kernel author conventions in commit message and implementation.
issues-found: 2
issue-severity-score: medium
issue-severity-explanation: Uninitialized attach_type field causes wrong data to be returned to userspace via fdinfo and bpf_link_info, affecting observability but not causing crashes.


* Re: [RFC bpf-next 08/12] libbpf: Add btf__find_by_glob_kind function
  2026-02-03  9:38 ` [RFC bpf-next 08/12] libbpf: Add btf__find_by_glob_kind function Jiri Olsa
@ 2026-02-03 10:14   ` bot+bpf-ci
  2026-02-04 19:04   ` Andrii Nakryiko
  1 sibling, 0 replies; 54+ messages in thread
From: bot+bpf-ci @ 2026-02-03 10:14 UTC (permalink / raw)
  To: jolsa, ast, daniel, andrii
  Cc: bpf, linux-trace-kernel, kafai, eddyz87, songliubraving, yhs,
	menglong8.dong, rostedt, ast, andrii, daniel, martin.lau, eddyz87,
	yonghong.song, clm, ihor.solodrai

[-- Attachment #1: Type: text/plain, Size: 2764 bytes --]

> diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
> index 83fe79ffc..64502b3ef 100644
> --- a/tools/lib/bpf/btf.c
> +++ b/tools/lib/bpf/btf.c

[ ... ]

> +int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
> +			   const char *allow_pattern, const char *deny_pattern,
> +			   __u32 **__ids)
> +{
> +	__u32 i, nr_types = btf__type_cnt(btf);
> +	int cnt = 0, alloc = 0;
> +	__u32 *ids = NULL;
> +
> +	for (i = 1; i < nr_types; i++) {
> +		const struct btf_type *t = btf__type_by_id(btf, i);
> +		const char *name;
> +		__u32 *p;
> +
> +		if (btf_kind(t) != kind)
> +			continue;
> +		name = btf__name_by_offset(btf, t->name_off);
> +		if (!name)
> +			continue;
> +
> +		if (deny_pattern && glob_match(name, deny_pattern))
> +			continue;
> +		if (allow_pattern && !glob_match(name, allow_pattern))
> +			continue;
> +
> +		if (cnt == alloc) {
> +			alloc = max(16, alloc * 3 / 2);
> +			p = libbpf_reallocarray(ids, alloc, sizeof(__u32));
> +			if (!p) {
> +				free(ids);
> +				return -ENOMEM;

Should this use libbpf_err(-ENOMEM) to set errno for userspace callers?
Other similar functions in btf.c like btf_find_by_name_kind() use
libbpf_err() for their error returns:

    return libbpf_err(-ENOENT);

> +			}
> +			ids = p;
> +		}
> +		ids[cnt] = i;
> +		cnt++;
> +	}
> +
> +	*__ids = ids;
> +	return cnt;
> +}

> diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
> index b30008c26..d7b47bb0b 100644
> --- a/tools/lib/bpf/btf.h
> +++ b/tools/lib/bpf/btf.h

[ ... ]

> +int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
> +			   const char *allow_pattern, const char *deny_pattern,
> +			   __u32 **__ids);

Is this intended to be a public API? The btf__ prefix and placement in btf.h
suggest it should be public, but it is missing:

1) The LIBBPF_API macro that all other btf__* functions in this header have,
   for example:

    LIBBPF_API __s32 btf__find_by_name_kind(const struct btf *btf,
                                            const char *type_name, __u32 kind);

2) An entry in libbpf.map to export the symbol from the shared library.


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/21625440628

AI-authorship-score: low
AI-authorship-explanation: The code and commit message follow established kernel and libbpf conventions without telltale signs of AI generation, and Jiri Olsa is a well-known BPF kernel maintainer.
issues-found: 2
issue-severity-score: low
issue-severity-explanation: Missing libbpf_err wrapper and incomplete public API declaration are API consistency issues that do not cause crashes or security problems.


* Re: [RFC bpf-next 06/12] bpf: Add bpf_trampoline_multi_attach/detach functions
  2026-02-03  9:38 ` [RFC bpf-next 06/12] bpf: Add bpf_trampoline_multi_attach/detach functions Jiri Olsa
@ 2026-02-03 10:14   ` bot+bpf-ci
  2026-02-17 22:05     ` Jiri Olsa
  2026-02-05  9:16   ` Menglong Dong
  1 sibling, 1 reply; 54+ messages in thread
From: bot+bpf-ci @ 2026-02-03 10:14 UTC (permalink / raw)
  To: jolsa, ast, daniel, andrii
  Cc: bpf, linux-trace-kernel, kafai, eddyz87, songliubraving, yhs,
	menglong8.dong, rostedt, ast, andrii, daniel, martin.lau, eddyz87,
	yonghong.song, clm, ihor.solodrai

[-- Attachment #1: Type: text/plain, Size: 2453 bytes --]

> diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> index 2be2f1d0b..b76bb5450 100644
> --- a/kernel/bpf/trampoline.c
> +++ b/kernel/bpf/trampoline.c

[ ... ]

> +int bpf_trampoline_multi_attach(struct bpf_prog *prog, u32 *ids,
> +				struct bpf_tracing_multi_link *link)
> +{
> +	struct bpf_attach_target_info tgt_info = {};
> +	struct bpf_tracing_multi_node *mnode;
> +	int j, i, err, cnt = link->nodes_cnt;
> +	struct fentry_multi_data data = {};
> +	struct bpf_trampoline *tr;
> +	u64 key;
> +
> +	data.reg = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
> +	if (!data.reg)
> +		return -ENOMEM;
> +
> +	data.modify = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
> +	if (!data.modify) {
> +		free_ftrace_hash(data.reg);
> +		return -ENOMEM;
> +	}
> +
> +	for (i = 0; i < cnt; i++) {
> +		mnode = &link->nodes[i];
> +		err = bpf_check_attach_target(NULL, prog, NULL, ids[i], &tgt_info);
> +		if (err)
> +			goto rollback_put;
> +
> +		key = bpf_trampoline_compute_key(NULL, prog->aux->attach_btf, ids[i]);
> +
> +		tr = bpf_trampoline_get(key, &tgt_info);
> +		if (!tr)
> +			goto rollback_put;

When bpf_trampoline_get() returns NULL here, the code jumps to
rollback_put without setting err. If bpf_check_attach_target() succeeded
on this iteration (setting err to 0), the function will return 0 instead
of an error code.

Should this be something like:

    tr = bpf_trampoline_get(key, &tgt_info);
    if (!tr) {
        err = -ENOMEM;
        goto rollback_put;
    }

> +
> +		mnode->trampoline = tr;
> +		mnode->node.prog = prog;
> +	}

[ ... ]

> +rollback_put:
> +	for (j = 0; j < i; j++) {
> +		mnode = &link->nodes[j];
> +		bpf_trampoline_put(mnode->trampoline);
> +	}
> +
> +	free_fentry_multi_data(&data);
> +	return err;
> +}


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/21625440628

AI-authorship-score: low
AI-authorship-explanation: The code follows established kernel patterns and contains a human-typical oversight error (missing error assignment); the commit message has a minor copy-paste error typical of human editing.
issues-found: 1
issue-severity-score: low
issue-severity-explanation: Missing error code when bpf_trampoline_get fails causes silent failure under memory pressure, but does not cause crashes or data corruption.


* Re: [RFC bpf-next 09/12] libbpf: Add support to create tracing multi link
  2026-02-03  9:38 ` [RFC bpf-next 09/12] libbpf: Add support to create tracing multi link Jiri Olsa
@ 2026-02-03 10:14   ` bot+bpf-ci
  2026-02-17 22:05     ` Jiri Olsa
  2026-02-04 19:05   ` Andrii Nakryiko
  1 sibling, 1 reply; 54+ messages in thread
From: bot+bpf-ci @ 2026-02-03 10:14 UTC (permalink / raw)
  To: jolsa, ast, daniel, andrii
  Cc: bpf, linux-trace-kernel, kafai, eddyz87, songliubraving, yhs,
	menglong8.dong, rostedt, ast, andrii, daniel, martin.lau, eddyz87,
	yonghong.song, clm, ihor.solodrai

[-- Attachment #1: Type: text/plain, Size: 4183 bytes --]

> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 0c8bf0b5c..a16243300 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c

[ ... ]

> +struct bpf_link *
> +bpf_program__attach_tracing_multi(const struct bpf_program *prog, const char *pattern,
> +				  const struct bpf_tracing_multi_opts *opts)
> +{
> +	LIBBPF_OPTS(bpf_link_create_opts, lopts);
> +	__u32 *btf_ids, cnt, *free_ids = NULL;
> +	int prog_fd, link_fd, err;
> +	struct bpf_link *link;
> +
> +	btf_ids = OPTS_GET(opts, btf_ids, false);
> +	cnt = OPTS_GET(opts, cnt, false);
> +
> +	if (!pattern && !btf_ids && !cnt)
> +		return libbpf_err_ptr(-EINVAL);
> +	if (pattern && (btf_ids || cnt))
> +		return libbpf_err_ptr(-EINVAL);
> +
> +	if (pattern) {
> +		err = bpf_object__load_vmlinux_btf(prog->obj, true);
> +		if (err)
> +			return libbpf_err_ptr(err);
> +
> +		cnt = btf__find_by_glob_kind(prog->obj->btf_vmlinux, BTF_KIND_FUNC,
> +					     pattern, NULL, &btf_ids);
> +		if (cnt <= 0)
> +			return libbpf_err_ptr(-EINVAL);
> +		free_ids = btf_ids;
> +	}
> +
> +	lopts.tracing_multi.btf_ids = btf_ids;
> +	lopts.tracing_multi.btf_ids_cnt = cnt;
> +
> +	link = calloc(1, sizeof(*link));
> +	if (!link)
> +		return libbpf_err_ptr(-ENOMEM);

If calloc fails here and pattern was provided, btf__find_by_glob_kind() will
have allocated btf_ids (tracked by free_ids). Does this leak free_ids? The
error label below frees it properly, but this return bypasses that path.

Something like:

    if (!link) {
        free(free_ids);
        return libbpf_err_ptr(-ENOMEM);
    }

or using goto error would fix this.

> +	link->detach = &bpf_link__detach_fd;
> +
> +	prog_fd = bpf_program__fd(prog);
> +	link_fd = bpf_link_create(prog_fd, 0, prog->expected_attach_type, &lopts);
> +	if (link_fd < 0) {
> +		err = -errno;
> +		pr_warn("prog '%s': failed to attach: %s\n", prog->name, errstr(err));
> +		goto error;
> +	}
> +	link->fd = link_fd;
> +	free(free_ids);
> +	return link;
> +error:
> +	free(link);
> +	free(free_ids);
> +	return libbpf_err_ptr(err);
> +}
> +
> +static int attach_tracing_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link)
> +{
> +	const char *spec;
> +	char *pattern;
> +	bool is_fexit;
> +	int n;
> +
> +	/* no auto-attach for SEC("fentry.multi") and SEC("fexit.multi") */
> +	if (strcmp(prog->sec_name, "fentry.multi") == 0 ||
> +	    strcmp(prog->sec_name, "fexit.multi") == 0)
> +		return 0;
> +
> +	is_fexit = str_has_pfx(prog->sec_name, "fexit.multi/");
> +	if (is_fexit)
> +		spec = prog->sec_name + sizeof("fexit.multi/") - 1;
> +	else
> +		spec = prog->sec_name + sizeof("fentry.multi/") - 1;
> +
> +	n = sscanf(spec, "%m[a-zA-Z0-9_.*?]", &pattern);
> +	if (n < 1) {
> +		pr_warn("tracing multi pattern is invalid: %s\n", pattern);

When sscanf fails to match (n < 1), pattern is never assigned because %m only
allocates on successful match. Should this print spec instead of the
uninitialized pattern? Compare with attach_kprobe_multi() which uses:

    pr_warn("kprobe multi pattern is invalid: %s\n", spec);

> +		return -EINVAL;
> +	}
> +
> +	*link = bpf_program__attach_tracing_multi(prog, pattern, NULL);
> +	return libbpf_get_error(*link);

Is pattern being leaked here? sscanf with %m allocates the string, but there
is no free(pattern) before returning. Compare with attach_kprobe_multi():

    *link = bpf_program__attach_kprobe_multi_opts(prog, pattern, &opts);
    free(pattern);
    return libbpf_get_error(*link);

> +}


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/21625440628

AI-authorship-score: low
AI-authorship-explanation: The commit contains typical human copy-paste errors (forgetting to add free(), using wrong variable in error message) and a minor grammatical mistake, suggesting human authorship.
issues-found: 3
issue-severity-score: medium
issue-severity-explanation: Memory leaks on every use of fentry.multi/fexit.multi auto-attach and potential undefined behavior from using uninitialized variable in error path.


* Re: [RFC bpf-next 01/12] ftrace: Add ftrace_hash_count function
  2026-02-03  9:38 ` [RFC bpf-next 01/12] ftrace: Add ftrace_hash_count function Jiri Olsa
@ 2026-02-03 15:40   ` Steven Rostedt
  2026-02-04 12:06     ` Jiri Olsa
  0 siblings, 1 reply; 54+ messages in thread
From: Steven Rostedt @ 2026-02-03 15:40 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, Menglong Dong

On Tue,  3 Feb 2026 10:38:08 +0100
Jiri Olsa <jolsa@kernel.org> wrote:

> diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
> index 705db0a6d995..6dade0eaee46 100644
> --- a/include/linux/ftrace.h
> +++ b/include/linux/ftrace.h
> @@ -413,6 +413,7 @@ struct ftrace_hash *alloc_ftrace_hash(int size_bits);
>  void free_ftrace_hash(struct ftrace_hash *hash);
>  struct ftrace_func_entry *add_ftrace_hash_entry_direct(struct ftrace_hash *hash,
>  						       unsigned long ip, unsigned long direct);
> +unsigned long ftrace_hash_count(struct ftrace_hash *hash);
>  
>  /* The hash used to know what functions callbacks trace */
>  struct ftrace_ops_hash {
> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index b12dbd93ae1c..be9e0ac1fd95 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -6284,7 +6284,7 @@ int modify_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
>  }
>  EXPORT_SYMBOL_GPL(modify_ftrace_direct);
>  
> -static unsigned long hash_count(struct ftrace_hash *hash)
> +unsigned long ftrace_hash_count(struct ftrace_hash *hash)
>  {
>  	return hash ? hash->count : 0;
>  }

I think this may make it less likely to inline this function, so let's just
add an external function, and even add a "inline" to the original:

static inline unsigned long hash_count(struct ftrace_hash *hash)
{
	return hash ? hash->count : 0;
}

unsigned long ftrace_hash_count(struct ftrace_hash *hash)
{
	return hash_count(hash);
}

And don't modify anything else.

-- Steve



* Re: [RFC bpf-next 00/12] bpf: tracing_multi link
  2026-02-03  9:38 [RFC bpf-next 00/12] bpf: tracing_multi link Jiri Olsa
                   ` (11 preceding siblings ...)
  2026-02-03  9:38 ` [RFC bpf-next 12/12] selftests/bpf: Add tracing multi benchmark test Jiri Olsa
@ 2026-02-03 23:17 ` Alexei Starovoitov
  2026-02-04 12:36   ` Jiri Olsa
  12 siblings, 1 reply; 54+ messages in thread
From: Alexei Starovoitov @ 2026-02-03 23:17 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, Menglong Dong, Steven Rostedt

On Tue, Feb 3, 2026 at 1:38 AM Jiri Olsa <jolsa@kernel.org> wrote:
>
> hi,
> as an option to Meglong's change [1] I'm sending proposal for tracing_multi
> link that does not add static trampoline but attaches program to all needed
> trampolines.
>
> This approach keeps the same performance but has some drawbacks:
>
>  - when attaching 20k functions we allocate and attach 20k trampolines
>  - during attachment we hold each trampoline mutex, so for above
>    20k functions we will hold 20k mutexes during the attachment,
>    should be very prone to deadlock, but haven't hit it yet

If you check that it's sorted and always take them in the same order
then there will be no deadlock.
Or just grab one global mutex first and then grab trampolines mutexes
next in any order. The global one will serialize this attach operation.

> It looks the trampoline allocations/generation might not be big a problem
> and I'll try to find a solution for holding that many mutexes. If there's
> no better solution I think having one read/write mutex for tracing multi
> link attach/detach should work.

If you mean to have one global mutex as I proposed above then I don't see
a downside. It only serializes multiple libbpf calls.

overall makes sense to me.


* Re: [RFC bpf-next 05/12] bpf: Add multi tracing attach types
  2026-02-03  9:38 ` [RFC bpf-next 05/12] bpf: Add multi tracing attach types Jiri Olsa
  2026-02-03 10:13   ` bot+bpf-ci
@ 2026-02-04  2:20   ` Leon Hwang
  2026-02-04 12:41     ` Jiri Olsa
  1 sibling, 1 reply; 54+ messages in thread
From: Leon Hwang @ 2026-02-04  2:20 UTC (permalink / raw)
  To: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
	Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt



On 3/2/26 17:38, Jiri Olsa wrote:
> Adding new attach type to identify multi tracing attachment:
>   BPF_TRACE_FENTRY_MULTI
>   BPF_TRACE_FEXIT_MULTI
> 
At this point, should we also introduce BPF_TRACE_FSESSION_MULTI
alongside BPF_TRACE_FENTRY_MULTI and BPF_TRACE_FEXIT_MULTI?

Thanks,
Leon



* Re: [RFC bpf-next 01/12] ftrace: Add ftrace_hash_count function
  2026-02-03 15:40   ` Steven Rostedt
@ 2026-02-04 12:06     ` Jiri Olsa
  0 siblings, 0 replies; 54+ messages in thread
From: Jiri Olsa @ 2026-02-04 12:06 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, Menglong Dong

On Tue, Feb 03, 2026 at 10:40:32AM -0500, Steven Rostedt wrote:
> On Tue,  3 Feb 2026 10:38:08 +0100
> Jiri Olsa <jolsa@kernel.org> wrote:
> 
> > diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
> > index 705db0a6d995..6dade0eaee46 100644
> > --- a/include/linux/ftrace.h
> > +++ b/include/linux/ftrace.h
> > @@ -413,6 +413,7 @@ struct ftrace_hash *alloc_ftrace_hash(int size_bits);
> >  void free_ftrace_hash(struct ftrace_hash *hash);
> >  struct ftrace_func_entry *add_ftrace_hash_entry_direct(struct ftrace_hash *hash,
> >  						       unsigned long ip, unsigned long direct);
> > +unsigned long ftrace_hash_count(struct ftrace_hash *hash);
> >  
> >  /* The hash used to know what functions callbacks trace */
> >  struct ftrace_ops_hash {
> > diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> > index b12dbd93ae1c..be9e0ac1fd95 100644
> > --- a/kernel/trace/ftrace.c
> > +++ b/kernel/trace/ftrace.c
> > @@ -6284,7 +6284,7 @@ int modify_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
> >  }
> >  EXPORT_SYMBOL_GPL(modify_ftrace_direct);
> >  
> > -static unsigned long hash_count(struct ftrace_hash *hash)
> > +unsigned long ftrace_hash_count(struct ftrace_hash *hash)
> >  {
> >  	return hash ? hash->count : 0;
> >  }
> 
> I think this may make it less likely to inline this function, so let's just
> add an external function, and even add a "inline" to the original:
> 
> static inline unsigned long hash_count(struct ftrace_hash *hash)
> {
> 	return hash ? hash->count : 0;
> }
> 
> unsigned long ftrace_hash_count(struct ftrace_hash *hash)
> {
> 	return hash_count(hash);
> }
> 
> And don't modify anything else.

ok, will change

thanks,
jirka


* Re: [RFC bpf-next 00/12] bpf: tracing_multi link
  2026-02-03 23:17 ` [RFC bpf-next 00/12] bpf: tracing_multi link Alexei Starovoitov
@ 2026-02-04 12:36   ` Jiri Olsa
  2026-02-04 16:06     ` Alexei Starovoitov
  0 siblings, 1 reply; 54+ messages in thread
From: Jiri Olsa @ 2026-02-04 12:36 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, Menglong Dong, Steven Rostedt

On Tue, Feb 03, 2026 at 03:17:05PM -0800, Alexei Starovoitov wrote:
> On Tue, Feb 3, 2026 at 1:38 AM Jiri Olsa <jolsa@kernel.org> wrote:
> >
> > hi,
> > as an option to Meglong's change [1] I'm sending proposal for tracing_multi
> > link that does not add static trampoline but attaches program to all needed
> > trampolines.
> >
> > This approach keeps the same performance but has some drawbacks:
> >
> >  - when attaching 20k functions we allocate and attach 20k trampolines
> >  - during attachment we hold each trampoline mutex, so for above
> >    20k functions we will hold 20k mutexes during the attachment,
> >    should be very prone to deadlock, but haven't hit it yet
> 
> If you check that it's sorted and always take them in the same order
> then there will be no deadlock.
> Or just grab one global mutex first and then grab trampolines mutexes
> next in any order. The global one will serialize this attach operation.
> 
> > It looks the trampoline allocations/generation might not be big a problem
> > and I'll try to find a solution for holding that many mutexes. If there's
> > no better solution I think having one read/write mutex for tracing multi
> > link attach/detach should work.
> 
> If you mean to have one global mutex as I proposed above then I don't see
> a downside. It only serializes multiple libbpf calls.

we also need to serialize it with standard single trampoline attach,
because the direct ftrace update is now done under trampoline->mutex:

  bpf_trampoline_link_prog(tr)
  {
    mutex_lock(&tr->mutex);
    ...
    update_ftrace_direct_*
    ...
    mutex_unlock(&tr->mutex);
  }

for tracing_multi we would link the program first (with tr->mutex)
and do the bulk ftrace update later (without tr->mutex)

  {
    for each involved trampoline:
      bpf_trampoline_link_prog

    --> and here we could race with some other thread doing single
        trampoline attach

    update_ftrace_direct_*
  }

note the current version locks all tr->mutex instances all the way
through the update_ftrace_direct_* update

I think we could use global rwsem and take read lock on single
trampoline attach path and write lock on tracing_multi attach,

I thought we could take direct_mutex early, but that would mean
different order with trampoline mutex than we already have in
single attach path

or just sort those btf ids

jirka

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC bpf-next 05/12] bpf: Add multi tracing attach types
  2026-02-04  2:20   ` Leon Hwang
@ 2026-02-04 12:41     ` Jiri Olsa
  0 siblings, 0 replies; 54+ messages in thread
From: Jiri Olsa @ 2026-02-04 12:41 UTC (permalink / raw)
  To: Leon Hwang
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, Menglong Dong, Steven Rostedt

On Wed, Feb 04, 2026 at 10:20:29AM +0800, Leon Hwang wrote:
> 
> 
> On 3/2/26 17:38, Jiri Olsa wrote:
> > Adding new attach type to identify multi tracing attachment:
> >   BPF_TRACE_FENTRY_MULTI
> >   BPF_TRACE_FEXIT_MULTI
> > 
> At this point, should we also introduce BPF_TRACE_FSESSION_MULTI
> alongside BPF_TRACE_FENTRY_MULTI and BPF_TRACE_FEXIT_MULTI?

good catch, will add it

thanks,
jirka

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC bpf-next 00/12] bpf: tracing_multi link
  2026-02-04 12:36   ` Jiri Olsa
@ 2026-02-04 16:06     ` Alexei Starovoitov
  2026-02-05  8:55       ` Jiri Olsa
  0 siblings, 1 reply; 54+ messages in thread
From: Alexei Starovoitov @ 2026-02-04 16:06 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, Menglong Dong, Steven Rostedt

On Wed, Feb 4, 2026 at 4:36 AM Jiri Olsa <olsajiri@gmail.com> wrote:
>
> On Tue, Feb 03, 2026 at 03:17:05PM -0800, Alexei Starovoitov wrote:
> > On Tue, Feb 3, 2026 at 1:38 AM Jiri Olsa <jolsa@kernel.org> wrote:
> > >
> > > hi,
> > > as an option to Meglong's change [1] I'm sending proposal for tracing_multi
> > > link that does not add static trampoline but attaches program to all needed
> > > trampolines.
> > >
> > > This approach keeps the same performance but has some drawbacks:
> > >
> > >  - when attaching 20k functions we allocate and attach 20k trampolines
> > >  - during attachment we hold each trampoline mutex, so for above
> > >    20k functions we will hold 20k mutexes during the attachment,
> > >    should be very prone to deadlock, but haven't hit it yet
> >
> > If you check that it's sorted and always take them in the same order
> > then there will be no deadlock.
> > Or just grab one global mutex first and then grab trampolines mutexes
> > next in any order. The global one will serialize this attach operation.
> >
> > > It looks the trampoline allocations/generation might not be big a problem
> > > and I'll try to find a solution for holding that many mutexes. If there's
> > > no better solution I think having one read/write mutex for tracing multi
> > > link attach/detach should work.
> >
> > If you mean to have one global mutex as I proposed above then I don't see
> > a downside. It only serializes multiple libbpf calls.
>
> we also need to serialize it with standard single trampoline attach,
> because the direct ftrace update is now done under trampoline->mutex:
>
>   bpf_trampoline_link_prog(tr)
>   {
>     mutex_lock(&tr->mutex);
>     ...
>     update_ftrace_direct_*
>     ...
>     mutex_unlock(&tr->mutex);
>   }
>
> for tracing_multi we would link the program first (with tr->mutex)
> and do the bulk ftrace update later (without tr->mutex)
>
>   {
>     for each involved trampoline:
>       bpf_trampoline_link_prog
>
>     --> and here we could race with some other thread doing single
>         trampoline attach
>
>     update_ftrace_direct_*
>   }
>
> note the current version locks all tr->mutex instances all the way
> through the update_ftrace_direct_* update
>
> I think we could use global rwsem and take read lock on single
> trampoline attach path and write lock on tracing_multi attach,
>
> I thought we could take direct_mutex early, but that would mean
> different order with trampoline mutex than we already have in
> single attach path

I feel we're talking past each other.
I meant:

For multi:
1. take some global mutex
2. take N tramp mutexes in any order

For single:
1. take that 1 specific tramp mutex.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC bpf-next 04/12] bpf: Add struct bpf_tramp_node object
  2026-02-03  9:38 ` [RFC bpf-next 04/12] bpf: Add struct bpf_tramp_node object Jiri Olsa
@ 2026-02-04 19:00   ` Andrii Nakryiko
  2026-02-05  8:57     ` Jiri Olsa
  0 siblings, 1 reply; 54+ messages in thread
From: Andrii Nakryiko @ 2026-02-04 19:00 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, Menglong Dong, Steven Rostedt

On Tue, Feb 3, 2026 at 1:39 AM Jiri Olsa <jolsa@kernel.org> wrote:
>
> Adding struct bpf_tramp_node to decouple the link out of the trampoline
> attachment info.
>
> At the moment the object for attaching bpf program to the trampoline is
> 'struct bpf_tramp_link':
>
>   struct bpf_tramp_link {
>        struct bpf_link link;
>        struct hlist_node tramp_hlist;
>        u64 cookie;
>   }
>
> The link holds the bpf_prog pointer and forces one link - one program
> binding logic. In following changes we want to attach program to multiple
> trampolines but have just one bpf_link object.
>
> Splitting struct bpf_tramp_link into:
>
>   struct bpf_tramp_link {
>        struct bpf_link link;
>        struct bpf_tramp_node node;
>   };
>
>   struct bpf_tramp_node {
>        struct hlist_node tramp_hlist;
>        struct bpf_prog *prog;
>        u64 cookie;
>   };

I'm a bit confused here. For singular fentry/fexit attachment we have
one trampoline and one program, right? For multi-fentry, we have
multiple trampoline, but still one program pointer, no? So why put a
prog pointer into tramp_node?.. You do want cookie in tramp_node, yes,
but not the program. Because then there is also a question what is
bpf_link's prog pointing to?...


>
> where 'struct bpf_tramp_link' defines standard single trampoline link,
> and 'struct bpf_tramp_node' is the attachment trampoline object. This
> will allow us to define link for multiple trampolines, like:
>
>   struct bpf_tracing_multi_link {
>        struct bpf_link link;
>        ...
>        int nodes_cnt;
>        struct bpf_tracing_multi_node nodes[] __counted_by(nodes_cnt);
>   };
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  arch/arm64/net/bpf_jit_comp.c  |  58 +++++++++----------
>  arch/s390/net/bpf_jit_comp.c   |  42 +++++++-------
>  arch/x86/net/bpf_jit_comp.c    |  54 ++++++++---------
>  include/linux/bpf.h            |  47 ++++++++-------
>  kernel/bpf/bpf_struct_ops.c    |  24 ++++----
>  kernel/bpf/syscall.c           |  25 ++++----
>  kernel/bpf/trampoline.c        | 102 ++++++++++++++++-----------------
>  net/bpf/bpf_dummy_struct_ops.c |  11 ++--
>  8 files changed, 185 insertions(+), 178 deletions(-)
>

[...]

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC bpf-next 08/12] libbpf: Add btf__find_by_glob_kind function
  2026-02-03  9:38 ` [RFC bpf-next 08/12] libbpf: Add btf__find_by_glob_kind function Jiri Olsa
  2026-02-03 10:14   ` bot+bpf-ci
@ 2026-02-04 19:04   ` Andrii Nakryiko
  2026-02-05  8:57     ` Jiri Olsa
  1 sibling, 1 reply; 54+ messages in thread
From: Andrii Nakryiko @ 2026-02-04 19:04 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, Menglong Dong, Steven Rostedt

On Tue, Feb 3, 2026 at 1:39 AM Jiri Olsa <jolsa@kernel.org> wrote:
>
> Adding btf__find_by_glob_kind function that returns array of
> BTF ids that match given kind and allow/deny patterns.
>
> int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
>                            const char *allow_pattern,
>                            const char *deny_pattern,
>                            __u32 **__ids);
>
> The __ids array is allocated and needs to be manually freed.
>
> The pattern check is done by glob_match function.
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  tools/lib/bpf/btf.c | 41 +++++++++++++++++++++++++++++++++++++++++
>  tools/lib/bpf/btf.h |  3 +++
>  2 files changed, 44 insertions(+)
>
> diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
> index 83fe79ffcb8f..64502b3ef38a 100644
> --- a/tools/lib/bpf/btf.c
> +++ b/tools/lib/bpf/btf.c
> @@ -1010,6 +1010,47 @@ __s32 btf__find_by_name_kind(const struct btf *btf, const char *type_name,
>         return btf_find_by_name_kind(btf, 1, type_name, kind);
>  }
>
> +int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
> +                          const char *allow_pattern, const char *deny_pattern,
> +                          __u32 **__ids)
> +{
> +       __u32 i, nr_types = btf__type_cnt(btf);
> +       int cnt = 0, alloc = 0;
> +       __u32 *ids = NULL;
> +
> +       for (i = 1; i < nr_types; i++) {
> +               const struct btf_type *t = btf__type_by_id(btf, i);
> +               const char *name;
> +               __u32 *p;
> +
> +               if (btf_kind(t) != kind)
> +                       continue;
> +               name = btf__name_by_offset(btf, t->name_off);
> +               if (!name)
> +                       continue;
> +
> +               if (deny_pattern && glob_match(name, deny_pattern))
> +                       continue;
> +               if (allow_pattern && !glob_match(name, allow_pattern))
> +                       continue;
> +
> +               if (cnt == alloc) {
> +                       alloc = max(16, alloc * 3 / 2);
> +                       p = libbpf_reallocarray(ids, alloc, sizeof(__u32));
> +                       if (!p) {
> +                               free(ids);
> +                               return -ENOMEM;
> +                       }
> +                       ids = p;
> +               }
> +               ids[cnt] = i;
> +               cnt++;
> +       }
> +
> +       *__ids = ids;
> +       return cnt;
> +}
> +
>  static bool btf_is_modifiable(const struct btf *btf)
>  {
>         return (void *)btf->hdr != btf->raw_data;
> diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
> index b30008c267c0..d7b47bb0ba99 100644
> --- a/tools/lib/bpf/btf.h
> +++ b/tools/lib/bpf/btf.h
> @@ -661,6 +661,9 @@ static inline struct btf_decl_tag *btf_decl_tag(const struct btf_type *t)
>         return (struct btf_decl_tag *)(t + 1);
>  }
>
> +int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
> +                          const char *allow_pattern, const char *deny_pattern,
> +                          __u32 **__ids);


as AI pointed out, this should be an internal helper, no? Let's also
not use double underscore pattern here,
"collect_btf_ids_by_glob_kind()" perhaps?

Also, you don't seem to be using deny_pattern, where you planning to?

Also, are there functions that we'll have BTF for, but they won't be
attachable? What if I do SEC("fentry.multi/*")? Will it attach or fail
to attach some functions (and thus fail the overall attachment)?

>  #ifdef __cplusplus
>  } /* extern "C" */
>  #endif
> --
> 2.52.0
>

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC bpf-next 07/12] bpf: Add support to create tracing multi link
  2026-02-03  9:38 ` [RFC bpf-next 07/12] bpf: Add support to create tracing multi link Jiri Olsa
  2026-02-03 10:13   ` bot+bpf-ci
@ 2026-02-04 19:05   ` Andrii Nakryiko
  2026-02-05  8:55     ` Jiri Olsa
  1 sibling, 1 reply; 54+ messages in thread
From: Andrii Nakryiko @ 2026-02-04 19:05 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, Menglong Dong, Steven Rostedt

On Tue, Feb 3, 2026 at 1:39 AM Jiri Olsa <jolsa@kernel.org> wrote:
>
> Adding new link to allow to attach program to multiple function
> BTF IDs. The link is represented by struct bpf_tracing_multi_link.
>
> To configure the link, new fields are added to bpf_attr::link_create
> to pass array of BTF IDs;
>
>   struct {
>       __aligned_u64   btf_ids;        /* addresses to attach */
>       __u32           btf_ids_cnt;    /* addresses count */

cookies suspiciously missing?

>   } tracing_multi;
>
> Each BTF ID represents function (BTF_KIND_FUNC) that the link will
> attach bpf program to.
>
> We use previously added bpf_trampoline_multi_attach/detach functions
> to attach/detach the link.
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  include/linux/trace_events.h   |   6 ++
>  include/uapi/linux/bpf.h       |   5 ++
>  kernel/bpf/syscall.c           |   2 +
>  kernel/trace/bpf_trace.c       | 105 +++++++++++++++++++++++++++++++++
>  tools/include/uapi/linux/bpf.h |   5 ++
>  5 files changed, 123 insertions(+)
>

[...]

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC bpf-next 09/12] libbpf: Add support to create tracing multi link
  2026-02-03  9:38 ` [RFC bpf-next 09/12] libbpf: Add support to create tracing multi link Jiri Olsa
  2026-02-03 10:14   ` bot+bpf-ci
@ 2026-02-04 19:05   ` Andrii Nakryiko
  2026-02-17 22:06     ` Jiri Olsa
  1 sibling, 1 reply; 54+ messages in thread
From: Andrii Nakryiko @ 2026-02-04 19:05 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, Menglong Dong, Steven Rostedt

On Tue, Feb 3, 2026 at 1:40 AM Jiri Olsa <jolsa@kernel.org> wrote:
>
> Adding new interface function to attach programs with tracing
> multi link:
>
>   bpf_program__attach_tracing_multi(const struct bpf_program *prog,
>                                     const char *pattern,
>                                     const struct bpf_tracing_multi_opts *opts);
>
> The program is attach to functions specified by pattern or by
> btf IDs specified in bpf_tracing_multi_opts object.
>
> Adding support for new sections to attach programs with above
> functions:
>
>    fentry.multi/pattern
>    fexit.multi/pattern
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  tools/lib/bpf/bpf.c      |  7 ++++
>  tools/lib/bpf/bpf.h      |  4 ++
>  tools/lib/bpf/libbpf.c   | 87 ++++++++++++++++++++++++++++++++++++++++
>  tools/lib/bpf/libbpf.h   | 14 +++++++
>  tools/lib/bpf/libbpf.map |  1 +
>  5 files changed, 113 insertions(+)

[...]

>  static const char * const map_type_name[] = {
> @@ -9814,6 +9817,7 @@ static int attach_kprobe_session(const struct bpf_program *prog, long cookie, st
>  static int attach_uprobe_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link);
>  static int attach_lsm(const struct bpf_program *prog, long cookie, struct bpf_link **link);
>  static int attach_iter(const struct bpf_program *prog, long cookie, struct bpf_link **link);
> +static int attach_tracing_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link);
>
>  static const struct bpf_sec_def section_defs[] = {
>         SEC_DEF("socket",               SOCKET_FILTER, 0, SEC_NONE),
> @@ -9862,6 +9866,8 @@ static const struct bpf_sec_def section_defs[] = {
>         SEC_DEF("fexit.s+",             TRACING, BPF_TRACE_FEXIT, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_trace),
>         SEC_DEF("fsession+",            TRACING, BPF_TRACE_FSESSION, SEC_ATTACH_BTF, attach_trace),
>         SEC_DEF("fsession.s+",          TRACING, BPF_TRACE_FSESSION, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_trace),
> +       SEC_DEF("fentry.multi+",        TRACING, BPF_TRACE_FENTRY_MULTI, 0, attach_tracing_multi),
> +       SEC_DEF("fexit.multi+",         TRACING, BPF_TRACE_FEXIT_MULTI, 0, attach_tracing_multi),
>         SEC_DEF("freplace+",            EXT, 0, SEC_ATTACH_BTF, attach_trace),
>         SEC_DEF("lsm+",                 LSM, BPF_LSM_MAC, SEC_ATTACH_BTF, attach_lsm),
>         SEC_DEF("lsm.s+",               LSM, BPF_LSM_MAC, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_lsm),
> @@ -12237,6 +12243,87 @@ static int attach_uprobe_multi(const struct bpf_program *prog, long cookie, stru
>         return ret;
>  }
>
> +struct bpf_link *
> +bpf_program__attach_tracing_multi(const struct bpf_program *prog, const char *pattern,
> +                                 const struct bpf_tracing_multi_opts *opts)
> +{
> +       LIBBPF_OPTS(bpf_link_create_opts, lopts);
> +       __u32 *btf_ids, cnt, *free_ids = NULL;
> +       int prog_fd, link_fd, err;
> +       struct bpf_link *link;
> +
> +       btf_ids = OPTS_GET(opts, btf_ids, false);
> +       cnt = OPTS_GET(opts, cnt, false);
> +
> +       if (!pattern && !btf_ids && !cnt)

let's check that either both btf_ids and cnt are specified or none

then we can check that either pattern or btf_ids are specified

still two checks, but will capture all the bad cases

> +               return libbpf_err_ptr(-EINVAL);
> +       if (pattern && (btf_ids || cnt))
> +               return libbpf_err_ptr(-EINVAL);
> +

[...]

>  struct bpf_uprobe_opts {
>         /* size of this struct, for forward/backward compatibility */
>         size_t sz;
> diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> index d18fbcea7578..a3ffb21270e9 100644
> --- a/tools/lib/bpf/libbpf.map
> +++ b/tools/lib/bpf/libbpf.map
> @@ -358,6 +358,7 @@ LIBBPF_1.0.0 {
>                 bpf_program__attach_ksyscall;
>                 bpf_program__autoattach;
>                 bpf_program__set_autoattach;
> +               bpf_program__attach_tracing_multi;

stuck in the past? ;) we are in 1.7 cycle


>                 btf__add_enum64;
>                 btf__add_enum64_value;
>                 libbpf_bpf_attach_type_str;
> --
> 2.52.0
>

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC bpf-next 07/12] bpf: Add support to create tracing multi link
  2026-02-04 19:05   ` Andrii Nakryiko
@ 2026-02-05  8:55     ` Jiri Olsa
  0 siblings, 0 replies; 54+ messages in thread
From: Jiri Olsa @ 2026-02-05  8:55 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, Menglong Dong, Steven Rostedt

On Wed, Feb 04, 2026 at 11:05:09AM -0800, Andrii Nakryiko wrote:
> On Tue, Feb 3, 2026 at 1:39 AM Jiri Olsa <jolsa@kernel.org> wrote:
> >
> > Adding new link to allow to attach program to multiple function
> > BTF IDs. The link is represented by struct bpf_tracing_multi_link.
> >
> > To configure the link, new fields are added to bpf_attr::link_create
> > to pass array of BTF IDs;
> >
> >   struct {
> >       __aligned_u64   btf_ids;        /* addresses to attach */
> >       __u32           btf_ids_cnt;    /* addresses count */
> 
> cookies suspiciously missing?

right, need to be added, no mystery there ;-)

we will just assign it to the bpf_tramp_node object for each trampoline/id

thanks,
jirka


> 
> >   } tracing_multi;
> >
> > Each BTF ID represents function (BTF_KIND_FUNC) that the link will
> > attach bpf program to.
> >
> > We use previously added bpf_trampoline_multi_attach/detach functions
> > to attach/detach the link.
> >
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  include/linux/trace_events.h   |   6 ++
> >  include/uapi/linux/bpf.h       |   5 ++
> >  kernel/bpf/syscall.c           |   2 +
> >  kernel/trace/bpf_trace.c       | 105 +++++++++++++++++++++++++++++++++
> >  tools/include/uapi/linux/bpf.h |   5 ++
> >  5 files changed, 123 insertions(+)
> >
> 
> [...]

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC bpf-next 00/12] bpf: tracing_multi link
  2026-02-04 16:06     ` Alexei Starovoitov
@ 2026-02-05  8:55       ` Jiri Olsa
  2026-02-05 15:55         ` Alexei Starovoitov
  0 siblings, 1 reply; 54+ messages in thread
From: Jiri Olsa @ 2026-02-05  8:55 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
	Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt

On Wed, Feb 04, 2026 at 08:06:50AM -0800, Alexei Starovoitov wrote:
> On Wed, Feb 4, 2026 at 4:36 AM Jiri Olsa <olsajiri@gmail.com> wrote:
> >
> > On Tue, Feb 03, 2026 at 03:17:05PM -0800, Alexei Starovoitov wrote:
> > > On Tue, Feb 3, 2026 at 1:38 AM Jiri Olsa <jolsa@kernel.org> wrote:
> > > >
> > > > hi,
> > > > as an option to Meglong's change [1] I'm sending proposal for tracing_multi
> > > > link that does not add static trampoline but attaches program to all needed
> > > > trampolines.
> > > >
> > > > This approach keeps the same performance but has some drawbacks:
> > > >
> > > >  - when attaching 20k functions we allocate and attach 20k trampolines
> > > >  - during attachment we hold each trampoline mutex, so for above
> > > >    20k functions we will hold 20k mutexes during the attachment,
> > > >    should be very prone to deadlock, but haven't hit it yet
> > >
> > > If you check that it's sorted and always take them in the same order
> > > then there will be no deadlock.
> > > Or just grab one global mutex first and then grab trampolines mutexes
> > > next in any order. The global one will serialize this attach operation.
> > >
> > > > It looks the trampoline allocations/generation might not be big a problem
> > > > and I'll try to find a solution for holding that many mutexes. If there's
> > > > no better solution I think having one read/write mutex for tracing multi
> > > > link attach/detach should work.
> > >
> > > If you mean to have one global mutex as I proposed above then I don't see
> > > a downside. It only serializes multiple libbpf calls.
> >
> > we also need to serialize it with standard single trampoline attach,
> > because the direct ftrace update is now done under trampoline->mutex:
> >
> >   bpf_trampoline_link_prog(tr)
> >   {
> >     mutex_lock(&tr->mutex);
> >     ...
> >     update_ftrace_direct_*
> >     ...
> >     mutex_unlock(&tr->mutex);
> >   }
> >
> > for tracing_multi we would link the program first (with tr->mutex)
> > and do the bulk ftrace update later (without tr->mutex)
> >
> >   {
> >     for each involved trampoline:
> >       bpf_trampoline_link_prog
> >
> >     --> and here we could race with some other thread doing single
> >         trampoline attach
> >
> >     update_ftrace_direct_*
> >   }
> >
> > note the current version locks all tr->mutex instances all the way
> > through the update_ftrace_direct_* update
> >
> > I think we could use global rwsem and take read lock on single
> > trampoline attach path and write lock on tracing_multi attach,
> >
> > I thought we could take direct_mutex early, but that would mean
> > different order with trampoline mutex than we already have in
> > single attach path
> 
> I feel we're talking past each other.
> I meant:
> 
> For multi:
> 1. take some global mutex
> 2. take N tramp mutexes in any order
> 
> For single:
> 1. take that 1 specific tramp mutex.

ah ok, I understand, that's to prevent the deadlock while still holding
all the trampoline locks.. the rwsem I mentioned was for the 'fix' where
we do not take all the trampoline locks

thanks,
jirka

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC bpf-next 04/12] bpf: Add struct bpf_tramp_node object
  2026-02-04 19:00   ` Andrii Nakryiko
@ 2026-02-05  8:57     ` Jiri Olsa
  2026-02-05 22:27       ` Andrii Nakryiko
  0 siblings, 1 reply; 54+ messages in thread
From: Jiri Olsa @ 2026-02-05  8:57 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, Menglong Dong, Steven Rostedt

On Wed, Feb 04, 2026 at 11:00:57AM -0800, Andrii Nakryiko wrote:
> On Tue, Feb 3, 2026 at 1:39 AM Jiri Olsa <jolsa@kernel.org> wrote:
> >
> > Adding struct bpf_tramp_node to decouple the link out of the trampoline
> > attachment info.
> >
> > At the moment the object for attaching bpf program to the trampoline is
> > 'struct bpf_tramp_link':
> >
> >   struct bpf_tramp_link {
> >        struct bpf_link link;
> >        struct hlist_node tramp_hlist;
> >        u64 cookie;
> >   }
> >
> > The link holds the bpf_prog pointer and forces one link - one program
> > binding logic. In following changes we want to attach program to multiple
> > trampolines but have just one bpf_link object.
> >
> > Splitting struct bpf_tramp_link into:
> >
> >   struct bpf_tramp_link {
> >        struct bpf_link link;
> >        struct bpf_tramp_node node;
> >   };
> >
> >   struct bpf_tramp_node {
> >        struct hlist_node tramp_hlist;
> >        struct bpf_prog *prog;
> >        u64 cookie;
> >   };
> 
> I'm a bit confused here. For singular fentry/fexit attachment we have
> one trampoline and one program, right? For multi-fentry, we have
> multiple trampoline, but still one program pointer, no? So why put a
> prog pointer into tramp_node?.. You do want cookie in tramp_node, yes,
> but not the program.

yes, but both links:
  - single link 'struct bpf_tramp_link'
  - multi link  'struct bpf_tracing_multi_link'

are using the same attach code, and that code needs a hlist_node to
link the program to the trampoline plus a way to reach the bpf_prog
(like in invoke_bpf_prog)

the current code passes the whole bpf_tramp_link object, so it has
access to both, but the multi link needs to keep a per-trampoline
node (nodes below):

struct bpf_tracing_multi_link {
       struct bpf_link link;
       enum bpf_attach_type attach_type;
       int nodes_cnt;
       struct bpf_tracing_multi_node nodes[] __counted_by(nodes_cnt);
};

and we can't get from &nodes[x] to bpf_tracing_multi_link.link.prog

it's a bit redundant, but I'm not sure what else we can do

> Because then there is also a question what is
> bpf_link's prog pointing to?...

bpf_link.prog is still keeping the prog, I don't think we can remove that

jirka

> 
> 
> >
> > where 'struct bpf_tramp_link' defines standard single trampoline link,
> > and 'struct bpf_tramp_node' is the attachment trampoline object. This
> > will allow us to define link for multiple trampolines, like:
> >
> >   struct bpf_tracing_multi_link {
> >        struct bpf_link link;
> >        ...
> >        int nodes_cnt;
> >        struct bpf_tracing_multi_node nodes[] __counted_by(nodes_cnt);
> >   };
> >
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  arch/arm64/net/bpf_jit_comp.c  |  58 +++++++++----------
> >  arch/s390/net/bpf_jit_comp.c   |  42 +++++++-------
> >  arch/x86/net/bpf_jit_comp.c    |  54 ++++++++---------
> >  include/linux/bpf.h            |  47 ++++++++-------
> >  kernel/bpf/bpf_struct_ops.c    |  24 ++++----
> >  kernel/bpf/syscall.c           |  25 ++++----
> >  kernel/bpf/trampoline.c        | 102 ++++++++++++++++-----------------
> >  net/bpf/bpf_dummy_struct_ops.c |  11 ++--
> >  8 files changed, 185 insertions(+), 178 deletions(-)
> >
> 
> [...]

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC bpf-next 08/12] libbpf: Add btf__find_by_glob_kind function
  2026-02-04 19:04   ` Andrii Nakryiko
@ 2026-02-05  8:57     ` Jiri Olsa
  2026-02-05 22:45       ` Andrii Nakryiko
  0 siblings, 1 reply; 54+ messages in thread
From: Jiri Olsa @ 2026-02-05  8:57 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, Menglong Dong, Steven Rostedt

On Wed, Feb 04, 2026 at 11:04:09AM -0800, Andrii Nakryiko wrote:
> On Tue, Feb 3, 2026 at 1:39 AM Jiri Olsa <jolsa@kernel.org> wrote:
> >
> > Adding btf__find_by_glob_kind function that returns array of
> > BTF ids that match given kind and allow/deny patterns.
> >
> > int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
> >                            const char *allow_pattern,
> >                            const char *deny_pattern,
> >                            __u32 **__ids);
> >
> > The __ids array is allocated and needs to be manually freed.
> >
> > The pattern check is done by glob_match function.
> >
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  tools/lib/bpf/btf.c | 41 +++++++++++++++++++++++++++++++++++++++++
> >  tools/lib/bpf/btf.h |  3 +++
> >  2 files changed, 44 insertions(+)
> >
> > diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
> > index 83fe79ffcb8f..64502b3ef38a 100644
> > --- a/tools/lib/bpf/btf.c
> > +++ b/tools/lib/bpf/btf.c
> > @@ -1010,6 +1010,47 @@ __s32 btf__find_by_name_kind(const struct btf *btf, const char *type_name,
> >         return btf_find_by_name_kind(btf, 1, type_name, kind);
> >  }
> >
> > +int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
> > +                          const char *allow_pattern, const char *deny_pattern,
> > +                          __u32 **__ids)
> > +{
> > +       __u32 i, nr_types = btf__type_cnt(btf);
> > +       int cnt = 0, alloc = 0;
> > +       __u32 *ids = NULL;
> > +
> > +       for (i = 1; i < nr_types; i++) {
> > +               const struct btf_type *t = btf__type_by_id(btf, i);
> > +               const char *name;
> > +               __u32 *p;
> > +
> > +               if (btf_kind(t) != kind)
> > +                       continue;
> > +               name = btf__name_by_offset(btf, t->name_off);
> > +               if (!name)
> > +                       continue;
> > +
> > +               if (deny_pattern && glob_match(name, deny_pattern))
> > +                       continue;
> > +               if (allow_pattern && !glob_match(name, allow_pattern))
> > +                       continue;
> > +
> > +               if (cnt == alloc) {
> > +                       alloc = max(16, alloc * 3 / 2);
> > +                       p = libbpf_reallocarray(ids, alloc, sizeof(__u32));
> > +                       if (!p) {
> > +                               free(ids);
> > +                               return -ENOMEM;
> > +                       }
> > +                       ids = p;
> > +               }
> > +               ids[cnt] = i;
> > +               cnt++;
> > +       }
> > +
> > +       *__ids = ids;
> > +       return cnt;
> > +}
> > +
> >  static bool btf_is_modifiable(const struct btf *btf)
> >  {
> >         return (void *)btf->hdr != btf->raw_data;
> > diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
> > index b30008c267c0..d7b47bb0ba99 100644
> > --- a/tools/lib/bpf/btf.h
> > +++ b/tools/lib/bpf/btf.h
> > @@ -661,6 +661,9 @@ static inline struct btf_decl_tag *btf_decl_tag(const struct btf_type *t)
> >         return (struct btf_decl_tag *)(t + 1);
> >  }
> >
> > +int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
> > +                          const char *allow_pattern, const char *deny_pattern,
> > +                          __u32 **__ids);
> 
> 
> as AI pointed out, this should be an internal helper, no? Let's also
> not use double underscore pattern here,
> "collect_btf_ids_by_glob_kind()" perhaps?

ok

> 
> Also, you don't seem to be using deny_pattern, where you planning to?

the tests are just rudimentary before we agree we want to do it this way

but I'm not sure I have a usecase for deny_pattern.. I think we added it
just to be complete, I recall we copied that function from somewhere,
it's long time ago ;-)

> 
> Also, are there functions that we'll have BTF for, but they won't be
> attachable? What if I do SEC("fentry.multi/*")? Will it attach or fail
> to attach some functions (and thus fail the overall attachment)?

yes, for the benchmark tests I had to add is_allowed_func which mimics
btf_distill_func_proto and denies attach for some functions

also I had to filter out some core kernel functions like rcu*,trace*,..
which seemed to cause trouble when you attach them

jirka

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC bpf-next 06/12] bpf: Add bpf_trampoline_multi_attach/detach functions
  2026-02-03  9:38 ` [RFC bpf-next 06/12] bpf: Add bpf_trampoline_multi_attach/detach functions Jiri Olsa
  2026-02-03 10:14   ` bot+bpf-ci
@ 2026-02-05  9:16   ` Menglong Dong
  2026-02-05 13:45     ` Jiri Olsa
  1 sibling, 1 reply; 54+ messages in thread
From: Menglong Dong @ 2026-02-05  9:16 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, Menglong Dong, Steven Rostedt

On 2026/2/3 17:38 Jiri Olsa <jolsa@kernel.org> wrote:
> Adding bpf_trampoline_multi_attach/detach functions that allow
> attaching/detaching the multi tracing trampoline.
> 
> The attachment is defined with bpf_program and array of BTF ids
> of functions to attach the bpf program to.
> 
[...]
> @@ -367,7 +367,11 @@ static struct bpf_trampoline *bpf_trampoline_lookup(u64 key, unsigned long ip)
>  	head = &trampoline_ip_table[hash_64(tr->ip, TRAMPOLINE_HASH_BITS)];
>  	hlist_add_head(&tr->hlist_ip, head);
>  	refcount_set(&tr->refcnt, 1);
> +#ifdef CONFIG_LOCKDEP
> +	mutex_init_with_key(&tr->mutex, &__lockdep_no_track__);
> +#else
>  	mutex_init(&tr->mutex);
> +#endif
>  	for (i = 0; i < BPF_TRAMP_MAX; i++)
>  		INIT_HLIST_HEAD(&tr->progs_hlist[i]);
>  out:
> @@ -1400,6 +1404,188 @@ int __weak arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
>  	return -ENOTSUPP;
>  }
>  
> +#if defined(CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS) && defined(CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS)

Hi, Jiri. It's great to see your tracing_multi link finally. It looks great ;)

After digging a little deeper into SINGLE_FTRACE_DIRECT_OPS, I
understand why it is only supported on x86_64 for now. It seems
hard to implement on the other archs, as we would need to
restructure the implementation of the ftrace direct call.

So do we need some more ftrace API here to make the tracing multi-link
independent of SINGLE_FTRACE_DIRECT_OPS? Otherwise, we can only
use it on x86_64.

Have you ever tried to implement the SINGLE_FTRACE_DIRECT_OPS on arm64?
The direct call on arm64 is so complex, and I didn't work it out :/

Thanks!
Menglong Dong

> +
> +struct fentry_multi_data {
> +	struct ftrace_hash *unreg;
> +	struct ftrace_hash *modify;
> +	struct ftrace_hash *reg;
> +};
> +
[...]
^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC bpf-next 06/12] bpf: Add bpf_trampoline_multi_attach/detach functions
  2026-02-05  9:16   ` Menglong Dong
@ 2026-02-05 13:45     ` Jiri Olsa
  2026-02-11  8:04       ` Menglong Dong
  0 siblings, 1 reply; 54+ messages in thread
From: Jiri Olsa @ 2026-02-05 13:45 UTC (permalink / raw)
  To: Menglong Dong
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, Menglong Dong, Steven Rostedt, Mark Rutland

On Thu, Feb 05, 2026 at 05:16:49PM +0800, Menglong Dong wrote:
> On 2026/2/3 17:38 Jiri Olsa <jolsa@kernel.org> wrote:
> > Adding bpf_trampoline_multi_attach/detach functions that allow
> > attaching/detaching the multi tracing trampoline.
> > 
> > The attachment is defined with bpf_program and array of BTF ids
> > of functions to attach the bpf program to.
> > 
> [...]
> > @@ -367,7 +367,11 @@ static struct bpf_trampoline *bpf_trampoline_lookup(u64 key, unsigned long ip)
> >  	head = &trampoline_ip_table[hash_64(tr->ip, TRAMPOLINE_HASH_BITS)];
> >  	hlist_add_head(&tr->hlist_ip, head);
> >  	refcount_set(&tr->refcnt, 1);
> > +#ifdef CONFIG_LOCKDEP
> > +	mutex_init_with_key(&tr->mutex, &__lockdep_no_track__);
> > +#else
> >  	mutex_init(&tr->mutex);
> > +#endif
> >  	for (i = 0; i < BPF_TRAMP_MAX; i++)
> >  		INIT_HLIST_HEAD(&tr->progs_hlist[i]);
> >  out:
> > @@ -1400,6 +1404,188 @@ int __weak arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
> >  	return -ENOTSUPP;
> >  }
> >  
> > +#if defined(CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS) && defined(CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS)
> 
> Hi, Jiri. It's great to see your tracing_multi link finally. It looks great ;)

heya, thanks ;-)

> 
> After analyzing a little deeper on the SINGLE_FTRACE_DIRECT_OPS, I
> understand why it is only supported on x86_64 for now. It seems that
> it's a little hard to implement it in the other arch, as we need to
> restructure the implement of ftrace direct call.
> 
> So do we need some more ftrace API here to make the tracing multi-link
> independent from SINGLE_FTRACE_DIRECT_OPS? Otherwise, we can only
> use it on x86_64.

I tried to describe it in the changelog of commit [2]:

    At the moment we can enable this only on x86 arch, because arm relies
    on ftrace_ops object representing just single trampoline image (stored
    in ftrace_ops::direct_call). Archs that do not support this will continue
    to use *_ftrace_direct api.

> 
> Have you ever tried to implement the SINGLE_FTRACE_DIRECT_OPS on arm64?
> The direct call on arm64 is so complex, and I didn't work it out :/

yes, it seems to be difficult atm, Mark commented on that in [1],
I don't know arm that well to be of much help here, cc-ing Mark

jirka


[1] https://lore.kernel.org/bpf/aIyNOd18TRLu8EpY@J2N7QTR9R3/
[2] 424f6a361096 ("bpf,x86: Use single ftrace_ops for direct calls")

> 
> Thanks!
> Menglong Dong
> 
> > +
> > +struct fentry_multi_data {
> > +	struct ftrace_hash *unreg;
> > +	struct ftrace_hash *modify;
> > +	struct ftrace_hash *reg;
> > +};
> > +
> [...]
^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC bpf-next 00/12] bpf: tracing_multi link
  2026-02-05  8:55       ` Jiri Olsa
@ 2026-02-05 15:55         ` Alexei Starovoitov
  2026-02-06  8:18           ` Jiri Olsa
  0 siblings, 1 reply; 54+ messages in thread
From: Alexei Starovoitov @ 2026-02-05 15:55 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, Menglong Dong, Steven Rostedt

On Thu, Feb 5, 2026 at 12:55 AM Jiri Olsa <olsajiri@gmail.com> wrote:
>
> On Wed, Feb 04, 2026 at 08:06:50AM -0800, Alexei Starovoitov wrote:
> > On Wed, Feb 4, 2026 at 4:36 AM Jiri Olsa <olsajiri@gmail.com> wrote:
> > >
> > > On Tue, Feb 03, 2026 at 03:17:05PM -0800, Alexei Starovoitov wrote:
> > > > On Tue, Feb 3, 2026 at 1:38 AM Jiri Olsa <jolsa@kernel.org> wrote:
> > > > >
> > > > > hi,
> > > > > as an option to Menglong's change [1] I'm sending a proposal for tracing_multi
> > > > > link that does not add static trampoline but attaches program to all needed
> > > > > trampolines.
> > > > >
> > > > > This approach keeps the same performance but has some drawbacks:
> > > > >
> > > > >  - when attaching 20k functions we allocate and attach 20k trampolines
> > > > >  - during attachment we hold each trampoline mutex, so for above
> > > > >    20k functions we will hold 20k mutexes during the attachment,
> > > > >    should be very prone to deadlock, but haven't hit it yet
> > > >
> > > > If you check that it's sorted and always take them in the same order
> > > > then there will be no deadlock.
> > > > Or just grab one global mutex first and then grab trampolines mutexes
> > > > next in any order. The global one will serialize this attach operation.
> > > >
> > > > > It looks like the trampoline allocations/generation might not be a big problem
> > > > > and I'll try to find a solution for holding that many mutexes. If there's
> > > > > no better solution I think having one read/write mutex for tracing multi
> > > > > link attach/detach should work.
> > > >
> > > > If you mean to have one global mutex as I proposed above then I don't see
> > > > a downside. It only serializes multiple libbpf calls.
> > >
> > > we also need to serialize it with standard single trampoline attach,
> > > because the direct ftrace update is now done under trampoline->mutex:
> > >
> > >   bpf_trampoline_link_prog(tr)
> > >   {
> > >     mutex_lock(&tr->mutex);
> > >     ...
> > >     update_ftrace_direct_*
> > >     ...
> > >     mutex_unlock(&tr->mutex);
> > >   }
> > >
> > > for tracing_multi we would link the program first (with tr->mutex)
> > > and do the bulk ftrace update later (without tr->mutex)
> > >
> > >   {
> > >     for each involved trampoline:
> > >       bpf_trampoline_link_prog
> > >
> > >     --> and here we could race with some other thread doing single
> > >         trampoline attach
> > >
> > >     update_ftrace_direct_*
> > >   }
> > >
> > > note the current version locks all tr->mutex instances all the way
> > > through the update_ftrace_direct_* update
> > >
> > > I think we could use global rwsem and take read lock on single
> > > trampoline attach path and write lock on tracing_multi attach,
> > >
> > > I thought we could take direct_mutex early, but that would mean
> > > different order with trampoline mutex than we already have in
> > > single attach path
> >
> > I feel we're talking past each other.
> > I meant:
> >
> > For multi:
> > 1. take some global mutex
> > 2. take N tramp mutexes in any order
> >
> > For single:
> > 1. take that 1 specific tramp mutex.
>
> ah ok, I understand, it's to prevent the deadlock while still holding all
> the trampoline locks.. the rwsem I mentioned was for the 'fix', where
> we do not take all the trampoline locks

I don't understand how rwsem would help.
All the operations on trampoline are protected by mutex.
Switching to rw makes sense only if we can designate certain
operations as "read" and others as "write" and the number of "reads"
dominates. This won't be the case with multi-fentry.
And we still need to take all of them as "write" to update trampoline.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC bpf-next 04/12] bpf: Add struct bpf_tramp_node object
  2026-02-05  8:57     ` Jiri Olsa
@ 2026-02-05 22:27       ` Andrii Nakryiko
  2026-02-06  8:27         ` Jiri Olsa
  0 siblings, 1 reply; 54+ messages in thread
From: Andrii Nakryiko @ 2026-02-05 22:27 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, Menglong Dong, Steven Rostedt

On Thu, Feb 5, 2026 at 12:57 AM Jiri Olsa <olsajiri@gmail.com> wrote:
>
> On Wed, Feb 04, 2026 at 11:00:57AM -0800, Andrii Nakryiko wrote:
> > On Tue, Feb 3, 2026 at 1:39 AM Jiri Olsa <jolsa@kernel.org> wrote:
> > >
> > > Adding struct bpf_tramp_node to decouple the link out of the trampoline
> > > attachment info.
> > >
> > > At the moment the object for attaching bpf program to the trampoline is
> > > 'struct bpf_tramp_link':
> > >
> > >   struct bpf_tramp_link {
> > >        struct bpf_link link;
> > >        struct hlist_node tramp_hlist;
> > >        u64 cookie;
> > >   }
> > >
> > > The link holds the bpf_prog pointer and forces one link - one program
> > > binding logic. In following changes we want to attach program to multiple
> > > trampolines but have just one bpf_link object.
> > >
> > > Splitting struct bpf_tramp_link into:
> > >
> > >   struct bpf_tramp_link {
> > >        struct bpf_link link;
> > >        struct bpf_tramp_node node;
> > >   };
> > >
> > >   struct bpf_tramp_node {
> > >        struct hlist_node tramp_hlist;
> > >        struct bpf_prog *prog;
> > >        u64 cookie;
> > >   };
> >
> > I'm a bit confused here. For singular fentry/fexit attachment we have
> > one trampoline and one program, right? For multi-fentry, we have
> > multiple trampoline, but still one program pointer, no? So why put a
> > prog pointer into tramp_node?.. You do want cookie in tramp_node, yes,
> > but not the program.
>
> yes, but both links:
>   - single link 'struct bpf_tramp_link'
>   - multi link  'struct bpf_tracing_multi_link'
>
> are using the same code to attach; that code needs to have a hlist_node to
> link the program to the trampoline and be able to reach the bpf_prog
> (like in invoke_bpf_prog)
>
> current code is passing whole bpf_tramp_link object so it has access
> to both, but multi link needs to keep link to each trampoline (nodes
> below):
>
> struct bpf_tracing_multi_link {
>        struct bpf_link link;
>        enum bpf_attach_type attach_type;
>        int nodes_cnt;
>        struct bpf_tracing_multi_node nodes[] __counted_by(nodes_cnt);
> };
>
> and we can't get from &nodes[x] to bpf_tracing_multi_link.link.prog
>
> it's a bit redundant, but not sure what else we can do

invoke_bpf_prog() specifically doesn't have to get prog pointer from
bpf_tramp_link, it can be passed prog as a separate argument and then
bpf_tramp_node  with cookie separately as well. I haven't looked at
all other code, but I suspect we can refactor it to accept prog
explicitly and the relevant parts (node+cookie) separately.

Just at the conceptual level, we have single prog and multiple places
to patch (trampolines), so we shouldn't be co-locating in the same
data structure. It feels like a complete hack to duplicate prog just
to make some internal code access it.

>
> > Because then there is also a question what is
> > bpf_link's prog pointing to?...
>
> bpf_link.prog is still keeping the prog, I don't think we can remove that
>
> jirka
>
> >
> >
> > >
> > > where 'struct bpf_tramp_link' defines standard single trampoline link,
> > > and 'struct bpf_tramp_node' is the attachment trampoline object. This
> > > will allow us to define link for multiple trampolines, like:
> > >
> > >   struct bpf_tracing_multi_link {
> > >        struct bpf_link link;
> > >        ...
> > >        int nodes_cnt;
> > >        struct bpf_tracing_multi_node nodes[] __counted_by(nodes_cnt);
> > >   };
> > >
> > > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > > ---
> > >  arch/arm64/net/bpf_jit_comp.c  |  58 +++++++++----------
> > >  arch/s390/net/bpf_jit_comp.c   |  42 +++++++-------
> > >  arch/x86/net/bpf_jit_comp.c    |  54 ++++++++---------
> > >  include/linux/bpf.h            |  47 ++++++++-------
> > >  kernel/bpf/bpf_struct_ops.c    |  24 ++++----
> > >  kernel/bpf/syscall.c           |  25 ++++----
> > >  kernel/bpf/trampoline.c        | 102 ++++++++++++++++-----------------
> > >  net/bpf/bpf_dummy_struct_ops.c |  11 ++--
> > >  8 files changed, 185 insertions(+), 178 deletions(-)
> > >
> >
> > [...]

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC bpf-next 08/12] libbpf: Add btf__find_by_glob_kind function
  2026-02-05  8:57     ` Jiri Olsa
@ 2026-02-05 22:45       ` Andrii Nakryiko
  2026-02-06  8:43         ` Jiri Olsa
  0 siblings, 1 reply; 54+ messages in thread
From: Andrii Nakryiko @ 2026-02-05 22:45 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, Menglong Dong, Steven Rostedt

On Thu, Feb 5, 2026 at 12:57 AM Jiri Olsa <olsajiri@gmail.com> wrote:
>
> On Wed, Feb 04, 2026 at 11:04:09AM -0800, Andrii Nakryiko wrote:
> > On Tue, Feb 3, 2026 at 1:39 AM Jiri Olsa <jolsa@kernel.org> wrote:
> > >
> > > Adding btf__find_by_glob_kind function that returns array of
> > > BTF ids that match given kind and allow/deny patterns.
> > >
> > > int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
> > >                            const char *allow_pattern,
> > >                            const char *deny_pattern,
> > >                            __u32 **__ids);
> > >
> > > The __ids array is allocated and needs to be manually freed.
> > >
> > > The pattern check is done by glob_match function.
> > >
> > > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > > ---
> > >  tools/lib/bpf/btf.c | 41 +++++++++++++++++++++++++++++++++++++++++
> > >  tools/lib/bpf/btf.h |  3 +++
> > >  2 files changed, 44 insertions(+)
> > >
> > > diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
> > > index 83fe79ffcb8f..64502b3ef38a 100644
> > > --- a/tools/lib/bpf/btf.c
> > > +++ b/tools/lib/bpf/btf.c
> > > @@ -1010,6 +1010,47 @@ __s32 btf__find_by_name_kind(const struct btf *btf, const char *type_name,
> > >         return btf_find_by_name_kind(btf, 1, type_name, kind);
> > >  }
> > >
> > > +int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
> > > +                          const char *allow_pattern, const char *deny_pattern,
> > > +                          __u32 **__ids)
> > > +{
> > > +       __u32 i, nr_types = btf__type_cnt(btf);
> > > +       int cnt = 0, alloc = 0;
> > > +       __u32 *ids = NULL;
> > > +
> > > +       for (i = 1; i < nr_types; i++) {
> > > +               const struct btf_type *t = btf__type_by_id(btf, i);
> > > +               const char *name;
> > > +               __u32 *p;
> > > +
> > > +               if (btf_kind(t) != kind)
> > > +                       continue;
> > > +               name = btf__name_by_offset(btf, t->name_off);
> > > +               if (!name)
> > > +                       continue;
> > > +
> > > +               if (deny_pattern && glob_match(name, deny_pattern))
> > > +                       continue;
> > > +               if (allow_pattern && !glob_match(name, allow_pattern))
> > > +                       continue;
> > > +
> > > +               if (cnt == alloc) {
> > > +                       alloc = max(16, alloc * 3 / 2);
> > > +                       p = libbpf_reallocarray(ids, alloc, sizeof(__u32));
> > > +                       if (!p) {
> > > +                               free(ids);
> > > +                               return -ENOMEM;
> > > +                       }
> > > +                       ids = p;
> > > +               }
> > > +               ids[cnt] = i;
> > > +               cnt++;
> > > +       }
> > > +
> > > +       *__ids = ids;
> > > +       return cnt;
> > > +}
> > > +
> > >  static bool btf_is_modifiable(const struct btf *btf)
> > >  {
> > >         return (void *)btf->hdr != btf->raw_data;
> > > diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
> > > index b30008c267c0..d7b47bb0ba99 100644
> > > --- a/tools/lib/bpf/btf.h
> > > +++ b/tools/lib/bpf/btf.h
> > > @@ -661,6 +661,9 @@ static inline struct btf_decl_tag *btf_decl_tag(const struct btf_type *t)
> > >         return (struct btf_decl_tag *)(t + 1);
> > >  }
> > >
> > > +int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
> > > +                          const char *allow_pattern, const char *deny_pattern,
> > > +                          __u32 **__ids);
> >
> >
> > as AI pointed out, this should be an internal helper, no? Let's also
> > not use double underscore pattern here,
> > "collect_btf_ids_by_glob_kind()" perhaps?
>
> ok
>
> >
> > Also, you don't seem to be using deny_pattern, where you planning to?
>
> the tests are just rudimentary before we agree we want to do it this way
>
> but I'm not sure I have a usecase for deny_pattern.. I think we added it
> just to be complete, I recall we copied that function from somewhere,
> it's long time ago ;-)
>
> >
> > Also, are there functions that we'll have BTF for, but they won't be
> > attachable? What if I do SEC("fentry.multi/*")? Will it attach or fail
> > to attach some functions (and thus fail the overall attachment)?
>
> yes, for the benchmark tests I had to add is_allowed_func which mimics
> btf_distill_func_proto and denies attach for some functions
>
> also I had to filter out some core kernel functions like rcu*,trace*,..
> which seemed to cause trouble when you attach them

So the question I'm implying here is whether libbpf should do what we do
for kprobes: use libbpf_available_kprobes_parse and intersect?

>
> jirka

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC bpf-next 00/12] bpf: tracing_multi link
  2026-02-05 15:55         ` Alexei Starovoitov
@ 2026-02-06  8:18           ` Jiri Olsa
  2026-02-06 17:03             ` Andrii Nakryiko
  0 siblings, 1 reply; 54+ messages in thread
From: Jiri Olsa @ 2026-02-06  8:18 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
	Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt

On Thu, Feb 05, 2026 at 07:55:19AM -0800, Alexei Starovoitov wrote:
> On Thu, Feb 5, 2026 at 12:55 AM Jiri Olsa <olsajiri@gmail.com> wrote:
> >
> > On Wed, Feb 04, 2026 at 08:06:50AM -0800, Alexei Starovoitov wrote:
> > > On Wed, Feb 4, 2026 at 4:36 AM Jiri Olsa <olsajiri@gmail.com> wrote:
> > > >
> > > > On Tue, Feb 03, 2026 at 03:17:05PM -0800, Alexei Starovoitov wrote:
> > > > > On Tue, Feb 3, 2026 at 1:38 AM Jiri Olsa <jolsa@kernel.org> wrote:
> > > > > >
> > > > > > hi,
> > > > > > as an option to Menglong's change [1] I'm sending a proposal for tracing_multi
> > > > > > link that does not add static trampoline but attaches program to all needed
> > > > > > trampolines.
> > > > > >
> > > > > > This approach keeps the same performance but has some drawbacks:
> > > > > >
> > > > > >  - when attaching 20k functions we allocate and attach 20k trampolines
> > > > > >  - during attachment we hold each trampoline mutex, so for above
> > > > > >    20k functions we will hold 20k mutexes during the attachment,
> > > > > >    should be very prone to deadlock, but haven't hit it yet
> > > > >
> > > > > If you check that it's sorted and always take them in the same order
> > > > > then there will be no deadlock.
> > > > > Or just grab one global mutex first and then grab trampolines mutexes
> > > > > next in any order. The global one will serialize this attach operation.
> > > > >
> > > > > > It looks like the trampoline allocations/generation might not be a big problem
> > > > > > and I'll try to find a solution for holding that many mutexes. If there's
> > > > > > no better solution I think having one read/write mutex for tracing multi
> > > > > > link attach/detach should work.
> > > > >
> > > > > If you mean to have one global mutex as I proposed above then I don't see
> > > > > a downside. It only serializes multiple libbpf calls.
> > > >
> > > > we also need to serialize it with standard single trampoline attach,
> > > > because the direct ftrace update is now done under trampoline->mutex:
> > > >
> > > >   bpf_trampoline_link_prog(tr)
> > > >   {
> > > >     mutex_lock(&tr->mutex);
> > > >     ...
> > > >     update_ftrace_direct_*
> > > >     ...
> > > >     mutex_unlock(&tr->mutex);
> > > >   }
> > > >
> > > > for tracing_multi we would link the program first (with tr->mutex)
> > > > and do the bulk ftrace update later (without tr->mutex)
> > > >
> > > >   {
> > > >     for each involved trampoline:
> > > >       bpf_trampoline_link_prog
> > > >
> > > >     --> and here we could race with some other thread doing single
> > > >         trampoline attach
> > > >
> > > >     update_ftrace_direct_*
> > > >   }
> > > >
> > > > note the current version locks all tr->mutex instances all the way
> > > > through the update_ftrace_direct_* update
> > > >
> > > > I think we could use global rwsem and take read lock on single
> > > > trampoline attach path and write lock on tracing_multi attach,
> > > >
> > > > I thought we could take direct_mutex early, but that would mean
> > > > different order with trampoline mutex than we already have in
> > > > single attach path
> > >
> > > I feel we're talking past each other.
> > > I meant:
> > >
> > > For multi:
> > > 1. take some global mutex
> > > 2. take N tramp mutexes in any order
> > >
> > > For single:
> > > 1. take that 1 specific tramp mutex.
> >
> > ah ok, I understand, it's to prevent the deadlock while still holding all
> > the trampoline locks.. the rwsem I mentioned was for the 'fix', where
> > we do not take all the trampoline locks
> 
> I don't understand how rwsem would help.
> All the operations on trampoline are protected by mutex.
> Switching to rw makes sense only if we can designate certain
> operations as "read" and others as "write" and the number of "reads"
> dominates. This won't be the case with multi-fentry.
> And we still need to take all of them as "write" to update trampoline.

this applies to the scenario where we do not hold all the trampoline locks;
in such a case we could have a race between single and multi attachment,
while a single/single attachment race stays safe

as a fix, the single attach would take the read lock and the multi attach
would take the write lock, so a single/single race is allowed and single/multi
is not ... shown in the patch below

but it might be too much.. in the sense that there are already many locks
involved in trampoline attach/detach, and a simple global lock in multi
or just sorting the ids would be enough

jirka


---
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index b76bb545077b..edbc8f133dda 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -30,6 +30,8 @@ static struct hlist_head trampoline_ip_table[TRAMPOLINE_TABLE_SIZE];
 /* serializes access to trampoline tables */
 static DEFINE_MUTEX(trampoline_mutex);
 
+static DECLARE_RWSEM(multi_sem);
+
 struct bpf_trampoline_ops {
 	int (*register_fentry)(struct bpf_trampoline *tr, void *new_addr, void *data);
 	int (*unregister_fentry)(struct bpf_trampoline *tr, u32 orig_flags, void *old_addr, void *data);
@@ -367,11 +369,7 @@ static struct bpf_trampoline *bpf_trampoline_lookup(u64 key, unsigned long ip)
 	head = &trampoline_ip_table[hash_64(tr->ip, TRAMPOLINE_HASH_BITS)];
 	hlist_add_head(&tr->hlist_ip, head);
 	refcount_set(&tr->refcnt, 1);
-#ifdef CONFIG_LOCKDEP
-	mutex_init_with_key(&tr->mutex, &__lockdep_no_track__);
-#else
 	mutex_init(&tr->mutex);
-#endif
 	for (i = 0; i < BPF_TRAMP_MAX; i++)
 		INIT_HLIST_HEAD(&tr->progs_hlist[i]);
 out:
@@ -871,6 +869,8 @@ int bpf_trampoline_link_prog(struct bpf_tramp_node *node,
 {
 	int err;
 
+	guard(rwsem_read)(&multi_sem);
+
 	mutex_lock(&tr->mutex);
 	err = __bpf_trampoline_link_prog(node, tr, tgt_prog, &trampoline_ops, NULL);
 	mutex_unlock(&tr->mutex);
@@ -916,6 +916,8 @@ int bpf_trampoline_unlink_prog(struct bpf_tramp_node *node,
 {
 	int err;
 
+	guard(rwsem_read)(&multi_sem);
+
 	mutex_lock(&tr->mutex);
 	err = __bpf_trampoline_unlink_prog(node, tr, tgt_prog, &trampoline_ops, NULL);
 	mutex_unlock(&tr->mutex);
@@ -1463,6 +1465,8 @@ int bpf_trampoline_multi_attach(struct bpf_prog *prog, u32 *ids,
 	struct bpf_trampoline *tr;
 	u64 key;
 
+	guard(rwsem_write)(&multi_sem);
+
 	data.reg = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
 	if (!data.reg)
 		return -ENOMEM;
@@ -1494,12 +1498,10 @@ int bpf_trampoline_multi_attach(struct bpf_prog *prog, u32 *ids,
 		tr = mnode->trampoline;
 
 		mutex_lock(&tr->mutex);
-
 		err = __bpf_trampoline_link_prog(&mnode->node, tr, NULL, &trampoline_multi_ops, &data);
-		if (err) {
-			mutex_unlock(&tr->mutex);
+		mutex_unlock(&tr->mutex);
+		if (err)
 			goto rollback_unlink;
-		}
 	}
 
 	if (ftrace_hash_count(data.reg)) {
@@ -1516,11 +1518,6 @@ int bpf_trampoline_multi_attach(struct bpf_prog *prog, u32 *ids,
 		}
 	}
 
-	for (i = 0; i < cnt; i++) {
-		tr = link->nodes[i].trampoline;
-		mutex_unlock(&tr->mutex);
-	}
-
 	free_fentry_multi_data(&data);
 	return 0;
 
@@ -1528,6 +1525,7 @@ int bpf_trampoline_multi_attach(struct bpf_prog *prog, u32 *ids,
 	for (j = 0; j < i; j++) {
 		mnode = &link->nodes[j];
 		tr = mnode->trampoline;
+		mutex_lock(&tr->mutex);
 		WARN_ON_ONCE(__bpf_trampoline_unlink_prog(&mnode->node, tr, NULL,
 			     &trampoline_multi_ops, &data));
 		mutex_unlock(&tr->mutex);
@@ -1550,6 +1548,8 @@ int bpf_trampoline_multi_detach(struct bpf_prog *prog, struct bpf_tracing_multi_
 	int i, cnt = link->nodes_cnt;
 	struct bpf_trampoline *tr;
 
+	guard(rwsem_write)(&multi_sem);
+
 	data.unreg = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
 	if (!data.unreg)
 		return -ENOMEM;
@@ -1567,6 +1567,7 @@ int bpf_trampoline_multi_detach(struct bpf_prog *prog, struct bpf_tracing_multi_
 		mutex_lock(&tr->mutex);
 		WARN_ON_ONCE(__bpf_trampoline_unlink_prog(&mnode->node, tr, NULL,
 							  &trampoline_multi_ops, &data));
+		mutex_unlock(&tr->mutex);
 	}
 
 	if (ftrace_hash_count(data.unreg))
@@ -1576,7 +1577,6 @@ int bpf_trampoline_multi_detach(struct bpf_prog *prog, struct bpf_tracing_multi_
 
 	for (i = 0; i < cnt; i++) {
 		tr = link->nodes[i].trampoline;
-		mutex_unlock(&tr->mutex);
 		bpf_trampoline_put(tr);
 	}
 

^ permalink raw reply related	[flat|nested] 54+ messages in thread

* Re: [RFC bpf-next 04/12] bpf: Add struct bpf_tramp_node object
  2026-02-05 22:27       ` Andrii Nakryiko
@ 2026-02-06  8:27         ` Jiri Olsa
  0 siblings, 0 replies; 54+ messages in thread
From: Jiri Olsa @ 2026-02-06  8:27 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
	Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt

On Thu, Feb 05, 2026 at 02:27:38PM -0800, Andrii Nakryiko wrote:
> On Thu, Feb 5, 2026 at 12:57 AM Jiri Olsa <olsajiri@gmail.com> wrote:
> >
> > On Wed, Feb 04, 2026 at 11:00:57AM -0800, Andrii Nakryiko wrote:
> > > On Tue, Feb 3, 2026 at 1:39 AM Jiri Olsa <jolsa@kernel.org> wrote:
> > > >
> > > > Adding struct bpf_tramp_node to decouple the link out of the trampoline
> > > > attachment info.
> > > >
> > > > At the moment the object for attaching bpf program to the trampoline is
> > > > 'struct bpf_tramp_link':
> > > >
> > > >   struct bpf_tramp_link {
> > > >        struct bpf_link link;
> > > >        struct hlist_node tramp_hlist;
> > > >        u64 cookie;
> > > >   }
> > > >
> > > > The link holds the bpf_prog pointer and forces one link - one program
> > > > binding logic. In following changes we want to attach program to multiple
> > > > trampolines but have just one bpf_link object.
> > > >
> > > > Splitting struct bpf_tramp_link into:
> > > >
> > > >   struct bpf_tramp_link {
> > > >        struct bpf_link link;
> > > >        struct bpf_tramp_node node;
> > > >   };
> > > >
> > > >   struct bpf_tramp_node {
> > > >        struct hlist_node tramp_hlist;
> > > >        struct bpf_prog *prog;
> > > >        u64 cookie;
> > > >   };
> > >
> > > I'm a bit confused here. For singular fentry/fexit attachment we have
> > > one trampoline and one program, right? For multi-fentry, we have
> > > multiple trampoline, but still one program pointer, no? So why put a
> > > prog pointer into tramp_node?.. You do want cookie in tramp_node, yes,
> > > but not the program.
> >
> > yes, but both links:
> >   - single link 'struct bpf_tramp_link'
> >   - multi link  'struct bpf_tracing_multi_link'
> >
> > are using same code to attach that code needs to have a hlist_node to
> > link the program to the trampoline and be able to reach the bpf_prog
> > (like in invoke_bpf_prog)
> >
> > current code is passing whole bpf_tramp_link object so it has access
> > to both, but multi link needs to keep link to each trampoline (nodes
> > below):
> >
> > struct bpf_tracing_multi_link {
> >        struct bpf_link link;
> >        enum bpf_attach_type attach_type;
> >        int nodes_cnt;
> >        struct bpf_tracing_multi_node nodes[] __counted_by(nodes_cnt);
> > };
> >
> > and we can't get from &nodes[x] to bpf_tracing_multi_link.link.prog
> >
> > it's a bit redundant, but not sure what else we can do
> 
> invoke_bpf_prog() specifically doesn't have to get prog pointer from
> bpf_tramp_link, it can be passed prog as a separate argument and then
> bpf_tramp_node  with cookie separately as well. I haven't looked at
> all other code, but I suspect we can refactor it to accept prog
> explicitly and the relevant parts (node+cookie) separately.

ok, makes sense.. will check on how to refactor that code

for some reason I thought we don't want to refactor jit code much,
because it means changes through all the archs code.. but this one
should mostly change just arguments, so it's probably ok

> 
> Just at the conceptual level, we have single prog and multiple places
> to patch (trampolines), so we shouldn't be co-locating in the same
> data structure. It feels like a complete hack to duplicate prog just
> to make some internal code access it.

ook

jirka

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC bpf-next 08/12] libbpf: Add btf__find_by_glob_kind function
  2026-02-05 22:45       ` Andrii Nakryiko
@ 2026-02-06  8:43         ` Jiri Olsa
  2026-02-06 16:58           ` Andrii Nakryiko
  0 siblings, 1 reply; 54+ messages in thread
From: Jiri Olsa @ 2026-02-06  8:43 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	bpf, linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman,
	Song Liu, Yonghong Song, Menglong Dong, Steven Rostedt

On Thu, Feb 05, 2026 at 02:45:14PM -0800, Andrii Nakryiko wrote:
> On Thu, Feb 5, 2026 at 12:57 AM Jiri Olsa <olsajiri@gmail.com> wrote:
> >
> > On Wed, Feb 04, 2026 at 11:04:09AM -0800, Andrii Nakryiko wrote:
> > > On Tue, Feb 3, 2026 at 1:39 AM Jiri Olsa <jolsa@kernel.org> wrote:
> > > >
> > > > Adding btf__find_by_glob_kind function that returns array of
> > > > BTF ids that match given kind and allow/deny patterns.
> > > >
> > > > int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
> > > >                            const char *allow_pattern,
> > > >                            const char *deny_pattern,
> > > >                            __u32 **__ids);
> > > >
> > > > The __ids array is allocated and needs to be manually freed.
> > > >
> > > > The pattern check is done by glob_match function.
> > > >
> > > > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > > > ---
> > > >  tools/lib/bpf/btf.c | 41 +++++++++++++++++++++++++++++++++++++++++
> > > >  tools/lib/bpf/btf.h |  3 +++
> > > >  2 files changed, 44 insertions(+)
> > > >
> > > > diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
> > > > index 83fe79ffcb8f..64502b3ef38a 100644
> > > > --- a/tools/lib/bpf/btf.c
> > > > +++ b/tools/lib/bpf/btf.c
> > > > @@ -1010,6 +1010,47 @@ __s32 btf__find_by_name_kind(const struct btf *btf, const char *type_name,
> > > >         return btf_find_by_name_kind(btf, 1, type_name, kind);
> > > >  }
> > > >
> > > > +int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
> > > > +                          const char *allow_pattern, const char *deny_pattern,
> > > > +                          __u32 **__ids)
> > > > +{
> > > > +       __u32 i, nr_types = btf__type_cnt(btf);
> > > > +       int cnt = 0, alloc = 0;
> > > > +       __u32 *ids = NULL;
> > > > +
> > > > +       for (i = 1; i < nr_types; i++) {
> > > > +               const struct btf_type *t = btf__type_by_id(btf, i);
> > > > +               const char *name;
> > > > +               __u32 *p;
> > > > +
> > > > +               if (btf_kind(t) != kind)
> > > > +                       continue;
> > > > +               name = btf__name_by_offset(btf, t->name_off);
> > > > +               if (!name)
> > > > +                       continue;
> > > > +
> > > > +               if (deny_pattern && glob_match(name, deny_pattern))
> > > > +                       continue;
> > > > +               if (allow_pattern && !glob_match(name, allow_pattern))
> > > > +                       continue;
> > > > +
> > > > +               if (cnt == alloc) {
> > > > +                       alloc = max(16, alloc * 3 / 2);
> > > > +                       p = libbpf_reallocarray(ids, alloc, sizeof(__u32));
> > > > +                       if (!p) {
> > > > +                               free(ids);
> > > > +                               return -ENOMEM;
> > > > +                       }
> > > > +                       ids = p;
> > > > +               }
> > > > +               ids[cnt] = i;
> > > > +               cnt++;
> > > > +       }
> > > > +
> > > > +       *__ids = ids;
> > > > +       return cnt;
> > > > +}
> > > > +
> > > >  static bool btf_is_modifiable(const struct btf *btf)
> > > >  {
> > > >         return (void *)btf->hdr != btf->raw_data;
> > > > diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
> > > > index b30008c267c0..d7b47bb0ba99 100644
> > > > --- a/tools/lib/bpf/btf.h
> > > > +++ b/tools/lib/bpf/btf.h
> > > > @@ -661,6 +661,9 @@ static inline struct btf_decl_tag *btf_decl_tag(const struct btf_type *t)
> > > >         return (struct btf_decl_tag *)(t + 1);
> > > >  }
> > > >
> > > > +int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
> > > > +                          const char *allow_pattern, const char *deny_pattern,
> > > > +                          __u32 **__ids);
> > >
> > >
> > > as AI pointed out, this should be an internal helper, no? Let's also
> > > not use double underscore pattern here,
> > > "collect_btf_ids_by_glob_kind()" perhaps?
> >
> > ok
> >
> > >
> > > Also, you don't seem to be using deny_pattern, were you planning to?
> >
> > the tests are just rudimentary before we agree we want to do it this way
> >
> > but I'm not sure I have a usecase for deny_pattern.. I think we added it
> > just to be complete, I recall we copied that function from somewhere,
> > it's long time ago ;-)
> >
> > >
> > > Also, are there functions that we'll have BTF for, but they won't be
> > > attachable? What if I do SEC("fentry.multi/*")? Will it attach or fail
> > > to attach some functions (and thus fail the overall attachment)?
> >
> > yes, for the benchmark tests I had to add is_allowed_func which mimics
> > btf_distill_func_proto and denies attach for some functions
> >
> > also I had to filter out some core kernel functions like rcu*,trace*,..
> > which seemed to cause trouble when you attach them
> 
> So the question I'm implying here is if libbpf should do what we do
> for kprobes: use libbpf_available_kprobes_parse and intersect?

right, I think it's a good idea.. and in addition (just for patterns) we would
filter out functions that:

  - won't attach (is_allowed_func == false)
  - might cause problems (rcu*,trace*), maybe for that we could have
    opts config bool

jirka

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC bpf-next 08/12] libbpf: Add btf__find_by_glob_kind function
  2026-02-06  8:43         ` Jiri Olsa
@ 2026-02-06 16:58           ` Andrii Nakryiko
  0 siblings, 0 replies; 54+ messages in thread
From: Andrii Nakryiko @ 2026-02-06 16:58 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, Menglong Dong, Steven Rostedt

On Fri, Feb 6, 2026 at 12:43 AM Jiri Olsa <olsajiri@gmail.com> wrote:
>
> On Thu, Feb 05, 2026 at 02:45:14PM -0800, Andrii Nakryiko wrote:
> > On Thu, Feb 5, 2026 at 12:57 AM Jiri Olsa <olsajiri@gmail.com> wrote:
> > >
> > > On Wed, Feb 04, 2026 at 11:04:09AM -0800, Andrii Nakryiko wrote:
> > > > On Tue, Feb 3, 2026 at 1:39 AM Jiri Olsa <jolsa@kernel.org> wrote:
> > > > >
> > > > > Adding btf__find_by_glob_kind function that returns array of
> > > > > BTF ids that match given kind and allow/deny patterns.
> > > > >
> > > > > int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
> > > > >                            const char *allow_pattern,
> > > > >                            const char *deny_pattern,
> > > > >                            __u32 **__ids);
> > > > >
> > > > > The __ids array is allocated and needs to be manually freed.
> > > > >
> > > > > The pattern check is done by glob_match function.
> > > > >
> > > > > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > > > > ---
> > > > >  tools/lib/bpf/btf.c | 41 +++++++++++++++++++++++++++++++++++++++++
> > > > >  tools/lib/bpf/btf.h |  3 +++
> > > > >  2 files changed, 44 insertions(+)
> > > > >
> > > > > diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
> > > > > index 83fe79ffcb8f..64502b3ef38a 100644
> > > > > --- a/tools/lib/bpf/btf.c
> > > > > +++ b/tools/lib/bpf/btf.c
> > > > > @@ -1010,6 +1010,47 @@ __s32 btf__find_by_name_kind(const struct btf *btf, const char *type_name,
> > > > >         return btf_find_by_name_kind(btf, 1, type_name, kind);
> > > > >  }
> > > > >
> > > > > +int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
> > > > > +                          const char *allow_pattern, const char *deny_pattern,
> > > > > +                          __u32 **__ids)
> > > > > +{
> > > > > +       __u32 i, nr_types = btf__type_cnt(btf);
> > > > > +       int cnt = 0, alloc = 0;
> > > > > +       __u32 *ids = NULL;
> > > > > +
> > > > > +       for (i = 1; i < nr_types; i++) {
> > > > > +               const struct btf_type *t = btf__type_by_id(btf, i);
> > > > > +               const char *name;
> > > > > +               __u32 *p;
> > > > > +
> > > > > +               if (btf_kind(t) != kind)
> > > > > +                       continue;
> > > > > +               name = btf__name_by_offset(btf, t->name_off);
> > > > > +               if (!name)
> > > > > +                       continue;
> > > > > +
> > > > > +               if (deny_pattern && glob_match(name, deny_pattern))
> > > > > +                       continue;
> > > > > +               if (allow_pattern && !glob_match(name, allow_pattern))
> > > > > +                       continue;
> > > > > +
> > > > > +               if (cnt == alloc) {
> > > > > +                       alloc = max(16, alloc * 3 / 2);
> > > > > +                       p = libbpf_reallocarray(ids, alloc, sizeof(__u32));
> > > > > +                       if (!p) {
> > > > > +                               free(ids);
> > > > > +                               return -ENOMEM;
> > > > > +                       }
> > > > > +                       ids = p;
> > > > > +               }
> > > > > +               ids[cnt] = i;
> > > > > +               cnt++;
> > > > > +       }
> > > > > +
> > > > > +       *__ids = ids;
> > > > > +       return cnt;
> > > > > +}
> > > > > +
> > > > >  static bool btf_is_modifiable(const struct btf *btf)
> > > > >  {
> > > > >         return (void *)btf->hdr != btf->raw_data;
> > > > > diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
> > > > > index b30008c267c0..d7b47bb0ba99 100644
> > > > > --- a/tools/lib/bpf/btf.h
> > > > > +++ b/tools/lib/bpf/btf.h
> > > > > @@ -661,6 +661,9 @@ static inline struct btf_decl_tag *btf_decl_tag(const struct btf_type *t)
> > > > >         return (struct btf_decl_tag *)(t + 1);
> > > > >  }
> > > > >
> > > > > +int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
> > > > > +                          const char *allow_pattern, const char *deny_pattern,
> > > > > +                          __u32 **__ids);
> > > >
> > > >
> > > > as AI pointed out, this should be an internal helper, no? Let's also
> > > > not use double underscore pattern here,
> > > > "collect_btf_ids_by_glob_kind()" perhaps?
> > >
> > > ok
> > >
> > > >
> > > > Also, you don't seem to be using deny_pattern, were you planning to?
> > >
> > > the tests are just rudimentary before we agree we want to do it this way
> > >
> > > but I'm not sure I have a usecase for deny_pattern.. I think we added it
> > > just to be complete, I recall we copied that function from somewhere,
> > > it's long time ago ;-)
> > >
> > > >
> > > > Also, are there functions that we'll have BTF for, but they won't be
> > > > attachable? What if I do SEC("fentry.multi/*")? Will it attach or fail
> > > > to attach some functions (and thus fail the overall attachment)?
> > >
> > > yes, for the benchmark tests I had to add is_allowed_func which mimics
> > > btf_distill_func_proto and denies attach for some functions
> > >
> > > also I had to filter out some core kernel functions like rcu*,trace*,..
> > > which seemed to cause trouble when you attach them
> >
> > So the question I'm implying here is if libbpf should do what we do
> > for kprobes: use libbpf_available_kprobes_parse and intersect?
>
> right, I think it's a good idea.. and in addition (just for patterns) we would
> filter out functions that:
>
>   - won't attach (is_allowed_func == false)
>   - might cause problems (rcu*,trace*), maybe for that we could have
>     opts config bool

Well, who's going to maintain the "might cause problems" list in
libbpf? Let's not try to be too smart. If some functions are not safe
to be attached, we should mark them as such in the kernel, IMO.

>
> jirka

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC bpf-next 00/12] bpf: tracing_multi link
  2026-02-06  8:18           ` Jiri Olsa
@ 2026-02-06 17:03             ` Andrii Nakryiko
  2026-02-08 20:54               ` Jiri Olsa
  0 siblings, 1 reply; 54+ messages in thread
From: Andrii Nakryiko @ 2026-02-06 17:03 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, bpf, linux-trace-kernel, Martin KaFai Lau,
	Eduard Zingerman, Song Liu, Yonghong Song, Menglong Dong,
	Steven Rostedt

On Fri, Feb 6, 2026 at 12:18 AM Jiri Olsa <olsajiri@gmail.com> wrote:
>
> On Thu, Feb 05, 2026 at 07:55:19AM -0800, Alexei Starovoitov wrote:
> > On Thu, Feb 5, 2026 at 12:55 AM Jiri Olsa <olsajiri@gmail.com> wrote:
> > >
> > > On Wed, Feb 04, 2026 at 08:06:50AM -0800, Alexei Starovoitov wrote:
> > > > On Wed, Feb 4, 2026 at 4:36 AM Jiri Olsa <olsajiri@gmail.com> wrote:
> > > > >
> > > > > On Tue, Feb 03, 2026 at 03:17:05PM -0800, Alexei Starovoitov wrote:
> > > > > > On Tue, Feb 3, 2026 at 1:38 AM Jiri Olsa <jolsa@kernel.org> wrote:
> > > > > > >
> > > > > > > hi,
> > > > > > > as an option to Menglong's change [1] I'm sending proposal for tracing_multi
> > > > > > > link that does not add static trampoline but attaches program to all needed
> > > > > > > trampolines.
> > > > > > >
> > > > > > > This approach keeps the same performance but has some drawbacks:
> > > > > > >
> > > > > > >  - when attaching 20k functions we allocate and attach 20k trampolines
> > > > > > >  - during attachment we hold each trampoline mutex, so for above
> > > > > > >    20k functions we will hold 20k mutexes during the attachment,
> > > > > > >    should be very prone to deadlock, but haven't hit it yet
> > > > > >
> > > > > > If you check that it's sorted and always take them in the same order
> > > > > > then there will be no deadlock.
> > > > > > Or just grab one global mutex first and then grab trampolines mutexes
> > > > > > next in any order. The global one will serialize this attach operation.
> > > > > >
> > > > > > > > It looks the trampoline allocations/generation might not be a big problem
> > > > > > > and I'll try to find a solution for holding that many mutexes. If there's
> > > > > > > no better solution I think having one read/write mutex for tracing multi
> > > > > > > link attach/detach should work.
> > > > > >
> > > > > > If you mean to have one global mutex as I proposed above then I don't see
> > > > > > a downside. It only serializes multiple libbpf calls.
> > > > >
> > > > > we also need to serialize it with standard single trampoline attach,
> > > > > because the direct ftrace update is now done under trampoline->mutex:
> > > > >
> > > > >   bpf_trampoline_link_prog(tr)
> > > > >   {
> > > > >     mutex_lock(&tr->mutex);
> > > > >     ...
> > > > >     update_ftrace_direct_*
> > > > >     ...
> > > > >     mutex_unlock(&tr->mutex);
> > > > >   }
> > > > >
> > > > > for tracing_multi we would link the program first (with tr->mutex)
> > > > > and do the bulk ftrace update later (without tr->mutex)
> > > > >
> > > > >   {
> > > > >     for each involved trampoline:
> > > > >       bpf_trampoline_link_prog
> > > > >
> > > > >     --> and here we could race with some other thread doing single
> > > > >         trampoline attach
> > > > >
> > > > >     update_ftrace_direct_*
> > > > >   }
> > > > >
> > > > > note the current version locks all tr->mutex instances all the way
> > > > > through the update_ftrace_direct_* update
> > > > >
> > > > > I think we could use global rwsem and take read lock on single
> > > > > trampoline attach path and write lock on tracing_multi attach,
> > > > >
> > > > > I thought we could take direct_mutex early, but that would mean
> > > > > different order with trampoline mutex than we already have in
> > > > > single attach path
> > > >
> > > > I feel we're talking past each other.
> > > > I meant:
> > > >
> > > > For multi:
> > > > 1. take some global mutex
> > > > 2. take N tramp mutexes in any order
> > > >
> > > > For single:
> > > > 1. take that 1 specific tramp mutex.
> > >
> > > ah ok, I understand, it's to prevent the lockup but keep holding all
> > > the trampolines locks.. the rwsem I mentioned was for the 'fix', where
> > > we do not take all the trampolines locks
> >
> > I don't understand how rwsem would help.
> > All the operations on trampoline are protected by mutex.
> > Switching to rw makes sense only if we can designate certain
> > operations as "read" and others as "write" and number of "reads"
> > dominate. This won't be the case with multi-fentry.
> > And we still need to take all of them as "write" to update trampoline.
>
> this applies to scenario where we do not hold all the trampoline locks,
> in such case we could have race between single and multi attachment,
> while single/single attachment race stays safe
>
> as a fix the single attach would take read lock and multi attach would
> take write lock, so single/single race is allowed and single/multi is
> not ... showed in the patch below
>
> but it might be too much.. in a sense that there's already many locks
> involved in trampoline attach/detach, and simple global lock in multi
> or just sorting the ids would be enough
>

I'll just throw this idea here, but we don't have to do it right away.
What if instead of having a per-trampoline lock, we just have a common
relatively small pool of locks that all trampolines share based on
some hash (i.e., we deterministically map trampoline to one of the
locks). Then multi-attach can just go and grab all of them in
predefined order, while singular trampoline attaches will just get
their own one. We won't need to sort anything, we reduce the amount of
different locks. I don't think lock contention (due to lock sharing
for some trampolines) is a real issue to be worried about either.

> jirka
>
>
> ---
> diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> index b76bb545077b..edbc8f133dda 100644
> --- a/kernel/bpf/trampoline.c
> +++ b/kernel/bpf/trampoline.c
> @@ -30,6 +30,8 @@ static struct hlist_head trampoline_ip_table[TRAMPOLINE_TABLE_SIZE];
>  /* serializes access to trampoline tables */
>  static DEFINE_MUTEX(trampoline_mutex);
>
> +static DECLARE_RWSEM(multi_sem);
> +
>  struct bpf_trampoline_ops {
>         int (*register_fentry)(struct bpf_trampoline *tr, void *new_addr, void *data);
>         int (*unregister_fentry)(struct bpf_trampoline *tr, u32 orig_flags, void *old_addr, void *data);
> @@ -367,11 +369,7 @@ static struct bpf_trampoline *bpf_trampoline_lookup(u64 key, unsigned long ip)
>         head = &trampoline_ip_table[hash_64(tr->ip, TRAMPOLINE_HASH_BITS)];
>         hlist_add_head(&tr->hlist_ip, head);
>         refcount_set(&tr->refcnt, 1);
> -#ifdef CONFIG_LOCKDEP
> -       mutex_init_with_key(&tr->mutex, &__lockdep_no_track__);
> -#else
>         mutex_init(&tr->mutex);
> -#endif
>         for (i = 0; i < BPF_TRAMP_MAX; i++)
>                 INIT_HLIST_HEAD(&tr->progs_hlist[i]);
>  out:
> @@ -871,6 +869,8 @@ int bpf_trampoline_link_prog(struct bpf_tramp_node *node,
>  {
>         int err;
>
> +       guard(rwsem_read)(&multi_sem);
> +
>         mutex_lock(&tr->mutex);
>         err = __bpf_trampoline_link_prog(node, tr, tgt_prog, &trampoline_ops, NULL);
>         mutex_unlock(&tr->mutex);
> @@ -916,6 +916,8 @@ int bpf_trampoline_unlink_prog(struct bpf_tramp_node *node,
>  {
>         int err;
>
> +       guard(rwsem_read)(&multi_sem);
> +
>         mutex_lock(&tr->mutex);
>         err = __bpf_trampoline_unlink_prog(node, tr, tgt_prog, &trampoline_ops, NULL);
>         mutex_unlock(&tr->mutex);
> @@ -1463,6 +1465,8 @@ int bpf_trampoline_multi_attach(struct bpf_prog *prog, u32 *ids,
>         struct bpf_trampoline *tr;
>         u64 key;
>
> +       guard(rwsem_write)(&multi_sem);
> +
>         data.reg = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
>         if (!data.reg)
>                 return -ENOMEM;
> @@ -1494,12 +1498,10 @@ int bpf_trampoline_multi_attach(struct bpf_prog *prog, u32 *ids,
>                 tr = mnode->trampoline;
>
>                 mutex_lock(&tr->mutex);
> -
>                 err = __bpf_trampoline_link_prog(&mnode->node, tr, NULL, &trampoline_multi_ops, &data);
> -               if (err) {
> -                       mutex_unlock(&tr->mutex);
> +               mutex_unlock(&tr->mutex);
> +               if (err)
>                         goto rollback_unlink;
> -               }
>         }
>
>         if (ftrace_hash_count(data.reg)) {
> @@ -1516,11 +1518,6 @@ int bpf_trampoline_multi_attach(struct bpf_prog *prog, u32 *ids,
>                 }
>         }
>
> -       for (i = 0; i < cnt; i++) {
> -               tr = link->nodes[i].trampoline;
> -               mutex_unlock(&tr->mutex);
> -       }
> -
>         free_fentry_multi_data(&data);
>         return 0;
>
> @@ -1528,6 +1525,7 @@ int bpf_trampoline_multi_attach(struct bpf_prog *prog, u32 *ids,
>         for (j = 0; j < i; j++) {
>                 mnode = &link->nodes[j];
>                 tr = mnode->trampoline;
> +               mutex_lock(&tr->mutex);
>                 WARN_ON_ONCE(__bpf_trampoline_unlink_prog(&mnode->node, tr, NULL,
>                              &trampoline_multi_ops, &data));
>                 mutex_unlock(&tr->mutex);
> @@ -1550,6 +1548,8 @@ int bpf_trampoline_multi_detach(struct bpf_prog *prog, struct bpf_tracing_multi_
>         int i, cnt = link->nodes_cnt;
>         struct bpf_trampoline *tr;
>
> +       guard(rwsem_write)(&multi_sem);
> +
>         data.unreg = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
>         if (!data.unreg)
>                 return -ENOMEM;
> @@ -1567,6 +1567,7 @@ int bpf_trampoline_multi_detach(struct bpf_prog *prog, struct bpf_tracing_multi_
>                 mutex_lock(&tr->mutex);
>                 WARN_ON_ONCE(__bpf_trampoline_unlink_prog(&mnode->node, tr, NULL,
>                                                           &trampoline_multi_ops, &data));
> +               mutex_unlock(&tr->mutex);
>         }
>
>         if (ftrace_hash_count(data.unreg))
> @@ -1576,7 +1577,6 @@ int bpf_trampoline_multi_detach(struct bpf_prog *prog, struct bpf_tracing_multi_
>
>         for (i = 0; i < cnt; i++) {
>                 tr = link->nodes[i].trampoline;
> -               mutex_unlock(&tr->mutex);
>                 bpf_trampoline_put(tr);
>         }
>

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC bpf-next 00/12] bpf: tracing_multi link
  2026-02-06 17:03             ` Andrii Nakryiko
@ 2026-02-08 20:54               ` Jiri Olsa
  0 siblings, 0 replies; 54+ messages in thread
From: Jiri Olsa @ 2026-02-08 20:54 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Jiri Olsa, Alexei Starovoitov, Alexei Starovoitov,
	Daniel Borkmann, Andrii Nakryiko, bpf, linux-trace-kernel,
	Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song,
	Menglong Dong, Steven Rostedt

On Fri, Feb 06, 2026 at 09:03:29AM -0800, Andrii Nakryiko wrote:
> On Fri, Feb 6, 2026 at 12:18 AM Jiri Olsa <olsajiri@gmail.com> wrote:
> >
> > On Thu, Feb 05, 2026 at 07:55:19AM -0800, Alexei Starovoitov wrote:
> > > On Thu, Feb 5, 2026 at 12:55 AM Jiri Olsa <olsajiri@gmail.com> wrote:
> > > >
> > > > On Wed, Feb 04, 2026 at 08:06:50AM -0800, Alexei Starovoitov wrote:
> > > > > On Wed, Feb 4, 2026 at 4:36 AM Jiri Olsa <olsajiri@gmail.com> wrote:
> > > > > >
> > > > > > On Tue, Feb 03, 2026 at 03:17:05PM -0800, Alexei Starovoitov wrote:
> > > > > > > On Tue, Feb 3, 2026 at 1:38 AM Jiri Olsa <jolsa@kernel.org> wrote:
> > > > > > > >
> > > > > > > > hi,
> > > > > > > > as an option to Menglong's change [1] I'm sending proposal for tracing_multi
> > > > > > > > link that does not add static trampoline but attaches program to all needed
> > > > > > > > trampolines.
> > > > > > > >
> > > > > > > > This approach keeps the same performance but has some drawbacks:
> > > > > > > >
> > > > > > > >  - when attaching 20k functions we allocate and attach 20k trampolines
> > > > > > > >  - during attachment we hold each trampoline mutex, so for above
> > > > > > > >    20k functions we will hold 20k mutexes during the attachment,
> > > > > > > >    should be very prone to deadlock, but haven't hit it yet
> > > > > > >
> > > > > > > If you check that it's sorted and always take them in the same order
> > > > > > > then there will be no deadlock.
> > > > > > > Or just grab one global mutex first and then grab trampolines mutexes
> > > > > > > next in any order. The global one will serialize this attach operation.
> > > > > > >
> > > > > > > > > It looks the trampoline allocations/generation might not be a big problem
> > > > > > > > and I'll try to find a solution for holding that many mutexes. If there's
> > > > > > > > no better solution I think having one read/write mutex for tracing multi
> > > > > > > > link attach/detach should work.
> > > > > > >
> > > > > > > If you mean to have one global mutex as I proposed above then I don't see
> > > > > > > a downside. It only serializes multiple libbpf calls.
> > > > > >
> > > > > > we also need to serialize it with standard single trampoline attach,
> > > > > > because the direct ftrace update is now done under trampoline->mutex:
> > > > > >
> > > > > >   bpf_trampoline_link_prog(tr)
> > > > > >   {
> > > > > >     mutex_lock(&tr->mutex);
> > > > > >     ...
> > > > > >     update_ftrace_direct_*
> > > > > >     ...
> > > > > >     mutex_unlock(&tr->mutex);
> > > > > >   }
> > > > > >
> > > > > > for tracing_multi we would link the program first (with tr->mutex)
> > > > > > and do the bulk ftrace update later (without tr->mutex)
> > > > > >
> > > > > >   {
> > > > > >     for each involved trampoline:
> > > > > >       bpf_trampoline_link_prog
> > > > > >
> > > > > >     --> and here we could race with some other thread doing single
> > > > > >         trampoline attach
> > > > > >
> > > > > >     update_ftrace_direct_*
> > > > > >   }
> > > > > >
> > > > > > note the current version locks all tr->mutex instances all the way
> > > > > > through the update_ftrace_direct_* update
> > > > > >
> > > > > > I think we could use global rwsem and take read lock on single
> > > > > > trampoline attach path and write lock on tracing_multi attach,
> > > > > >
> > > > > > I thought we could take direct_mutex early, but that would mean
> > > > > > different order with trampoline mutex than we already have in
> > > > > > single attach path
> > > > >
> > > > > I feel we're talking past each other.
> > > > > I meant:
> > > > >
> > > > > For multi:
> > > > > 1. take some global mutex
> > > > > 2. take N tramp mutexes in any order
> > > > >
> > > > > For single:
> > > > > 1. take that 1 specific tramp mutex.
> > > >
> > > > ah ok, I understand, it's to prevent the lockup but keep holding all
> > > > the trampolines locks.. the rwsem I mentioned was for the 'fix', where
> > > > we do not take all the trampolines locks
> > >
> > > I don't understand how rwsem would help.
> > > All the operations on trampoline are protected by mutex.
> > > Switching to rw makes sense only if we can designate certain
> > > operations as "read" and others as "write" and number of "reads"
> > > dominate. This won't be the case with multi-fentry.
> > > And we still need to take all of them as "write" to update trampoline.
> >
> > this applies to scenario where we do not hold all the trampoline locks,
> > in such case we could have race between single and multi attachment,
> > while single/single attachment race stays safe
> >
> > as a fix the single attach would take read lock and multi attach would
> > take write lock, so single/single race is allowed and single/multi is
> > not ... showed in the patch below
> >
> > but it might be too much.. in a sense that there's already many locks
> > involved in trampoline attach/detach, and simple global lock in multi
> > or just sorting the ids would be enough
> >
> 
> I'll just throw this idea here, but we don't have to do it right away.
> What if instead of having a per-trampoline lock, we just have a common
> relatively small pool of locks that all trampolines share based on
> some hash (i.e., we deterministically map trampoline to one of the
> locks). Then multi-attach can just go and grab all of them in
> predefined order, while singular trampoline attaches will just get
> their own one. We won't need to sort anything, we reduce the amount of
> different locks. I don't think lock contention (due to lock sharing
> for some trampolines) is a real issue to be worried about either.

nice idea, I'll check on that

thanks,
jirka

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC bpf-next 06/12] bpf: Add bpf_trampoline_multi_attach/detach functions
  2026-02-05 13:45     ` Jiri Olsa
@ 2026-02-11  8:04       ` Menglong Dong
  0 siblings, 0 replies; 54+ messages in thread
From: Menglong Dong @ 2026-02-11  8:04 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, Menglong Dong, Steven Rostedt, Mark Rutland

On 2026/2/5 21:45, Jiri Olsa wrote:
> On Thu, Feb 05, 2026 at 05:16:49PM +0800, Menglong Dong wrote:
> > On 2026/2/3 17:38 Jiri Olsa <jolsa@kernel.org> wrote:
> > > Adding bpf_trampoline_multi_attach/detach functions that allows
> > > to attach/detach multi tracing trampoline.
> > > 
> > > The attachment is defined with bpf_program and array of BTF ids
> > > of functions to attach the bpf program to.
> > > 
> > [...]
> > > @@ -367,7 +367,11 @@ static struct bpf_trampoline *bpf_trampoline_lookup(u64 key, unsigned long ip)
> > >  	head = &trampoline_ip_table[hash_64(tr->ip, TRAMPOLINE_HASH_BITS)];
> > >  	hlist_add_head(&tr->hlist_ip, head);
> > >  	refcount_set(&tr->refcnt, 1);
> > > +#ifdef CONFIG_LOCKDEP
> > > +	mutex_init_with_key(&tr->mutex, &__lockdep_no_track__);
> > > +#else
> > >  	mutex_init(&tr->mutex);
> > > +#endif
> > >  	for (i = 0; i < BPF_TRAMP_MAX; i++)
> > >  		INIT_HLIST_HEAD(&tr->progs_hlist[i]);
> > >  out:
> > > @@ -1400,6 +1404,188 @@ int __weak arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags,
> > >  	return -ENOTSUPP;
> > >  }
> > >  
> > > +#if defined(CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS) && defined(CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS)
> > 
> > Hi, Jiri. It's great to see your tracing_multi link finally. It looks great ;)
> 
> heya, thanks ;-)
> 
> > 
> > After digging a little deeper into SINGLE_FTRACE_DIRECT_OPS, I
> > understand why it is only supported on x86_64 for now. It seems a
> > little hard to implement on the other archs, as it needs a
> > restructuring of the ftrace direct call implementation.
> > 
> > So do we need some more ftrace API here to make the tracing multi-link
> > independent from SINGLE_FTRACE_DIRECT_OPS? Otherwise, we can only
> > use it on x86_64.
> 
> I tried to describe it in commit [2] changelog:
> 
>     At the moment we can enable this only on x86 arch, because arm relies
>     on ftrace_ops object representing just single trampoline image (stored
>     in ftrace_ops::direct_call). Archs that do not support this will continue
>     to use *_ftrace_direct api.

Ah, I didn't notice this part before. Thanks for the explanation ;)

> 
> > 
> > Have you ever tried to implement the SINGLE_FTRACE_DIRECT_OPS on arm64?
> > The direct call on arm64 is so complex, and I didn't work it out :/
> 
> yes, it seems to be difficult atm; Mark commented on that in [1].
> I don't know arm well enough to be of much help here, cc-ing Mark
> 
> jirka
> 
> 
> [1] https://lore.kernel.org/bpf/aIyNOd18TRLu8EpY@J2N7QTR9R3/
> [2] 424f6a361096 ("bpf,x86: Use single ftrace_ops for direct calls")
> 
> > 
> > Thanks!
> > Menglong Dong
> > 
> > > +
> > > +struct fentry_multi_data {
> > > +	struct ftrace_hash *unreg;
> > > +	struct ftrace_hash *modify;
> > > +	struct ftrace_hash *reg;
> > > +};
> > > +
> > [...]

* Re: [RFC bpf-next 05/12] bpf: Add multi tracing attach types
  2026-02-03 10:13   ` bot+bpf-ci
@ 2026-02-17 22:05     ` Jiri Olsa
  0 siblings, 0 replies; 54+ messages in thread
From: Jiri Olsa @ 2026-02-17 22:05 UTC (permalink / raw)
  To: bot+bpf-ci
  Cc: ast, daniel, andrii, bpf, linux-trace-kernel, kafai, eddyz87,
	songliubraving, yhs, menglong8.dong, rostedt, martin.lau,
	yonghong.song, clm, ihor.solodrai

On Tue, Feb 03, 2026 at 10:13:53AM +0000, bot+bpf-ci@kernel.org wrote:
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 6b62b6d57..fb52ba2f7 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> 
> [ ... ]
> 
> > @@ -25069,7 +25074,8 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
> >  		return 0;
> >  	} else if (prog->expected_attach_type == BPF_TRACE_ITER) {
> >  		return bpf_iter_prog_supported(prog);
> > -	}
> > +	} else if (is_tracing_multi(prog->expected_attach_type))
> > +		return prog->type == BPF_PROG_TYPE_TRACING ? 0 : -EINVAL;
> 
> This isn't a bug, but should this branch have braces for consistency with the
> preceding branches in the same if/else chain? The BPF_TRACE_RAW_TP and
> BPF_TRACE_ITER branches both use braces.

right, will add it, thnx

jirka

> 
> 
> ---
> AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
> See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
> 
> CI run summary: https://github.com/kernel-patches/bpf/actions/runs/21625440628
> 
> AI-authorship-score: low
> AI-authorship-explanation: The commit message contains natural colloquial language and personal attribution typical of human kernel developers, and the code follows established BPF patterns with typical human oversights in style consistency.
> issues-found: 1
> issue-severity-score: low
> issue-severity-explanation: Minor coding style inconsistency with missing braces in an else-if branch that does not affect functionality.



* Re: [RFC bpf-next 06/12] bpf: Add bpf_trampoline_multi_attach/detach functions
  2026-02-03 10:14   ` bot+bpf-ci
@ 2026-02-17 22:05     ` Jiri Olsa
  0 siblings, 0 replies; 54+ messages in thread
From: Jiri Olsa @ 2026-02-17 22:05 UTC (permalink / raw)
  To: bot+bpf-ci
  Cc: ast, daniel, andrii, bpf, linux-trace-kernel, kafai, eddyz87,
	songliubraving, yhs, menglong8.dong, rostedt, martin.lau,
	yonghong.song, clm, ihor.solodrai

On Tue, Feb 03, 2026 at 10:14:02AM +0000, bot+bpf-ci@kernel.org wrote:
> > diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> > index 2be2f1d0b..b76bb5450 100644
> > --- a/kernel/bpf/trampoline.c
> > +++ b/kernel/bpf/trampoline.c
> 
> [ ... ]
> 
> > +int bpf_trampoline_multi_attach(struct bpf_prog *prog, u32 *ids,
> > +				struct bpf_tracing_multi_link *link)
> > +{
> > +	struct bpf_attach_target_info tgt_info = {};
> > +	struct bpf_tracing_multi_node *mnode;
> > +	int j, i, err, cnt = link->nodes_cnt;
> > +	struct fentry_multi_data data = {};
> > +	struct bpf_trampoline *tr;
> > +	u64 key;
> > +
> > +	data.reg = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
> > +	if (!data.reg)
> > +		return -ENOMEM;
> > +
> > +	data.modify = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
> > +	if (!data.modify) {
> > +		free_ftrace_hash(data.reg);
> > +		return -ENOMEM;
> > +	}
> > +
> > +	for (i = 0; i < cnt; i++) {
> > +		mnode = &link->nodes[i];
> > +		err = bpf_check_attach_target(NULL, prog, NULL, ids[i], &tgt_info);
> > +		if (err)
> > +			goto rollback_put;
> > +
> > +		key = bpf_trampoline_compute_key(NULL, prog->aux->attach_btf, ids[i]);
> > +
> > +		tr = bpf_trampoline_get(key, &tgt_info);
> > +		if (!tr)
> > +			goto rollback_put;
> 
> When bpf_trampoline_get() returns NULL here, the code jumps to
> rollback_put without setting err. If bpf_check_attach_target() succeeded
> on this iteration (setting err to 0), the function will return 0 instead
> of an error code.
> 
> Should this be something like:
> 
>     tr = bpf_trampoline_get(key, &tgt_info);
>     if (!tr) {
>         err = -ENOMEM;
>         goto rollback_put;
>     }
> 

yep, we should, will add, thnx

jirka

> > +
> > +		mnode->trampoline = tr;
> > +		mnode->node.prog = prog;
> > +	}
> 
> [ ... ]
> 
> > +rollback_put:
> > +	for (j = 0; j < i; j++) {
> > +		mnode = &link->nodes[j];
> > +		bpf_trampoline_put(mnode->trampoline);
> > +	}
> > +
> > +	free_fentry_multi_data(&data);
> > +	return err;
> > +}
> 
> 
> ---
> AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
> See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
> 
> CI run summary: https://github.com/kernel-patches/bpf/actions/runs/21625440628
> 
> AI-authorship-score: low
> AI-authorship-explanation: The code follows established kernel patterns and contains a human-typical oversight error (missing error assignment); the commit message has a minor copy-paste error typical of human editing.
> issues-found: 1
> issue-severity-score: low
> issue-severity-explanation: Missing error code when bpf_trampoline_get fails causes silent failure under memory pressure, but does not cause crashes or data corruption.



* Re: [RFC bpf-next 07/12] bpf: Add support to create tracing multi link
  2026-02-03 10:13   ` bot+bpf-ci
@ 2026-02-17 22:05     ` Jiri Olsa
  0 siblings, 0 replies; 54+ messages in thread
From: Jiri Olsa @ 2026-02-17 22:05 UTC (permalink / raw)
  To: bot+bpf-ci
  Cc: ast, daniel, andrii, bpf, linux-trace-kernel, kafai, eddyz87,
	songliubraving, yhs, menglong8.dong, rostedt, martin.lau,
	yonghong.song, clm, ihor.solodrai

On Tue, Feb 03, 2026 at 10:13:58AM +0000, bot+bpf-ci@kernel.org wrote:
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index 68600972a..010785246 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> 
> [ ... ]
> 
> > @@ -1863,6 +1864,10 @@ union bpf_attr {
> > 				};
> > 				__u64		expected_revision;
> > 			} cgroup;
> > +			struct {
> > +				__aligned_u64	btf_ids;	/* addresses to attach */
> > +				__u32		btf_ids_cnt;	/* addresses count */
> > +			} tracing_multi;
> 
> The comments say "addresses to attach" and "addresses count", but the field is
> named btf_ids and contains BTF type IDs, not addresses. Should these comments
> say "BTF IDs to attach" and "BTF IDs count" instead?

yep, wrong comments, will fix

> 
> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index f7baeb827..82e625aa0 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> 
> [ ... ]
> 
> > +static void bpf_tracing_multi_link_show_fdinfo(const struct bpf_link *link,
> > +					       struct seq_file *seq)
> > +{
> > +	struct bpf_tracing_multi_link *tr_link =
> > +		container_of(link, struct bpf_tracing_multi_link, link);
> > +
> > +	seq_printf(seq, "attach_type:\t%d\n", tr_link->attach_type);
> > +}
> > +
> > +static int bpf_tracing_multi_link_fill_link_info(const struct bpf_link *link,
> > +						 struct bpf_link_info *info)
> > +{
> > +	struct bpf_tracing_multi_link *tr_link =
> > +		container_of(link, struct bpf_tracing_multi_link, link);
> > +
> > +	info->tracing.attach_type = tr_link->attach_type;
> > +	return 0;
> > +}
> 
> Both functions above use tr_link->attach_type, but looking at
> bpf_tracing_multi_attach() below, this field is never initialized:
> 
> > +int bpf_tracing_multi_attach(struct bpf_prog *prog, const union bpf_attr *attr)
> > +{
> 
> [ ... ]
> 
> > +	link = kzalloc(struct_size(link, nodes, cnt), GFP_KERNEL);
> > +	if (!link) {
> > +		err = -ENOMEM;
> > +		goto error;
> > +	}
> > +
> > +	link->nodes_cnt = cnt;
> > +
> > +	bpf_link_init(&link->link, BPF_LINK_TYPE_TRACING_MULTI,
> > +		      &bpf_tracing_multi_link_lops, prog, prog->expected_attach_type);
> 
> bpf_link_init() sets link->link.attach_type (inside the embedded struct bpf_link),
> but struct bpf_tracing_multi_link has a separate attach_type field that remains
> zero-initialized from kzalloc.
> 
> The show_fdinfo and fill_link_info callbacks read tr_link->attach_type which
> will always be 0 (BPF_CGROUP_INET_INGRESS) instead of the actual attach type
> (BPF_TRACE_FENTRY_MULTI or BPF_TRACE_FEXIT_MULTI).
> 
> Looking at the existing bpf_tracing_link_show_fdinfo() and
> bpf_tracing_link_fill_link_info() in kernel/bpf/syscall.c, they use
> link->attach_type (the inner struct's field). Should tr_link->attach_type
> either be removed and link->attach_type used instead, or should it be
> explicitly initialized to prog->expected_attach_type?

right, I'll add linkinfo/fdinfo in separate change, thnx

jirka


* Re: [RFC bpf-next 09/12] libbpf: Add support to create tracing multi link
  2026-02-03 10:14   ` bot+bpf-ci
@ 2026-02-17 22:05     ` Jiri Olsa
  0 siblings, 0 replies; 54+ messages in thread
From: Jiri Olsa @ 2026-02-17 22:05 UTC (permalink / raw)
  To: bot+bpf-ci
  Cc: ast, daniel, andrii, bpf, linux-trace-kernel, kafai, eddyz87,
	songliubraving, yhs, menglong8.dong, rostedt, martin.lau,
	yonghong.song, clm, ihor.solodrai

On Tue, Feb 03, 2026 at 10:14:03AM +0000, bot+bpf-ci@kernel.org wrote:
> > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > index 0c8bf0b5c..a16243300 100644
> > --- a/tools/lib/bpf/libbpf.c
> > +++ b/tools/lib/bpf/libbpf.c
> 
> [ ... ]
> 
> > +struct bpf_link *
> > +bpf_program__attach_tracing_multi(const struct bpf_program *prog, const char *pattern,
> > +				  const struct bpf_tracing_multi_opts *opts)
> > +{
> > +	LIBBPF_OPTS(bpf_link_create_opts, lopts);
> > +	__u32 *btf_ids, cnt, *free_ids = NULL;
> > +	int prog_fd, link_fd, err;
> > +	struct bpf_link *link;
> > +
> > +	btf_ids = OPTS_GET(opts, btf_ids, false);
> > +	cnt = OPTS_GET(opts, cnt, false);
> > +
> > +	if (!pattern && !btf_ids && !cnt)
> > +		return libbpf_err_ptr(-EINVAL);
> > +	if (pattern && (btf_ids || cnt))
> > +		return libbpf_err_ptr(-EINVAL);
> > +
> > +	if (pattern) {
> > +		err = bpf_object__load_vmlinux_btf(prog->obj, true);
> > +		if (err)
> > +			return libbpf_err_ptr(err);
> > +
> > +		cnt = btf__find_by_glob_kind(prog->obj->btf_vmlinux, BTF_KIND_FUNC,
> > +					     pattern, NULL, &btf_ids);
> > +		if (cnt <= 0)
> > +			return libbpf_err_ptr(-EINVAL);
> > +		free_ids = btf_ids;
> > +	}
> > +
> > +	lopts.tracing_multi.btf_ids = btf_ids;
> > +	lopts.tracing_multi.btf_ids_cnt = cnt;
> > +
> > +	link = calloc(1, sizeof(*link));
> > +	if (!link)
> > +		return libbpf_err_ptr(-ENOMEM);
> 
> If calloc fails here and pattern was provided, btf__find_by_glob_kind() will
> have allocated btf_ids (tracked by free_ids). Does this leak free_ids? The
> error label below frees it properly, but this return bypasses that path.
> 
> Something like:
> 
>     if (!link) {
>         free(free_ids);
>         return libbpf_err_ptr(-ENOMEM);
>     }
> 
> or using goto error would fix this.

yep, will fix

> 
> > +	link->detach = &bpf_link__detach_fd;
> > +
> > +	prog_fd = bpf_program__fd(prog);
> > +	link_fd = bpf_link_create(prog_fd, 0, prog->expected_attach_type, &lopts);
> > +	if (link_fd < 0) {
> > +		err = -errno;
> > +		pr_warn("prog '%s': failed to attach: %s\n", prog->name, errstr(err));
> > +		goto error;
> > +	}
> > +	link->fd = link_fd;
> > +	free(free_ids);
> > +	return link;
> > +error:
> > +	free(link);
> > +	free(free_ids);
> > +	return libbpf_err_ptr(err);
> > +}
> > +
> > +static int attach_tracing_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link)
> > +{
> > +	const char *spec;
> > +	char *pattern;
> > +	bool is_fexit;
> > +	int n;
> > +
> > +	/* no auto-attach for SEC("fentry.multi") and SEC("fexit.multi") */
> > +	if (strcmp(prog->sec_name, "fentry.multi") == 0 ||
> > +	    strcmp(prog->sec_name, "fexit.multi") == 0)
> > +		return 0;
> > +
> > +	is_fexit = str_has_pfx(prog->sec_name, "fexit.multi/");
> > +	if (is_fexit)
> > +		spec = prog->sec_name + sizeof("fexit.multi/") - 1;
> > +	else
> > +		spec = prog->sec_name + sizeof("fentry.multi/") - 1;
> > +
> > +	n = sscanf(spec, "%m[a-zA-Z0-9_.*?]", &pattern);
> > +	if (n < 1) {
> > +		pr_warn("tracing multi pattern is invalid: %s\n", pattern);
> 
> When sscanf fails to match (n < 1), pattern is never assigned because %m only
> allocates on successful match. Should this print spec instead of the
> uninitialized pattern? Compare with attach_kprobe_multi() which uses:
> 
>     pr_warn("kprobe multi pattern is invalid: %s\n", spec);

yes
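
For reference, the `%m` semantics the bot points out can be shown in a
standalone sketch (hypothetical helper, not libbpf code): sscanf() assigns
the pointer only on a successful match, so a failed match leaves it
unassigned and a successful one hands the caller an allocation to free:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Mirrors the scanset used by attach_tracing_multi(): %m[] allocates
 * the matched string, but only when the match succeeds. */
static char *scan_pattern(const char *spec)
{
	char *pattern = NULL;
	int n;

	n = sscanf(spec, "%m[a-zA-Z0-9_.*?]", &pattern);
	if (n < 1) {
		/* no match: sscanf never assigned pattern, so it must
		 * not be printed or freed as if it were valid; report
		 * spec instead */
		fprintf(stderr, "pattern is invalid: %s\n", spec);
		return NULL;
	}
	return pattern;	/* caller must free() */
}
```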

> 
> > +		return -EINVAL;
> > +	}
> > +
> > +	*link = bpf_program__attach_tracing_multi(prog, pattern, NULL);
> > +	return libbpf_get_error(*link);
> 
> Is pattern being leaked here? sscanf with %m allocates the string, but there
> is no free(pattern) before returning. Compare with attach_kprobe_multi():
> 
>     *link = bpf_program__attach_kprobe_multi_opts(prog, pattern, &opts);
>     free(pattern);
>     return libbpf_get_error(*link);

yep, will fix, thnx

jirka


* Re: [RFC bpf-next 09/12] libbpf: Add support to create tracing multi link
  2026-02-04 19:05   ` Andrii Nakryiko
@ 2026-02-17 22:06     ` Jiri Olsa
  0 siblings, 0 replies; 54+ messages in thread
From: Jiri Olsa @ 2026-02-17 22:06 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	linux-trace-kernel, Martin KaFai Lau, Eduard Zingerman, Song Liu,
	Yonghong Song, Menglong Dong, Steven Rostedt

On Wed, Feb 04, 2026 at 11:05:16AM -0800, Andrii Nakryiko wrote:
> On Tue, Feb 3, 2026 at 1:40 AM Jiri Olsa <jolsa@kernel.org> wrote:
> >
> > Adding new interface function to attach programs with tracing
> > multi link:
> >
> >   bpf_program__attach_tracing_multi(const struct bpf_program *prog,
> >                                     const char *pattern,
> >                                     const struct bpf_tracing_multi_opts *opts);
> >
> > The program is attached to functions specified by a pattern or by
> > BTF IDs specified in the bpf_tracing_multi_opts object.
> >
> > Adding support for new sections to attach programs with above
> > functions:
> >
> >    fentry.multi/pattern
> >    fexit.multi/pattern
> >
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  tools/lib/bpf/bpf.c      |  7 ++++
> >  tools/lib/bpf/bpf.h      |  4 ++
> >  tools/lib/bpf/libbpf.c   | 87 ++++++++++++++++++++++++++++++++++++++++
> >  tools/lib/bpf/libbpf.h   | 14 +++++++
> >  tools/lib/bpf/libbpf.map |  1 +
> >  5 files changed, 113 insertions(+)
> 
> [...]
> 
> >  static const char * const map_type_name[] = {
> > @@ -9814,6 +9817,7 @@ static int attach_kprobe_session(const struct bpf_program *prog, long cookie, st
> >  static int attach_uprobe_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link);
> >  static int attach_lsm(const struct bpf_program *prog, long cookie, struct bpf_link **link);
> >  static int attach_iter(const struct bpf_program *prog, long cookie, struct bpf_link **link);
> > +static int attach_tracing_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link);
> >
> >  static const struct bpf_sec_def section_defs[] = {
> >         SEC_DEF("socket",               SOCKET_FILTER, 0, SEC_NONE),
> > @@ -9862,6 +9866,8 @@ static const struct bpf_sec_def section_defs[] = {
> >         SEC_DEF("fexit.s+",             TRACING, BPF_TRACE_FEXIT, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_trace),
> >         SEC_DEF("fsession+",            TRACING, BPF_TRACE_FSESSION, SEC_ATTACH_BTF, attach_trace),
> >         SEC_DEF("fsession.s+",          TRACING, BPF_TRACE_FSESSION, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_trace),
> > +       SEC_DEF("fentry.multi+",        TRACING, BPF_TRACE_FENTRY_MULTI, 0, attach_tracing_multi),
> > +       SEC_DEF("fexit.multi+",         TRACING, BPF_TRACE_FEXIT_MULTI, 0, attach_tracing_multi),
> >         SEC_DEF("freplace+",            EXT, 0, SEC_ATTACH_BTF, attach_trace),
> >         SEC_DEF("lsm+",                 LSM, BPF_LSM_MAC, SEC_ATTACH_BTF, attach_lsm),
> >         SEC_DEF("lsm.s+",               LSM, BPF_LSM_MAC, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_lsm),
> > @@ -12237,6 +12243,87 @@ static int attach_uprobe_multi(const struct bpf_program *prog, long cookie, stru
> >         return ret;
> >  }
> >
> > +struct bpf_link *
> > +bpf_program__attach_tracing_multi(const struct bpf_program *prog, const char *pattern,
> > +                                 const struct bpf_tracing_multi_opts *opts)
> > +{
> > +       LIBBPF_OPTS(bpf_link_create_opts, lopts);
> > +       __u32 *btf_ids, cnt, *free_ids = NULL;
> > +       int prog_fd, link_fd, err;
> > +       struct bpf_link *link;
> > +
> > +       btf_ids = OPTS_GET(opts, btf_ids, false);
> > +       cnt = OPTS_GET(opts, cnt, false);
> > +
> > +       if (!pattern && !btf_ids && !cnt)
> 
> let's check that either both btf_ids and cnt are specified or none
> 
> then we can check that either pattern or btf_ids are specified
> 
> still two checks, but will capture all the bad cases

ok
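
A minimal sketch of the two suggested checks (hypothetical helper name,
plain C types standing in for the libbpf opts):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Validate bpf_program__attach_tracing_multi() inputs: btf_ids and cnt
 * must come together or not at all, and then exactly one of pattern or
 * btf_ids must be specified. */
static int validate_tracing_multi_opts(const char *pattern,
				       const unsigned int *btf_ids,
				       unsigned int cnt)
{
	/* btf_ids and cnt are a pair */
	if (!!btf_ids != !!cnt)
		return -EINVAL;
	/* exactly one of pattern / btf_ids */
	if (!pattern == !btf_ids)
		return -EINVAL;
	return 0;
}
```

Still two checks, but they reject pattern-plus-ids, ids-without-cnt,
cnt-without-ids, and the all-empty case.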

> 
> > +               return libbpf_err_ptr(-EINVAL);
> > +       if (pattern && (btf_ids || cnt))
> > +               return libbpf_err_ptr(-EINVAL);
> > +
> 
> [...]
> 
> >  struct bpf_uprobe_opts {
> >         /* size of this struct, for forward/backward compatibility */
> >         size_t sz;
> > diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> > index d18fbcea7578..a3ffb21270e9 100644
> > --- a/tools/lib/bpf/libbpf.map
> > +++ b/tools/lib/bpf/libbpf.map
> > @@ -358,6 +358,7 @@ LIBBPF_1.0.0 {
> >                 bpf_program__attach_ksyscall;
> >                 bpf_program__autoattach;
> >                 bpf_program__set_autoattach;
> > +               bpf_program__attach_tracing_multi;
> 
> stuck in the past? ;) we are in 1.7 cycle

a bit :) will fix, thnx

jirka


* Re: [RFC bpf-next 10/12] selftests/bpf: Add fentry tracing multi func test
  2026-02-03 10:13   ` bot+bpf-ci
@ 2026-02-17 22:06     ` Jiri Olsa
  0 siblings, 0 replies; 54+ messages in thread
From: Jiri Olsa @ 2026-02-17 22:06 UTC (permalink / raw)
  To: bot+bpf-ci
  Cc: ast, daniel, andrii, bpf, linux-trace-kernel, kafai, eddyz87,
	songliubraving, yhs, menglong8.dong, rostedt, martin.lau,
	yonghong.song, clm, ihor.solodrai

On Tue, Feb 03, 2026 at 10:13:55AM +0000, bot+bpf-ci@kernel.org wrote:
> > diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_check.c b/tools/testing/selftests/bpf/progs/tracing_multi_check.c
> > new file mode 100644
> > index 000000000..e5efa9884
> > --- /dev/null
> > +++ b/tools/testing/selftests/bpf/progs/tracing_multi_check.c
> 
> [ ... ]
> 
> > +	} else if (ip == &bpf_fentry_test3) {
> > +		char a, b;
>                       ^
> 
> The commit message states "We need to cast to real arguments types in
> multi_arg_check, because the checked value can be shorter than u64."
> 
> Should variable b be declared as int instead of char?
> 
> The kernel function signature is:
> 
>     noinline int bpf_fentry_test3(char a, int b, u64 c)
> 
> The second parameter is int, not char. While the test passes with the
> current test value of 5, this appears inconsistent with the stated goal
> of matching argument types.

yep, will fix


> 
> > +		__u64 c;
> 
> [ ... ]
> 
> > +	} else if (ip == &bpf_fentry_test6) {
> 
> [ ... ]
> 
> > +		e = (void*) value;;
> > +		if (bpf_get_func_arg(ctx, 5, &value))
> > +			return;
> > +		f = value;;
>                       ^^
> 
> This isn't a bug, but there are extra semicolons on these two lines.

ok, thnx

jirka


* Re: [RFC bpf-next 12/12] selftests/bpf: Add tracing multi benchmark test
  2026-02-03 10:13   ` bot+bpf-ci
@ 2026-02-17 22:06     ` Jiri Olsa
  0 siblings, 0 replies; 54+ messages in thread
From: Jiri Olsa @ 2026-02-17 22:06 UTC (permalink / raw)
  To: bot+bpf-ci
  Cc: ast, daniel, andrii, bpf, linux-trace-kernel, kafai, eddyz87,
	songliubraving, yhs, menglong8.dong, rostedt, martin.lau,
	yonghong.song, clm, ihor.solodrai

On Tue, Feb 03, 2026 at 10:13:57AM +0000, bot+bpf-ci@kernel.org wrote:
> > diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
> > index 3ccf0d4ed..575454e31 100644
> > --- a/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
> > +++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi.c
> > @@ -6,6 +6,9 @@
> >  #include <bpf/btf.h>
> >  #include <linux/btf.h>
> >  #include <search.h>
> > +#include <bpf/btf.h>
> > +#include <linux/btf.h>
> > +#include <search.h>
> 
> These three includes are duplicates of the lines immediately above them.
> Were they added by mistake?
> 
> [ ... ]
> 
> > +	t = btf__type_by_id(btf, proto->type);
> > +        while (t && btf_type_is_modifier(t))
> > +		t = btf__type_by_id(btf, t->type);
> 
> The while line uses spaces for indentation instead of a tab.

yes, will fix, thnx

jirka


end of thread, other threads:[~2026-02-17 22:06 UTC | newest]

Thread overview: 54+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-03  9:38 [RFC bpf-next 00/12] bpf: tracing_multi link Jiri Olsa
2026-02-03  9:38 ` [RFC bpf-next 01/12] ftrace: Add ftrace_hash_count function Jiri Olsa
2026-02-03 15:40   ` Steven Rostedt
2026-02-04 12:06     ` Jiri Olsa
2026-02-03  9:38 ` [RFC bpf-next 02/12] bpf: Add struct bpf_trampoline_ops object Jiri Olsa
2026-02-03  9:38 ` [RFC bpf-next 03/12] bpf: Add struct bpf_struct_ops_tramp_link object Jiri Olsa
2026-02-03  9:38 ` [RFC bpf-next 04/12] bpf: Add struct bpf_tramp_node object Jiri Olsa
2026-02-04 19:00   ` Andrii Nakryiko
2026-02-05  8:57     ` Jiri Olsa
2026-02-05 22:27       ` Andrii Nakryiko
2026-02-06  8:27         ` Jiri Olsa
2026-02-03  9:38 ` [RFC bpf-next 05/12] bpf: Add multi tracing attach types Jiri Olsa
2026-02-03 10:13   ` bot+bpf-ci
2026-02-17 22:05     ` Jiri Olsa
2026-02-04  2:20   ` Leon Hwang
2026-02-04 12:41     ` Jiri Olsa
2026-02-03  9:38 ` [RFC bpf-next 06/12] bpf: Add bpf_trampoline_multi_attach/detach functions Jiri Olsa
2026-02-03 10:14   ` bot+bpf-ci
2026-02-17 22:05     ` Jiri Olsa
2026-02-05  9:16   ` Menglong Dong
2026-02-05 13:45     ` Jiri Olsa
2026-02-11  8:04       ` Menglong Dong
2026-02-03  9:38 ` [RFC bpf-next 07/12] bpf: Add support to create tracing multi link Jiri Olsa
2026-02-03 10:13   ` bot+bpf-ci
2026-02-17 22:05     ` Jiri Olsa
2026-02-04 19:05   ` Andrii Nakryiko
2026-02-05  8:55     ` Jiri Olsa
2026-02-03  9:38 ` [RFC bpf-next 08/12] libbpf: Add btf__find_by_glob_kind function Jiri Olsa
2026-02-03 10:14   ` bot+bpf-ci
2026-02-04 19:04   ` Andrii Nakryiko
2026-02-05  8:57     ` Jiri Olsa
2026-02-05 22:45       ` Andrii Nakryiko
2026-02-06  8:43         ` Jiri Olsa
2026-02-06 16:58           ` Andrii Nakryiko
2026-02-03  9:38 ` [RFC bpf-next 09/12] libbpf: Add support to create tracing multi link Jiri Olsa
2026-02-03 10:14   ` bot+bpf-ci
2026-02-17 22:05     ` Jiri Olsa
2026-02-04 19:05   ` Andrii Nakryiko
2026-02-17 22:06     ` Jiri Olsa
2026-02-03  9:38 ` [RFC bpf-next 10/12] selftests/bpf: Add fentry tracing multi func test Jiri Olsa
2026-02-03 10:13   ` bot+bpf-ci
2026-02-17 22:06     ` Jiri Olsa
2026-02-03  9:38 ` [RFC bpf-next 11/12] selftests/bpf: Add fentry intersected " Jiri Olsa
2026-02-03  9:38 ` [RFC bpf-next 12/12] selftests/bpf: Add tracing multi benchmark test Jiri Olsa
2026-02-03 10:13   ` bot+bpf-ci
2026-02-17 22:06     ` Jiri Olsa
2026-02-03 23:17 ` [RFC bpf-next 00/12] bpf: tracing_multi link Alexei Starovoitov
2026-02-04 12:36   ` Jiri Olsa
2026-02-04 16:06     ` Alexei Starovoitov
2026-02-05  8:55       ` Jiri Olsa
2026-02-05 15:55         ` Alexei Starovoitov
2026-02-06  8:18           ` Jiri Olsa
2026-02-06 17:03             ` Andrii Nakryiko
2026-02-08 20:54               ` Jiri Olsa

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox