* [PATCHv6 bpf-next 0/9] ftrace,bpf: Use single direct ops for bpf trampolines
@ 2025-12-30 14:50 Jiri Olsa
2025-12-30 14:50 ` [PATCHv6 bpf-next 1/9] ftrace,bpf: Remove FTRACE_OPS_FL_JMP ftrace_ops flag Jiri Olsa
` (11 more replies)
0 siblings, 12 replies; 27+ messages in thread
From: Jiri Olsa @ 2025-12-30 14:50 UTC (permalink / raw)
To: Steven Rostedt, Florent Revest, Mark Rutland
Cc: bpf, linux-kernel, linux-trace-kernel, linux-arm-kernel,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Menglong Dong, Song Liu
hi,
while poking the multi-tracing interface I ended up with just one ftrace_ops
object to attach all trampolines.
This change allows us to use fewer direct API calls during attachment changes
in future code, in effect speeding up the attachment.
Even in the current code we get a speedup from using just a single ftrace_ops object.
- with current code:
Performance counter stats for 'bpftrace -e fentry:vmlinux:ksys_* {} -c true':
6,364,157,902 cycles:k
828,728,902 cycles:u
1,064,803,824 instructions:u # 1.28 insn per cycle
23,797,500,067 instructions:k # 3.74 insn per cycle
4.416004987 seconds time elapsed
0.164121000 seconds user
1.289550000 seconds sys
- with the fix:
Performance counter stats for 'bpftrace -e fentry:vmlinux:ksys_* {} -c true':
6,535,857,905 cycles:k
810,809,429 cycles:u
1,064,594,027 instructions:u # 1.31 insn per cycle
23,962,552,894 instructions:k # 3.67 insn per cycle
1.666961239 seconds time elapsed
0.157412000 seconds user
1.283396000 seconds sys
The speedup seems to be related to the fact that with a single ftrace_ops object
we no longer call ftrace_shutdown (we use ftrace_update_ops instead), so we skip
the synchronize rcu calls (~100ms each) at the end of that function.
rfc: https://lore.kernel.org/bpf/20250729102813.1531457-1-jolsa@kernel.org/
v1: https://lore.kernel.org/bpf/20250923215147.1571952-1-jolsa@kernel.org/
v2: https://lore.kernel.org/bpf/20251113123750.2507435-1-jolsa@kernel.org/
v3: https://lore.kernel.org/bpf/20251120212402.466524-1-jolsa@kernel.org/
v4: https://lore.kernel.org/bpf/20251203082402.78816-1-jolsa@kernel.org/
v5: https://lore.kernel.org/bpf/20251215211402.353056-10-jolsa@kernel.org/
v6 changes:
- rename add_hash_entry_direct to add_ftrace_hash_entry_direct [Steven]
- factor hash_add/hash_sub [Steven]
- add kerneldoc header for update_ftrace_direct_* functions [Steven]
- few assorted smaller fixes [Steven]
- added missing direct_ops wrappers for !CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
case [Steven]
v5 changes:
- do not export ftrace_hash object [Steven]
- fix update_ftrace_direct_add new_filter_hash leak [ci]
v4 changes:
- rebased on top of bpf-next/master (with jmp attach changes)
added patch 1 to deal with that
- added extra checks for update_ftrace_direct_del/mod to address
the ci bot review
v3 changes:
- rebased on top of bpf-next/master
- fixed update_ftrace_direct_del cleanup path
- added missing inline to update_ftrace_direct_* stubs
v2 changes:
- rebased on top of bpf-next/master plus Song's livepatch fixes [1]
- renamed the API functions [2] [Steven]
- do not export the new api [Steven]
- kept the original direct interface:
I'm not sure we want to merge both *_ftrace_direct and the new interface
into a single one. They differ a bit in semantics (hence the name change
Steven suggested [2]), and the changes are not that big, so we can easily
keep both APIs.
v1 changes:
- make the change x86 specific, after discussing with Mark options for
arm64 [Mark]
thanks,
jirka
[1] https://lore.kernel.org/bpf/20251027175023.1521602-1-song@kernel.org/
[2] https://lore.kernel.org/bpf/20250924050415.4aefcb91@batman.local.home/
---
Jiri Olsa (9):
ftrace,bpf: Remove FTRACE_OPS_FL_JMP ftrace_ops flag
ftrace: Make alloc_and_copy_ftrace_hash direct friendly
ftrace: Export some of hash related functions
ftrace: Add update_ftrace_direct_add function
ftrace: Add update_ftrace_direct_del function
ftrace: Add update_ftrace_direct_mod function
bpf: Add trampoline ip hash table
ftrace: Factor ftrace_ops ops_func interface
bpf,x86: Use single ftrace_ops for direct calls
arch/x86/Kconfig | 1 +
include/linux/bpf.h | 7 ++-
include/linux/ftrace.h | 31 +++++++++-
kernel/bpf/trampoline.c | 259 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---------------
kernel/trace/Kconfig | 3 +
kernel/trace/ftrace.c | 406 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--------
6 files changed, 632 insertions(+), 75 deletions(-)
^ permalink raw reply [flat|nested] 27+ messages in thread
* [PATCHv6 bpf-next 1/9] ftrace,bpf: Remove FTRACE_OPS_FL_JMP ftrace_ops flag
2025-12-30 14:50 [PATCHv6 bpf-next 0/9] ftrace,bpf: Use single direct ops for bpf trampolines Jiri Olsa
@ 2025-12-30 14:50 ` Jiri Olsa
2026-01-10 0:36 ` Andrii Nakryiko
2025-12-30 14:50 ` [PATCHv6 bpf-next 2/9] ftrace: Make alloc_and_copy_ftrace_hash direct friendly Jiri Olsa
` (10 subsequent siblings)
11 siblings, 1 reply; 27+ messages in thread
From: Jiri Olsa @ 2025-12-30 14:50 UTC (permalink / raw)
To: Steven Rostedt, Florent Revest, Mark Rutland
Cc: bpf, linux-kernel, linux-trace-kernel, linux-arm-kernel,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Menglong Dong, Song Liu
At the moment we allow the jmp attach only for ftrace_ops that have
FTRACE_OPS_FL_JMP set. This conflicts with the following changes,
where we use a single ftrace_ops object for all direct call sites,
so each could be attached via either call or jmp.
We already limit the jmp attach support with a config option and a bit
(LSB) set on the trampoline address. It turns out that's actually
enough to limit the jmp attach per architecture and only for chosen
addresses (those with the LSB set).
Each user of register_ftrace_direct or modify_ftrace_direct can set
the trampoline bit (LSB) to indicate it has to be attached by jmp.
The bpf trampoline generation code uses trampoline flags to generate
jmp-attach specific code, and the ftrace inner code uses the trampoline
bit (LSB) to handle the return from a jmp attachment, so there's no harm
in removing the FTRACE_OPS_FL_JMP bit.
The fexit/fmodret performance stays the same (it did not drop).
current code:
fentry : 77.904 ± 0.546M/s
fexit : 62.430 ± 0.554M/s
fmodret : 66.503 ± 0.902M/s
with this change:
fentry : 80.472 ± 0.061M/s
fexit : 63.995 ± 0.127M/s
fmodret : 67.362 ± 0.175M/s
Fixes: 25e4e3565d45 ("ftrace: Introduce FTRACE_OPS_FL_JMP")
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/ftrace.h | 1 -
kernel/bpf/trampoline.c | 32 ++++++++++++++------------------
kernel/trace/ftrace.c | 14 --------------
3 files changed, 14 insertions(+), 33 deletions(-)
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 770f0dc993cc..41c9bb08d4e4 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -359,7 +359,6 @@ enum {
FTRACE_OPS_FL_DIRECT = BIT(17),
FTRACE_OPS_FL_SUBOP = BIT(18),
FTRACE_OPS_FL_GRAPH = BIT(19),
- FTRACE_OPS_FL_JMP = BIT(20),
};
#ifndef CONFIG_DYNAMIC_FTRACE_WITH_ARGS
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 2a125d063e62..789ff4e1f40b 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -214,10 +214,15 @@ static int modify_fentry(struct bpf_trampoline *tr, u32 orig_flags,
int ret;
if (tr->func.ftrace_managed) {
+ unsigned long addr = (unsigned long) new_addr;
+
+ if (bpf_trampoline_use_jmp(tr->flags))
+ addr = ftrace_jmp_set(addr);
+
if (lock_direct_mutex)
- ret = modify_ftrace_direct(tr->fops, (long)new_addr);
+ ret = modify_ftrace_direct(tr->fops, addr);
else
- ret = modify_ftrace_direct_nolock(tr->fops, (long)new_addr);
+ ret = modify_ftrace_direct_nolock(tr->fops, addr);
} else {
ret = bpf_trampoline_update_fentry(tr, orig_flags, old_addr,
new_addr);
@@ -240,10 +245,15 @@ static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
}
if (tr->func.ftrace_managed) {
+ unsigned long addr = (unsigned long) new_addr;
+
+ if (bpf_trampoline_use_jmp(tr->flags))
+ addr = ftrace_jmp_set(addr);
+
ret = ftrace_set_filter_ip(tr->fops, (unsigned long)ip, 0, 1);
if (ret)
return ret;
- ret = register_ftrace_direct(tr->fops, (long)new_addr);
+ ret = register_ftrace_direct(tr->fops, addr);
} else {
ret = bpf_trampoline_update_fentry(tr, 0, NULL, new_addr);
}
@@ -499,13 +509,6 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
if (err)
goto out_free;
-#ifdef CONFIG_DYNAMIC_FTRACE_WITH_JMP
- if (bpf_trampoline_use_jmp(tr->flags))
- tr->fops->flags |= FTRACE_OPS_FL_JMP;
- else
- tr->fops->flags &= ~FTRACE_OPS_FL_JMP;
-#endif
-
WARN_ON(tr->cur_image && total == 0);
if (tr->cur_image)
/* progs already running at this address */
@@ -533,15 +536,8 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
tr->cur_image = im;
out:
/* If any error happens, restore previous flags */
- if (err) {
+ if (err)
tr->flags = orig_flags;
-#ifdef CONFIG_DYNAMIC_FTRACE_WITH_JMP
- if (bpf_trampoline_use_jmp(tr->flags))
- tr->fops->flags |= FTRACE_OPS_FL_JMP;
- else
- tr->fops->flags &= ~FTRACE_OPS_FL_JMP;
-#endif
- }
kfree(tlinks);
return err;
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 3ec2033c0774..f5f042ea079e 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -6043,15 +6043,8 @@ int register_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
if (ftrace_hash_empty(hash))
return -EINVAL;
- /* This is a "raw" address, and this should never happen. */
- if (WARN_ON_ONCE(ftrace_is_jmp(addr)))
- return -EINVAL;
-
mutex_lock(&direct_mutex);
- if (ops->flags & FTRACE_OPS_FL_JMP)
- addr = ftrace_jmp_set(addr);
-
/* Make sure requested entries are not already registered.. */
size = 1 << hash->size_bits;
for (i = 0; i < size; i++) {
@@ -6172,13 +6165,6 @@ __modify_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
lockdep_assert_held_once(&direct_mutex);
- /* This is a "raw" address, and this should never happen. */
- if (WARN_ON_ONCE(ftrace_is_jmp(addr)))
- return -EINVAL;
-
- if (ops->flags & FTRACE_OPS_FL_JMP)
- addr = ftrace_jmp_set(addr);
-
/* Enable the tmp_ops to have the same functions as the direct ops */
ftrace_ops_init(&tmp_ops);
tmp_ops.func_hash = ops->func_hash;
--
2.52.0
* [PATCHv6 bpf-next 2/9] ftrace: Make alloc_and_copy_ftrace_hash direct friendly
2025-12-30 14:50 [PATCHv6 bpf-next 0/9] ftrace,bpf: Use single direct ops for bpf trampolines Jiri Olsa
2025-12-30 14:50 ` [PATCHv6 bpf-next 1/9] ftrace,bpf: Remove FTRACE_OPS_FL_JMP ftrace_ops flag Jiri Olsa
@ 2025-12-30 14:50 ` Jiri Olsa
2025-12-30 14:50 ` [PATCHv6 bpf-next 3/9] ftrace: Export some of hash related functions Jiri Olsa
` (9 subsequent siblings)
11 siblings, 0 replies; 27+ messages in thread
From: Jiri Olsa @ 2025-12-30 14:50 UTC (permalink / raw)
To: Steven Rostedt, Florent Revest, Mark Rutland
Cc: bpf, linux-kernel, linux-trace-kernel, linux-arm-kernel,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Menglong Dong, Song Liu
Make alloc_and_copy_ftrace_hash also copy the direct address
for each hash entry.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
kernel/trace/ftrace.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index f5f042ea079e..409271aa8dad 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1212,7 +1212,7 @@ static void __add_hash_entry(struct ftrace_hash *hash,
}
static struct ftrace_func_entry *
-add_hash_entry(struct ftrace_hash *hash, unsigned long ip)
+add_hash_entry_direct(struct ftrace_hash *hash, unsigned long ip, unsigned long direct)
{
struct ftrace_func_entry *entry;
@@ -1221,11 +1221,18 @@ add_hash_entry(struct ftrace_hash *hash, unsigned long ip)
return NULL;
entry->ip = ip;
+ entry->direct = direct;
__add_hash_entry(hash, entry);
return entry;
}
+static struct ftrace_func_entry *
+add_hash_entry(struct ftrace_hash *hash, unsigned long ip)
+{
+ return add_hash_entry_direct(hash, ip, 0);
+}
+
static void
free_hash_entry(struct ftrace_hash *hash,
struct ftrace_func_entry *entry)
@@ -1398,7 +1405,7 @@ alloc_and_copy_ftrace_hash(int size_bits, struct ftrace_hash *hash)
size = 1 << hash->size_bits;
for (i = 0; i < size; i++) {
hlist_for_each_entry(entry, &hash->buckets[i], hlist) {
- if (add_hash_entry(new_hash, entry->ip) == NULL)
+ if (add_hash_entry_direct(new_hash, entry->ip, entry->direct) == NULL)
goto free_hash;
}
}
--
2.52.0
* [PATCHv6 bpf-next 3/9] ftrace: Export some of hash related functions
2025-12-30 14:50 [PATCHv6 bpf-next 0/9] ftrace,bpf: Use single direct ops for bpf trampolines Jiri Olsa
2025-12-30 14:50 ` [PATCHv6 bpf-next 1/9] ftrace,bpf: Remove FTRACE_OPS_FL_JMP ftrace_ops flag Jiri Olsa
2025-12-30 14:50 ` [PATCHv6 bpf-next 2/9] ftrace: Make alloc_and_copy_ftrace_hash direct friendly Jiri Olsa
@ 2025-12-30 14:50 ` Jiri Olsa
2025-12-30 14:50 ` [PATCHv6 bpf-next 4/9] ftrace: Add update_ftrace_direct_add function Jiri Olsa
` (8 subsequent siblings)
11 siblings, 0 replies; 27+ messages in thread
From: Jiri Olsa @ 2025-12-30 14:50 UTC (permalink / raw)
To: Steven Rostedt, Florent Revest, Mark Rutland
Cc: bpf, linux-kernel, linux-trace-kernel, linux-arm-kernel,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Menglong Dong, Song Liu
We are going to use these functions in the following changes.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/ftrace.h | 9 +++++++++
kernel/trace/ftrace.c | 13 ++++++-------
2 files changed, 15 insertions(+), 7 deletions(-)
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 41c9bb08d4e4..472f2d8a4c0f 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -82,6 +82,7 @@ static inline void early_trace_init(void) { }
struct module;
struct ftrace_hash;
+struct ftrace_func_entry;
#if defined(CONFIG_FUNCTION_TRACER) && defined(CONFIG_MODULES) && \
defined(CONFIG_DYNAMIC_FTRACE)
@@ -405,6 +406,14 @@ enum ftrace_ops_cmd {
typedef int (*ftrace_ops_func_t)(struct ftrace_ops *op, enum ftrace_ops_cmd cmd);
#ifdef CONFIG_DYNAMIC_FTRACE
+
+#define FTRACE_HASH_DEFAULT_BITS 10
+
+struct ftrace_hash *alloc_ftrace_hash(int size_bits);
+void free_ftrace_hash(struct ftrace_hash *hash);
+struct ftrace_func_entry *add_ftrace_hash_entry_direct(struct ftrace_hash *hash,
+ unsigned long ip, unsigned long direct);
+
/* The hash used to know what functions callbacks trace */
struct ftrace_ops_hash {
struct ftrace_hash __rcu *notrace_hash;
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 409271aa8dad..3ca3aee5f886 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -68,7 +68,6 @@
})
/* hash bits for specific function selection */
-#define FTRACE_HASH_DEFAULT_BITS 10
#define FTRACE_HASH_MAX_BITS 12
#ifdef CONFIG_DYNAMIC_FTRACE
@@ -1211,8 +1210,8 @@ static void __add_hash_entry(struct ftrace_hash *hash,
hash->count++;
}
-static struct ftrace_func_entry *
-add_hash_entry_direct(struct ftrace_hash *hash, unsigned long ip, unsigned long direct)
+struct ftrace_func_entry *
+add_ftrace_hash_entry_direct(struct ftrace_hash *hash, unsigned long ip, unsigned long direct)
{
struct ftrace_func_entry *entry;
@@ -1230,7 +1229,7 @@ add_hash_entry_direct(struct ftrace_hash *hash, unsigned long ip, unsigned long
static struct ftrace_func_entry *
add_hash_entry(struct ftrace_hash *hash, unsigned long ip)
{
- return add_hash_entry_direct(hash, ip, 0);
+ return add_ftrace_hash_entry_direct(hash, ip, 0);
}
static void
@@ -1291,7 +1290,7 @@ static void clear_ftrace_mod_list(struct list_head *head)
mutex_unlock(&ftrace_lock);
}
-static void free_ftrace_hash(struct ftrace_hash *hash)
+void free_ftrace_hash(struct ftrace_hash *hash)
{
if (!hash || hash == EMPTY_HASH)
return;
@@ -1331,7 +1330,7 @@ void ftrace_free_filter(struct ftrace_ops *ops)
}
EXPORT_SYMBOL_GPL(ftrace_free_filter);
-static struct ftrace_hash *alloc_ftrace_hash(int size_bits)
+struct ftrace_hash *alloc_ftrace_hash(int size_bits)
{
struct ftrace_hash *hash;
int size;
@@ -1405,7 +1404,7 @@ alloc_and_copy_ftrace_hash(int size_bits, struct ftrace_hash *hash)
size = 1 << hash->size_bits;
for (i = 0; i < size; i++) {
hlist_for_each_entry(entry, &hash->buckets[i], hlist) {
- if (add_hash_entry_direct(new_hash, entry->ip, entry->direct) == NULL)
+ if (add_ftrace_hash_entry_direct(new_hash, entry->ip, entry->direct) == NULL)
goto free_hash;
}
}
--
2.52.0
* [PATCHv6 bpf-next 4/9] ftrace: Add update_ftrace_direct_add function
2025-12-30 14:50 [PATCHv6 bpf-next 0/9] ftrace,bpf: Use single direct ops for bpf trampolines Jiri Olsa
` (2 preceding siblings ...)
2025-12-30 14:50 ` [PATCHv6 bpf-next 3/9] ftrace: Export some of hash related functions Jiri Olsa
@ 2025-12-30 14:50 ` Jiri Olsa
2025-12-30 14:50 ` [PATCHv6 bpf-next 5/9] ftrace: Add update_ftrace_direct_del function Jiri Olsa
` (7 subsequent siblings)
11 siblings, 0 replies; 27+ messages in thread
From: Jiri Olsa @ 2025-12-30 14:50 UTC (permalink / raw)
To: Steven Rostedt, Florent Revest, Mark Rutland
Cc: bpf, linux-kernel, linux-trace-kernel, linux-arm-kernel,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Menglong Dong, Song Liu
Adding update_ftrace_direct_add function that adds all entries
(ip -> addr) provided in the hash argument to the direct ftrace
ops and updates its attachments.
The difference to the current register_ftrace_direct is:
- the hash argument allows registering multiple ip -> direct
entries at once
- we can call update_ftrace_direct_add multiple times on the
same ftrace_ops object, because after the first registration with
register_ftrace_function_nolock, it uses ftrace_update_ops to
update the ftrace_ops object
This change will allow us to have a simple ftrace_ops for all bpf
direct interface users in the following changes.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/ftrace.h | 7 +++
kernel/trace/ftrace.c | 140 +++++++++++++++++++++++++++++++++++++++++
2 files changed, 147 insertions(+)
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 472f2d8a4c0f..f0fcff389061 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -543,6 +543,8 @@ int unregister_ftrace_direct(struct ftrace_ops *ops, unsigned long addr,
int modify_ftrace_direct(struct ftrace_ops *ops, unsigned long addr);
int modify_ftrace_direct_nolock(struct ftrace_ops *ops, unsigned long addr);
+int update_ftrace_direct_add(struct ftrace_ops *ops, struct ftrace_hash *hash);
+
void ftrace_stub_direct_tramp(void);
#else
@@ -569,6 +571,11 @@ static inline int modify_ftrace_direct_nolock(struct ftrace_ops *ops, unsigned l
return -ENODEV;
}
+static inline int update_ftrace_direct_add(struct ftrace_ops *ops, struct ftrace_hash *hash)
+{
+ return -ENODEV;
+}
+
/*
* This must be implemented by the architecture.
* It is the way the ftrace direct_ops helper, when called
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 3ca3aee5f886..3d1170da1bb8 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -6275,6 +6275,146 @@ int modify_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
return err;
}
EXPORT_SYMBOL_GPL(modify_ftrace_direct);
+
+static unsigned long hash_count(struct ftrace_hash *hash)
+{
+ return hash ? hash->count : 0;
+}
+
+/**
+ * hash_add - adds two struct ftrace_hash and returns the result
+ * @a: struct ftrace_hash object
+ * @b: struct ftrace_hash object
+ *
+ * Returns struct ftrace_hash object on success, NULL on error.
+ */
+static struct ftrace_hash *hash_add(struct ftrace_hash *a, struct ftrace_hash *b)
+{
+ struct ftrace_func_entry *entry;
+ struct ftrace_hash *add;
+ int size;
+
+ size = hash_count(a) + hash_count(b);
+ if (size > 32)
+ size = 32;
+
+ add = alloc_and_copy_ftrace_hash(fls(size), a);
+ if (!add)
+ return NULL;
+
+ size = 1 << b->size_bits;
+ for (int i = 0; i < size; i++) {
+ hlist_for_each_entry(entry, &b->buckets[i], hlist) {
+ if (add_ftrace_hash_entry_direct(add, entry->ip, entry->direct) == NULL) {
+ free_ftrace_hash(add);
+ return NULL;
+ }
+ }
+ }
+ return add;
+}
+
+/**
+ * update_ftrace_direct_add - Updates @ops by adding direct
+ * callers provided in @hash
+ * @ops: The address of the struct ftrace_ops object
+ * @hash: The address of the struct ftrace_hash object
+ *
+ * This is used to add custom direct callers (ip -> addr) to @ops,
+ * specified in @hash. The @ops will be either registered or updated.
+ *
+ * Returns: zero on success. Non zero on error, which includes:
+ * -EINVAL - The @hash is empty
+ */
+int update_ftrace_direct_add(struct ftrace_ops *ops, struct ftrace_hash *hash)
+{
+ struct ftrace_hash *old_direct_functions = NULL;
+ struct ftrace_hash *new_direct_functions;
+ struct ftrace_hash *old_filter_hash;
+ struct ftrace_hash *new_filter_hash = NULL;
+ struct ftrace_func_entry *entry;
+ int err = -EINVAL;
+ int size;
+ bool reg;
+
+ if (!hash_count(hash))
+ return -EINVAL;
+
+ mutex_lock(&direct_mutex);
+
+ /* Make sure requested entries are not already registered. */
+ size = 1 << hash->size_bits;
+ for (int i = 0; i < size; i++) {
+ hlist_for_each_entry(entry, &hash->buckets[i], hlist) {
+ if (__ftrace_lookup_ip(direct_functions, entry->ip))
+ goto out_unlock;
+ }
+ }
+
+ old_filter_hash = ops->func_hash ? ops->func_hash->filter_hash : NULL;
+
+ /* If there's nothing in filter_hash we need to register the ops. */
+ reg = hash_count(old_filter_hash) == 0;
+ if (reg) {
+ if (ops->func || ops->trampoline)
+ goto out_unlock;
+ if (ops->flags & FTRACE_OPS_FL_ENABLED)
+ goto out_unlock;
+ }
+
+ err = -ENOMEM;
+ new_filter_hash = hash_add(old_filter_hash, hash);
+ if (!new_filter_hash)
+ goto out_unlock;
+
+ new_direct_functions = hash_add(direct_functions, hash);
+ if (!new_direct_functions)
+ goto out_unlock;
+
+ old_direct_functions = direct_functions;
+ rcu_assign_pointer(direct_functions, new_direct_functions);
+
+ if (reg) {
+ ops->func = call_direct_funcs;
+ ops->flags |= MULTI_FLAGS;
+ ops->trampoline = FTRACE_REGS_ADDR;
+ ops->local_hash.filter_hash = new_filter_hash;
+
+ err = register_ftrace_function_nolock(ops);
+ if (err) {
+ /* restore old filter on error */
+ ops->local_hash.filter_hash = old_filter_hash;
+
+ /* cleanup for possible another register call */
+ ops->func = NULL;
+ ops->trampoline = 0;
+ } else {
+ new_filter_hash = old_filter_hash;
+ }
+ } else {
+ err = ftrace_update_ops(ops, new_filter_hash, EMPTY_HASH);
+ /*
+ * new_filter_hash is dup-ed, so we need to release it anyway,
+ * old_filter_hash either stays on error or is already released
+ */
+ }
+
+ if (err) {
+ /* reset direct_functions and free the new one */
+ rcu_assign_pointer(direct_functions, old_direct_functions);
+ old_direct_functions = new_direct_functions;
+ }
+
+ out_unlock:
+ mutex_unlock(&direct_mutex);
+
+ if (old_direct_functions && old_direct_functions != EMPTY_HASH)
+ call_rcu_tasks(&old_direct_functions->rcu, register_ftrace_direct_cb);
+ free_ftrace_hash(new_filter_hash);
+
+ return err;
+}
+
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
/**
--
2.52.0
* [PATCHv6 bpf-next 5/9] ftrace: Add update_ftrace_direct_del function
2025-12-30 14:50 [PATCHv6 bpf-next 0/9] ftrace,bpf: Use single direct ops for bpf trampolines Jiri Olsa
` (3 preceding siblings ...)
2025-12-30 14:50 ` [PATCHv6 bpf-next 4/9] ftrace: Add update_ftrace_direct_add function Jiri Olsa
@ 2025-12-30 14:50 ` Jiri Olsa
2025-12-30 14:50 ` [PATCHv6 bpf-next 6/9] ftrace: Add update_ftrace_direct_mod function Jiri Olsa
` (6 subsequent siblings)
11 siblings, 0 replies; 27+ messages in thread
From: Jiri Olsa @ 2025-12-30 14:50 UTC (permalink / raw)
To: Steven Rostedt, Florent Revest, Mark Rutland
Cc: bpf, linux-kernel, linux-trace-kernel, linux-arm-kernel,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Menglong Dong, Song Liu
Adding update_ftrace_direct_del function that removes all entries
(ip -> addr) provided in the hash argument from the direct ftrace
ops and updates its attachments.
The difference to the current unregister_ftrace_direct is:
- the hash argument allows unregistering multiple ip -> direct
entries at once
- we can call update_ftrace_direct_del multiple times on the
same ftrace_ops object, because we do not need to unregister
all entries at once; we can do it gradually with the help of
the ftrace_update_ops function
This change will allow us to have a simple ftrace_ops for all bpf
direct interface users in the following changes.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/ftrace.h | 6 ++
kernel/trace/ftrace.c | 127 +++++++++++++++++++++++++++++++++++++++++
2 files changed, 133 insertions(+)
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index f0fcff389061..a3cc1b48c9fc 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -544,6 +544,7 @@ int modify_ftrace_direct(struct ftrace_ops *ops, unsigned long addr);
int modify_ftrace_direct_nolock(struct ftrace_ops *ops, unsigned long addr);
int update_ftrace_direct_add(struct ftrace_ops *ops, struct ftrace_hash *hash);
+int update_ftrace_direct_del(struct ftrace_ops *ops, struct ftrace_hash *hash);
void ftrace_stub_direct_tramp(void);
@@ -576,6 +577,11 @@ static inline int update_ftrace_direct_add(struct ftrace_ops *ops, struct ftrace
return -ENODEV;
}
+static inline int update_ftrace_direct_del(struct ftrace_ops *ops, struct ftrace_hash *hash)
+{
+ return -ENODEV;
+}
+
/*
* This must be implemented by the architecture.
* It is the way the ftrace direct_ops helper, when called
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 3d1170da1bb8..8b75166fb223 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -6415,6 +6415,133 @@ int update_ftrace_direct_add(struct ftrace_ops *ops, struct ftrace_hash *hash)
return err;
}
+/**
+ * hash_sub - subtracts @b from @a and returns the result
+ * @a: struct ftrace_hash object
+ * @b: struct ftrace_hash object
+ *
+ * Returns struct ftrace_hash object on success, NULL on error.
+ */
+static struct ftrace_hash *hash_sub(struct ftrace_hash *a, struct ftrace_hash *b)
+{
+ struct ftrace_func_entry *entry, *del;
+ struct ftrace_hash *sub;
+ int size;
+
+ sub = alloc_and_copy_ftrace_hash(a->size_bits, a);
+ if (!sub)
+ return NULL;
+
+ size = 1 << b->size_bits;
+ for (int i = 0; i < size; i++) {
+ hlist_for_each_entry(entry, &b->buckets[i], hlist) {
+ del = __ftrace_lookup_ip(sub, entry->ip);
+ if (WARN_ON_ONCE(!del)) {
+ free_ftrace_hash(sub);
+ return NULL;
+ }
+ remove_hash_entry(sub, del);
+ kfree(del);
+ }
+ }
+ return sub;
+}
+
+/**
+ * update_ftrace_direct_del - Updates @ops by removing its direct
+ * callers provided in @hash
+ * @ops: The address of the struct ftrace_ops object
+ * @hash: The address of the struct ftrace_hash object
+ *
+ * This is used to delete custom direct callers (ip -> addr) in
+ * @ops specified via @hash. The @ops will be either unregistered
+ * or updated.
+ *
+ * Returns: zero on success. Non zero on error, which includes:
+ * -EINVAL - The @hash is empty
+ * -EINVAL - The @ops is not registered
+ */
+int update_ftrace_direct_del(struct ftrace_ops *ops, struct ftrace_hash *hash)
+{
+ struct ftrace_hash *old_direct_functions = NULL;
+ struct ftrace_hash *new_direct_functions;
+ struct ftrace_hash *new_filter_hash = NULL;
+ struct ftrace_hash *old_filter_hash;
+ struct ftrace_func_entry *entry;
+ struct ftrace_func_entry *del;
+ unsigned long size;
+ int err = -EINVAL;
+
+ if (!hash_count(hash))
+ return -EINVAL;
+ if (check_direct_multi(ops))
+ return -EINVAL;
+ if (!(ops->flags & FTRACE_OPS_FL_ENABLED))
+ return -EINVAL;
+ if (direct_functions == EMPTY_HASH)
+ return -EINVAL;
+
+ mutex_lock(&direct_mutex);
+
+ old_filter_hash = ops->func_hash ? ops->func_hash->filter_hash : NULL;
+
+ if (!hash_count(old_filter_hash))
+ goto out_unlock;
+
+ /* Make sure requested entries are already registered. */
+ size = 1 << hash->size_bits;
+ for (int i = 0; i < size; i++) {
+ hlist_for_each_entry(entry, &hash->buckets[i], hlist) {
+ del = __ftrace_lookup_ip(direct_functions, entry->ip);
+ if (!del || del->direct != entry->direct)
+ goto out_unlock;
+ }
+ }
+
+ err = -ENOMEM;
+ new_filter_hash = hash_sub(old_filter_hash, hash);
+ if (!new_filter_hash)
+ goto out_unlock;
+
+ new_direct_functions = hash_sub(direct_functions, hash);
+ if (!new_direct_functions)
+ goto out_unlock;
+
+ /* If there's nothing left, we need to unregister the ops. */
+ if (ftrace_hash_empty(new_filter_hash)) {
+ err = unregister_ftrace_function(ops);
+ if (!err) {
+ /* cleanup for possible another register call */
+ ops->func = NULL;
+ ops->trampoline = 0;
+ ftrace_free_filter(ops);
+ ops->func_hash->filter_hash = NULL;
+ }
+ } else {
+ err = ftrace_update_ops(ops, new_filter_hash, EMPTY_HASH);
+ /*
+ * new_filter_hash is dup-ed, so we need to release it anyway,
+ * old_filter_hash either stays on error or is already released
+ */
+ }
+
+ if (err) {
+ /* free the new_direct_functions */
+ old_direct_functions = new_direct_functions;
+ } else {
+ rcu_assign_pointer(direct_functions, new_direct_functions);
+ }
+
+ out_unlock:
+ mutex_unlock(&direct_mutex);
+
+ if (old_direct_functions && old_direct_functions != EMPTY_HASH)
+ call_rcu_tasks(&old_direct_functions->rcu, register_ftrace_direct_cb);
+ free_ftrace_hash(new_filter_hash);
+
+ return err;
+}
+
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
/**
--
2.52.0
* [PATCHv6 bpf-next 6/9] ftrace: Add update_ftrace_direct_mod function
2025-12-30 14:50 [PATCHv6 bpf-next 0/9] ftrace,bpf: Use single direct ops for bpf trampolines Jiri Olsa
` (4 preceding siblings ...)
2025-12-30 14:50 ` [PATCHv6 bpf-next 5/9] ftrace: Add update_ftrace_direct_del function Jiri Olsa
@ 2025-12-30 14:50 ` Jiri Olsa
2025-12-30 14:50 ` [PATCHv6 bpf-next 7/9] bpf: Add trampoline ip hash table Jiri Olsa
` (5 subsequent siblings)
11 siblings, 0 replies; 27+ messages in thread
From: Jiri Olsa @ 2025-12-30 14:50 UTC (permalink / raw)
To: Steven Rostedt, Florent Revest, Mark Rutland
Cc: bpf, linux-kernel, linux-trace-kernel, linux-arm-kernel,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Menglong Dong, Song Liu
Adding update_ftrace_direct_mod function that modifies all entries
(ip -> direct) provided in the hash argument in the direct ftrace
ops and updates its attachments.
The difference to the current modify_ftrace_direct is:
- the hash argument allows modifying multiple ip -> direct
entries at once
This change will allow us to have a simple ftrace_ops for all bpf
direct interface users in the following changes.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/ftrace.h | 6 +++
kernel/trace/ftrace.c | 94 ++++++++++++++++++++++++++++++++++++++++++
2 files changed, 100 insertions(+)
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index a3cc1b48c9fc..6c1680ab8bf9 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -545,6 +545,7 @@ int modify_ftrace_direct_nolock(struct ftrace_ops *ops, unsigned long addr);
int update_ftrace_direct_add(struct ftrace_ops *ops, struct ftrace_hash *hash);
int update_ftrace_direct_del(struct ftrace_ops *ops, struct ftrace_hash *hash);
+int update_ftrace_direct_mod(struct ftrace_ops *ops, struct ftrace_hash *hash, bool do_direct_lock);
void ftrace_stub_direct_tramp(void);
@@ -582,6 +583,11 @@ static inline int update_ftrace_direct_del(struct ftrace_ops *ops, struct ftrace
return -ENODEV;
}
+static inline int update_ftrace_direct_mod(struct ftrace_ops *ops, struct ftrace_hash *hash, bool do_direct_lock)
+{
+ return -ENODEV;
+}
+
/*
* This must be implemented by the architecture.
* It is the way the ftrace direct_ops helper, when called
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 8b75166fb223..d24f28677007 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -6542,6 +6542,100 @@ int update_ftrace_direct_del(struct ftrace_ops *ops, struct ftrace_hash *hash)
return err;
}
+/**
+ * update_ftrace_direct_mod - Updates @ops by modifying its direct
+ * callers provided in @hash
+ * @ops: The address of the struct ftrace_ops object
+ * @hash: The address of the struct ftrace_hash object
+ * @do_direct_lock: If true, lock the direct_mutex
+ *
+ * This is used to modify custom direct callers (ip -> addr) in
+ * @ops specified via @hash.
+ *
+ * This can be called from within ftrace ops_func callback with
+ * direct_mutex already locked, in which case @do_direct_lock
+ * needs to be false.
+ *
+ * Returns: zero on success. Non-zero on error, which includes:
+ * -EINVAL - The @hash is empty
+ * -EINVAL - The @ops is not registered
+ */
+int update_ftrace_direct_mod(struct ftrace_ops *ops, struct ftrace_hash *hash, bool do_direct_lock)
+{
+ struct ftrace_func_entry *entry, *tmp;
+ static struct ftrace_ops tmp_ops = {
+ .func = ftrace_stub,
+ .flags = FTRACE_OPS_FL_STUB,
+ };
+ struct ftrace_hash *orig_hash;
+ unsigned long size, i;
+ int err = -EINVAL;
+
+ if (!hash_count(hash))
+ return -EINVAL;
+ if (check_direct_multi(ops))
+ return -EINVAL;
+ if (!(ops->flags & FTRACE_OPS_FL_ENABLED))
+ return -EINVAL;
+ if (direct_functions == EMPTY_HASH)
+ return -EINVAL;
+
+ /*
+ * We can be called from within ops_func callback with direct_mutex
+ * already taken.
+ */
+ if (do_direct_lock)
+ mutex_lock(&direct_mutex);
+
+ orig_hash = ops->func_hash ? ops->func_hash->filter_hash : NULL;
+ if (!orig_hash)
+ goto unlock;
+
+ /* Enable the tmp_ops to have the same functions as the direct ops */
+ ftrace_ops_init(&tmp_ops);
+ tmp_ops.func_hash = ops->func_hash;
+
+ err = register_ftrace_function_nolock(&tmp_ops);
+ if (err)
+ goto unlock;
+
+ /*
+ * Call __ftrace_hash_update_ipmodify() here, so that we can call
+ * ops->ops_func for the ops. This is needed because the above
+ * register_ftrace_function_nolock() worked on tmp_ops.
+ */
+ err = __ftrace_hash_update_ipmodify(ops, orig_hash, orig_hash, true);
+ if (err)
+ goto out;
+
+ /*
+ * Now the ftrace_ops_list_func() is called to handle the direct callers.
+ * We can safely change the direct functions attached to each entry.
+ */
+ mutex_lock(&ftrace_lock);
+
+ size = 1 << hash->size_bits;
+ for (i = 0; i < size; i++) {
+ hlist_for_each_entry(entry, &hash->buckets[i], hlist) {
+ tmp = __ftrace_lookup_ip(direct_functions, entry->ip);
+ if (!tmp)
+ continue;
+ tmp->direct = entry->direct;
+ }
+ }
+
+ mutex_unlock(&ftrace_lock);
+
+out:
+ /* Removing the tmp_ops will add the updated direct callers to the functions */
+ unregister_ftrace_function(&tmp_ops);
+
+unlock:
+ if (do_direct_lock)
+ mutex_unlock(&direct_mutex);
+ return err;
+}
+
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
/**
--
2.52.0
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCHv6 bpf-next 7/9] bpf: Add trampoline ip hash table
2025-12-30 14:50 [PATCHv6 bpf-next 0/9] ftrace,bpf: Use single direct ops for bpf trampolines Jiri Olsa
` (5 preceding siblings ...)
2025-12-30 14:50 ` [PATCHv6 bpf-next 6/9] ftrace: Add update_ftrace_direct_mod function Jiri Olsa
@ 2025-12-30 14:50 ` Jiri Olsa
2026-01-10 0:36 ` Andrii Nakryiko
2025-12-30 14:50 ` [PATCHv6 bpf-next 8/9] ftrace: Factor ftrace_ops ops_func interface Jiri Olsa
` (4 subsequent siblings)
11 siblings, 1 reply; 27+ messages in thread
From: Jiri Olsa @ 2025-12-30 14:50 UTC (permalink / raw)
To: Steven Rostedt, Florent Revest, Mark Rutland
Cc: bpf, linux-kernel, linux-trace-kernel, linux-arm-kernel,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Menglong Dong, Song Liu
Following changes need to look up a trampoline based on its ip address,
so add a hash table for that.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/bpf.h | 7 +++++--
kernel/bpf/trampoline.c | 30 +++++++++++++++++++-----------
2 files changed, 24 insertions(+), 13 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 4e7d72dfbcd4..c85677aae865 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1325,14 +1325,17 @@ struct bpf_tramp_image {
};
struct bpf_trampoline {
- /* hlist for trampoline_table */
- struct hlist_node hlist;
+ /* hlist for trampoline_key_table */
+ struct hlist_node hlist_key;
+ /* hlist for trampoline_ip_table */
+ struct hlist_node hlist_ip;
struct ftrace_ops *fops;
/* serializes access to fields of this trampoline */
struct mutex mutex;
refcount_t refcnt;
u32 flags;
u64 key;
+ unsigned long ip;
struct {
struct btf_func_model model;
void *addr;
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 789ff4e1f40b..bdac9d673776 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -24,9 +24,10 @@ const struct bpf_prog_ops bpf_extension_prog_ops = {
#define TRAMPOLINE_HASH_BITS 10
#define TRAMPOLINE_TABLE_SIZE (1 << TRAMPOLINE_HASH_BITS)
-static struct hlist_head trampoline_table[TRAMPOLINE_TABLE_SIZE];
+static struct hlist_head trampoline_key_table[TRAMPOLINE_TABLE_SIZE];
+static struct hlist_head trampoline_ip_table[TRAMPOLINE_TABLE_SIZE];
-/* serializes access to trampoline_table */
+/* serializes access to trampoline tables */
static DEFINE_MUTEX(trampoline_mutex);
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
@@ -135,15 +136,15 @@ void bpf_image_ksym_del(struct bpf_ksym *ksym)
PAGE_SIZE, true, ksym->name);
}
-static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
+static struct bpf_trampoline *bpf_trampoline_lookup(u64 key, unsigned long ip)
{
struct bpf_trampoline *tr;
struct hlist_head *head;
int i;
mutex_lock(&trampoline_mutex);
- head = &trampoline_table[hash_64(key, TRAMPOLINE_HASH_BITS)];
- hlist_for_each_entry(tr, head, hlist) {
+ head = &trampoline_key_table[hash_64(key, TRAMPOLINE_HASH_BITS)];
+ hlist_for_each_entry(tr, head, hlist_key) {
if (tr->key == key) {
refcount_inc(&tr->refcnt);
goto out;
@@ -164,8 +165,12 @@ static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
#endif
tr->key = key;
- INIT_HLIST_NODE(&tr->hlist);
- hlist_add_head(&tr->hlist, head);
+ tr->ip = ftrace_location(ip);
+ INIT_HLIST_NODE(&tr->hlist_key);
+ INIT_HLIST_NODE(&tr->hlist_ip);
+ hlist_add_head(&tr->hlist_key, head);
+ head = &trampoline_ip_table[hash_64(tr->ip, TRAMPOLINE_HASH_BITS)];
+ hlist_add_head(&tr->hlist_ip, head);
refcount_set(&tr->refcnt, 1);
mutex_init(&tr->mutex);
for (i = 0; i < BPF_TRAMP_MAX; i++)
@@ -846,7 +851,7 @@ void bpf_trampoline_unlink_cgroup_shim(struct bpf_prog *prog)
prog->aux->attach_btf_id);
bpf_lsm_find_cgroup_shim(prog, &bpf_func);
- tr = bpf_trampoline_lookup(key);
+ tr = bpf_trampoline_lookup(key, 0);
if (WARN_ON_ONCE(!tr))
return;
@@ -866,7 +871,7 @@ struct bpf_trampoline *bpf_trampoline_get(u64 key,
{
struct bpf_trampoline *tr;
- tr = bpf_trampoline_lookup(key);
+ tr = bpf_trampoline_lookup(key, tgt_info->tgt_addr);
if (!tr)
return NULL;
@@ -902,7 +907,8 @@ void bpf_trampoline_put(struct bpf_trampoline *tr)
* fexit progs. The fentry-only trampoline will be freed via
* multiple rcu callbacks.
*/
- hlist_del(&tr->hlist);
+ hlist_del(&tr->hlist_key);
+ hlist_del(&tr->hlist_ip);
if (tr->fops) {
ftrace_free_filter(tr->fops);
kfree(tr->fops);
@@ -1175,7 +1181,9 @@ static int __init init_trampolines(void)
int i;
for (i = 0; i < TRAMPOLINE_TABLE_SIZE; i++)
- INIT_HLIST_HEAD(&trampoline_table[i]);
+ INIT_HLIST_HEAD(&trampoline_key_table[i]);
+ for (i = 0; i < TRAMPOLINE_TABLE_SIZE; i++)
+ INIT_HLIST_HEAD(&trampoline_ip_table[i]);
return 0;
}
late_initcall(init_trampolines);
--
2.52.0
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCHv6 bpf-next 8/9] ftrace: Factor ftrace_ops ops_func interface
2025-12-30 14:50 [PATCHv6 bpf-next 0/9] ftrace,bpf: Use single direct ops for bpf trampolines Jiri Olsa
` (6 preceding siblings ...)
2025-12-30 14:50 ` [PATCHv6 bpf-next 7/9] bpf: Add trampoline ip hash table Jiri Olsa
@ 2025-12-30 14:50 ` Jiri Olsa
2025-12-30 14:50 ` [PATCHv6 bpf-next 9/9] bpf,x86: Use single ftrace_ops for direct calls Jiri Olsa
` (3 subsequent siblings)
11 siblings, 0 replies; 27+ messages in thread
From: Jiri Olsa @ 2025-12-30 14:50 UTC (permalink / raw)
To: Steven Rostedt, Florent Revest, Mark Rutland
Cc: Steven Rostedt (Google), bpf, linux-kernel, linux-trace-kernel,
linux-arm-kernel, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Menglong Dong, Song Liu
We are going to remove the "ftrace_ops->private == bpf_trampoline" setup
in following changes.
Adding an ip argument to the ftrace_ops_func_t callback function, so we
can use it to look up the trampoline.
Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/ftrace.h | 2 +-
kernel/bpf/trampoline.c | 3 ++-
kernel/trace/ftrace.c | 6 +++---
3 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 6c1680ab8bf9..781b613781a6 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -403,7 +403,7 @@ enum ftrace_ops_cmd {
* Negative on failure. The return value is dependent on the
* callback.
*/
-typedef int (*ftrace_ops_func_t)(struct ftrace_ops *op, enum ftrace_ops_cmd cmd);
+typedef int (*ftrace_ops_func_t)(struct ftrace_ops *op, unsigned long ip, enum ftrace_ops_cmd cmd);
#ifdef CONFIG_DYNAMIC_FTRACE
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index bdac9d673776..e5a0d58ed6dc 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -33,7 +33,8 @@ static DEFINE_MUTEX(trampoline_mutex);
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mutex);
-static int bpf_tramp_ftrace_ops_func(struct ftrace_ops *ops, enum ftrace_ops_cmd cmd)
+static int bpf_tramp_ftrace_ops_func(struct ftrace_ops *ops, unsigned long ip,
+ enum ftrace_ops_cmd cmd)
{
struct bpf_trampoline *tr = ops->private;
int ret = 0;
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index d24f28677007..02030f62d737 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -2075,7 +2075,7 @@ static int __ftrace_hash_update_ipmodify(struct ftrace_ops *ops,
*/
if (!ops->ops_func)
return -EBUSY;
- ret = ops->ops_func(ops, FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_SELF);
+ ret = ops->ops_func(ops, rec->ip, FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_SELF);
if (ret)
return ret;
} else if (is_ipmodify) {
@@ -9058,7 +9058,7 @@ static int prepare_direct_functions_for_ipmodify(struct ftrace_ops *ops)
if (!op->ops_func)
return -EBUSY;
- ret = op->ops_func(op, FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_PEER);
+ ret = op->ops_func(op, ip, FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_PEER);
if (ret)
return ret;
}
@@ -9105,7 +9105,7 @@ static void cleanup_direct_functions_after_ipmodify(struct ftrace_ops *ops)
/* The cleanup is optional, ignore any errors */
if (found_op && op->ops_func)
- op->ops_func(op, FTRACE_OPS_CMD_DISABLE_SHARE_IPMODIFY_PEER);
+ op->ops_func(op, ip, FTRACE_OPS_CMD_DISABLE_SHARE_IPMODIFY_PEER);
}
}
mutex_unlock(&direct_mutex);
--
2.52.0
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCHv6 bpf-next 9/9] bpf,x86: Use single ftrace_ops for direct calls
2025-12-30 14:50 [PATCHv6 bpf-next 0/9] ftrace,bpf: Use single direct ops for bpf trampolines Jiri Olsa
` (7 preceding siblings ...)
2025-12-30 14:50 ` [PATCHv6 bpf-next 8/9] ftrace: Factor ftrace_ops ops_func interface Jiri Olsa
@ 2025-12-30 14:50 ` Jiri Olsa
2026-01-10 0:36 ` Andrii Nakryiko
2026-02-27 17:40 ` Ihor Solodrai
2026-01-15 18:54 ` [PATCHv6 bpf-next 0/9] ftrace,bpf: Use single direct ops for bpf trampolines Andrii Nakryiko
` (2 subsequent siblings)
11 siblings, 2 replies; 27+ messages in thread
From: Jiri Olsa @ 2025-12-30 14:50 UTC (permalink / raw)
To: Steven Rostedt, Florent Revest, Mark Rutland
Cc: bpf, linux-kernel, linux-trace-kernel, linux-arm-kernel,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Menglong Dong, Song Liu
Using a single ftrace_ops for direct call updates instead of allocating
a ftrace_ops object for each trampoline.
With a single ftrace_ops object we can use the update_ftrace_direct_* API,
which allows updating multiple ip sites on one ftrace_ops object.
Adding a HAVE_SINGLE_FTRACE_DIRECT_OPS config option to be enabled on
each arch that supports this.
At the moment we can enable this only on x86, because arm relies
on the ftrace_ops object representing just a single trampoline image (stored
in ftrace_ops::direct_call). Archs that do not support this will continue
to use the *_ftrace_direct API.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
arch/x86/Kconfig | 1 +
kernel/bpf/trampoline.c | 220 ++++++++++++++++++++++++++++++++++------
kernel/trace/Kconfig | 3 +
kernel/trace/ftrace.c | 7 +-
4 files changed, 200 insertions(+), 31 deletions(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 80527299f859..53bf2cf7ff6f 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -336,6 +336,7 @@ config X86
select SCHED_SMT if SMP
select ARCH_SUPPORTS_SCHED_CLUSTER if SMP
select ARCH_SUPPORTS_SCHED_MC if SMP
+ select HAVE_SINGLE_FTRACE_DIRECT_OPS if X86_64 && DYNAMIC_FTRACE_WITH_DIRECT_CALLS
config INSTRUCTION_DECODER
def_bool y
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index e5a0d58ed6dc..248cd368fa37 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -33,12 +33,40 @@ static DEFINE_MUTEX(trampoline_mutex);
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mutex);
+#ifdef CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS
+static struct bpf_trampoline *direct_ops_ip_lookup(struct ftrace_ops *ops, unsigned long ip)
+{
+ struct hlist_head *head_ip;
+ struct bpf_trampoline *tr;
+
+ mutex_lock(&trampoline_mutex);
+ head_ip = &trampoline_ip_table[hash_64(ip, TRAMPOLINE_HASH_BITS)];
+ hlist_for_each_entry(tr, head_ip, hlist_ip) {
+ if (tr->ip == ip)
+ goto out;
+ }
+ tr = NULL;
+out:
+ mutex_unlock(&trampoline_mutex);
+ return tr;
+}
+#else
+static struct bpf_trampoline *direct_ops_ip_lookup(struct ftrace_ops *ops, unsigned long ip)
+{
+ return ops->private;
+}
+#endif /* CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS */
+
static int bpf_tramp_ftrace_ops_func(struct ftrace_ops *ops, unsigned long ip,
enum ftrace_ops_cmd cmd)
{
- struct bpf_trampoline *tr = ops->private;
+ struct bpf_trampoline *tr;
int ret = 0;
+ tr = direct_ops_ip_lookup(ops, ip);
+ if (!tr)
+ return -EINVAL;
+
if (cmd == FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_SELF) {
/* This is called inside register_ftrace_direct_multi(), so
* tr->mutex is already locked.
@@ -137,6 +165,162 @@ void bpf_image_ksym_del(struct bpf_ksym *ksym)
PAGE_SIZE, true, ksym->name);
}
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+#ifdef CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS
+/*
+ * We have only a single direct_ops, which contains all the direct call
+ * sites and is the only global ftrace_ops for all trampolines.
+ *
+ * We use 'update_ftrace_direct_*' api for attachment.
+ */
+struct ftrace_ops direct_ops = {
+ .ops_func = bpf_tramp_ftrace_ops_func,
+};
+
+static int direct_ops_alloc(struct bpf_trampoline *tr)
+{
+ tr->fops = &direct_ops;
+ return 0;
+}
+
+static void direct_ops_free(struct bpf_trampoline *tr) { }
+
+static struct ftrace_hash *hash_from_ip(struct bpf_trampoline *tr, void *ptr)
+{
+ unsigned long ip, addr = (unsigned long) ptr;
+ struct ftrace_hash *hash;
+
+ ip = ftrace_location(tr->ip);
+ if (!ip)
+ return NULL;
+ hash = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
+ if (!hash)
+ return NULL;
+ if (bpf_trampoline_use_jmp(tr->flags))
+ addr = ftrace_jmp_set(addr);
+ if (!add_ftrace_hash_entry_direct(hash, ip, addr)) {
+ free_ftrace_hash(hash);
+ return NULL;
+ }
+ return hash;
+}
+
+static int direct_ops_add(struct bpf_trampoline *tr, void *addr)
+{
+ struct ftrace_hash *hash = hash_from_ip(tr, addr);
+ int err;
+
+ if (!hash)
+ return -ENOMEM;
+ err = update_ftrace_direct_add(tr->fops, hash);
+ free_ftrace_hash(hash);
+ return err;
+}
+
+static int direct_ops_del(struct bpf_trampoline *tr, void *addr)
+{
+ struct ftrace_hash *hash = hash_from_ip(tr, addr);
+ int err;
+
+ if (!hash)
+ return -ENOMEM;
+ err = update_ftrace_direct_del(tr->fops, hash);
+ free_ftrace_hash(hash);
+ return err;
+}
+
+static int direct_ops_mod(struct bpf_trampoline *tr, void *addr, bool lock_direct_mutex)
+{
+ struct ftrace_hash *hash = hash_from_ip(tr, addr);
+ int err;
+
+ if (!hash)
+ return -ENOMEM;
+ err = update_ftrace_direct_mod(tr->fops, hash, lock_direct_mutex);
+ free_ftrace_hash(hash);
+ return err;
+}
+#else
+/*
+ * We allocate a ftrace_ops object for each trampoline, and it contains
+ * the call site specific to that trampoline.
+ *
+ * We use *_ftrace_direct api for attachment.
+ */
+static int direct_ops_alloc(struct bpf_trampoline *tr)
+{
+ tr->fops = kzalloc(sizeof(struct ftrace_ops), GFP_KERNEL);
+ if (!tr->fops)
+ return -ENOMEM;
+ tr->fops->private = tr;
+ tr->fops->ops_func = bpf_tramp_ftrace_ops_func;
+ return 0;
+}
+
+static void direct_ops_free(struct bpf_trampoline *tr)
+{
+ if (!tr->fops)
+ return;
+ ftrace_free_filter(tr->fops);
+ kfree(tr->fops);
+}
+
+static int direct_ops_add(struct bpf_trampoline *tr, void *ptr)
+{
+ unsigned long addr = (unsigned long) ptr;
+ struct ftrace_ops *ops = tr->fops;
+ int ret;
+
+ if (bpf_trampoline_use_jmp(tr->flags))
+ addr = ftrace_jmp_set(addr);
+
+ ret = ftrace_set_filter_ip(ops, tr->ip, 0, 1);
+ if (ret)
+ return ret;
+ return register_ftrace_direct(ops, addr);
+}
+
+static int direct_ops_del(struct bpf_trampoline *tr, void *addr)
+{
+ return unregister_ftrace_direct(tr->fops, (long)addr, false);
+}
+
+static int direct_ops_mod(struct bpf_trampoline *tr, void *ptr, bool lock_direct_mutex)
+{
+ unsigned long addr = (unsigned long) ptr;
+ struct ftrace_ops *ops = tr->fops;
+
+ if (bpf_trampoline_use_jmp(tr->flags))
+ addr = ftrace_jmp_set(addr);
+ if (lock_direct_mutex)
+ return modify_ftrace_direct(ops, addr);
+ return modify_ftrace_direct_nolock(ops, addr);
+}
+#endif /* CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS */
+#else
+static void direct_ops_free(struct bpf_trampoline *tr) { }
+
+static int direct_ops_alloc(struct bpf_trampoline *tr)
+{
+ return 0;
+}
+
+static int direct_ops_add(struct bpf_trampoline *tr, void *addr)
+{
+ return -ENODEV;
+}
+
+static int direct_ops_del(struct bpf_trampoline *tr, void *addr)
+{
+ return -ENODEV;
+}
+
+static int direct_ops_mod(struct bpf_trampoline *tr, void *ptr, bool lock_direct_mutex)
+{
+ return -ENODEV;
+}
+#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
+
static struct bpf_trampoline *bpf_trampoline_lookup(u64 key, unsigned long ip)
{
struct bpf_trampoline *tr;
@@ -154,16 +338,11 @@ static struct bpf_trampoline *bpf_trampoline_lookup(u64 key, unsigned long ip)
tr = kzalloc(sizeof(*tr), GFP_KERNEL);
if (!tr)
goto out;
-#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
- tr->fops = kzalloc(sizeof(struct ftrace_ops), GFP_KERNEL);
- if (!tr->fops) {
+ if (direct_ops_alloc(tr)) {
kfree(tr);
tr = NULL;
goto out;
}
- tr->fops->private = tr;
- tr->fops->ops_func = bpf_tramp_ftrace_ops_func;
-#endif
tr->key = key;
tr->ip = ftrace_location(ip);
@@ -206,7 +385,7 @@ static int unregister_fentry(struct bpf_trampoline *tr, u32 orig_flags,
int ret;
if (tr->func.ftrace_managed)
- ret = unregister_ftrace_direct(tr->fops, (long)old_addr, false);
+ ret = direct_ops_del(tr, old_addr);
else
ret = bpf_trampoline_update_fentry(tr, orig_flags, old_addr, NULL);
@@ -220,15 +399,7 @@ static int modify_fentry(struct bpf_trampoline *tr, u32 orig_flags,
int ret;
if (tr->func.ftrace_managed) {
- unsigned long addr = (unsigned long) new_addr;
-
- if (bpf_trampoline_use_jmp(tr->flags))
- addr = ftrace_jmp_set(addr);
-
- if (lock_direct_mutex)
- ret = modify_ftrace_direct(tr->fops, addr);
- else
- ret = modify_ftrace_direct_nolock(tr->fops, addr);
+ ret = direct_ops_mod(tr, new_addr, lock_direct_mutex);
} else {
ret = bpf_trampoline_update_fentry(tr, orig_flags, old_addr,
new_addr);
@@ -251,15 +422,7 @@ static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
}
if (tr->func.ftrace_managed) {
- unsigned long addr = (unsigned long) new_addr;
-
- if (bpf_trampoline_use_jmp(tr->flags))
- addr = ftrace_jmp_set(addr);
-
- ret = ftrace_set_filter_ip(tr->fops, (unsigned long)ip, 0, 1);
- if (ret)
- return ret;
- ret = register_ftrace_direct(tr->fops, addr);
+ ret = direct_ops_add(tr, new_addr);
} else {
ret = bpf_trampoline_update_fentry(tr, 0, NULL, new_addr);
}
@@ -910,10 +1073,7 @@ void bpf_trampoline_put(struct bpf_trampoline *tr)
*/
hlist_del(&tr->hlist_key);
hlist_del(&tr->hlist_ip);
- if (tr->fops) {
- ftrace_free_filter(tr->fops);
- kfree(tr->fops);
- }
+ direct_ops_free(tr);
kfree(tr);
out:
mutex_unlock(&trampoline_mutex);
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index bfa2ec46e075..d7042a09fe46 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -50,6 +50,9 @@ config HAVE_DYNAMIC_FTRACE_WITH_REGS
config HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
bool
+config HAVE_SINGLE_FTRACE_DIRECT_OPS
+ bool
+
config HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS
bool
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 02030f62d737..4ed910d3d00d 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -2631,8 +2631,13 @@ unsigned long ftrace_find_rec_direct(unsigned long ip)
static void call_direct_funcs(unsigned long ip, unsigned long pip,
struct ftrace_ops *ops, struct ftrace_regs *fregs)
{
- unsigned long addr = READ_ONCE(ops->direct_call);
+ unsigned long addr;
+#ifdef CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS
+ addr = ftrace_find_rec_direct(ip);
+#else
+ addr = READ_ONCE(ops->direct_call);
+#endif
if (!addr)
return;
--
2.52.0
^ permalink raw reply related [flat|nested] 27+ messages in thread
* Re: [PATCHv6 bpf-next 1/9] ftrace,bpf: Remove FTRACE_OPS_FL_JMP ftrace_ops flag
2025-12-30 14:50 ` [PATCHv6 bpf-next 1/9] ftrace,bpf: Remove FTRACE_OPS_FL_JMP ftrace_ops flag Jiri Olsa
@ 2026-01-10 0:36 ` Andrii Nakryiko
0 siblings, 0 replies; 27+ messages in thread
From: Andrii Nakryiko @ 2026-01-10 0:36 UTC (permalink / raw)
To: Jiri Olsa
Cc: Steven Rostedt, Florent Revest, Mark Rutland, bpf, linux-kernel,
linux-trace-kernel, linux-arm-kernel, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Menglong Dong, Song Liu
On Tue, Dec 30, 2025 at 6:50 AM Jiri Olsa <jolsa@kernel.org> wrote:
>
> At the moment we allow the jmp attach only for ftrace_ops that
> have FTRACE_OPS_FL_JMP set. This conflicts with the following changes
> where we use a single ftrace_ops object for all direct call sites,
> so all could be attached via either call or jmp.
>
> We already limit the jmp attach support with a config option and a bit
> (LSB) set on the trampoline address. It turns out that's actually
> enough to limit the jmp attach per architecture and only for chosen
> addresses (with the LSB set).
>
> Each user of register_ftrace_direct or modify_ftrace_direct can set
> the trampoline bit (LSB) to indicate it has to be attached by jmp.
>
> The bpf trampoline generation code uses trampoline flags to generate
> jmp-attach specific code and ftrace inner code uses the trampoline
> bit (LSB) to handle the return from a jmp attachment, so there's no harm
> in removing the FTRACE_OPS_FL_JMP bit.
>
> The fexit/fmodret performance stays the same (did not drop),
> current code:
>
> fentry : 77.904 ± 0.546M/s
> fexit : 62.430 ± 0.554M/s
> fmodret : 66.503 ± 0.902M/s
>
> with this change:
>
> fentry : 80.472 ± 0.061M/s
> fexit : 63.995 ± 0.127M/s
> fmodret : 67.362 ± 0.175M/s
>
> Fixes: 25e4e3565d45 ("ftrace: Introduce FTRACE_OPS_FL_JMP")
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
> include/linux/ftrace.h | 1 -
> kernel/bpf/trampoline.c | 32 ++++++++++++++------------------
> kernel/trace/ftrace.c | 14 --------------
> 3 files changed, 14 insertions(+), 33 deletions(-)
>
I don't see anything wrong with this from the BPF side
Acked-by: Andrii Nakryiko <andrii@kernel.org>
[...]
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCHv6 bpf-next 7/9] bpf: Add trampoline ip hash table
2025-12-30 14:50 ` [PATCHv6 bpf-next 7/9] bpf: Add trampoline ip hash table Jiri Olsa
@ 2026-01-10 0:36 ` Andrii Nakryiko
2026-01-12 21:27 ` Jiri Olsa
0 siblings, 1 reply; 27+ messages in thread
From: Andrii Nakryiko @ 2026-01-10 0:36 UTC (permalink / raw)
To: Jiri Olsa
Cc: Steven Rostedt, Florent Revest, Mark Rutland, bpf, linux-kernel,
linux-trace-kernel, linux-arm-kernel, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Menglong Dong, Song Liu
On Tue, Dec 30, 2025 at 6:51 AM Jiri Olsa <jolsa@kernel.org> wrote:
>
> Following changes need to lookup trampoline based on its ip address,
> adding hash table for that.
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
> include/linux/bpf.h | 7 +++++--
> kernel/bpf/trampoline.c | 30 +++++++++++++++++++-----------
> 2 files changed, 24 insertions(+), 13 deletions(-)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 4e7d72dfbcd4..c85677aae865 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1325,14 +1325,17 @@ struct bpf_tramp_image {
> };
>
> struct bpf_trampoline {
> - /* hlist for trampoline_table */
> - struct hlist_node hlist;
> + /* hlist for trampoline_key_table */
> + struct hlist_node hlist_key;
> + /* hlist for trampoline_ip_table */
> + struct hlist_node hlist_ip;
> struct ftrace_ops *fops;
> /* serializes access to fields of this trampoline */
> struct mutex mutex;
> refcount_t refcnt;
> u32 flags;
> u64 key;
> + unsigned long ip;
> struct {
> struct btf_func_model model;
> void *addr;
> diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> index 789ff4e1f40b..bdac9d673776 100644
> --- a/kernel/bpf/trampoline.c
> +++ b/kernel/bpf/trampoline.c
> @@ -24,9 +24,10 @@ const struct bpf_prog_ops bpf_extension_prog_ops = {
> #define TRAMPOLINE_HASH_BITS 10
> #define TRAMPOLINE_TABLE_SIZE (1 << TRAMPOLINE_HASH_BITS)
>
> -static struct hlist_head trampoline_table[TRAMPOLINE_TABLE_SIZE];
> +static struct hlist_head trampoline_key_table[TRAMPOLINE_TABLE_SIZE];
> +static struct hlist_head trampoline_ip_table[TRAMPOLINE_TABLE_SIZE];
>
> -/* serializes access to trampoline_table */
> +/* serializes access to trampoline tables */
> static DEFINE_MUTEX(trampoline_mutex);
>
> #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
> @@ -135,15 +136,15 @@ void bpf_image_ksym_del(struct bpf_ksym *ksym)
> PAGE_SIZE, true, ksym->name);
> }
>
> -static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
> +static struct bpf_trampoline *bpf_trampoline_lookup(u64 key, unsigned long ip)
> {
> struct bpf_trampoline *tr;
> struct hlist_head *head;
> int i;
>
> mutex_lock(&trampoline_mutex);
> - head = &trampoline_table[hash_64(key, TRAMPOLINE_HASH_BITS)];
> - hlist_for_each_entry(tr, head, hlist) {
> + head = &trampoline_key_table[hash_64(key, TRAMPOLINE_HASH_BITS)];
> + hlist_for_each_entry(tr, head, hlist_key) {
> if (tr->key == key) {
> refcount_inc(&tr->refcnt);
> goto out;
> @@ -164,8 +165,12 @@ static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
> #endif
>
> tr->key = key;
> - INIT_HLIST_NODE(&tr->hlist);
> - hlist_add_head(&tr->hlist, head);
> + tr->ip = ftrace_location(ip);
> + INIT_HLIST_NODE(&tr->hlist_key);
> + INIT_HLIST_NODE(&tr->hlist_ip);
> + hlist_add_head(&tr->hlist_key, head);
> + head = &trampoline_ip_table[hash_64(tr->ip, TRAMPOLINE_HASH_BITS)];
For key lookups we check that there is no existing trampoline for the
given key. Can it happen that we have two trampolines at the same IP
but using two different keys?
> + hlist_add_head(&tr->hlist_ip, head);
> refcount_set(&tr->refcnt, 1);
> mutex_init(&tr->mutex);
> for (i = 0; i < BPF_TRAMP_MAX; i++)
> @@ -846,7 +851,7 @@ void bpf_trampoline_unlink_cgroup_shim(struct bpf_prog *prog)
> prog->aux->attach_btf_id);
>
> bpf_lsm_find_cgroup_shim(prog, &bpf_func);
> - tr = bpf_trampoline_lookup(key);
> + tr = bpf_trampoline_lookup(key, 0);
> if (WARN_ON_ONCE(!tr))
> return;
>
> @@ -866,7 +871,7 @@ struct bpf_trampoline *bpf_trampoline_get(u64 key,
> {
> struct bpf_trampoline *tr;
>
> - tr = bpf_trampoline_lookup(key);
> + tr = bpf_trampoline_lookup(key, tgt_info->tgt_addr);
> if (!tr)
> return NULL;
>
> @@ -902,7 +907,8 @@ void bpf_trampoline_put(struct bpf_trampoline *tr)
> * fexit progs. The fentry-only trampoline will be freed via
> * multiple rcu callbacks.
> */
> - hlist_del(&tr->hlist);
> + hlist_del(&tr->hlist_key);
> + hlist_del(&tr->hlist_ip);
> if (tr->fops) {
> ftrace_free_filter(tr->fops);
> kfree(tr->fops);
> @@ -1175,7 +1181,9 @@ static int __init init_trampolines(void)
> int i;
>
> for (i = 0; i < TRAMPOLINE_TABLE_SIZE; i++)
> - INIT_HLIST_HEAD(&trampoline_table[i]);
> + INIT_HLIST_HEAD(&trampoline_key_table[i]);
> + for (i = 0; i < TRAMPOLINE_TABLE_SIZE; i++)
> + INIT_HLIST_HEAD(&trampoline_ip_table[i]);
> return 0;
> }
> late_initcall(init_trampolines);
> --
> 2.52.0
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCHv6 bpf-next 9/9] bpf,x86: Use single ftrace_ops for direct calls
2025-12-30 14:50 ` [PATCHv6 bpf-next 9/9] bpf,x86: Use single ftrace_ops for direct calls Jiri Olsa
@ 2026-01-10 0:36 ` Andrii Nakryiko
2026-02-27 17:40 ` Ihor Solodrai
1 sibling, 0 replies; 27+ messages in thread
From: Andrii Nakryiko @ 2026-01-10 0:36 UTC (permalink / raw)
To: Jiri Olsa
Cc: Steven Rostedt, Florent Revest, Mark Rutland, bpf, linux-kernel,
linux-trace-kernel, linux-arm-kernel, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Menglong Dong, Song Liu
On Tue, Dec 30, 2025 at 6:51 AM Jiri Olsa <jolsa@kernel.org> wrote:
>
> Using single ftrace_ops for direct calls update instead of allocating
> ftrace_ops object for each trampoline.
>
> With single ftrace_ops object we can use update_ftrace_direct_* api
> that allows multiple ip sites updates on single ftrace_ops object.
>
> Adding HAVE_SINGLE_FTRACE_DIRECT_OPS config option to be enabled on
> each arch that supports this.
>
> At the moment we can enable this only on x86 arch, because arm relies
> on ftrace_ops object representing just single trampoline image (stored
> in ftrace_ops::direct_call). Archs that do not support this will continue
> to use *_ftrace_direct api.
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
> arch/x86/Kconfig | 1 +
> kernel/bpf/trampoline.c | 220 ++++++++++++++++++++++++++++++++++------
> kernel/trace/Kconfig | 3 +
> kernel/trace/ftrace.c | 7 +-
> 4 files changed, 200 insertions(+), 31 deletions(-)
>
As far as I can follow, everything looks reasonable
Acked-by: Andrii Nakryiko <andrii@kernel.org>
[...]
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCHv6 bpf-next 7/9] bpf: Add trampoline ip hash table
2026-01-10 0:36 ` Andrii Nakryiko
@ 2026-01-12 21:27 ` Jiri Olsa
2026-01-13 11:02 ` Alan Maguire
0 siblings, 1 reply; 27+ messages in thread
From: Jiri Olsa @ 2026-01-12 21:27 UTC (permalink / raw)
To: Andrii Nakryiko, Alan Maguire
Cc: Steven Rostedt, Florent Revest, Mark Rutland, bpf, linux-kernel,
linux-trace-kernel, linux-arm-kernel, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Menglong Dong, Song Liu
On Fri, Jan 09, 2026 at 04:36:41PM -0800, Andrii Nakryiko wrote:
> On Tue, Dec 30, 2025 at 6:51 AM Jiri Olsa <jolsa@kernel.org> wrote:
> >
> > Following changes need to lookup trampoline based on its ip address,
> > adding hash table for that.
> >
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> > include/linux/bpf.h | 7 +++++--
> > kernel/bpf/trampoline.c | 30 +++++++++++++++++++-----------
> > 2 files changed, 24 insertions(+), 13 deletions(-)
> >
> > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > index 4e7d72dfbcd4..c85677aae865 100644
> > --- a/include/linux/bpf.h
> > +++ b/include/linux/bpf.h
> > @@ -1325,14 +1325,17 @@ struct bpf_tramp_image {
> > };
> >
> > struct bpf_trampoline {
> > - /* hlist for trampoline_table */
> > - struct hlist_node hlist;
> > + /* hlist for trampoline_key_table */
> > + struct hlist_node hlist_key;
> > + /* hlist for trampoline_ip_table */
> > + struct hlist_node hlist_ip;
> > struct ftrace_ops *fops;
> > /* serializes access to fields of this trampoline */
> > struct mutex mutex;
> > refcount_t refcnt;
> > u32 flags;
> > u64 key;
> > + unsigned long ip;
> > struct {
> > struct btf_func_model model;
> > void *addr;
> > diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> > index 789ff4e1f40b..bdac9d673776 100644
> > --- a/kernel/bpf/trampoline.c
> > +++ b/kernel/bpf/trampoline.c
> > @@ -24,9 +24,10 @@ const struct bpf_prog_ops bpf_extension_prog_ops = {
> > #define TRAMPOLINE_HASH_BITS 10
> > #define TRAMPOLINE_TABLE_SIZE (1 << TRAMPOLINE_HASH_BITS)
> >
> > -static struct hlist_head trampoline_table[TRAMPOLINE_TABLE_SIZE];
> > +static struct hlist_head trampoline_key_table[TRAMPOLINE_TABLE_SIZE];
> > +static struct hlist_head trampoline_ip_table[TRAMPOLINE_TABLE_SIZE];
> >
> > -/* serializes access to trampoline_table */
> > +/* serializes access to trampoline tables */
> > static DEFINE_MUTEX(trampoline_mutex);
> >
> > #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
> > @@ -135,15 +136,15 @@ void bpf_image_ksym_del(struct bpf_ksym *ksym)
> > PAGE_SIZE, true, ksym->name);
> > }
> >
> > -static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
> > +static struct bpf_trampoline *bpf_trampoline_lookup(u64 key, unsigned long ip)
> > {
> > struct bpf_trampoline *tr;
> > struct hlist_head *head;
> > int i;
> >
> > mutex_lock(&trampoline_mutex);
> > - head = &trampoline_table[hash_64(key, TRAMPOLINE_HASH_BITS)];
> > - hlist_for_each_entry(tr, head, hlist) {
> > + head = &trampoline_key_table[hash_64(key, TRAMPOLINE_HASH_BITS)];
> > + hlist_for_each_entry(tr, head, hlist_key) {
> > if (tr->key == key) {
> > refcount_inc(&tr->refcnt);
> > goto out;
> > @@ -164,8 +165,12 @@ static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
> > #endif
> >
> > tr->key = key;
> > - INIT_HLIST_NODE(&tr->hlist);
> > - hlist_add_head(&tr->hlist, head);
> > + tr->ip = ftrace_location(ip);
> > + INIT_HLIST_NODE(&tr->hlist_key);
> > + INIT_HLIST_NODE(&tr->hlist_ip);
> > + hlist_add_head(&tr->hlist_key, head);
> > + head = &trampoline_ip_table[hash_64(tr->ip, TRAMPOLINE_HASH_BITS)];
>
> For key lookups we check that there is no existing trampoline for the
> given key. Can it happen that we have two trampolines at the same IP
> but using two different keys?
so multiple keys (different static functions with the same name) resolving to
the same ip happened in the past, and we should now be able to catch those in
pahole, right? CC-ing Alan ;-)
however, that should fail the attachment at the ftrace/direct layer:
say we have already registered and attached trampoline key1-ip1;
a follow-up attachment of trampoline key2-ip1 will fail in:
bpf_trampoline_update
register_fentry
direct_ops_add
update_ftrace_direct_add
...
/* Make sure requested entries are not already registered. */
fails, because ip1 is already in direct_functions
...
jirka
>
>
>
> > + hlist_add_head(&tr->hlist_ip, head);
> > refcount_set(&tr->refcnt, 1);
> > mutex_init(&tr->mutex);
> > for (i = 0; i < BPF_TRAMP_MAX; i++)
> > @@ -846,7 +851,7 @@ void bpf_trampoline_unlink_cgroup_shim(struct bpf_prog *prog)
> > prog->aux->attach_btf_id);
> >
> > bpf_lsm_find_cgroup_shim(prog, &bpf_func);
> > - tr = bpf_trampoline_lookup(key);
> > + tr = bpf_trampoline_lookup(key, 0);
> > if (WARN_ON_ONCE(!tr))
> > return;
> >
> > @@ -866,7 +871,7 @@ struct bpf_trampoline *bpf_trampoline_get(u64 key,
> > {
> > struct bpf_trampoline *tr;
> >
> > - tr = bpf_trampoline_lookup(key);
> > + tr = bpf_trampoline_lookup(key, tgt_info->tgt_addr);
> > if (!tr)
> > return NULL;
> >
> > @@ -902,7 +907,8 @@ void bpf_trampoline_put(struct bpf_trampoline *tr)
> > * fexit progs. The fentry-only trampoline will be freed via
> > * multiple rcu callbacks.
> > */
> > - hlist_del(&tr->hlist);
> > + hlist_del(&tr->hlist_key);
> > + hlist_del(&tr->hlist_ip);
> > if (tr->fops) {
> > ftrace_free_filter(tr->fops);
> > kfree(tr->fops);
> > @@ -1175,7 +1181,9 @@ static int __init init_trampolines(void)
> > int i;
> >
> > for (i = 0; i < TRAMPOLINE_TABLE_SIZE; i++)
> > - INIT_HLIST_HEAD(&trampoline_table[i]);
> > + INIT_HLIST_HEAD(&trampoline_key_table[i]);
> > + for (i = 0; i < TRAMPOLINE_TABLE_SIZE; i++)
> > + INIT_HLIST_HEAD(&trampoline_ip_table[i]);
> > return 0;
> > }
> > late_initcall(init_trampolines);
> > --
> > 2.52.0
> >
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCHv6 bpf-next 7/9] bpf: Add trampoline ip hash table
2026-01-12 21:27 ` Jiri Olsa
@ 2026-01-13 11:02 ` Alan Maguire
2026-01-13 11:58 ` Jiri Olsa
0 siblings, 1 reply; 27+ messages in thread
From: Alan Maguire @ 2026-01-13 11:02 UTC (permalink / raw)
To: Jiri Olsa, Andrii Nakryiko
Cc: Steven Rostedt, Florent Revest, Mark Rutland, bpf, linux-kernel,
linux-trace-kernel, linux-arm-kernel, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Menglong Dong, Song Liu
On 12/01/2026 21:27, Jiri Olsa wrote:
> On Fri, Jan 09, 2026 at 04:36:41PM -0800, Andrii Nakryiko wrote:
>> On Tue, Dec 30, 2025 at 6:51 AM Jiri Olsa <jolsa@kernel.org> wrote:
>>>
>>> Following changes need to lookup trampoline based on its ip address,
>>> adding hash table for that.
>>>
>>> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
>>> ---
>>> include/linux/bpf.h | 7 +++++--
>>> kernel/bpf/trampoline.c | 30 +++++++++++++++++++-----------
>>> 2 files changed, 24 insertions(+), 13 deletions(-)
>>>
>>> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
>>> index 4e7d72dfbcd4..c85677aae865 100644
>>> --- a/include/linux/bpf.h
>>> +++ b/include/linux/bpf.h
>>> @@ -1325,14 +1325,17 @@ struct bpf_tramp_image {
>>> };
>>>
>>> struct bpf_trampoline {
>>> - /* hlist for trampoline_table */
>>> - struct hlist_node hlist;
>>> + /* hlist for trampoline_key_table */
>>> + struct hlist_node hlist_key;
>>> + /* hlist for trampoline_ip_table */
>>> + struct hlist_node hlist_ip;
>>> struct ftrace_ops *fops;
>>> /* serializes access to fields of this trampoline */
>>> struct mutex mutex;
>>> refcount_t refcnt;
>>> u32 flags;
>>> u64 key;
>>> + unsigned long ip;
>>> struct {
>>> struct btf_func_model model;
>>> void *addr;
>>> diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
>>> index 789ff4e1f40b..bdac9d673776 100644
>>> --- a/kernel/bpf/trampoline.c
>>> +++ b/kernel/bpf/trampoline.c
>>> @@ -24,9 +24,10 @@ const struct bpf_prog_ops bpf_extension_prog_ops = {
>>> #define TRAMPOLINE_HASH_BITS 10
>>> #define TRAMPOLINE_TABLE_SIZE (1 << TRAMPOLINE_HASH_BITS)
>>>
>>> -static struct hlist_head trampoline_table[TRAMPOLINE_TABLE_SIZE];
>>> +static struct hlist_head trampoline_key_table[TRAMPOLINE_TABLE_SIZE];
>>> +static struct hlist_head trampoline_ip_table[TRAMPOLINE_TABLE_SIZE];
>>>
>>> -/* serializes access to trampoline_table */
>>> +/* serializes access to trampoline tables */
>>> static DEFINE_MUTEX(trampoline_mutex);
>>>
>>> #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
>>> @@ -135,15 +136,15 @@ void bpf_image_ksym_del(struct bpf_ksym *ksym)
>>> PAGE_SIZE, true, ksym->name);
>>> }
>>>
>>> -static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
>>> +static struct bpf_trampoline *bpf_trampoline_lookup(u64 key, unsigned long ip)
>>> {
>>> struct bpf_trampoline *tr;
>>> struct hlist_head *head;
>>> int i;
>>>
>>> mutex_lock(&trampoline_mutex);
>>> - head = &trampoline_table[hash_64(key, TRAMPOLINE_HASH_BITS)];
>>> - hlist_for_each_entry(tr, head, hlist) {
>>> + head = &trampoline_key_table[hash_64(key, TRAMPOLINE_HASH_BITS)];
>>> + hlist_for_each_entry(tr, head, hlist_key) {
>>> if (tr->key == key) {
>>> refcount_inc(&tr->refcnt);
>>> goto out;
>>> @@ -164,8 +165,12 @@ static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
>>> #endif
>>>
>>> tr->key = key;
>>> - INIT_HLIST_NODE(&tr->hlist);
>>> - hlist_add_head(&tr->hlist, head);
>>> + tr->ip = ftrace_location(ip);
>>> + INIT_HLIST_NODE(&tr->hlist_key);
>>> + INIT_HLIST_NODE(&tr->hlist_ip);
>>> + hlist_add_head(&tr->hlist_key, head);
>>> + head = &trampoline_ip_table[hash_64(tr->ip, TRAMPOLINE_HASH_BITS)];
>>
>> For key lookups we check that there is no existing trampoline for the
>> given key. Can it happen that we have two trampolines at the same IP
>> but using two different keys?
>
> so multiple keys (different static functions with same name) resolving to
> the same ip happened in past and we should now be able to catch those in
> pahole, right? CC-ing Alan ;-)
>
We could catch this I think, but today we don't. We have support to avoid
encoding BTF where a function name has multiple instances (ambiguous address).
Here you're concerned with mapping from ip to function name, where multiple
names share the same ip, right?
A quick scan of System.map suggests there are ~150 of these,
excluding __pfx_ entries:
$ awk 'NR > 1 && ($2 == "T" || $2 == "t") && $1 == prev_field { print;} { prev_field = $1}' System.map|egrep -v __pfx|wc -l
155
Alan
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCHv6 bpf-next 7/9] bpf: Add trampoline ip hash table
2026-01-13 11:02 ` Alan Maguire
@ 2026-01-13 11:58 ` Jiri Olsa
0 siblings, 0 replies; 27+ messages in thread
From: Jiri Olsa @ 2026-01-13 11:58 UTC (permalink / raw)
To: Alan Maguire
Cc: Jiri Olsa, Andrii Nakryiko, Steven Rostedt, Florent Revest,
Mark Rutland, bpf, linux-kernel, linux-trace-kernel,
linux-arm-kernel, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Menglong Dong, Song Liu
On Tue, Jan 13, 2026 at 11:02:33AM +0000, Alan Maguire wrote:
> On 12/01/2026 21:27, Jiri Olsa wrote:
> > On Fri, Jan 09, 2026 at 04:36:41PM -0800, Andrii Nakryiko wrote:
> >> On Tue, Dec 30, 2025 at 6:51 AM Jiri Olsa <jolsa@kernel.org> wrote:
> >>>
> >>> Following changes need to lookup trampoline based on its ip address,
> >>> adding hash table for that.
> >>>
> >>> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> >>> ---
> >>> include/linux/bpf.h | 7 +++++--
> >>> kernel/bpf/trampoline.c | 30 +++++++++++++++++++-----------
> >>> 2 files changed, 24 insertions(+), 13 deletions(-)
> >>>
> >>> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> >>> index 4e7d72dfbcd4..c85677aae865 100644
> >>> --- a/include/linux/bpf.h
> >>> +++ b/include/linux/bpf.h
> >>> @@ -1325,14 +1325,17 @@ struct bpf_tramp_image {
> >>> };
> >>>
> >>> struct bpf_trampoline {
> >>> - /* hlist for trampoline_table */
> >>> - struct hlist_node hlist;
> >>> + /* hlist for trampoline_key_table */
> >>> + struct hlist_node hlist_key;
> >>> + /* hlist for trampoline_ip_table */
> >>> + struct hlist_node hlist_ip;
> >>> struct ftrace_ops *fops;
> >>> /* serializes access to fields of this trampoline */
> >>> struct mutex mutex;
> >>> refcount_t refcnt;
> >>> u32 flags;
> >>> u64 key;
> >>> + unsigned long ip;
> >>> struct {
> >>> struct btf_func_model model;
> >>> void *addr;
> >>> diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> >>> index 789ff4e1f40b..bdac9d673776 100644
> >>> --- a/kernel/bpf/trampoline.c
> >>> +++ b/kernel/bpf/trampoline.c
> >>> @@ -24,9 +24,10 @@ const struct bpf_prog_ops bpf_extension_prog_ops = {
> >>> #define TRAMPOLINE_HASH_BITS 10
> >>> #define TRAMPOLINE_TABLE_SIZE (1 << TRAMPOLINE_HASH_BITS)
> >>>
> >>> -static struct hlist_head trampoline_table[TRAMPOLINE_TABLE_SIZE];
> >>> +static struct hlist_head trampoline_key_table[TRAMPOLINE_TABLE_SIZE];
> >>> +static struct hlist_head trampoline_ip_table[TRAMPOLINE_TABLE_SIZE];
> >>>
> >>> -/* serializes access to trampoline_table */
> >>> +/* serializes access to trampoline tables */
> >>> static DEFINE_MUTEX(trampoline_mutex);
> >>>
> >>> #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
> >>> @@ -135,15 +136,15 @@ void bpf_image_ksym_del(struct bpf_ksym *ksym)
> >>> PAGE_SIZE, true, ksym->name);
> >>> }
> >>>
> >>> -static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
> >>> +static struct bpf_trampoline *bpf_trampoline_lookup(u64 key, unsigned long ip)
> >>> {
> >>> struct bpf_trampoline *tr;
> >>> struct hlist_head *head;
> >>> int i;
> >>>
> >>> mutex_lock(&trampoline_mutex);
> >>> - head = &trampoline_table[hash_64(key, TRAMPOLINE_HASH_BITS)];
> >>> - hlist_for_each_entry(tr, head, hlist) {
> >>> + head = &trampoline_key_table[hash_64(key, TRAMPOLINE_HASH_BITS)];
> >>> + hlist_for_each_entry(tr, head, hlist_key) {
> >>> if (tr->key == key) {
> >>> refcount_inc(&tr->refcnt);
> >>> goto out;
> >>> @@ -164,8 +165,12 @@ static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
> >>> #endif
> >>>
> >>> tr->key = key;
> >>> - INIT_HLIST_NODE(&tr->hlist);
> >>> - hlist_add_head(&tr->hlist, head);
> >>> + tr->ip = ftrace_location(ip);
> >>> + INIT_HLIST_NODE(&tr->hlist_key);
> >>> + INIT_HLIST_NODE(&tr->hlist_ip);
> >>> + hlist_add_head(&tr->hlist_key, head);
> >>> + head = &trampoline_ip_table[hash_64(tr->ip, TRAMPOLINE_HASH_BITS)];
> >>
> >> For key lookups we check that there is no existing trampoline for the
> >> given key. Can it happen that we have two trampolines at the same IP
> >> but using two different keys?
> >
> > so multiple keys (different static functions with same name) resolving to
> > the same ip happened in past and we should now be able to catch those in
> > pahole, right? CC-ing Alan ;-)
> >
>
> We could catch this I think, but today we don't. We have support to avoid
> encoding BTF where a function name has multiple instances (ambiguous address).
> Here you're concerned with mapping from ip to function name, where multiple
> names share the same ip, right?
so trampolines work only on top of a BTF func record, so the 'key' represents
a BTF_KIND_FUNC record.. and as such it can resolve to just a single ip, because
pahole filters out functions with ambiguous instances IIUC
>
> A quick scan of System.map suggests there's a ~150 of these,
> excluding __pfx_ entries:
>
> $ awk 'NR > 1 && ($2 == "T" || $2 == "t") && $1 == prev_field { print;} { prev_field = $1}' System.map|egrep -v __pfx|wc -l
> 155
right, but these are just regular kernel symbols with aliases and other
shared stuff
jirka
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCHv6 bpf-next 0/9] ftrace,bpf: Use single direct ops for bpf trampolines
2025-12-30 14:50 [PATCHv6 bpf-next 0/9] ftrace,bpf: Use single direct ops for bpf trampolines Jiri Olsa
` (8 preceding siblings ...)
2025-12-30 14:50 ` [PATCHv6 bpf-next 9/9] bpf,x86: Use single ftrace_ops for direct calls Jiri Olsa
@ 2026-01-15 18:54 ` Andrii Nakryiko
2026-01-26 9:48 ` Jiri Olsa
2026-01-28 14:48 ` Steven Rostedt
2026-01-28 20:00 ` patchwork-bot+netdevbpf
11 siblings, 1 reply; 27+ messages in thread
From: Andrii Nakryiko @ 2026-01-15 18:54 UTC (permalink / raw)
To: Jiri Olsa, Steven Rostedt
Cc: Florent Revest, Mark Rutland, bpf, linux-kernel,
linux-trace-kernel, linux-arm-kernel, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Menglong Dong, Song Liu
On Tue, Dec 30, 2025 at 6:50 AM Jiri Olsa <jolsa@kernel.org> wrote:
>
> hi,
> while poking the multi-tracing interface I ended up with just one ftrace_ops
> object to attach all trampolines.
>
> This change allows to use less direct API calls during the attachment changes
> in the future code, so in effect speeding up the attachment.
>
> In current code we get a speed up from using just a single ftrace_ops object.
>
> - with current code:
>
> Performance counter stats for 'bpftrace -e fentry:vmlinux:ksys_* {} -c true':
>
> 6,364,157,902 cycles:k
> 828,728,902 cycles:u
> 1,064,803,824 instructions:u # 1.28 insn per cycle
> 23,797,500,067 instructions:k # 3.74 insn per cycle
>
> 4.416004987 seconds time elapsed
>
> 0.164121000 seconds user
> 1.289550000 seconds sys
>
>
> - with the fix:
>
> Performance counter stats for 'bpftrace -e fentry:vmlinux:ksys_* {} -c true':
>
> 6,535,857,905 cycles:k
> 810,809,429 cycles:u
> 1,064,594,027 instructions:u # 1.31 insn per cycle
> 23,962,552,894 instructions:k # 3.67 insn per cycle
>
> 1.666961239 seconds time elapsed
>
> 0.157412000 seconds user
> 1.283396000 seconds sys
>
>
>
> The speedup seems to be related to the fact that with single ftrace_ops object
> we don't call ftrace_shutdown anymore (we use ftrace_update_ops instead) and
> we skip the synchronize rcu calls (each ~100ms) at the end of that function.
>
> rfc: https://lore.kernel.org/bpf/20250729102813.1531457-1-jolsa@kernel.org/
> v1: https://lore.kernel.org/bpf/20250923215147.1571952-1-jolsa@kernel.org/
> v2: https://lore.kernel.org/bpf/20251113123750.2507435-1-jolsa@kernel.org/
> v3: https://lore.kernel.org/bpf/20251120212402.466524-1-jolsa@kernel.org/
> v4: https://lore.kernel.org/bpf/20251203082402.78816-1-jolsa@kernel.org/
> v5: https://lore.kernel.org/bpf/20251215211402.353056-10-jolsa@kernel.org/
>
> v6 changes:
> - rename add_hash_entry_direct to add_ftrace_hash_entry_direct [Steven]
> - factor hash_add/hash_sub [Steven]
> - add kerneldoc header for update_ftrace_direct_* functions [Steven]
> - few assorted smaller fixes [Steven]
> - added missing direct_ops wrappers for !CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
> case [Steven]
>
So this looks good from the BPF side, I think. Steven, if you don't mind
giving this patch set another look and, if everything is to your liking,
your ack, we can then apply it to bpf-next. Thanks!
> v5 changes:
> - do not export ftrace_hash object [Steven]
> - fix update_ftrace_direct_add new_filter_hash leak [ci]
>
> v4 changes:
> - rebased on top of bpf-next/master (with jmp attach changes)
> added patch 1 to deal with that
> - added extra checks for update_ftrace_direct_del/mod to address
> the ci bot review
>
> v3 changes:
> - rebased on top of bpf-next/master
> - fixed update_ftrace_direct_del cleanup path
> - added missing inline to update_ftrace_direct_* stubs
>
> v2 changes:
> - rebased on top of bpf-next/master plus Song's livepatch fixes [1]
> - renamed the API functions [2] [Steven]
> - do not export the new api [Steven]
> - kept the original direct interface:
>
> I'm not sure if we want to merge both *_ftrace_direct and the new interface
> into a single one. It's a bit different in semantics (hence the name change,
> as Steven suggested [2]), and I don't think the changes are that big, so
> we could easily keep both APIs.
>
> v1 changes:
> - make the change x86 specific, after discussing with Mark options for
> arm64 [Mark]
>
> thanks,
> jirka
>
>
> [1] https://lore.kernel.org/bpf/20251027175023.1521602-1-song@kernel.org/
> [2] https://lore.kernel.org/bpf/20250924050415.4aefcb91@batman.local.home/
> ---
> Jiri Olsa (9):
> ftrace,bpf: Remove FTRACE_OPS_FL_JMP ftrace_ops flag
> ftrace: Make alloc_and_copy_ftrace_hash direct friendly
> ftrace: Export some of hash related functions
> ftrace: Add update_ftrace_direct_add function
> ftrace: Add update_ftrace_direct_del function
> ftrace: Add update_ftrace_direct_mod function
> bpf: Add trampoline ip hash table
> ftrace: Factor ftrace_ops ops_func interface
> bpf,x86: Use single ftrace_ops for direct calls
>
> arch/x86/Kconfig | 1 +
> include/linux/bpf.h | 7 ++-
> include/linux/ftrace.h | 31 +++++++++-
> kernel/bpf/trampoline.c | 259 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---------------
> kernel/trace/Kconfig | 3 +
> kernel/trace/ftrace.c | 406 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--------
> 6 files changed, 632 insertions(+), 75 deletions(-)
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCHv6 bpf-next 0/9] ftrace,bpf: Use single direct ops for bpf trampolines
2026-01-15 18:54 ` [PATCHv6 bpf-next 0/9] ftrace,bpf: Use single direct ops for bpf trampolines Andrii Nakryiko
@ 2026-01-26 9:48 ` Jiri Olsa
0 siblings, 0 replies; 27+ messages in thread
From: Jiri Olsa @ 2026-01-26 9:48 UTC (permalink / raw)
To: Steven Rostedt
Cc: Florent Revest, Andrii Nakryiko, Mark Rutland, bpf, linux-kernel,
linux-trace-kernel, linux-arm-kernel, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Menglong Dong, Song Liu
hi,
gentle ping, thanks
jirka
On Thu, Jan 15, 2026 at 10:54:09AM -0800, Andrii Nakryiko wrote:
> On Tue, Dec 30, 2025 at 6:50 AM Jiri Olsa <jolsa@kernel.org> wrote:
> >
> > hi,
> > while poking the multi-tracing interface I ended up with just one ftrace_ops
> > object to attach all trampolines.
> >
> > This change allows to use less direct API calls during the attachment changes
> > in the future code, so in effect speeding up the attachment.
> >
> > In current code we get a speed up from using just a single ftrace_ops object.
> >
> > - with current code:
> >
> > Performance counter stats for 'bpftrace -e fentry:vmlinux:ksys_* {} -c true':
> >
> > 6,364,157,902 cycles:k
> > 828,728,902 cycles:u
> > 1,064,803,824 instructions:u # 1.28 insn per cycle
> > 23,797,500,067 instructions:k # 3.74 insn per cycle
> >
> > 4.416004987 seconds time elapsed
> >
> > 0.164121000 seconds user
> > 1.289550000 seconds sys
> >
> >
> > - with the fix:
> >
> > Performance counter stats for 'bpftrace -e fentry:vmlinux:ksys_* {} -c true':
> >
> > 6,535,857,905 cycles:k
> > 810,809,429 cycles:u
> > 1,064,594,027 instructions:u # 1.31 insn per cycle
> > 23,962,552,894 instructions:k # 3.67 insn per cycle
> >
> > 1.666961239 seconds time elapsed
> >
> > 0.157412000 seconds user
> > 1.283396000 seconds sys
> >
> >
> >
> > The speedup seems to be related to the fact that with single ftrace_ops object
> > we don't call ftrace_shutdown anymore (we use ftrace_update_ops instead) and
> > we skip the synchronize rcu calls (each ~100ms) at the end of that function.
> >
> > rfc: https://lore.kernel.org/bpf/20250729102813.1531457-1-jolsa@kernel.org/
> > v1: https://lore.kernel.org/bpf/20250923215147.1571952-1-jolsa@kernel.org/
> > v2: https://lore.kernel.org/bpf/20251113123750.2507435-1-jolsa@kernel.org/
> > v3: https://lore.kernel.org/bpf/20251120212402.466524-1-jolsa@kernel.org/
> > v4: https://lore.kernel.org/bpf/20251203082402.78816-1-jolsa@kernel.org/
> > v5: https://lore.kernel.org/bpf/20251215211402.353056-10-jolsa@kernel.org/
> >
> > v6 changes:
> > - rename add_hash_entry_direct to add_ftrace_hash_entry_direct [Steven]
> > - factor hash_add/hash_sub [Steven]
> > - add kerneldoc header for update_ftrace_direct_* functions [Steven]
> > - few assorted smaller fixes [Steven]
> > - added missing direct_ops wrappers for !CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
> > case [Steven]
> >
>
> So this looks good from BPF side, I think. Steven, if you don't mind
> giving this patch set another look and if everything is to your liking
> giving your ack, we can then apply it to bpf-next. Thanks!
>
> > v5 changes:
> > - do not export ftrace_hash object [Steven]
> > - fix update_ftrace_direct_add new_filter_hash leak [ci]
> >
> > v4 changes:
> > - rebased on top of bpf-next/master (with jmp attach changes)
> > added patch 1 to deal with that
> > - added extra checks for update_ftrace_direct_del/mod to address
> > the ci bot review
> >
> > v3 changes:
> > - rebased on top of bpf-next/master
> > - fixed update_ftrace_direct_del cleanup path
> > - added missing inline to update_ftrace_direct_* stubs
> >
> > v2 changes:
> > - rebased on top of bpf-next/master plus Song's livepatch fixes [1]
> > - renamed the API functions [2] [Steven]
> > - do not export the new api [Steven]
> > - kept the original direct interface:
> >
> > I'm not sure if we want to merge both *_ftrace_direct and the new interface
> > into a single one. It's a bit different in semantics (hence the name change,
> > as Steven suggested [2]), and I don't think the changes are that big, so
> > we could easily keep both APIs.
> >
> > v1 changes:
> > - make the change x86 specific, after discussing with Mark options for
> > arm64 [Mark]
> >
> > thanks,
> > jirka
> >
> >
> > [1] https://lore.kernel.org/bpf/20251027175023.1521602-1-song@kernel.org/
> > [2] https://lore.kernel.org/bpf/20250924050415.4aefcb91@batman.local.home/
> > ---
> > Jiri Olsa (9):
> > ftrace,bpf: Remove FTRACE_OPS_FL_JMP ftrace_ops flag
> > ftrace: Make alloc_and_copy_ftrace_hash direct friendly
> > ftrace: Export some of hash related functions
> > ftrace: Add update_ftrace_direct_add function
> > ftrace: Add update_ftrace_direct_del function
> > ftrace: Add update_ftrace_direct_mod function
> > bpf: Add trampoline ip hash table
> > ftrace: Factor ftrace_ops ops_func interface
> > bpf,x86: Use single ftrace_ops for direct calls
> >
> > arch/x86/Kconfig | 1 +
> > include/linux/bpf.h | 7 ++-
> > include/linux/ftrace.h | 31 +++++++++-
> > kernel/bpf/trampoline.c | 259 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---------------
> > kernel/trace/Kconfig | 3 +
> > kernel/trace/ftrace.c | 406 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--------
> > 6 files changed, 632 insertions(+), 75 deletions(-)
> >
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCHv6 bpf-next 0/9] ftrace,bpf: Use single direct ops for bpf trampolines
2025-12-30 14:50 [PATCHv6 bpf-next 0/9] ftrace,bpf: Use single direct ops for bpf trampolines Jiri Olsa
` (9 preceding siblings ...)
2026-01-15 18:54 ` [PATCHv6 bpf-next 0/9] ftrace,bpf: Use single direct ops for bpf trampolines Andrii Nakryiko
@ 2026-01-28 14:48 ` Steven Rostedt
2026-01-28 20:00 ` patchwork-bot+netdevbpf
11 siblings, 0 replies; 27+ messages in thread
From: Steven Rostedt @ 2026-01-28 14:48 UTC (permalink / raw)
To: Jiri Olsa
Cc: Florent Revest, Mark Rutland, bpf, linux-kernel,
linux-trace-kernel, linux-arm-kernel, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Menglong Dong, Song Liu
On Tue, 30 Dec 2025 15:50:01 +0100
Jiri Olsa <jolsa@kernel.org> wrote:
> Jiri Olsa (9):
> ftrace,bpf: Remove FTRACE_OPS_FL_JMP ftrace_ops flag
> ftrace: Make alloc_and_copy_ftrace_hash direct friendly
> ftrace: Export some of hash related functions
> ftrace: Add update_ftrace_direct_add function
> ftrace: Add update_ftrace_direct_del function
> ftrace: Add update_ftrace_direct_mod function
> bpf: Add trampoline ip hash table
> ftrace: Factor ftrace_ops ops_func interface
> bpf,x86: Use single ftrace_ops for direct calls
I reviewed all the above patches with the exception of patch 7 (which was
BPF only). I even ran the entire set through my internal tests and they
passed.
I don't have anything for this merge window that will conflict with this
series, so if you want to push it through the BPF tree, feel free to do so.
For patches 1-6,8,9:
Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
-- Steve
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCHv6 bpf-next 0/9] ftrace,bpf: Use single direct ops for bpf trampolines
2025-12-30 14:50 [PATCHv6 bpf-next 0/9] ftrace,bpf: Use single direct ops for bpf trampolines Jiri Olsa
` (10 preceding siblings ...)
2026-01-28 14:48 ` Steven Rostedt
@ 2026-01-28 20:00 ` patchwork-bot+netdevbpf
11 siblings, 0 replies; 27+ messages in thread
From: patchwork-bot+netdevbpf @ 2026-01-28 20:00 UTC (permalink / raw)
To: Jiri Olsa
Cc: rostedt, revest, mark.rutland, bpf, linux-kernel,
linux-trace-kernel, linux-arm-kernel, ast, daniel, andrii,
menglong8.dong, song
Hello:
This series was applied to bpf/bpf-next.git (master)
by Andrii Nakryiko <andrii@kernel.org>:
On Tue, 30 Dec 2025 15:50:01 +0100 you wrote:
> hi,
> while poking the multi-tracing interface I ended up with just one ftrace_ops
> object to attach all trampolines.
>
> This change allows to use less direct API calls during the attachment changes
> in the future code, so in effect speeding up the attachment.
>
> [...]
Here is the summary with links:
- [PATCHv6,bpf-next,1/9] ftrace,bpf: Remove FTRACE_OPS_FL_JMP ftrace_ops flag
https://git.kernel.org/bpf/bpf-next/c/4be42c922201
- [PATCHv6,bpf-next,2/9] ftrace: Make alloc_and_copy_ftrace_hash direct friendly
https://git.kernel.org/bpf/bpf-next/c/676bfeae7bd5
- [PATCHv6,bpf-next,3/9] ftrace: Export some of hash related functions
https://git.kernel.org/bpf/bpf-next/c/0e860d07c29d
- [PATCHv6,bpf-next,4/9] ftrace: Add update_ftrace_direct_add function
https://git.kernel.org/bpf/bpf-next/c/05dc5e9c1fe1
- [PATCHv6,bpf-next,5/9] ftrace: Add update_ftrace_direct_del function
https://git.kernel.org/bpf/bpf-next/c/8d2c1233f371
- [PATCHv6,bpf-next,6/9] ftrace: Add update_ftrace_direct_mod function
https://git.kernel.org/bpf/bpf-next/c/e93672f770d7
- [PATCHv6,bpf-next,7/9] bpf: Add trampoline ip hash table
https://git.kernel.org/bpf/bpf-next/c/7d0452497c29
- [PATCHv6,bpf-next,8/9] ftrace: Factor ftrace_ops ops_func interface
https://git.kernel.org/bpf/bpf-next/c/956747efd82a
- [PATCHv6,bpf-next,9/9] bpf,x86: Use single ftrace_ops for direct calls
https://git.kernel.org/bpf/bpf-next/c/424f6a361096
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCHv6 bpf-next 9/9] bpf,x86: Use single ftrace_ops for direct calls
2025-12-30 14:50 ` [PATCHv6 bpf-next 9/9] bpf,x86: Use single ftrace_ops for direct calls Jiri Olsa
2026-01-10 0:36 ` Andrii Nakryiko
@ 2026-02-27 17:40 ` Ihor Solodrai
2026-02-27 20:37 ` Jiri Olsa
1 sibling, 1 reply; 27+ messages in thread
From: Ihor Solodrai @ 2026-02-27 17:40 UTC (permalink / raw)
To: Jiri Olsa, Steven Rostedt, Florent Revest, Mark Rutland
Cc: bpf, linux-kernel, linux-trace-kernel, linux-arm-kernel,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Menglong Dong, Song Liu, Kumar Kartikeya Dwivedi
On 12/30/25 6:50 AM, Jiri Olsa wrote:
> Using single ftrace_ops for direct calls update instead of allocating
> ftrace_ops object for each trampoline.
>
> With single ftrace_ops object we can use update_ftrace_direct_* api
> that allows multiple ip sites updates on single ftrace_ops object.
>
> Adding HAVE_SINGLE_FTRACE_DIRECT_OPS config option to be enabled on
> each arch that supports this.
>
> At the moment we can enable this only on x86 arch, because arm relies
> on ftrace_ops object representing just single trampoline image (stored
> in ftrace_ops::direct_call). Archs that do not support this will continue
> to use *_ftrace_direct api.
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Hi Jiri,
Kumar and I stumbled on kernel splats with "ftrace failed to modify",
and when running with KASAN:
BUG: KASAN: slab-use-after-free in __get_valid_kprobe+0x224/0x2a0
Pasting a full splat example at the bottom.
I was able to create a reproducer with AI, and then used it to bisect
to this patch. You can run it with ./test_progs -t ftrace_direct_race
Below is my (human-generated, haha) summary of AI's analysis of what's
happening. It makes sense to me conceptually, but I don't know enough
details here to call bullshit. Please take a look:
With CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS, ftrace_replace_code()
operates on all call sites in the shared ops. If a concurrent
ftrace user (like a kprobe) modifies a call site between
ftrace_replace_code's verify pass and its patch pass, ftrace_bug
fires and sets ftrace_disabled to 1.
Once ftrace is disabled, direct_ops_del silently fails to unregister
the direct call, and the call site still redirects to the stale
trampoline. After the BPF program is freed, we'll get use-after-free
on the next trace hit.
The reproducer is not great, because if everything is fine it just hangs.
But with the bug the kernel crashes pretty fast.
Maybe it makes sense to refine it into a proper "stress" selftest?
Reproducer patch:
From c595ef5a0ad9bc62d768080ff09502bc982c40e6 Mon Sep 17 00:00:00 2001
From: Ihor Solodrai <ihor.solodrai@linux.dev>
Date: Thu, 26 Feb 2026 17:00:39 -0800
Subject: [PATCH] reproducer
---
.../bpf/prog_tests/ftrace_direct_race.c | 243 ++++++++++++++++++
1 file changed, 243 insertions(+)
create mode 100644 tools/testing/selftests/bpf/prog_tests/ftrace_direct_race.c
diff --git a/tools/testing/selftests/bpf/prog_tests/ftrace_direct_race.c b/tools/testing/selftests/bpf/prog_tests/ftrace_direct_race.c
new file mode 100644
index 000000000000..369c55364d05
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/ftrace_direct_race.c
@@ -0,0 +1,243 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+
+/* Test to reproduce ftrace race between BPF trampoline attach/detach
+ * and kprobe attach/detach on the same function.
+ *
+ * With CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS, all BPF trampolines share
+ * a single ftrace_ops. Concurrent modifications (BPF trampoline vs kprobe)
+ * can race in ftrace_replace_code's verify-then-patch sequence, causing
+ * ftrace to become permanently disabled and leaving stale trampolines
+ * that reference freed BPF programs.
+ *
+ * Run with: ./test_progs -t ftrace_direct_race
+ */
+#include <test_progs.h>
+#include <bpf/libbpf.h>
+#include <pthread.h>
+#include <sys/ioctl.h>
+#include <linux/perf_event.h>
+#include <sys/syscall.h>
+
+#include "fentry_test.lskel.h"
+
+#define NUM_ITERATIONS 200
+
+static volatile bool stop;
+
+/* Thread 1: Rapidly attach and detach fentry BPF trampolines */
+static void *fentry_thread_fn(void *arg)
+{
+ int i;
+
+ for (i = 0; i < NUM_ITERATIONS && !stop; i++) {
+ struct fentry_test_lskel *skel;
+ int err;
+
+ skel = fentry_test_lskel__open();
+ if (!skel)
+ continue;
+
+ skel->keyring_id = KEY_SPEC_SESSION_KEYRING;
+ err = fentry_test_lskel__load(skel);
+ if (err) {
+ fentry_test_lskel__destroy(skel);
+ continue;
+ }
+
+ err = fentry_test_lskel__attach(skel);
+ if (err) {
+ fentry_test_lskel__destroy(skel);
+ continue;
+ }
+
+ /* Brief sleep to let the trampoline be live while kprobes race */
+ usleep(100 + rand() % 500);
+
+ fentry_test_lskel__detach(skel);
+ fentry_test_lskel__destroy(skel);
+ }
+
+ return NULL;
+}
+
+/* Thread 2: Rapidly create and destroy kprobes via tracefs on
+ * bpf_fentry_test* functions (the same functions the fentry thread targets).
+ * Creating/removing kprobe events goes through the ftrace code patching
+ * path that can race with BPF trampoline direct call operations.
+ */
+static void *kprobe_thread_fn(void *arg)
+{
+ const char *funcs[] = {
+ "bpf_fentry_test1",
+ "bpf_fentry_test2",
+ "bpf_fentry_test3",
+ "bpf_fentry_test4",
+ "bpf_fentry_test5",
+ "bpf_fentry_test6",
+ };
+ int i;
+
+ for (i = 0; i < NUM_ITERATIONS && !stop; i++) {
+ int j;
+
+ for (j = 0; j < 6 && !stop; j++) {
+ char cmd[256];
+
+ /* Create kprobe via tracefs */
+ snprintf(cmd, sizeof(cmd),
+ "echo 'p:kprobe_race_%d %s' >> /sys/kernel/debug/tracing/kprobe_events 2>/dev/null",
+ j, funcs[j]);
+ system(cmd);
+
+ /* Small delay */
+ usleep(50 + rand() % 200);
+
+ /* Remove kprobe */
+ snprintf(cmd, sizeof(cmd),
+ "echo '-:kprobe_race_%d' >> /sys/kernel/debug/tracing/kprobe_events 2>/dev/null",
+ j);
+ system(cmd);
+ }
+ }
+
+ return NULL;
+}
+
+/* Thread 3: Create kprobes via perf_event_open (the ftrace-based kind)
+ * which go through the arm_kprobe / disarm_kprobe ftrace path.
+ */
+static void *perf_kprobe_thread_fn(void *arg)
+{
+ const char *funcs[] = {
+ "bpf_fentry_test1",
+ "bpf_fentry_test2",
+ "bpf_fentry_test3",
+ };
+ int i;
+
+ for (i = 0; i < NUM_ITERATIONS && !stop; i++) {
+ int fds[3] = {-1, -1, -1};
+ int j;
+
+ for (j = 0; j < 3 && !stop; j++) {
+ struct perf_event_attr attr = {};
+ char path[256];
+ char buf[32];
+ char cmd[256];
+ int id_fd, id;
+
+ /* Create kprobe event */
+ snprintf(cmd, sizeof(cmd),
+ "echo 'p:perf_race_%d %s' >> /sys/kernel/debug/tracing/kprobe_events 2>/dev/null",
+ j, funcs[j]);
+ system(cmd);
+
+ /* Try to get the event id */
+ snprintf(path, sizeof(path),
+ "/sys/kernel/debug/tracing/events/kprobes/perf_race_%d/id", j);
+ id_fd = open(path, O_RDONLY);
+ if (id_fd < 0)
+ continue;
+
+ memset(buf, 0, sizeof(buf));
+ if (read(id_fd, buf, sizeof(buf) - 1) > 0)
+ id = atoi(buf);
+ else
+ id = -1;
+ close(id_fd);
+
+ if (id < 0)
+ continue;
+
+ /* Open perf event to arm the kprobe via ftrace */
+ attr.type = PERF_TYPE_TRACEPOINT;
+ attr.size = sizeof(attr);
+ attr.config = id;
+ attr.sample_type = PERF_SAMPLE_RAW;
+ attr.sample_period = 1;
+ attr.wakeup_events = 1;
+
+ fds[j] = syscall(__NR_perf_event_open, &attr, -1, 0, -1, 0);
+ if (fds[j] >= 0)
+ ioctl(fds[j], PERF_EVENT_IOC_ENABLE, 0);
+ }
+
+ usleep(100 + rand() % 300);
+
+ /* Close perf events (disarms kprobes via ftrace) */
+ for (j = 0; j < 3; j++) {
+ char cmd[256];
+
+ if (fds[j] >= 0)
+ close(fds[j]);
+
+ snprintf(cmd, sizeof(cmd),
+ "echo '-:perf_race_%d' >> /sys/kernel/debug/tracing/kprobe_events 2>/dev/null",
+ j);
+ system(cmd);
+ }
+ }
+
+ return NULL;
+}
+
+void test_ftrace_direct_race(void)
+{
+ pthread_t fentry_tid, kprobe_tid, perf_kprobe_tid;
+ int err;
+
+ /* Check if ftrace is currently operational */
+ if (!ASSERT_OK(access("/sys/kernel/debug/tracing/kprobe_events", W_OK),
+ "tracefs_access"))
+ return;
+
+ stop = false;
+
+ err = pthread_create(&fentry_tid, NULL, fentry_thread_fn, NULL);
+ if (!ASSERT_OK(err, "create_fentry_thread"))
+ return;
+
+ err = pthread_create(&kprobe_tid, NULL, kprobe_thread_fn, NULL);
+ if (!ASSERT_OK(err, "create_kprobe_thread")) {
+ stop = true;
+ pthread_join(fentry_tid, NULL);
+ return;
+ }
+
+ err = pthread_create(&perf_kprobe_tid, NULL, perf_kprobe_thread_fn, NULL);
+ if (!ASSERT_OK(err, "create_perf_kprobe_thread")) {
+ stop = true;
+ pthread_join(fentry_tid, NULL);
+ pthread_join(kprobe_tid, NULL);
+ return;
+ }
+
+ pthread_join(fentry_tid, NULL);
+ pthread_join(kprobe_tid, NULL);
+ pthread_join(perf_kprobe_tid, NULL);
+
+ /* If we get here without a kernel panic/oops, the test passed.
+ * The real check is in dmesg: look for
+ * "WARNING: arch/x86/kernel/ftrace.c" or
+ * "BUG: KASAN: vmalloc-out-of-bounds in __bpf_prog_enter_recur"
+ *
+ * A more robust check: verify ftrace is still operational.
+ */
+ ASSERT_OK(access("/sys/kernel/debug/tracing/kprobe_events", W_OK),
+ "ftrace_still_operational");
+
+ /* Check that ftrace wasn't disabled */
+ {
+ char buf[64] = {};
+ int fd = open("/proc/sys/kernel/ftrace_enabled", O_RDONLY);
+
+ if (ASSERT_GE(fd, 0, "open_ftrace_enabled")) {
+ int n = read(fd, buf, sizeof(buf) - 1);
+
+ close(fd);
+ if (n > 0)
+ ASSERT_EQ(atoi(buf), 1, "ftrace_enabled");
+ }
+ }
+}
--
2.47.3
----
Splat:
[ 24.170803] ------------[ cut here ]------------
[ 24.171055] WARNING: kernel/trace/ftrace.c:2715 at ftrace_get_addr_curr+0x149/0x190, CPU#13: kworker/13:6/873
[ 24.171315] Modules linked in: bpf_test_modorder_y(OE+) bpf_test_modorder_x(OE) bpf_testmod(OE)
[ 24.171561] CPU: 13 UID: 0 PID: 873 Comm: kworker/13:6 Tainted: G OE 7.0.0-rc1-gda78c0a81eea #83 PREEMPT(full)
[ 24.171827] Tainted: [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
[ 24.171941] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-5.el9 11/05/2023
[ 24.172132] Workqueue: events bpf_link_put_deferred
[ 24.172261] RIP: 0010:ftrace_get_addr_curr+0x149/0x190
[ 24.172376] Code: 00 4c 89 f7 e8 88 f8 ff ff 84 c0 75 92 4d 8b 7f 08 e8 fb b3 c1 00 4d 85 ff 0f 94 c0 49 81 ff b0 1c 6e 83 0f 94 c1 08 c1 74 96 <0f> 0b c6 05
62 e8 2b 02 01 c7 05 54 e8 2b 02 00 00 00 00 48 c7 05
[ 24.172745] RSP: 0018:ffa0000504cafb78 EFLAGS: 00010202
[ 24.172861] RAX: 0000000000000000 RBX: ff110001000e48d0 RCX: ff1100011cd3a201
[ 24.173034] RDX: 6e21cb51d943709c RSI: 0000000000000000 RDI: ffffffff81d416d4
[ 24.173194] RBP: 0000000000000001 R08: 0000000080000000 R09: ffffffffffffffff
[ 24.173366] R10: ffffffff81285522 R11: 0000000000000000 R12: ff110001000e48d0
[ 24.173530] R13: ffffffff81d416d4 R14: ffffffff81d416d4 R15: ffffffff836e1cb0
[ 24.173691] FS: 0000000000000000(0000) GS:ff1100203becc000(0000) knlGS:0000000000000000
[ 24.173849] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 24.173995] CR2: 00007f615e966270 CR3: 000000010bd9d005 CR4: 0000000000771ef0
[ 24.174155] PKRU: 55555554
[ 24.174214] Call Trace:
[ 24.174285] <TASK>
[ 24.174348] ftrace_replace_code+0x7e/0x210
[ 24.174443] ftrace_modify_all_code+0x59/0x110
[ 24.174553] __ftrace_hash_move_and_update_ops+0x227/0x2c0
[ 24.174659] ? kfree+0x1ac/0x4c0
[ 24.174751] ? srso_return_thunk+0x5/0x5f
[ 24.174834] ? kfree+0x250/0x4c0
[ 24.174926] ? kfree+0x1ac/0x4c0
[ 24.175010] ? bpf_lsm_sk_alloc_security+0x4/0x20
[ 24.175132] ftrace_update_ops+0x40/0x80
[ 24.175217] update_ftrace_direct_del+0x263/0x290
[ 24.175341] ? bpf_lsm_sk_alloc_security+0x4/0x20
[ 24.175456] ? 0xffffffffc0006a80
[ 24.175543] bpf_trampoline_update+0x1fb/0x810
[ 24.175654] bpf_trampoline_unlink_prog+0x103/0x1a0
[ 24.175767] ? process_scheduled_works+0x271/0x640
[ 24.175886] bpf_shim_tramp_link_release+0x20/0x40
[ 24.176001] bpf_link_free+0x54/0xd0
[ 24.176092] process_scheduled_works+0x2c2/0x640
[ 24.176222] worker_thread+0x22a/0x340
[ 24.176319] ? srso_return_thunk+0x5/0x5f
[ 24.176405] ? __pfx_worker_thread+0x10/0x10
[ 24.176522] kthread+0x10c/0x140
[ 24.176611] ? __pfx_kthread+0x10/0x10
[ 24.176698] ret_from_fork+0x148/0x290
[ 24.176785] ? __pfx_kthread+0x10/0x10
[ 24.176872] ret_from_fork_asm+0x1a/0x30
[ 24.176985] </TASK>
[ 24.177043] irq event stamp: 6965
[ 24.177126] hardirqs last enabled at (6973): [<ffffffff8136008c>] __console_unlock+0x5c/0x70
[ 24.177325] hardirqs last disabled at (6982): [<ffffffff81360071>] __console_unlock+0x41/0x70
[ 24.177520] softirqs last enabled at (6524): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
[ 24.177675] softirqs last disabled at (6123): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
[ 24.177844] ---[ end trace 0000000000000000 ]---
[ 24.177963] Bad trampoline accounting at: 000000003143da54 (bpf_fentry_test3+0x4/0x20)
[ 24.178134] ------------[ cut here ]------------
[ 24.178261] WARNING: arch/x86/kernel/ftrace.c:105 at ftrace_replace_code+0xf7/0x210, CPU#13: kworker/13:6/873
[ 24.178476] Modules linked in: bpf_test_modorder_y(OE+) bpf_test_modorder_x(OE) bpf_testmod(OE)
[ 24.178680] CPU: 13 UID: 0 PID: 873 Comm: kworker/13:6 Tainted: G W OE 7.0.0-rc1-gda78c0a81eea #83 PREEMPT(full)
[ 24.178925] Tainted: [W]=WARN, [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
[ 24.179059] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-5.el9 11/05/2023
[ 24.179258] Workqueue: events bpf_link_put_deferred
[ 24.179374] RIP: 0010:ftrace_replace_code+0xf7/0x210
[ 24.179485] Code: c0 0f 85 ec 00 00 00 8b 44 24 03 41 33 45 00 0f b6 4c 24 07 41 32 4d 04 0f b6 c9 09 c1 0f 84 49 ff ff ff 4c 89 2d b9 df 8b 03 <0f> 0b bf ea
ff ff ff e9 c4 00 00 00 e8 f8 e5 19 00 48 85 c0 0f 84
[ 24.179847] RSP: 0018:ffa0000504cafb98 EFLAGS: 00010202
[ 24.179965] RAX: 0000000038608000 RBX: 0000000000000001 RCX: 00000000386080c1
[ 24.180126] RDX: ffffffff81d41000 RSI: 0000000000000005 RDI: ffffffff81d416d4
[ 24.180295] RBP: 0000000000000001 R08: 000000000000ffff R09: ffffffff82e98430
[ 24.180455] R10: 000000000002fffd R11: 00000000fffeffff R12: ff110001000e48d0
[ 24.180617] R13: ffffffff83ec0f2d R14: ffffffff84b43820 R15: ffa0000504cafb9b
[ 24.180777] FS: 0000000000000000(0000) GS:ff1100203becc000(0000) knlGS:0000000000000000
[ 24.180939] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 24.181077] CR2: 00007f615e966270 CR3: 000000010bd9d005 CR4: 0000000000771ef0
[ 24.181247] PKRU: 55555554
[ 24.181303] Call Trace:
[ 24.181360] <TASK>
[ 24.181424] ftrace_modify_all_code+0x59/0x110
[ 24.181536] __ftrace_hash_move_and_update_ops+0x227/0x2c0
[ 24.181650] ? kfree+0x1ac/0x4c0
[ 24.181743] ? srso_return_thunk+0x5/0x5f
[ 24.181828] ? kfree+0x250/0x4c0
[ 24.181916] ? kfree+0x1ac/0x4c0
[ 24.182004] ? bpf_lsm_sk_alloc_security+0x4/0x20
[ 24.182123] ftrace_update_ops+0x40/0x80
[ 24.182213] update_ftrace_direct_del+0x263/0x290
[ 24.182337] ? bpf_lsm_sk_alloc_security+0x4/0x20
[ 24.182455] ? 0xffffffffc0006a80
[ 24.182543] bpf_trampoline_update+0x1fb/0x810
[ 24.182655] bpf_trampoline_unlink_prog+0x103/0x1a0
[ 24.182768] ? process_scheduled_works+0x271/0x640
[ 24.182887] bpf_shim_tramp_link_release+0x20/0x40
[ 24.183001] bpf_link_free+0x54/0xd0
[ 24.183088] process_scheduled_works+0x2c2/0x640
[ 24.183220] worker_thread+0x22a/0x340
[ 24.183319] ? srso_return_thunk+0x5/0x5f
[ 24.183405] ? __pfx_worker_thread+0x10/0x10
[ 24.183521] kthread+0x10c/0x140
[ 24.183610] ? __pfx_kthread+0x10/0x10
[ 24.183697] ret_from_fork+0x148/0x290
[ 24.183783] ? __pfx_kthread+0x10/0x10
[ 24.183868] ret_from_fork_asm+0x1a/0x30
[ 24.183979] </TASK>
[ 24.184056] irq event stamp: 7447
[ 24.184138] hardirqs last enabled at (7455): [<ffffffff8136008c>] __console_unlock+0x5c/0x70
[ 24.184339] hardirqs last disabled at (7464): [<ffffffff81360071>] __console_unlock+0x41/0x70
[ 24.184522] softirqs last enabled at (6524): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
[ 24.184675] softirqs last disabled at (6123): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
[ 24.184836] ---[ end trace 0000000000000000 ]---
[ 24.185177] ------------[ ftrace bug ]------------
[ 24.185310] ftrace failed to modify
[ 24.185312] [<ffffffff81d416d4>] bpf_fentry_test3+0x4/0x20
[ 24.185544] actual: e8:27:29:6c:3e
[ 24.185627] expected: e8:a7:49:54:ff
[ 24.185717] ftrace record flags: e8180000
[ 24.185798] (0) R tramp: ERROR!
[ 24.185798] expected tramp: ffffffffc0404000
[ 24.185975] ------------[ cut here ]------------
[ 24.186086] WARNING: kernel/trace/ftrace.c:2254 at ftrace_bug+0x101/0x290, CPU#13: kworker/13:6/873
[ 24.186285] Modules linked in: bpf_test_modorder_y(OE+) bpf_test_modorder_x(OE) bpf_testmod(OE)
[ 24.186484] CPU: 13 UID: 0 PID: 873 Comm: kworker/13:6 Tainted: G W OE 7.0.0-rc1-gda78c0a81eea #83 PREEMPT(full)
[ 24.186728] Tainted: [W]=WARN, [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
[ 24.186863] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-5.el9 11/05/2023
[ 24.187057] Workqueue: events bpf_link_put_deferred
[ 24.187172] RIP: 0010:ftrace_bug+0x101/0x290
[ 24.187294] Code: 05 72 03 83 f8 02 7f 13 83 f8 01 74 46 83 f8 02 75 13 48 c7 c7 41 a3 69 82 eb 51 83 f8 03 74 3c 83 f8 04 74 40 48 85 db 75 4c <0f> 0b c6 05
ba eb 2b 02 01 c7 05 ac eb 2b 02 00 00 00 00 48 c7 05
[ 24.187663] RSP: 0018:ffa0000504cafb70 EFLAGS: 00010246
[ 24.187772] RAX: 0000000000000022 RBX: ff110001000e48d0 RCX: e5ff63967b168c00
[ 24.187934] RDX: 0000000000000000 RSI: 00000000fffeffff RDI: ffffffff83018490
[ 24.188096] RBP: 00000000ffffffea R08: 000000000000ffff R09: ffffffff82e98430
[ 24.188267] R10: 000000000002fffd R11: 00000000fffeffff R12: ff110001000e48d0
[ 24.188423] R13: ffffffff83ec0f2d R14: ffffffff81d416d4 R15: ffffffff836e1cb0
[ 24.188581] FS: 0000000000000000(0000) GS:ff1100203becc000(0000) knlGS:0000000000000000
[ 24.188738] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 24.188870] CR2: 00007f615e966270 CR3: 000000010bd9d005 CR4: 0000000000771ef0
[ 24.189032] PKRU: 55555554
[ 24.189088] Call Trace:
[ 24.189144] <TASK>
[ 24.189204] ftrace_replace_code+0x1d6/0x210
[ 24.189335] ftrace_modify_all_code+0x59/0x110
[ 24.189443] __ftrace_hash_move_and_update_ops+0x227/0x2c0
[ 24.189554] ? kfree+0x1ac/0x4c0
[ 24.189638] ? srso_return_thunk+0x5/0x5f
[ 24.189720] ? kfree+0x250/0x4c0
[ 24.189802] ? kfree+0x1ac/0x4c0
[ 24.189889] ? bpf_lsm_sk_alloc_security+0x4/0x20
[ 24.190010] ftrace_update_ops+0x40/0x80
[ 24.190095] update_ftrace_direct_del+0x263/0x290
[ 24.190205] ? bpf_lsm_sk_alloc_security+0x4/0x20
[ 24.190335] ? 0xffffffffc0006a80
[ 24.190422] bpf_trampoline_update+0x1fb/0x810
[ 24.190542] bpf_trampoline_unlink_prog+0x103/0x1a0
[ 24.190651] ? process_scheduled_works+0x271/0x640
[ 24.190764] bpf_shim_tramp_link_release+0x20/0x40
[ 24.190871] bpf_link_free+0x54/0xd0
[ 24.190964] process_scheduled_works+0x2c2/0x640
[ 24.191093] worker_thread+0x22a/0x340
[ 24.191177] ? srso_return_thunk+0x5/0x5f
[ 24.191274] ? __pfx_worker_thread+0x10/0x10
[ 24.191388] kthread+0x10c/0x140
[ 24.191478] ? __pfx_kthread+0x10/0x10
[ 24.191565] ret_from_fork+0x148/0x290
[ 24.191641] ? __pfx_kthread+0x10/0x10
[ 24.191729] ret_from_fork_asm+0x1a/0x30
[ 24.191833] </TASK>
[ 24.191896] irq event stamp: 8043
[ 24.191979] hardirqs last enabled at (8051): [<ffffffff8136008c>] __console_unlock+0x5c/0x70
[ 24.192167] hardirqs last disabled at (8058): [<ffffffff81360071>] __console_unlock+0x41/0x70
[ 24.192368] softirqs last enabled at (7828): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
[ 24.192528] softirqs last disabled at (7817): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
[ 24.192689] ---[ end trace 0000000000000000 ]---
[ 24.193549] ------------[ cut here ]------------
[ 24.193773] WARNING: kernel/trace/ftrace.c:2709 at ftrace_get_addr_curr+0x6c/0x190, CPU#10: test_progs/311
[ 24.193973] Modules linked in: bpf_test_modorder_y(OE+) bpf_test_modorder_x(OE) bpf_testmod(OE)
[ 24.194206] CPU: 10 UID: 0 PID: 311 Comm: test_progs Tainted: G W OE 7.0.0-rc1-gda78c0a81eea #83 PREEMPT(full)
[ 24.194461] Tainted: [W]=WARN, [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
[ 24.194594] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-5.el9 11/05/2023
[ 24.194778] RIP: 0010:ftrace_get_addr_curr+0x6c/0x190
[ 24.194891] Code: 48 0f 44 ce 4c 8b 3c c8 e8 e1 b4 c1 00 4d 85 ff 74 18 4d 39 77 10 74 05 4d 8b 3f eb eb 49 8b 47 18 48 85 c0 0f 85 19 01 00 00 <0f> 0b 48 8b
43 08 a9 00 00 00 08 75 1c a9 00 00 00 20 48 c7 c1 80
[ 24.195270] RSP: 0018:ffa0000000d4bb38 EFLAGS: 00010246
[ 24.195381] RAX: 0000000000000001 RBX: ff11000100125710 RCX: ff1100010b28a2c0
[ 24.195540] RDX: 0000000000000003 RSI: 0000000000000003 RDI: ff11000100125710
[ 24.195698] RBP: 0000000000000001 R08: 0000000080000000 R09: ffffffffffffffff
[ 24.195863] R10: ffffffff82046a38 R11: 0000000000000000 R12: ff11000100125710
[ 24.196033] R13: ffffffff81529fc4 R14: ffffffff81529fc4 R15: 0000000000000000
[ 24.196199] FS: 00007f46532a54c0(0000) GS:ff1100203be0c000(0000) knlGS:0000000000000000
[ 24.196374] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 24.196509] CR2: 000055e885be1470 CR3: 000000010eef9003 CR4: 0000000000771ef0
[ 24.196663] PKRU: 55555554
[ 24.196720] Call Trace:
[ 24.196778] <TASK>
[ 24.196844] ftrace_replace_code+0x7e/0x210
[ 24.196948] ftrace_modify_all_code+0x59/0x110
[ 24.197059] __ftrace_hash_move_and_update_ops+0x227/0x2c0
[ 24.197174] ? srso_return_thunk+0x5/0x5f
[ 24.197271] ? __mutex_lock+0x22a/0xc60
[ 24.197360] ? kfree+0x1ac/0x4c0
[ 24.197455] ? srso_return_thunk+0x5/0x5f
[ 24.197538] ? kfree+0x250/0x4c0
[ 24.197626] ? bpf_fentry_test3+0x4/0x20
[ 24.197712] ftrace_set_hash+0x13c/0x3d0
[ 24.197811] ftrace_set_filter_ip+0x88/0xb0
[ 24.197909] ? bpf_fentry_test3+0x4/0x20
[ 24.198000] disarm_kprobe_ftrace+0x83/0xd0
[ 24.198089] __disable_kprobe+0x129/0x160
[ 24.198178] disable_kprobe+0x27/0x60
[ 24.198272] kprobe_register+0xa2/0xe0
[ 24.198362] perf_trace_event_unreg+0x33/0xd0
[ 24.198473] perf_kprobe_destroy+0x3b/0x80
[ 24.198557] __free_event+0x119/0x290
[ 24.198640] perf_event_release_kernel+0x1ef/0x220
[ 24.198758] perf_release+0x12/0x20
[ 24.198843] __fput+0x11b/0x2a0
[ 24.198946] task_work_run+0x8b/0xc0
[ 24.199035] exit_to_user_mode_loop+0x107/0x4d0
[ 24.199155] do_syscall_64+0x25b/0x390
[ 24.199249] ? entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 24.199360] ? trace_irq_disable+0x1d/0xc0
[ 24.199451] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 24.199559] RIP: 0033:0x7f46530ff85b
[ 24.199675] Code: 03 00 00 00 0f 05 48 3d 00 f0 ff ff 77 41 c3 48 83 ec 18 89 7c 24 0c e8 e3 83 f8 ff 8b 7c 24 0c 41 89 c0 b8 03 00 00 00 0f 05 <48> 3d 00 f0
ff ff 77 35 44 89 c7 89 44 24 0c e8 41 84 f8 ff 8b 44
[ 24.200034] RSP: 002b:00007ffc40859770 EFLAGS: 00000293 ORIG_RAX: 0000000000000003
[ 24.200192] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007f46530ff85b
[ 24.200382] RDX: 0000000000000002 RSI: 0000000000000002 RDI: 0000000000000019
[ 24.200552] RBP: 00007ffc408597c0 R08: 0000000000000000 R09: 00007ffc40859757
[ 24.200702] R10: 0000000000000000 R11: 0000000000000293 R12: 00007ffc4085ddc8
[ 24.200855] R13: 000055e8800de120 R14: 000055e88118d390 R15: 00007f46533de000
[ 24.201035] </TASK>
[ 24.201091] irq event stamp: 200379
[ 24.201208] hardirqs last enabled at (200387): [<ffffffff8136008c>] __console_unlock+0x5c/0x70
[ 24.201453] hardirqs last disabled at (200396): [<ffffffff81360071>] __console_unlock+0x41/0x70
[ 24.201667] softirqs last enabled at (200336): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
[ 24.201890] softirqs last disabled at (200329): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
[ 24.202121] ---[ end trace 0000000000000000 ]---
[ 24.202398] ------------[ cut here ]------------
[ 24.202534] WARNING: kernel/trace/ftrace.c:2715 at ftrace_get_addr_curr+0x149/0x190, CPU#10: test_progs/311
[ 24.202753] Modules linked in: bpf_test_modorder_y(OE+) bpf_test_modorder_x(OE) bpf_testmod(OE)
[ 24.202962] CPU: 10 UID: 0 PID: 311 Comm: test_progs Tainted: G W OE 7.0.0-rc1-gda78c0a81eea #83 PREEMPT(full)
[ 24.203203] Tainted: [W]=WARN, [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
[ 24.203344] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-5.el9 11/05/2023
[ 24.203526] RIP: 0010:ftrace_get_addr_curr+0x149/0x190
[ 24.203629] Code: 00 4c 89 f7 e8 88 f8 ff ff 84 c0 75 92 4d 8b 7f 08 e8 fb b3 c1 00 4d 85 ff 0f 94 c0 49 81 ff b0 1c 6e 83 0f 94 c1 08 c1 74 96 <0f> 0b c6 05
62 e8 2b 02 01 c7 05 54 e8 2b 02 00 00 00 00 48 c7 05
[ 24.203996] RSP: 0018:ffa0000000d4bb38 EFLAGS: 00010202
[ 24.204110] RAX: 0000000000000000 RBX: ff11000100125710 RCX: ff1100010b28a201
[ 24.204280] RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffffffff81529fc4
[ 24.204437] RBP: 0000000000000001 R08: 0000000080000000 R09: ffffffffffffffff
[ 24.204595] R10: ffffffff82046a38 R11: 0000000000000000 R12: ff11000100125710
[ 24.204755] R13: ffffffff81529fc4 R14: ffffffff81529fc4 R15: ffffffff836e1cb0
[ 24.204914] FS: 00007f46532a54c0(0000) GS:ff1100203be0c000(0000) knlGS:0000000000000000
[ 24.205072] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 24.205204] CR2: 000055e885be1470 CR3: 000000010eef9003 CR4: 0000000000771ef0
[ 24.205386] PKRU: 55555554
[ 24.205443] Call Trace:
[ 24.205503] <TASK>
[ 24.205565] ftrace_replace_code+0x7e/0x210
[ 24.205669] ftrace_modify_all_code+0x59/0x110
[ 24.205784] __ftrace_hash_move_and_update_ops+0x227/0x2c0
[ 24.205902] ? srso_return_thunk+0x5/0x5f
[ 24.205987] ? __mutex_lock+0x22a/0xc60
[ 24.206072] ? kfree+0x1ac/0x4c0
[ 24.206163] ? srso_return_thunk+0x5/0x5f
[ 24.206254] ? kfree+0x250/0x4c0
[ 24.206344] ? bpf_fentry_test3+0x4/0x20
[ 24.206428] ftrace_set_hash+0x13c/0x3d0
[ 24.206523] ftrace_set_filter_ip+0x88/0xb0
[ 24.206614] ? bpf_fentry_test3+0x4/0x20
[ 24.206703] disarm_kprobe_ftrace+0x83/0xd0
[ 24.206789] __disable_kprobe+0x129/0x160
[ 24.206880] disable_kprobe+0x27/0x60
[ 24.206972] kprobe_register+0xa2/0xe0
[ 24.207057] perf_trace_event_unreg+0x33/0xd0
[ 24.207169] perf_kprobe_destroy+0x3b/0x80
[ 24.207262] __free_event+0x119/0x290
[ 24.207348] perf_event_release_kernel+0x1ef/0x220
[ 24.207461] perf_release+0x12/0x20
[ 24.207543] __fput+0x11b/0x2a0
[ 24.207626] task_work_run+0x8b/0xc0
[ 24.207711] exit_to_user_mode_loop+0x107/0x4d0
[ 24.207827] do_syscall_64+0x25b/0x390
[ 24.207915] ? entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 24.208021] ? trace_irq_disable+0x1d/0xc0
[ 24.208110] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 24.208215] RIP: 0033:0x7f46530ff85b
[ 24.208307] Code: 03 00 00 00 0f 05 48 3d 00 f0 ff ff 77 41 c3 48 83 ec 18 89 7c 24 0c e8 e3 83 f8 ff 8b 7c 24 0c 41 89 c0 b8 03 00 00 00 0f 05 <48> 3d 00 f0
ff ff 77 35 44 89 c7 89 44 24 0c e8 41 84 f8 ff 8b 44
[ 24.208657] RSP: 002b:00007ffc40859770 EFLAGS: 00000293 ORIG_RAX: 0000000000000003
[ 24.208816] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007f46530ff85b
[ 24.208978] RDX: 0000000000000002 RSI: 0000000000000002 RDI: 0000000000000019
[ 24.209133] RBP: 00007ffc408597c0 R08: 0000000000000000 R09: 00007ffc40859757
[ 24.209300] R10: 0000000000000000 R11: 0000000000000293 R12: 00007ffc4085ddc8
[ 24.209457] R13: 000055e8800de120 R14: 000055e88118d390 R15: 00007f46533de000
[ 24.209633] </TASK>
[ 24.209689] irq event stamp: 200963
[ 24.209770] hardirqs last enabled at (200971): [<ffffffff8136008c>] __console_unlock+0x5c/0x70
[ 24.209971] hardirqs last disabled at (200978): [<ffffffff81360071>] __console_unlock+0x41/0x70
[ 24.210156] softirqs last enabled at (200568): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
[ 24.210370] softirqs last disabled at (200557): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
[ 24.210554] ---[ end trace 0000000000000000 ]---
[ 24.210665] Bad trampoline accounting at: 00000000ab641fec (bpf_lsm_sk_alloc_security+0x4/0x20)
[ 24.210866] ------------[ cut here ]------------
[ 24.210993] WARNING: arch/x86/kernel/ftrace.c:105 at ftrace_replace_code+0xf7/0x210, CPU#10: test_progs/311
[ 24.211182] Modules linked in: bpf_test_modorder_y(OE+) bpf_test_modorder_x(OE) bpf_testmod(OE)
[ 24.211412] CPU: 10 UID: 0 PID: 311 Comm: test_progs Tainted: G W OE 7.0.0-rc1-gda78c0a81eea #83 PREEMPT(full)
[ 24.211656] Tainted: [W]=WARN, [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
[ 24.211788] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-5.el9 11/05/2023
[ 24.211980] RIP: 0010:ftrace_replace_code+0xf7/0x210
[ 24.212091] Code: c0 0f 85 ec 00 00 00 8b 44 24 03 41 33 45 00 0f b6 4c 24 07 41 32 4d 04 0f b6 c9 09 c1 0f 84 49 ff ff ff 4c 89 2d b9 df 8b 03 <0f> 0b bf ea
ff ff ff e9 c4 00 00 00 e8 f8 e5 19 00 48 85 c0 0f 84
[ 24.212503] RSP: 0018:ffa0000000d4bb58 EFLAGS: 00010202
[ 24.212628] RAX: 00000000780a0001 RBX: 0000000000000001 RCX: 00000000780a00c1
[ 24.212798] RDX: ffffffff81529000 RSI: 0000000000000005 RDI: ffffffff81529fc4
[ 24.212970] RBP: 0000000000000001 R08: 000000000000ffff R09: ffffffff82e98430
[ 24.213130] R10: 000000000002fffd R11: 00000000fffeffff R12: ff11000100125710
[ 24.213317] R13: ffffffff83ec0f2d R14: ffffffff84b43820 R15: ffa0000000d4bb5b
[ 24.213488] FS: 00007f46532a54c0(0000) GS:ff1100203be0c000(0000) knlGS:0000000000000000
[ 24.213674] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 24.213813] CR2: 000055e885be1470 CR3: 000000010eef9003 CR4: 0000000000771ef0
[ 24.213986] PKRU: 55555554
[ 24.214044] Call Trace:
[ 24.214100] <TASK>
[ 24.214167] ftrace_modify_all_code+0x59/0x110
[ 24.214301] __ftrace_hash_move_and_update_ops+0x227/0x2c0
[ 24.214415] ? srso_return_thunk+0x5/0x5f
[ 24.214502] ? __mutex_lock+0x22a/0xc60
[ 24.214588] ? kfree+0x1ac/0x4c0
[ 24.214682] ? srso_return_thunk+0x5/0x5f
[ 24.214765] ? kfree+0x250/0x4c0
[ 24.214855] ? bpf_fentry_test3+0x4/0x20
[ 24.214943] ftrace_set_hash+0x13c/0x3d0
[ 24.215041] ftrace_set_filter_ip+0x88/0xb0
[ 24.215132] ? bpf_fentry_test3+0x4/0x20
[ 24.215221] disarm_kprobe_ftrace+0x83/0xd0
[ 24.215328] __disable_kprobe+0x129/0x160
[ 24.215418] disable_kprobe+0x27/0x60
[ 24.215507] kprobe_register+0xa2/0xe0
[ 24.215594] perf_trace_event_unreg+0x33/0xd0
[ 24.215701] perf_kprobe_destroy+0x3b/0x80
[ 24.215790] __free_event+0x119/0x290
[ 24.215888] perf_event_release_kernel+0x1ef/0x220
[ 24.216007] perf_release+0x12/0x20
[ 24.216091] __fput+0x11b/0x2a0
[ 24.216183] task_work_run+0x8b/0xc0
[ 24.216293] exit_to_user_mode_loop+0x107/0x4d0
[ 24.216411] do_syscall_64+0x25b/0x390
[ 24.216497] ? entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 24.216606] ? trace_irq_disable+0x1d/0xc0
[ 24.216699] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 24.216807] RIP: 0033:0x7f46530ff85b
[ 24.216895] Code: 03 00 00 00 0f 05 48 3d 00 f0 ff ff 77 41 c3 48 83 ec 18 89 7c 24 0c e8 e3 83 f8 ff 8b 7c 24 0c 41 89 c0 b8 03 00 00 00 0f 05 <48> 3d 00 f0
ff ff 77 35 44 89 c7 89 44 24 0c e8 41 84 f8 ff 8b 44
[ 24.217293] RSP: 002b:00007ffc40859770 EFLAGS: 00000293 ORIG_RAX: 0000000000000003
[ 24.217461] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007f46530ff85b
[ 24.217627] RDX: 0000000000000002 RSI: 0000000000000002 RDI: 0000000000000019
[ 24.217785] RBP: 00007ffc408597c0 R08: 0000000000000000 R09: 00007ffc40859757
[ 24.217950] R10: 0000000000000000 R11: 0000000000000293 R12: 00007ffc4085ddc8
[ 24.218107] R13: 000055e8800de120 R14: 000055e88118d390 R15: 00007f46533de000
[ 24.218306] </TASK>
[ 24.218363] irq event stamp: 201623
[ 24.218445] hardirqs last enabled at (201631): [<ffffffff8136008c>] __console_unlock+0x5c/0x70
[ 24.218625] hardirqs last disabled at (201638): [<ffffffff81360071>] __console_unlock+0x41/0x70
[ 24.218810] softirqs last enabled at (201612): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
[ 24.219012] softirqs last disabled at (201601): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
[ 24.219208] ---[ end trace 0000000000000000 ]---
[ 24.219693] ------------[ ftrace bug ]------------
[ 24.219801] ftrace failed to modify
[ 24.219804] [<ffffffff81529fc4>] bpf_lsm_sk_alloc_security+0x4/0x20
[ 24.220022] actual: e9:b7:ca:ad:3e
[ 24.220113] expected: e8:b7:c0:d5:ff
[ 24.220203] ftrace record flags: e8980000
[ 24.220307] (0) R tramp: ERROR!
[ 24.220321] ------------[ cut here ]------------
[ 24.220507] WARNING: kernel/trace/ftrace.c:2715 at ftrace_get_addr_curr+0x149/0x190, CPU#10: test_progs/311
[ 24.220693] Modules linked in: bpf_test_modorder_y(OE+) bpf_test_modorder_x(OE) bpf_testmod(OE)
[ 24.220895] CPU: 10 UID: 0 PID: 311 Comm: test_progs Tainted: G W OE 7.0.0-rc1-gda78c0a81eea #83 PREEMPT(full)
[ 24.221135] Tainted: [W]=WARN, [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
[ 24.221284] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-5.el9 11/05/2023
[ 24.221467] RIP: 0010:ftrace_get_addr_curr+0x149/0x190
[ 24.221577] Code: 00 4c 89 f7 e8 88 f8 ff ff 84 c0 75 92 4d 8b 7f 08 e8 fb b3 c1 00 4d 85 ff 0f 94 c0 49 81 ff b0 1c 6e 83 0f 94 c1 08 c1 74 96 <0f> 0b c6 05
62 e8 2b 02 01 c7 05 54 e8 2b 02 00 00 00 00 48 c7 05
[ 24.221938] RSP: 0018:ffa0000000d4bb10 EFLAGS: 00010202
[ 24.222052] RAX: 0000000000000000 RBX: ff11000100125710 RCX: ff1100010b28a201
[ 24.222205] RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffffffff81529fc4
[ 24.222384] RBP: 00000000ffffffea R08: 000000000000ffff R09: ffffffff82e98430
[ 24.222542] R10: 000000000002fffd R11: 00000000fffeffff R12: ff11000100125710
[ 24.222708] R13: ffffffff83ec0f2d R14: ffffffff81529fc4 R15: ffffffff836e1cb0
[ 24.222866] FS: 00007f46532a54c0(0000) GS:ff1100203be0c000(0000) knlGS:0000000000000000
[ 24.223034] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 24.223171] CR2: 000055e885be1470 CR3: 000000010eef9003 CR4: 0000000000771ef0
[ 24.223341] PKRU: 55555554
[ 24.223397] Call Trace:
[ 24.223454] <TASK>
[ 24.223511] ? bpf_lsm_sk_alloc_security+0x4/0x20
[ 24.223623] ftrace_bug+0x1ff/0x290
[ 24.223710] ftrace_replace_code+0x1d6/0x210
[ 24.223829] ftrace_modify_all_code+0x59/0x110
[ 24.223946] __ftrace_hash_move_and_update_ops+0x227/0x2c0
[ 24.224060] ? srso_return_thunk+0x5/0x5f
[ 24.224148] ? __mutex_lock+0x22a/0xc60
[ 24.224245] ? kfree+0x1ac/0x4c0
[ 24.224337] ? srso_return_thunk+0x5/0x5f
[ 24.224420] ? kfree+0x250/0x4c0
[ 24.224512] ? bpf_fentry_test3+0x4/0x20
[ 24.224597] ftrace_set_hash+0x13c/0x3d0
[ 24.224690] ftrace_set_filter_ip+0x88/0xb0
[ 24.224776] ? bpf_fentry_test3+0x4/0x20
[ 24.224869] disarm_kprobe_ftrace+0x83/0xd0
[ 24.224965] __disable_kprobe+0x129/0x160
[ 24.225051] disable_kprobe+0x27/0x60
[ 24.225136] kprobe_register+0xa2/0xe0
[ 24.225223] perf_trace_event_unreg+0x33/0xd0
[ 24.225346] perf_kprobe_destroy+0x3b/0x80
[ 24.225431] __free_event+0x119/0x290
[ 24.225518] perf_event_release_kernel+0x1ef/0x220
[ 24.225631] perf_release+0x12/0x20
[ 24.225715] __fput+0x11b/0x2a0
[ 24.225804] task_work_run+0x8b/0xc0
[ 24.225895] exit_to_user_mode_loop+0x107/0x4d0
[ 24.226016] do_syscall_64+0x25b/0x390
[ 24.226099] ? entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 24.226207] ? trace_irq_disable+0x1d/0xc0
[ 24.226308] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 24.226415] RIP: 0033:0x7f46530ff85b
[ 24.226498] Code: 03 00 00 00 0f 05 48 3d 00 f0 ff ff 77 41 c3 48 83 ec 18 89 7c 24 0c e8 e3 83 f8 ff 8b 7c 24 0c 41 89 c0 b8 03 00 00 00 0f 05 <48> 3d 00 f0
ff ff 77 35 44 89 c7 89 44 24 0c e8 41 84 f8 ff 8b 44
[ 24.226851] RSP: 002b:00007ffc40859770 EFLAGS: 00000293 ORIG_RAX: 0000000000000003
[ 24.227016] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007f46530ff85b
[ 24.227173] RDX: 0000000000000002 RSI: 0000000000000002 RDI: 0000000000000019
[ 24.227341] RBP: 00007ffc408597c0 R08: 0000000000000000 R09: 00007ffc40859757
[ 24.227500] R10: 0000000000000000 R11: 0000000000000293 R12: 00007ffc4085ddc8
[ 24.227652] R13: 000055e8800de120 R14: 000055e88118d390 R15: 00007f46533de000
[ 24.227830] </TASK>
[ 24.227891] irq event stamp: 202299
[ 24.227974] hardirqs last enabled at (202307): [<ffffffff8136008c>] __console_unlock+0x5c/0x70
[ 24.228162] hardirqs last disabled at (202314): [<ffffffff81360071>] __console_unlock+0x41/0x70
[ 24.228357] softirqs last enabled at (201682): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
[ 24.228540] softirqs last disabled at (201671): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
[ 24.228716] ---[ end trace 0000000000000000 ]---
[ 24.228834] Bad trampoline accounting at: 00000000ab641fec (bpf_lsm_sk_alloc_security+0x4/0x20)
[ 24.229029]
[ 24.229029] expected tramp: ffffffff81286080
[ 24.261301] BUG: unable to handle page fault for address: ffa00000004b9050
[ 24.261436] #PF: supervisor read access in kernel mode
[ 24.261528] #PF: error_code(0x0000) - not-present page
[ 24.261621] PGD 100000067 P4D 100832067 PUD 100833067 PMD 100efb067 PTE 0
[ 24.261745] Oops: Oops: 0000 [#1] SMP NOPTI
[ 24.261821] CPU: 9 UID: 0 PID: 1338 Comm: ip Tainted: G W OE 7.0.0-rc1-gda78c0a81eea #83 PREEMPT(full)
[ 24.262006] Tainted: [W]=WARN, [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
[ 24.262119] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-5.el9 11/05/2023
[ 24.262281] RIP: 0010:__cgroup_bpf_run_lsm_current+0xc5/0x2f0
[ 24.262393] Code: a6 6f 1a 02 01 48 c7 c7 31 5b 71 82 be bf 01 00 00 48 c7 c2 d3 70 65 82 e8 d8 53 ce ff 4d 8b 7f 60 4d 85 ff 0f 84 14 02 00 00 <49> 8b 46 f0
4c 63 b0 34 05 00 00 c7 44 24 10 00 00 00 00 41 0f b7
[ 24.262693] RSP: 0018:ffa0000004dfbc98 EFLAGS: 00010282
[ 24.262784] RAX: 0000000000000001 RBX: ffa0000004dfbd10 RCX: 0000000000000001
[ 24.262923] RDX: 00000000d7c4159d RSI: ffffffff8359b368 RDI: ff1100011b5c50c8
[ 24.263055] RBP: ffa0000004dfbd30 R08: 0000000000020000 R09: ffffffffffffffff
[ 24.263187] R10: ffffffff814f76b3 R11: 0000000000000000 R12: ff1100011b5c4580
[ 24.263325] R13: 0000000000000000 R14: ffa00000004b9060 R15: ffffffff835b3040
[ 24.263465] FS: 00007f0007064800(0000) GS:ff1100203bdcc000(0000) knlGS:0000000000000000
[ 24.263599] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 24.263709] CR2: ffa00000004b9050 CR3: 0000000120f4d002 CR4: 0000000000771ef0
[ 24.263841] PKRU: 55555554
[ 24.263890] Call Trace:
[ 24.263938] <TASK>
[ 24.263992] bpf_trampoline_6442513766+0x6a/0x10d
[ 24.264088] security_sk_alloc+0x83/0xd0
[ 24.264162] sk_prot_alloc+0xf4/0x150
[ 24.264236] sk_alloc+0x34/0x2a0
[ 24.264305] ? srso_return_thunk+0x5/0x5f
[ 24.264375] ? _raw_spin_unlock_irqrestore+0x35/0x50
[ 24.264465] ? srso_return_thunk+0x5/0x5f
[ 24.264533] ? __wake_up_common_lock+0xa8/0xd0
[ 24.264625] __netlink_create+0x2f/0xf0
[ 24.264695] netlink_create+0x1c4/0x230
[ 24.264765] ? __pfx_rtnetlink_bind+0x10/0x10
[ 24.264858] __sock_create+0x21d/0x400
[ 24.264937] __sys_socket+0x65/0x100
[ 24.265007] ? srso_return_thunk+0x5/0x5f
[ 24.265077] __x64_sys_socket+0x19/0x30
[ 24.265146] do_syscall_64+0xde/0x390
[ 24.265216] ? entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 24.265307] ? trace_irq_disable+0x1d/0xc0
[ 24.265379] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 24.265469] RIP: 0033:0x7f0006f112ab
[ 24.265538] Code: 73 01 c3 48 8b 0d 6d 8b 0e 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 29 00 00 00 0f 05 <48> 3d 01 f0
ff ff 73 01 c3 48 8b 0d 3d 8b 0e 00 f7 d8 64 89 01 48
[ 24.265822] RSP: 002b:00007ffd8ecb3be8 EFLAGS: 00000246 ORIG_RAX: 0000000000000029
[ 24.265960] RAX: ffffffffffffffda RBX: 000056212b30d040 RCX: 00007f0006f112ab
[ 24.266088] RDX: 0000000000000000 RSI: 0000000000080003 RDI: 0000000000000010
[ 24.266217] RBP: 0000000000000000 R08: 00007ffd8ecb3bc0 R09: 0000000000000000
[ 24.266346] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[ 24.266474] R13: 000056212b30d040 R14: 00007ffd8ecb3d88 R15: 0000000000000004
[ 24.266617] </TASK>
[ 24.266663] Modules linked in: bpf_test_modorder_y(OE+) bpf_test_modorder_x(OE) bpf_testmod(OE)
[ 24.266824] CR2: ffa00000004b9050
[ 24.266897] ---[ end trace 0000000000000000 ]---
[ 24.266989] RIP: 0010:__cgroup_bpf_run_lsm_current+0xc5/0x2f0
[ 24.267101] Code: a6 6f 1a 02 01 48 c7 c7 31 5b 71 82 be bf 01 00 00 48 c7 c2 d3 70 65 82 e8 d8 53 ce ff 4d 8b 7f 60 4d 85 ff 0f 84 14 02 00 00 <49> 8b 46 f0
4c 63 b0 34 05 00 00 c7 44 24 10 00 00 00 00 41 0f b7
[ 24.267406] RSP: 0018:ffa0000004dfbc98 EFLAGS: 00010282
[ 24.267499] RAX: 0000000000000001 RBX: ffa0000004dfbd10 RCX: 0000000000000001
[ 24.267629] RDX: 00000000d7c4159d RSI: ffffffff8359b368 RDI: ff1100011b5c50c8
[ 24.267758] RBP: ffa0000004dfbd30 R08: 0000000000020000 R09: ffffffffffffffff
[ 24.267897] R10: ffffffff814f76b3 R11: 0000000000000000 R12: ff1100011b5c4580
[ 24.268030] R13: 0000000000000000 R14: ffa00000004b9060 R15: ffffffff835b3040
[ 24.268167] FS: 00007f0007064800(0000) GS:ff1100203bdcc000(0000) knlGS:0000000000000000
[ 24.268311] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 24.268428] CR2: ffa00000004b9050 CR3: 0000000120f4d002 CR4: 0000000000771ef0
[ 24.268565] PKRU: 55555554
[ 24.268613] Kernel panic - not syncing: Fatal exception
[ 24.268977] Kernel Offset: disabled
[ 24.269046] ---[ end Kernel panic - not syncing: Fatal exception ]---
> ---
> arch/x86/Kconfig | 1 +
> kernel/bpf/trampoline.c | 220 ++++++++++++++++++++++++++++++++++------
> kernel/trace/Kconfig | 3 +
> kernel/trace/ftrace.c | 7 +-
> 4 files changed, 200 insertions(+), 31 deletions(-)
>
> [...]
* Re: [PATCHv6 bpf-next 9/9] bpf,x86: Use single ftrace_ops for direct calls
2026-02-27 17:40 ` Ihor Solodrai
@ 2026-02-27 20:37 ` Jiri Olsa
2026-02-27 21:24 ` Jiri Olsa
0 siblings, 1 reply; 27+ messages in thread
From: Jiri Olsa @ 2026-02-27 20:37 UTC (permalink / raw)
To: Ihor Solodrai
Cc: Steven Rostedt, Florent Revest, Mark Rutland, bpf, linux-kernel,
linux-trace-kernel, linux-arm-kernel, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Menglong Dong, Song Liu,
Kumar Kartikeya Dwivedi
On Fri, Feb 27, 2026 at 09:40:12AM -0800, Ihor Solodrai wrote:
> On 12/30/25 6:50 AM, Jiri Olsa wrote:
> > Use a single ftrace_ops object for direct call updates instead of
> > allocating a ftrace_ops object for each trampoline.
> >
> > With a single ftrace_ops object we can use the update_ftrace_direct_* API,
> > which allows updating multiple ip sites on a single ftrace_ops object.
> >
> > Add a HAVE_SINGLE_FTRACE_DIRECT_OPS config option, to be enabled on
> > each arch that supports this.
> >
> > At the moment we can enable this only on x86, because arm relies on
> > the ftrace_ops object representing just a single trampoline image (stored
> > in ftrace_ops::direct_call). Archs that do not support this will continue
> > to use the *_ftrace_direct API.
> >
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
>
> Hi Jiri,
>
> Kumar and I stumbled on kernel splats with "ftrace failed to modify",
> and if running with KASAN:
>
> BUG: KASAN: slab-use-after-free in __get_valid_kprobe+0x224/0x2a0
>
> Pasting a full splat example at the bottom.
>
> I was able to create a reproducer with AI, and then used it to bisect
> to this patch. You can run it with ./test_progs -t ftrace_direct_race
>
> Below is my (human-generated, haha) summary of AI's analysis of what's
> happening. It makes sense to me conceptually, but I don't know enough
> details here to call bullshit. Please take a look:
hi, nice :)
>
> With CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS, ftrace_replace_code()
> operates on all call sites in the shared ops. If a concurrent
> ftrace user (like a kprobe) modifies a call site in between
> ftrace_replace_code's verify pass and its patch pass, ftrace_bug
> fires and sets ftrace_disabled to 1.
hum, I'd think that's all under ftrace_lock/direct_mutex,
but we might be missing some paths
>
> Once ftrace is disabled, direct_ops_del silently fails to unregister
> the direct call, and the call site still redirects to the stale
> trampoline. After the BPF program is freed, we'll get use-after-free
> on the next trace hit.
>
> The reproducer is not great, because if everything is fine it just hangs.
> But with the bug the kernel crashes pretty fast.
perfect, I reproduced it on first run.. will check
> Maybe it makes sense to refine it into a proper "stress" selftest?
it might, let's see what's the problem
great report, thanks a lot for all the details and reproducer,
jirka
>
> Reproducer patch:
>
> From c595ef5a0ad9bc62d768080ff09502bc982c40e6 Mon Sep 17 00:00:00 2001
> From: Ihor Solodrai <ihor.solodrai@linux.dev>
> Date: Thu, 26 Feb 2026 17:00:39 -0800
> Subject: [PATCH] reproducer
>
> ---
> .../bpf/prog_tests/ftrace_direct_race.c | 243 ++++++++++++++++++
> 1 file changed, 243 insertions(+)
> create mode 100644 tools/testing/selftests/bpf/prog_tests/ftrace_direct_race.c
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/ftrace_direct_race.c b/tools/testing/selftests/bpf/prog_tests/ftrace_direct_race.c
> new file mode 100644
> index 000000000000..369c55364d05
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/prog_tests/ftrace_direct_race.c
> @@ -0,0 +1,243 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
> +
> +/* Test to reproduce ftrace race between BPF trampoline attach/detach
> + * and kprobe attach/detach on the same function.
> + *
> + * With CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS, all BPF trampolines share
> + * a single ftrace_ops. Concurrent modifications (BPF trampoline vs kprobe)
> + * can race in ftrace_replace_code's verify-then-patch sequence, causing
> + * ftrace to become permanently disabled and leaving stale trampolines
> + * that reference freed BPF programs.
> + *
> + * Run with: ./test_progs -t ftrace_direct_race
> + */
> +#include <test_progs.h>
> +#include <bpf/libbpf.h>
> +#include <pthread.h>
> +#include <sys/ioctl.h>
> +#include <linux/perf_event.h>
> +#include <sys/syscall.h>
> +
> +#include "fentry_test.lskel.h"
> +
> +#define NUM_ITERATIONS 200
> +
> +static volatile bool stop;
> +
> +/* Thread 1: Rapidly attach and detach fentry BPF trampolines */
> +static void *fentry_thread_fn(void *arg)
> +{
> + int i;
> +
> + for (i = 0; i < NUM_ITERATIONS && !stop; i++) {
> + struct fentry_test_lskel *skel;
> + int err;
> +
> + skel = fentry_test_lskel__open();
> + if (!skel)
> + continue;
> +
> + skel->keyring_id = KEY_SPEC_SESSION_KEYRING;
> + err = fentry_test_lskel__load(skel);
> + if (err) {
> + fentry_test_lskel__destroy(skel);
> + continue;
> + }
> +
> + err = fentry_test_lskel__attach(skel);
> + if (err) {
> + fentry_test_lskel__destroy(skel);
> + continue;
> + }
> +
> + /* Brief sleep to let the trampoline be live while kprobes race */
> + usleep(100 + rand() % 500);
> +
> + fentry_test_lskel__detach(skel);
> + fentry_test_lskel__destroy(skel);
> + }
> +
> + return NULL;
> +}
> +
> +/* Thread 2: Rapidly create and destroy kprobes via tracefs on
> + * bpf_fentry_test* functions (the same functions the fentry thread targets).
> + * Creating/removing kprobe events goes through the ftrace code patching
> + * path that can race with BPF trampoline direct call operations.
> + */
> +static void *kprobe_thread_fn(void *arg)
> +{
> + const char *funcs[] = {
> + "bpf_fentry_test1",
> + "bpf_fentry_test2",
> + "bpf_fentry_test3",
> + "bpf_fentry_test4",
> + "bpf_fentry_test5",
> + "bpf_fentry_test6",
> + };
> + int i;
> +
> + for (i = 0; i < NUM_ITERATIONS && !stop; i++) {
> + int j;
> +
> + for (j = 0; j < 6 && !stop; j++) {
> + char cmd[256];
> +
> + /* Create kprobe via tracefs */
> + snprintf(cmd, sizeof(cmd),
> + "echo 'p:kprobe_race_%d %s' >> /sys/kernel/debug/tracing/kprobe_events 2>/dev/null",
> + j, funcs[j]);
> + system(cmd);
> +
> + /* Small delay */
> + usleep(50 + rand() % 200);
> +
> + /* Remove kprobe */
> + snprintf(cmd, sizeof(cmd),
> + "echo '-:kprobe_race_%d' >> /sys/kernel/debug/tracing/kprobe_events 2>/dev/null",
> + j);
> + system(cmd);
> + }
> + }
> +
> + return NULL;
> +}
> +
> +/* Thread 3: Create kprobes via perf_event_open (the ftrace-based kind)
> + * which go through the arm_kprobe / disarm_kprobe ftrace path.
> + */
> +static void *perf_kprobe_thread_fn(void *arg)
> +{
> + const char *funcs[] = {
> + "bpf_fentry_test1",
> + "bpf_fentry_test2",
> + "bpf_fentry_test3",
> + };
> + int i;
> +
> + for (i = 0; i < NUM_ITERATIONS && !stop; i++) {
> + int fds[3] = {-1, -1, -1};
> + int j;
> +
> + for (j = 0; j < 3 && !stop; j++) {
> + struct perf_event_attr attr = {};
> + char path[256];
> + char buf[32];
> + char cmd[256];
> + int id_fd, id;
> +
> + /* Create kprobe event */
> + snprintf(cmd, sizeof(cmd),
> + "echo 'p:perf_race_%d %s' >> /sys/kernel/debug/tracing/kprobe_events 2>/dev/null",
> + j, funcs[j]);
> + system(cmd);
> +
> + /* Try to get the event id */
> + snprintf(path, sizeof(path),
> + "/sys/kernel/debug/tracing/events/kprobes/perf_race_%d/id", j);
> + id_fd = open(path, O_RDONLY);
> + if (id_fd < 0)
> + continue;
> +
> + memset(buf, 0, sizeof(buf));
> + if (read(id_fd, buf, sizeof(buf) - 1) > 0)
> + id = atoi(buf);
> + else
> + id = -1;
> + close(id_fd);
> +
> + if (id < 0)
> + continue;
> +
> + /* Open perf event to arm the kprobe via ftrace */
> + attr.type = PERF_TYPE_TRACEPOINT;
> + attr.size = sizeof(attr);
> + attr.config = id;
> + attr.sample_type = PERF_SAMPLE_RAW;
> + attr.sample_period = 1;
> + attr.wakeup_events = 1;
> +
> + fds[j] = syscall(__NR_perf_event_open, &attr, -1, 0, -1, 0);
> + if (fds[j] >= 0)
> + ioctl(fds[j], PERF_EVENT_IOC_ENABLE, 0);
> + }
> +
> + usleep(100 + rand() % 300);
> +
> + /* Close perf events (disarms kprobes via ftrace) */
> + for (j = 0; j < 3; j++) {
> + char cmd[256];
> +
> + if (fds[j] >= 0)
> + close(fds[j]);
> +
> + snprintf(cmd, sizeof(cmd),
> + "echo '-:perf_race_%d' >> /sys/kernel/debug/tracing/kprobe_events 2>/dev/null",
> + j);
> + system(cmd);
> + }
> + }
> +
> + return NULL;
> +}
> +
> +void test_ftrace_direct_race(void)
> +{
> + pthread_t fentry_tid, kprobe_tid, perf_kprobe_tid;
> + int err;
> +
> + /* Check if ftrace is currently operational */
> + if (!ASSERT_OK(access("/sys/kernel/debug/tracing/kprobe_events", W_OK),
> + "tracefs_access"))
> + return;
> +
> + stop = false;
> +
> + err = pthread_create(&fentry_tid, NULL, fentry_thread_fn, NULL);
> + if (!ASSERT_OK(err, "create_fentry_thread"))
> + return;
> +
> + err = pthread_create(&kprobe_tid, NULL, kprobe_thread_fn, NULL);
> + if (!ASSERT_OK(err, "create_kprobe_thread")) {
> + stop = true;
> + pthread_join(fentry_tid, NULL);
> + return;
> + }
> +
> + err = pthread_create(&perf_kprobe_tid, NULL, perf_kprobe_thread_fn, NULL);
> + if (!ASSERT_OK(err, "create_perf_kprobe_thread")) {
> + stop = true;
> + pthread_join(fentry_tid, NULL);
> + pthread_join(kprobe_tid, NULL);
> + return;
> + }
> +
> + pthread_join(fentry_tid, NULL);
> + pthread_join(kprobe_tid, NULL);
> + pthread_join(perf_kprobe_tid, NULL);
> +
> + /* If we get here without a kernel panic/oops, the test passed.
> + * The real check is in dmesg: look for
> + * "WARNING: arch/x86/kernel/ftrace.c" or
> + * "BUG: KASAN: vmalloc-out-of-bounds in __bpf_prog_enter_recur"
> + *
> + * A more robust check: verify ftrace is still operational.
> + */
> + ASSERT_OK(access("/sys/kernel/debug/tracing/kprobe_events", W_OK),
> + "ftrace_still_operational");
> +
> + /* Check that ftrace wasn't disabled */
> + {
> + char buf[64] = {};
> + int fd = open("/proc/sys/kernel/ftrace_enabled", O_RDONLY);
> +
> + if (ASSERT_GE(fd, 0, "open_ftrace_enabled")) {
> + int n = read(fd, buf, sizeof(buf) - 1);
> +
> + close(fd);
> + if (n > 0)
> + ASSERT_EQ(atoi(buf), 1, "ftrace_enabled");
> + }
> + }
> +}
> --
> 2.47.3
>
>
> ----
>
> Splat:
>
> [ 24.170803] ------------[ cut here ]------------
> [ 24.171055] WARNING: kernel/trace/ftrace.c:2715 at ftrace_get_addr_curr+0x149/0x190, CPU#13: kworker/13:6/873
> [ 24.171315] Modules linked in: bpf_test_modorder_y(OE+) bpf_test_modorder_x(OE) bpf_testmod(OE)
> [ 24.171561] CPU: 13 UID: 0 PID: 873 Comm: kworker/13:6 Tainted: G OE 7.0.0-rc1-gda78c0a81eea #83 PREEMPT(full)
> [ 24.171827] Tainted: [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
> [ 24.171941] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-5.el9 11/05/2023
> [ 24.172132] Workqueue: events bpf_link_put_deferred
> [ 24.172261] RIP: 0010:ftrace_get_addr_curr+0x149/0x190
> [ 24.172376] Code: 00 4c 89 f7 e8 88 f8 ff ff 84 c0 75 92 4d 8b 7f 08 e8 fb b3 c1 00 4d 85 ff 0f 94 c0 49 81 ff b0 1c 6e 83 0f 94 c1 08 c1 74 96 <0f> 0b c6 05
> 62 e8 2b 02 01 c7 05 54 e8 2b 02 00 00 00 00 48 c7 05
> [ 24.172745] RSP: 0018:ffa0000504cafb78 EFLAGS: 00010202
> [ 24.172861] RAX: 0000000000000000 RBX: ff110001000e48d0 RCX: ff1100011cd3a201
> [ 24.173034] RDX: 6e21cb51d943709c RSI: 0000000000000000 RDI: ffffffff81d416d4
> [ 24.173194] RBP: 0000000000000001 R08: 0000000080000000 R09: ffffffffffffffff
> [ 24.173366] R10: ffffffff81285522 R11: 0000000000000000 R12: ff110001000e48d0
> [ 24.173530] R13: ffffffff81d416d4 R14: ffffffff81d416d4 R15: ffffffff836e1cb0
> [ 24.173691] FS: 0000000000000000(0000) GS:ff1100203becc000(0000) knlGS:0000000000000000
> [ 24.173849] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 24.173995] CR2: 00007f615e966270 CR3: 000000010bd9d005 CR4: 0000000000771ef0
> [ 24.174155] PKRU: 55555554
> [ 24.174214] Call Trace:
> [ 24.174285] <TASK>
> [ 24.174348] ftrace_replace_code+0x7e/0x210
> [ 24.174443] ftrace_modify_all_code+0x59/0x110
> [ 24.174553] __ftrace_hash_move_and_update_ops+0x227/0x2c0
> [ 24.174659] ? kfree+0x1ac/0x4c0
> [ 24.174751] ? srso_return_thunk+0x5/0x5f
> [ 24.174834] ? kfree+0x250/0x4c0
> [ 24.174926] ? kfree+0x1ac/0x4c0
> [ 24.175010] ? bpf_lsm_sk_alloc_security+0x4/0x20
> [ 24.175132] ftrace_update_ops+0x40/0x80
> [ 24.175217] update_ftrace_direct_del+0x263/0x290
> [ 24.175341] ? bpf_lsm_sk_alloc_security+0x4/0x20
> [ 24.175456] ? 0xffffffffc0006a80
> [ 24.175543] bpf_trampoline_update+0x1fb/0x810
> [ 24.175654] bpf_trampoline_unlink_prog+0x103/0x1a0
> [ 24.175767] ? process_scheduled_works+0x271/0x640
> [ 24.175886] bpf_shim_tramp_link_release+0x20/0x40
> [ 24.176001] bpf_link_free+0x54/0xd0
> [ 24.176092] process_scheduled_works+0x2c2/0x640
> [ 24.176222] worker_thread+0x22a/0x340
> [ 24.176319] ? srso_return_thunk+0x5/0x5f
> [ 24.176405] ? __pfx_worker_thread+0x10/0x10
> [ 24.176522] kthread+0x10c/0x140
> [ 24.176611] ? __pfx_kthread+0x10/0x10
> [ 24.176698] ret_from_fork+0x148/0x290
> [ 24.176785] ? __pfx_kthread+0x10/0x10
> [ 24.176872] ret_from_fork_asm+0x1a/0x30
> [ 24.176985] </TASK>
> [ 24.177043] irq event stamp: 6965
> [ 24.177126] hardirqs last enabled at (6973): [<ffffffff8136008c>] __console_unlock+0x5c/0x70
> [ 24.177325] hardirqs last disabled at (6982): [<ffffffff81360071>] __console_unlock+0x41/0x70
> [ 24.177520] softirqs last enabled at (6524): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
> [ 24.177675] softirqs last disabled at (6123): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
> [ 24.177844] ---[ end trace 0000000000000000 ]---
> [ 24.177963] Bad trampoline accounting at: 000000003143da54 (bpf_fentry_test3+0x4/0x20)
> [ 24.178134] ------------[ cut here ]------------
> [ 24.178261] WARNING: arch/x86/kernel/ftrace.c:105 at ftrace_replace_code+0xf7/0x210, CPU#13: kworker/13:6/873
> [ 24.178476] Modules linked in: bpf_test_modorder_y(OE+) bpf_test_modorder_x(OE) bpf_testmod(OE)
> [ 24.178680] CPU: 13 UID: 0 PID: 873 Comm: kworker/13:6 Tainted: G W OE 7.0.0-rc1-gda78c0a81eea #83 PREEMPT(full)
> [ 24.178925] Tainted: [W]=WARN, [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
> [ 24.179059] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-5.el9 11/05/2023
> [ 24.179258] Workqueue: events bpf_link_put_deferred
> [ 24.179374] RIP: 0010:ftrace_replace_code+0xf7/0x210
> [ 24.179485] Code: c0 0f 85 ec 00 00 00 8b 44 24 03 41 33 45 00 0f b6 4c 24 07 41 32 4d 04 0f b6 c9 09 c1 0f 84 49 ff ff ff 4c 89 2d b9 df 8b 03 <0f> 0b bf ea
> ff ff ff e9 c4 00 00 00 e8 f8 e5 19 00 48 85 c0 0f 84
> [ 24.179847] RSP: 0018:ffa0000504cafb98 EFLAGS: 00010202
> [ 24.179965] RAX: 0000000038608000 RBX: 0000000000000001 RCX: 00000000386080c1
> [ 24.180126] RDX: ffffffff81d41000 RSI: 0000000000000005 RDI: ffffffff81d416d4
> [ 24.180295] RBP: 0000000000000001 R08: 000000000000ffff R09: ffffffff82e98430
> [ 24.180455] R10: 000000000002fffd R11: 00000000fffeffff R12: ff110001000e48d0
> [ 24.180617] R13: ffffffff83ec0f2d R14: ffffffff84b43820 R15: ffa0000504cafb9b
> [ 24.180777] FS: 0000000000000000(0000) GS:ff1100203becc000(0000) knlGS:0000000000000000
> [ 24.180939] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 24.181077] CR2: 00007f615e966270 CR3: 000000010bd9d005 CR4: 0000000000771ef0
> [ 24.181247] PKRU: 55555554
> [ 24.181303] Call Trace:
> [ 24.181360] <TASK>
> [ 24.181424] ftrace_modify_all_code+0x59/0x110
> [ 24.181536] __ftrace_hash_move_and_update_ops+0x227/0x2c0
> [ 24.181650] ? kfree+0x1ac/0x4c0
> [ 24.181743] ? srso_return_thunk+0x5/0x5f
> [ 24.181828] ? kfree+0x250/0x4c0
> [ 24.181916] ? kfree+0x1ac/0x4c0
> [ 24.182004] ? bpf_lsm_sk_alloc_security+0x4/0x20
> [ 24.182123] ftrace_update_ops+0x40/0x80
> [ 24.182213] update_ftrace_direct_del+0x263/0x290
> [ 24.182337] ? bpf_lsm_sk_alloc_security+0x4/0x20
> [ 24.182455] ? 0xffffffffc0006a80
> [ 24.182543] bpf_trampoline_update+0x1fb/0x810
> [ 24.182655] bpf_trampoline_unlink_prog+0x103/0x1a0
> [ 24.182768] ? process_scheduled_works+0x271/0x640
> [ 24.182887] bpf_shim_tramp_link_release+0x20/0x40
> [ 24.183001] bpf_link_free+0x54/0xd0
> [ 24.183088] process_scheduled_works+0x2c2/0x640
> [ 24.183220] worker_thread+0x22a/0x340
> [ 24.183319] ? srso_return_thunk+0x5/0x5f
> [ 24.183405] ? __pfx_worker_thread+0x10/0x10
> [ 24.183521] kthread+0x10c/0x140
> [ 24.183610] ? __pfx_kthread+0x10/0x10
> [ 24.183697] ret_from_fork+0x148/0x290
> [ 24.183783] ? __pfx_kthread+0x10/0x10
> [ 24.183868] ret_from_fork_asm+0x1a/0x30
> [ 24.183979] </TASK>
> [ 24.184056] irq event stamp: 7447
> [ 24.184138] hardirqs last enabled at (7455): [<ffffffff8136008c>] __console_unlock+0x5c/0x70
> [ 24.184339] hardirqs last disabled at (7464): [<ffffffff81360071>] __console_unlock+0x41/0x70
> [ 24.184522] softirqs last enabled at (6524): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
> [ 24.184675] softirqs last disabled at (6123): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
> [ 24.184836] ---[ end trace 0000000000000000 ]---
> [ 24.185177] ------------[ ftrace bug ]------------
> [ 24.185310] ftrace failed to modify
> [ 24.185312] [<ffffffff81d416d4>] bpf_fentry_test3+0x4/0x20
> [ 24.185544] actual: e8:27:29:6c:3e
> [ 24.185627] expected: e8:a7:49:54:ff
> [ 24.185717] ftrace record flags: e8180000
> [ 24.185798] (0) R tramp: ERROR!
> [ 24.185798] expected tramp: ffffffffc0404000
> [ 24.185975] ------------[ cut here ]------------
> [ 24.186086] WARNING: kernel/trace/ftrace.c:2254 at ftrace_bug+0x101/0x290, CPU#13: kworker/13:6/873
> [ 24.186285] Modules linked in: bpf_test_modorder_y(OE+) bpf_test_modorder_x(OE) bpf_testmod(OE)
> [ 24.186484] CPU: 13 UID: 0 PID: 873 Comm: kworker/13:6 Tainted: G W OE 7.0.0-rc1-gda78c0a81eea #83 PREEMPT(full)
> [ 24.186728] Tainted: [W]=WARN, [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
> [ 24.186863] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-5.el9 11/05/2023
> [ 24.187057] Workqueue: events bpf_link_put_deferred
> [ 24.187172] RIP: 0010:ftrace_bug+0x101/0x290
> [ 24.187294] Code: 05 72 03 83 f8 02 7f 13 83 f8 01 74 46 83 f8 02 75 13 48 c7 c7 41 a3 69 82 eb 51 83 f8 03 74 3c 83 f8 04 74 40 48 85 db 75 4c <0f> 0b c6 05
> ba eb 2b 02 01 c7 05 ac eb 2b 02 00 00 00 00 48 c7 05
> [ 24.187663] RSP: 0018:ffa0000504cafb70 EFLAGS: 00010246
> [ 24.187772] RAX: 0000000000000022 RBX: ff110001000e48d0 RCX: e5ff63967b168c00
> [ 24.187934] RDX: 0000000000000000 RSI: 00000000fffeffff RDI: ffffffff83018490
> [ 24.188096] RBP: 00000000ffffffea R08: 000000000000ffff R09: ffffffff82e98430
> [ 24.188267] R10: 000000000002fffd R11: 00000000fffeffff R12: ff110001000e48d0
> [ 24.188423] R13: ffffffff83ec0f2d R14: ffffffff81d416d4 R15: ffffffff836e1cb0
> [ 24.188581] FS: 0000000000000000(0000) GS:ff1100203becc000(0000) knlGS:0000000000000000
> [ 24.188738] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 24.188870] CR2: 00007f615e966270 CR3: 000000010bd9d005 CR4: 0000000000771ef0
> [ 24.189032] PKRU: 55555554
> [ 24.189088] Call Trace:
> [ 24.189144] <TASK>
> [ 24.189204] ftrace_replace_code+0x1d6/0x210
> [ 24.189335] ftrace_modify_all_code+0x59/0x110
> [ 24.189443] __ftrace_hash_move_and_update_ops+0x227/0x2c0
> [ 24.189554] ? kfree+0x1ac/0x4c0
> [ 24.189638] ? srso_return_thunk+0x5/0x5f
> [ 24.189720] ? kfree+0x250/0x4c0
> [ 24.189802] ? kfree+0x1ac/0x4c0
> [ 24.189889] ? bpf_lsm_sk_alloc_security+0x4/0x20
> [ 24.190010] ftrace_update_ops+0x40/0x80
> [ 24.190095] update_ftrace_direct_del+0x263/0x290
> [ 24.190205] ? bpf_lsm_sk_alloc_security+0x4/0x20
> [ 24.190335] ? 0xffffffffc0006a80
> [ 24.190422] bpf_trampoline_update+0x1fb/0x810
> [ 24.190542] bpf_trampoline_unlink_prog+0x103/0x1a0
> [ 24.190651] ? process_scheduled_works+0x271/0x640
> [ 24.190764] bpf_shim_tramp_link_release+0x20/0x40
> [ 24.190871] bpf_link_free+0x54/0xd0
> [ 24.190964] process_scheduled_works+0x2c2/0x640
> [ 24.191093] worker_thread+0x22a/0x340
> [ 24.191177] ? srso_return_thunk+0x5/0x5f
> [ 24.191274] ? __pfx_worker_thread+0x10/0x10
> [ 24.191388] kthread+0x10c/0x140
> [ 24.191478] ? __pfx_kthread+0x10/0x10
> [ 24.191565] ret_from_fork+0x148/0x290
> [ 24.191641] ? __pfx_kthread+0x10/0x10
> [ 24.191729] ret_from_fork_asm+0x1a/0x30
> [ 24.191833] </TASK>
> [ 24.191896] irq event stamp: 8043
> [ 24.191979] hardirqs last enabled at (8051): [<ffffffff8136008c>] __console_unlock+0x5c/0x70
> [ 24.192167] hardirqs last disabled at (8058): [<ffffffff81360071>] __console_unlock+0x41/0x70
> [ 24.192368] softirqs last enabled at (7828): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
> [ 24.192528] softirqs last disabled at (7817): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
> [ 24.192689] ---[ end trace 0000000000000000 ]---
> [ 24.193549] ------------[ cut here ]------------
> [ 24.193773] WARNING: kernel/trace/ftrace.c:2709 at ftrace_get_addr_curr+0x6c/0x190, CPU#10: test_progs/311
> [ 24.193973] Modules linked in: bpf_test_modorder_y(OE+) bpf_test_modorder_x(OE) bpf_testmod(OE)
> [ 24.194206] CPU: 10 UID: 0 PID: 311 Comm: test_progs Tainted: G W OE 7.0.0-rc1-gda78c0a81eea #83 PREEMPT(full)
> [ 24.194461] Tainted: [W]=WARN, [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
> [ 24.194594] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-5.el9 11/05/2023
> [ 24.194778] RIP: 0010:ftrace_get_addr_curr+0x6c/0x190
> [ 24.194891] Code: 48 0f 44 ce 4c 8b 3c c8 e8 e1 b4 c1 00 4d 85 ff 74 18 4d 39 77 10 74 05 4d 8b 3f eb eb 49 8b 47 18 48 85 c0 0f 85 19 01 00 00 <0f> 0b 48 8b
> 43 08 a9 00 00 00 08 75 1c a9 00 00 00 20 48 c7 c1 80
> [ 24.195270] RSP: 0018:ffa0000000d4bb38 EFLAGS: 00010246
> [ 24.195381] RAX: 0000000000000001 RBX: ff11000100125710 RCX: ff1100010b28a2c0
> [ 24.195540] RDX: 0000000000000003 RSI: 0000000000000003 RDI: ff11000100125710
> [ 24.195698] RBP: 0000000000000001 R08: 0000000080000000 R09: ffffffffffffffff
> [ 24.195863] R10: ffffffff82046a38 R11: 0000000000000000 R12: ff11000100125710
> [ 24.196033] R13: ffffffff81529fc4 R14: ffffffff81529fc4 R15: 0000000000000000
> [ 24.196199] FS: 00007f46532a54c0(0000) GS:ff1100203be0c000(0000) knlGS:0000000000000000
> [ 24.196374] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 24.196509] CR2: 000055e885be1470 CR3: 000000010eef9003 CR4: 0000000000771ef0
> [ 24.196663] PKRU: 55555554
> [ 24.196720] Call Trace:
> [ 24.196778] <TASK>
> [ 24.196844] ftrace_replace_code+0x7e/0x210
> [ 24.196948] ftrace_modify_all_code+0x59/0x110
> [ 24.197059] __ftrace_hash_move_and_update_ops+0x227/0x2c0
> [ 24.197174] ? srso_return_thunk+0x5/0x5f
> [ 24.197271] ? __mutex_lock+0x22a/0xc60
> [ 24.197360] ? kfree+0x1ac/0x4c0
> [ 24.197455] ? srso_return_thunk+0x5/0x5f
> [ 24.197538] ? kfree+0x250/0x4c0
> [ 24.197626] ? bpf_fentry_test3+0x4/0x20
> [ 24.197712] ftrace_set_hash+0x13c/0x3d0
> [ 24.197811] ftrace_set_filter_ip+0x88/0xb0
> [ 24.197909] ? bpf_fentry_test3+0x4/0x20
> [ 24.198000] disarm_kprobe_ftrace+0x83/0xd0
> [ 24.198089] __disable_kprobe+0x129/0x160
> [ 24.198178] disable_kprobe+0x27/0x60
> [ 24.198272] kprobe_register+0xa2/0xe0
> [ 24.198362] perf_trace_event_unreg+0x33/0xd0
> [ 24.198473] perf_kprobe_destroy+0x3b/0x80
> [ 24.198557] __free_event+0x119/0x290
> [ 24.198640] perf_event_release_kernel+0x1ef/0x220
> [ 24.198758] perf_release+0x12/0x20
> [ 24.198843] __fput+0x11b/0x2a0
> [ 24.198946] task_work_run+0x8b/0xc0
> [ 24.199035] exit_to_user_mode_loop+0x107/0x4d0
> [ 24.199155] do_syscall_64+0x25b/0x390
> [ 24.199249] ? entry_SYSCALL_64_after_hwframe+0x76/0x7e
> [ 24.199360] ? trace_irq_disable+0x1d/0xc0
> [ 24.199451] entry_SYSCALL_64_after_hwframe+0x76/0x7e
> [ 24.199559] RIP: 0033:0x7f46530ff85b
> [ 24.199675] Code: 03 00 00 00 0f 05 48 3d 00 f0 ff ff 77 41 c3 48 83 ec 18 89 7c 24 0c e8 e3 83 f8 ff 8b 7c 24 0c 41 89 c0 b8 03 00 00 00 0f 05 <48> 3d 00 f0
> ff ff 77 35 44 89 c7 89 44 24 0c e8 41 84 f8 ff 8b 44
> [ 24.200034] RSP: 002b:00007ffc40859770 EFLAGS: 00000293 ORIG_RAX: 0000000000000003
> [ 24.200192] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007f46530ff85b
> [ 24.200382] RDX: 0000000000000002 RSI: 0000000000000002 RDI: 0000000000000019
> [ 24.200552] RBP: 00007ffc408597c0 R08: 0000000000000000 R09: 00007ffc40859757
> [ 24.200702] R10: 0000000000000000 R11: 0000000000000293 R12: 00007ffc4085ddc8
> [ 24.200855] R13: 000055e8800de120 R14: 000055e88118d390 R15: 00007f46533de000
> [ 24.201035] </TASK>
> [ 24.201091] irq event stamp: 200379
> [ 24.201208] hardirqs last enabled at (200387): [<ffffffff8136008c>] __console_unlock+0x5c/0x70
> [ 24.201453] hardirqs last disabled at (200396): [<ffffffff81360071>] __console_unlock+0x41/0x70
> [ 24.201667] softirqs last enabled at (200336): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
> [ 24.201890] softirqs last disabled at (200329): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
> [ 24.202121] ---[ end trace 0000000000000000 ]---
> [ 24.202398] ------------[ cut here ]------------
> [ 24.202534] WARNING: kernel/trace/ftrace.c:2715 at ftrace_get_addr_curr+0x149/0x190, CPU#10: test_progs/311
> [ 24.202753] Modules linked in: bpf_test_modorder_y(OE+) bpf_test_modorder_x(OE) bpf_testmod(OE)
> [ 24.202962] CPU: 10 UID: 0 PID: 311 Comm: test_progs Tainted: G W OE 7.0.0-rc1-gda78c0a81eea #83 PREEMPT(full)
> [ 24.203203] Tainted: [W]=WARN, [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
> [ 24.203344] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-5.el9 11/05/2023
> [ 24.203526] RIP: 0010:ftrace_get_addr_curr+0x149/0x190
> [ 24.203629] Code: 00 4c 89 f7 e8 88 f8 ff ff 84 c0 75 92 4d 8b 7f 08 e8 fb b3 c1 00 4d 85 ff 0f 94 c0 49 81 ff b0 1c 6e 83 0f 94 c1 08 c1 74 96 <0f> 0b c6 05
> 62 e8 2b 02 01 c7 05 54 e8 2b 02 00 00 00 00 48 c7 05
> [ 24.203996] RSP: 0018:ffa0000000d4bb38 EFLAGS: 00010202
> [ 24.204110] RAX: 0000000000000000 RBX: ff11000100125710 RCX: ff1100010b28a201
> [ 24.204280] RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffffffff81529fc4
> [ 24.204437] RBP: 0000000000000001 R08: 0000000080000000 R09: ffffffffffffffff
> [ 24.204595] R10: ffffffff82046a38 R11: 0000000000000000 R12: ff11000100125710
> [ 24.204755] R13: ffffffff81529fc4 R14: ffffffff81529fc4 R15: ffffffff836e1cb0
> [ 24.204914] FS: 00007f46532a54c0(0000) GS:ff1100203be0c000(0000) knlGS:0000000000000000
> [ 24.205072] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 24.205204] CR2: 000055e885be1470 CR3: 000000010eef9003 CR4: 0000000000771ef0
> [ 24.205386] PKRU: 55555554
> [ 24.205443] Call Trace:
> [ 24.205503] <TASK>
> [ 24.205565] ftrace_replace_code+0x7e/0x210
> [ 24.205669] ftrace_modify_all_code+0x59/0x110
> [ 24.205784] __ftrace_hash_move_and_update_ops+0x227/0x2c0
> [ 24.205902] ? srso_return_thunk+0x5/0x5f
> [ 24.205987] ? __mutex_lock+0x22a/0xc60
> [ 24.206072] ? kfree+0x1ac/0x4c0
> [ 24.206163] ? srso_return_thunk+0x5/0x5f
> [ 24.206254] ? kfree+0x250/0x4c0
> [ 24.206344] ? bpf_fentry_test3+0x4/0x20
> [ 24.206428] ftrace_set_hash+0x13c/0x3d0
> [ 24.206523] ftrace_set_filter_ip+0x88/0xb0
> [ 24.206614] ? bpf_fentry_test3+0x4/0x20
> [ 24.206703] disarm_kprobe_ftrace+0x83/0xd0
> [ 24.206789] __disable_kprobe+0x129/0x160
> [ 24.206880] disable_kprobe+0x27/0x60
> [ 24.206972] kprobe_register+0xa2/0xe0
> [ 24.207057] perf_trace_event_unreg+0x33/0xd0
> [ 24.207169] perf_kprobe_destroy+0x3b/0x80
> [ 24.207262] __free_event+0x119/0x290
> [ 24.207348] perf_event_release_kernel+0x1ef/0x220
> [ 24.207461] perf_release+0x12/0x20
> [ 24.207543] __fput+0x11b/0x2a0
> [ 24.207626] task_work_run+0x8b/0xc0
> [ 24.207711] exit_to_user_mode_loop+0x107/0x4d0
> [ 24.207827] do_syscall_64+0x25b/0x390
> [ 24.207915] ? entry_SYSCALL_64_after_hwframe+0x76/0x7e
> [ 24.208021] ? trace_irq_disable+0x1d/0xc0
> [ 24.208110] entry_SYSCALL_64_after_hwframe+0x76/0x7e
> [ 24.208215] RIP: 0033:0x7f46530ff85b
> [ 24.208307] Code: 03 00 00 00 0f 05 48 3d 00 f0 ff ff 77 41 c3 48 83 ec 18 89 7c 24 0c e8 e3 83 f8 ff 8b 7c 24 0c 41 89 c0 b8 03 00 00 00 0f 05 <48> 3d 00 f0
> ff ff 77 35 44 89 c7 89 44 24 0c e8 41 84 f8 ff 8b 44
> [ 24.208657] RSP: 002b:00007ffc40859770 EFLAGS: 00000293 ORIG_RAX: 0000000000000003
> [ 24.208816] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007f46530ff85b
> [ 24.208978] RDX: 0000000000000002 RSI: 0000000000000002 RDI: 0000000000000019
> [ 24.209133] RBP: 00007ffc408597c0 R08: 0000000000000000 R09: 00007ffc40859757
> [ 24.209300] R10: 0000000000000000 R11: 0000000000000293 R12: 00007ffc4085ddc8
> [ 24.209457] R13: 000055e8800de120 R14: 000055e88118d390 R15: 00007f46533de000
> [ 24.209633] </TASK>
> [ 24.209689] irq event stamp: 200963
> [ 24.209770] hardirqs last enabled at (200971): [<ffffffff8136008c>] __console_unlock+0x5c/0x70
> [ 24.209971] hardirqs last disabled at (200978): [<ffffffff81360071>] __console_unlock+0x41/0x70
> [ 24.210156] softirqs last enabled at (200568): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
> [ 24.210370] softirqs last disabled at (200557): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
> [ 24.210554] ---[ end trace 0000000000000000 ]---
> [ 24.210665] Bad trampoline accounting at: 00000000ab641fec (bpf_lsm_sk_alloc_security+0x4/0x20)
> [ 24.210866] ------------[ cut here ]------------
> [ 24.210993] WARNING: arch/x86/kernel/ftrace.c:105 at ftrace_replace_code+0xf7/0x210, CPU#10: test_progs/311
> [ 24.211182] Modules linked in: bpf_test_modorder_y(OE+) bpf_test_modorder_x(OE) bpf_testmod(OE)
> [ 24.211412] CPU: 10 UID: 0 PID: 311 Comm: test_progs Tainted: G W OE 7.0.0-rc1-gda78c0a81eea #83 PREEMPT(full)
> [ 24.211656] Tainted: [W]=WARN, [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
> [ 24.211788] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-5.el9 11/05/2023
> [ 24.211980] RIP: 0010:ftrace_replace_code+0xf7/0x210
> [ 24.212091] Code: c0 0f 85 ec 00 00 00 8b 44 24 03 41 33 45 00 0f b6 4c 24 07 41 32 4d 04 0f b6 c9 09 c1 0f 84 49 ff ff ff 4c 89 2d b9 df 8b 03 <0f> 0b bf ea
> ff ff ff e9 c4 00 00 00 e8 f8 e5 19 00 48 85 c0 0f 84
> [ 24.212503] RSP: 0018:ffa0000000d4bb58 EFLAGS: 00010202
> [ 24.212628] RAX: 00000000780a0001 RBX: 0000000000000001 RCX: 00000000780a00c1
> [ 24.212798] RDX: ffffffff81529000 RSI: 0000000000000005 RDI: ffffffff81529fc4
> [ 24.212970] RBP: 0000000000000001 R08: 000000000000ffff R09: ffffffff82e98430
> [ 24.213130] R10: 000000000002fffd R11: 00000000fffeffff R12: ff11000100125710
> [ 24.213317] R13: ffffffff83ec0f2d R14: ffffffff84b43820 R15: ffa0000000d4bb5b
> [ 24.213488] FS: 00007f46532a54c0(0000) GS:ff1100203be0c000(0000) knlGS:0000000000000000
> [ 24.213674] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 24.213813] CR2: 000055e885be1470 CR3: 000000010eef9003 CR4: 0000000000771ef0
> [ 24.213986] PKRU: 55555554
> [ 24.214044] Call Trace:
> [ 24.214100] <TASK>
> [ 24.214167] ftrace_modify_all_code+0x59/0x110
> [ 24.214301] __ftrace_hash_move_and_update_ops+0x227/0x2c0
> [ 24.214415] ? srso_return_thunk+0x5/0x5f
> [ 24.214502] ? __mutex_lock+0x22a/0xc60
> [ 24.214588] ? kfree+0x1ac/0x4c0
> [ 24.214682] ? srso_return_thunk+0x5/0x5f
> [ 24.214765] ? kfree+0x250/0x4c0
> [ 24.214855] ? bpf_fentry_test3+0x4/0x20
> [ 24.214943] ftrace_set_hash+0x13c/0x3d0
> [ 24.215041] ftrace_set_filter_ip+0x88/0xb0
> [ 24.215132] ? bpf_fentry_test3+0x4/0x20
> [ 24.215221] disarm_kprobe_ftrace+0x83/0xd0
> [ 24.215328] __disable_kprobe+0x129/0x160
> [ 24.215418] disable_kprobe+0x27/0x60
> [ 24.215507] kprobe_register+0xa2/0xe0
> [ 24.215594] perf_trace_event_unreg+0x33/0xd0
> [ 24.215701] perf_kprobe_destroy+0x3b/0x80
> [ 24.215790] __free_event+0x119/0x290
> [ 24.215888] perf_event_release_kernel+0x1ef/0x220
> [ 24.216007] perf_release+0x12/0x20
> [ 24.216091] __fput+0x11b/0x2a0
> [ 24.216183] task_work_run+0x8b/0xc0
> [ 24.216293] exit_to_user_mode_loop+0x107/0x4d0
> [ 24.216411] do_syscall_64+0x25b/0x390
> [ 24.216497] ? entry_SYSCALL_64_after_hwframe+0x76/0x7e
> [ 24.216606] ? trace_irq_disable+0x1d/0xc0
> [ 24.216699] entry_SYSCALL_64_after_hwframe+0x76/0x7e
> [ 24.216807] RIP: 0033:0x7f46530ff85b
> [ 24.216895] Code: 03 00 00 00 0f 05 48 3d 00 f0 ff ff 77 41 c3 48 83 ec 18 89 7c 24 0c e8 e3 83 f8 ff 8b 7c 24 0c 41 89 c0 b8 03 00 00 00 0f 05 <48> 3d 00 f0
> ff ff 77 35 44 89 c7 89 44 24 0c e8 41 84 f8 ff 8b 44
> [ 24.217293] RSP: 002b:00007ffc40859770 EFLAGS: 00000293 ORIG_RAX: 0000000000000003
> [ 24.217461] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007f46530ff85b
> [ 24.217627] RDX: 0000000000000002 RSI: 0000000000000002 RDI: 0000000000000019
> [ 24.217785] RBP: 00007ffc408597c0 R08: 0000000000000000 R09: 00007ffc40859757
> [ 24.217950] R10: 0000000000000000 R11: 0000000000000293 R12: 00007ffc4085ddc8
> [ 24.218107] R13: 000055e8800de120 R14: 000055e88118d390 R15: 00007f46533de000
> [ 24.218306] </TASK>
> [ 24.218363] irq event stamp: 201623
> [ 24.218445] hardirqs last enabled at (201631): [<ffffffff8136008c>] __console_unlock+0x5c/0x70
> [ 24.218625] hardirqs last disabled at (201638): [<ffffffff81360071>] __console_unlock+0x41/0x70
> [ 24.218810] softirqs last enabled at (201612): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
> [ 24.219012] softirqs last disabled at (201601): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
> [ 24.219208] ---[ end trace 0000000000000000 ]---
> [ 24.219693] ------------[ ftrace bug ]------------
> [ 24.219801] ftrace failed to modify
> [ 24.219804] [<ffffffff81529fc4>] bpf_lsm_sk_alloc_security+0x4/0x20
> [ 24.220022] actual: e9:b7:ca:ad:3e
> [ 24.220113] expected: e8:b7:c0:d5:ff
> [ 24.220203] ftrace record flags: e8980000
> [ 24.220307] (0) R tramp: ERROR!
> [ 24.220321] ------------[ cut here ]------------
> [ 24.220507] WARNING: kernel/trace/ftrace.c:2715 at ftrace_get_addr_curr+0x149/0x190, CPU#10: test_progs/311
> [ 24.220693] Modules linked in: bpf_test_modorder_y(OE+) bpf_test_modorder_x(OE) bpf_testmod(OE)
> [ 24.220895] CPU: 10 UID: 0 PID: 311 Comm: test_progs Tainted: G W OE 7.0.0-rc1-gda78c0a81eea #83 PREEMPT(full)
> [ 24.221135] Tainted: [W]=WARN, [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
> [ 24.221284] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-5.el9 11/05/2023
> [ 24.221467] RIP: 0010:ftrace_get_addr_curr+0x149/0x190
> [ 24.221577] Code: 00 4c 89 f7 e8 88 f8 ff ff 84 c0 75 92 4d 8b 7f 08 e8 fb b3 c1 00 4d 85 ff 0f 94 c0 49 81 ff b0 1c 6e 83 0f 94 c1 08 c1 74 96 <0f> 0b c6 05
> 62 e8 2b 02 01 c7 05 54 e8 2b 02 00 00 00 00 48 c7 05
> [ 24.221938] RSP: 0018:ffa0000000d4bb10 EFLAGS: 00010202
> [ 24.222052] RAX: 0000000000000000 RBX: ff11000100125710 RCX: ff1100010b28a201
> [ 24.222205] RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffffffff81529fc4
> [ 24.222384] RBP: 00000000ffffffea R08: 000000000000ffff R09: ffffffff82e98430
> [ 24.222542] R10: 000000000002fffd R11: 00000000fffeffff R12: ff11000100125710
> [ 24.222708] R13: ffffffff83ec0f2d R14: ffffffff81529fc4 R15: ffffffff836e1cb0
> [ 24.222866] FS: 00007f46532a54c0(0000) GS:ff1100203be0c000(0000) knlGS:0000000000000000
> [ 24.223034] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 24.223171] CR2: 000055e885be1470 CR3: 000000010eef9003 CR4: 0000000000771ef0
> [ 24.223341] PKRU: 55555554
> [ 24.223397] Call Trace:
> [ 24.223454] <TASK>
> [ 24.223511] ? bpf_lsm_sk_alloc_security+0x4/0x20
> [ 24.223623] ftrace_bug+0x1ff/0x290
> [ 24.223710] ftrace_replace_code+0x1d6/0x210
> [ 24.223829] ftrace_modify_all_code+0x59/0x110
> [ 24.223946] __ftrace_hash_move_and_update_ops+0x227/0x2c0
> [ 24.224060] ? srso_return_thunk+0x5/0x5f
> [ 24.224148] ? __mutex_lock+0x22a/0xc60
> [ 24.224245] ? kfree+0x1ac/0x4c0
> [ 24.224337] ? srso_return_thunk+0x5/0x5f
> [ 24.224420] ? kfree+0x250/0x4c0
> [ 24.224512] ? bpf_fentry_test3+0x4/0x20
> [ 24.224597] ftrace_set_hash+0x13c/0x3d0
> [ 24.224690] ftrace_set_filter_ip+0x88/0xb0
> [ 24.224776] ? bpf_fentry_test3+0x4/0x20
> [ 24.224869] disarm_kprobe_ftrace+0x83/0xd0
> [ 24.224965] __disable_kprobe+0x129/0x160
> [ 24.225051] disable_kprobe+0x27/0x60
> [ 24.225136] kprobe_register+0xa2/0xe0
> [ 24.225223] perf_trace_event_unreg+0x33/0xd0
> [ 24.225346] perf_kprobe_destroy+0x3b/0x80
> [ 24.225431] __free_event+0x119/0x290
> [ 24.225518] perf_event_release_kernel+0x1ef/0x220
> [ 24.225631] perf_release+0x12/0x20
> [ 24.225715] __fput+0x11b/0x2a0
> [ 24.225804] task_work_run+0x8b/0xc0
> [ 24.225895] exit_to_user_mode_loop+0x107/0x4d0
> [ 24.226016] do_syscall_64+0x25b/0x390
> [ 24.226099] ? entry_SYSCALL_64_after_hwframe+0x76/0x7e
> [ 24.226207] ? trace_irq_disable+0x1d/0xc0
> [ 24.226308] entry_SYSCALL_64_after_hwframe+0x76/0x7e
> [ 24.226415] RIP: 0033:0x7f46530ff85b
> [ 24.226498] Code: 03 00 00 00 0f 05 48 3d 00 f0 ff ff 77 41 c3 48 83 ec 18 89 7c 24 0c e8 e3 83 f8 ff 8b 7c 24 0c 41 89 c0 b8 03 00 00 00 0f 05 <48> 3d 00 f0
> ff ff 77 35 44 89 c7 89 44 24 0c e8 41 84 f8 ff 8b 44
> [ 24.226851] RSP: 002b:00007ffc40859770 EFLAGS: 00000293 ORIG_RAX: 0000000000000003
> [ 24.227016] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007f46530ff85b
> [ 24.227173] RDX: 0000000000000002 RSI: 0000000000000002 RDI: 0000000000000019
> [ 24.227341] RBP: 00007ffc408597c0 R08: 0000000000000000 R09: 00007ffc40859757
> [ 24.227500] R10: 0000000000000000 R11: 0000000000000293 R12: 00007ffc4085ddc8
> [ 24.227652] R13: 000055e8800de120 R14: 000055e88118d390 R15: 00007f46533de000
> [ 24.227830] </TASK>
> [ 24.227891] irq event stamp: 202299
> [ 24.227974] hardirqs last enabled at (202307): [<ffffffff8136008c>] __console_unlock+0x5c/0x70
> [ 24.228162] hardirqs last disabled at (202314): [<ffffffff81360071>] __console_unlock+0x41/0x70
> [ 24.228357] softirqs last enabled at (201682): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
> [ 24.228540] softirqs last disabled at (201671): [<ffffffff812b8b97>] __irq_exit_rcu+0x47/0xc0
> [ 24.228716] ---[ end trace 0000000000000000 ]---
> [ 24.228834] Bad trampoline accounting at: 00000000ab641fec (bpf_lsm_sk_alloc_security+0x4/0x20)
> [ 24.229029]
> [ 24.229029] expected tramp: ffffffff81286080
> [ 24.261301] BUG: unable to handle page fault for address: ffa00000004b9050
> [ 24.261436] #PF: supervisor read access in kernel mode
> [ 24.261528] #PF: error_code(0x0000) - not-present page
> [ 24.261621] PGD 100000067 P4D 100832067 PUD 100833067 PMD 100efb067 PTE 0
> [ 24.261745] Oops: Oops: 0000 [#1] SMP NOPTI
> [ 24.261821] CPU: 9 UID: 0 PID: 1338 Comm: ip Tainted: G W OE 7.0.0-rc1-gda78c0a81eea #83 PREEMPT(full)
> [ 24.262006] Tainted: [W]=WARN, [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
> [ 24.262119] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-5.el9 11/05/2023
> [ 24.262281] RIP: 0010:__cgroup_bpf_run_lsm_current+0xc5/0x2f0
> [ 24.262393] Code: a6 6f 1a 02 01 48 c7 c7 31 5b 71 82 be bf 01 00 00 48 c7 c2 d3 70 65 82 e8 d8 53 ce ff 4d 8b 7f 60 4d 85 ff 0f 84 14 02 00 00 <49> 8b 46 f0
> 4c 63 b0 34 05 00 00 c7 44 24 10 00 00 00 00 41 0f b7
> [ 24.262693] RSP: 0018:ffa0000004dfbc98 EFLAGS: 00010282
> [ 24.262784] RAX: 0000000000000001 RBX: ffa0000004dfbd10 RCX: 0000000000000001
> [ 24.262923] RDX: 00000000d7c4159d RSI: ffffffff8359b368 RDI: ff1100011b5c50c8
> [ 24.263055] RBP: ffa0000004dfbd30 R08: 0000000000020000 R09: ffffffffffffffff
> [ 24.263187] R10: ffffffff814f76b3 R11: 0000000000000000 R12: ff1100011b5c4580
> [ 24.263325] R13: 0000000000000000 R14: ffa00000004b9060 R15: ffffffff835b3040
> [ 24.263465] FS: 00007f0007064800(0000) GS:ff1100203bdcc000(0000) knlGS:0000000000000000
> [ 24.263599] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 24.263709] CR2: ffa00000004b9050 CR3: 0000000120f4d002 CR4: 0000000000771ef0
> [ 24.263841] PKRU: 55555554
> [ 24.263890] Call Trace:
> [ 24.263938] <TASK>
> [ 24.263992] bpf_trampoline_6442513766+0x6a/0x10d
> [ 24.264088] security_sk_alloc+0x83/0xd0
> [ 24.264162] sk_prot_alloc+0xf4/0x150
> [ 24.264236] sk_alloc+0x34/0x2a0
> [ 24.264305] ? srso_return_thunk+0x5/0x5f
> [ 24.264375] ? _raw_spin_unlock_irqrestore+0x35/0x50
> [ 24.264465] ? srso_return_thunk+0x5/0x5f
> [ 24.264533] ? __wake_up_common_lock+0xa8/0xd0
> [ 24.264625] __netlink_create+0x2f/0xf0
> [ 24.264695] netlink_create+0x1c4/0x230
> [ 24.264765] ? __pfx_rtnetlink_bind+0x10/0x10
> [ 24.264858] __sock_create+0x21d/0x400
> [ 24.264937] __sys_socket+0x65/0x100
> [ 24.265007] ? srso_return_thunk+0x5/0x5f
> [ 24.265077] __x64_sys_socket+0x19/0x30
> [ 24.265146] do_syscall_64+0xde/0x390
> [ 24.265216] ? entry_SYSCALL_64_after_hwframe+0x76/0x7e
> [ 24.265307] ? trace_irq_disable+0x1d/0xc0
> [ 24.265379] entry_SYSCALL_64_after_hwframe+0x76/0x7e
> [ 24.265469] RIP: 0033:0x7f0006f112ab
> [ 24.265538] Code: 73 01 c3 48 8b 0d 6d 8b 0e 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 29 00 00 00 0f 05 <48> 3d 01 f0
> ff ff 73 01 c3 48 8b 0d 3d 8b 0e 00 f7 d8 64 89 01 48
> [ 24.265822] RSP: 002b:00007ffd8ecb3be8 EFLAGS: 00000246 ORIG_RAX: 0000000000000029
> [ 24.265960] RAX: ffffffffffffffda RBX: 000056212b30d040 RCX: 00007f0006f112ab
> [ 24.266088] RDX: 0000000000000000 RSI: 0000000000080003 RDI: 0000000000000010
> [ 24.266217] RBP: 0000000000000000 R08: 00007ffd8ecb3bc0 R09: 0000000000000000
> [ 24.266346] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
> [ 24.266474] R13: 000056212b30d040 R14: 00007ffd8ecb3d88 R15: 0000000000000004
> [ 24.266617] </TASK>
> [ 24.266663] Modules linked in: bpf_test_modorder_y(OE+) bpf_test_modorder_x(OE) bpf_testmod(OE)
> [ 24.266824] CR2: ffa00000004b9050
> [ 24.266897] ---[ end trace 0000000000000000 ]---
> [ 24.266989] RIP: 0010:__cgroup_bpf_run_lsm_current+0xc5/0x2f0
> [ 24.267101] Code: a6 6f 1a 02 01 48 c7 c7 31 5b 71 82 be bf 01 00 00 48 c7 c2 d3 70 65 82 e8 d8 53 ce ff 4d 8b 7f 60 4d 85 ff 0f 84 14 02 00 00 <49> 8b 46 f0
> 4c 63 b0 34 05 00 00 c7 44 24 10 00 00 00 00 41 0f b7
> [ 24.267406] RSP: 0018:ffa0000004dfbc98 EFLAGS: 00010282
> [ 24.267499] RAX: 0000000000000001 RBX: ffa0000004dfbd10 RCX: 0000000000000001
> [ 24.267629] RDX: 00000000d7c4159d RSI: ffffffff8359b368 RDI: ff1100011b5c50c8
> [ 24.267758] RBP: ffa0000004dfbd30 R08: 0000000000020000 R09: ffffffffffffffff
> [ 24.267897] R10: ffffffff814f76b3 R11: 0000000000000000 R12: ff1100011b5c4580
> [ 24.268030] R13: 0000000000000000 R14: ffa00000004b9060 R15: ffffffff835b3040
> [ 24.268167] FS: 00007f0007064800(0000) GS:ff1100203bdcc000(0000) knlGS:0000000000000000
> [ 24.268311] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 24.268428] CR2: ffa00000004b9050 CR3: 0000000120f4d002 CR4: 0000000000771ef0
> [ 24.268565] PKRU: 55555554
> [ 24.268613] Kernel panic - not syncing: Fatal exception
> [ 24.268977] Kernel Offset: disabled
> [ 24.269046] ---[ end Kernel panic - not syncing: Fatal exception ]---
>
>
>
> > ---
> > arch/x86/Kconfig | 1 +
> > kernel/bpf/trampoline.c | 220 ++++++++++++++++++++++++++++++++++------
> > kernel/trace/Kconfig | 3 +
> > kernel/trace/ftrace.c | 7 +-
> > 4 files changed, 200 insertions(+), 31 deletions(-)
> >
> > [...]
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCHv6 bpf-next 9/9] bpf,x86: Use single ftrace_ops for direct calls
2026-02-27 20:37 ` Jiri Olsa
@ 2026-02-27 21:24 ` Jiri Olsa
2026-02-27 22:00 ` Ihor Solodrai
2026-02-28 20:39 ` Steven Rostedt
0 siblings, 2 replies; 27+ messages in thread
From: Jiri Olsa @ 2026-02-27 21:24 UTC (permalink / raw)
To: Ihor Solodrai
Cc: Jiri Olsa, Steven Rostedt, Florent Revest, Mark Rutland, bpf,
linux-kernel, linux-trace-kernel, linux-arm-kernel,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Menglong Dong, Song Liu, Kumar Kartikeya Dwivedi
On Fri, Feb 27, 2026 at 09:37:52PM +0100, Jiri Olsa wrote:
> On Fri, Feb 27, 2026 at 09:40:12AM -0800, Ihor Solodrai wrote:
> > On 12/30/25 6:50 AM, Jiri Olsa wrote:
> > > Using single ftrace_ops for direct calls update instead of allocating
> > > ftrace_ops object for each trampoline.
> > >
> > > With single ftrace_ops object we can use update_ftrace_direct_* api
> > > that allows multiple ip sites updates on single ftrace_ops object.
> > >
> > > Adding HAVE_SINGLE_FTRACE_DIRECT_OPS config option to be enabled on
> > > each arch that supports this.
> > >
> > > At the moment we can enable this only on x86 arch, because arm relies
> > > on ftrace_ops object representing just single trampoline image (stored
> > > in ftrace_ops::direct_call). Archs that do not support this will continue
> > > to use *_ftrace_direct api.
> > >
> > > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> >
> > Hi Jiri,
> >
> > Kumar and I stumbled on kernel splats with "ftrace failed to modify",
> > and if running with KASAN:
> >
> > BUG: KASAN: slab-use-after-free in __get_valid_kprobe+0x224/0x2a0
> >
> > Pasting a full splat example at the bottom.
> >
> > I was able to create a reproducer with AI, and then used it to bisect
> > to this patch. You can run it with ./test_progs -t ftrace_direct_race
> >
> > Below is my (human-generated, haha) summary of AI's analysis of what's
> > happening. It makes sense to me conceptually, but I don't know enough
> > details here to call bullshit. Please take a look:
>
> hi, nice :)
>
> >
> > With CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS ftrace_replace_code()
> > operates on all call sites in the shared ops. Then if a concurrent
> > ftrace user (like kprobe) modifies a call site in between
> > ftrace_replace_code's verify pass and its patch pass, then ftrace_bug
> > fires and sets ftrace_disabled to 1.
>
> hum, I'd think that's all under ftrace_lock/direct_mutex,
> but we might be missing some paths
>
could you please try with change below? I can no longer trigger the bug with it
thanks,
jirka
---
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 827fb9a0bf0d..e333749a5896 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -6404,7 +6404,9 @@ int update_ftrace_direct_add(struct ftrace_ops *ops, struct ftrace_hash *hash)
new_filter_hash = old_filter_hash;
}
} else {
+ mutex_lock(&ftrace_lock);
err = ftrace_update_ops(ops, new_filter_hash, EMPTY_HASH);
+ mutex_unlock(&ftrace_lock);
/*
* new_filter_hash is dup-ed, so we need to release it anyway,
* old_filter_hash either stays on error or is already released
@@ -6530,7 +6532,9 @@ int update_ftrace_direct_del(struct ftrace_ops *ops, struct ftrace_hash *hash)
ops->func_hash->filter_hash = NULL;
}
} else {
+ mutex_lock(&ftrace_lock);
err = ftrace_update_ops(ops, new_filter_hash, EMPTY_HASH);
+ mutex_unlock(&ftrace_lock);
/*
* new_filter_hash is dup-ed, so we need to release it anyway,
* old_filter_hash either stays on error or is already released
* Re: [PATCHv6 bpf-next 9/9] bpf,x86: Use single ftrace_ops for direct calls
2026-02-27 21:24 ` Jiri Olsa
@ 2026-02-27 22:00 ` Ihor Solodrai
2026-02-28 20:39 ` Steven Rostedt
1 sibling, 0 replies; 27+ messages in thread
From: Ihor Solodrai @ 2026-02-27 22:00 UTC (permalink / raw)
To: Jiri Olsa
Cc: Steven Rostedt, Florent Revest, Mark Rutland, bpf, linux-kernel,
linux-trace-kernel, linux-arm-kernel, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Menglong Dong, Song Liu,
Kumar Kartikeya Dwivedi
On 2/27/26 1:24 PM, Jiri Olsa wrote:
> On Fri, Feb 27, 2026 at 09:37:52PM +0100, Jiri Olsa wrote:
>> [...]
>>
>>>
>>> With CONFIG_HAVE_SINGLE_FTRACE_DIRECT_OPS ftrace_replace_code()
>>> operates on all call sites in the shared ops. Then if a concurrent
>>> ftrace user (like kprobe) modifies a call site in between
>>> ftrace_replace_code's verify pass and its patch pass, then ftrace_bug
>>> fires and sets ftrace_disabled to 1.
>>
>> hum, I'd think that's all under ftrace_lock/direct_mutex,
>> but we might be missing some paths
>>
>
> could you please try with change below? I can no longer trigger the bug with it
Can confirm that the bug doesn't trigger with this change.
At least by the reproducer test.
Tested-by: Ihor Solodrai <ihor.solodrai@linux.dev>
Thanks!
>
> thanks,
> jirka
>
>
> ---
> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index 827fb9a0bf0d..e333749a5896 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -6404,7 +6404,9 @@ int update_ftrace_direct_add(struct ftrace_ops *ops, struct ftrace_hash *hash)
> new_filter_hash = old_filter_hash;
> }
> } else {
> + mutex_lock(&ftrace_lock);
> err = ftrace_update_ops(ops, new_filter_hash, EMPTY_HASH);
> + mutex_unlock(&ftrace_lock);
> /*
> * new_filter_hash is dup-ed, so we need to release it anyway,
> * old_filter_hash either stays on error or is already released
> @@ -6530,7 +6532,9 @@ int update_ftrace_direct_del(struct ftrace_ops *ops, struct ftrace_hash *hash)
> ops->func_hash->filter_hash = NULL;
> }
> } else {
> + mutex_lock(&ftrace_lock);
> err = ftrace_update_ops(ops, new_filter_hash, EMPTY_HASH);
> + mutex_unlock(&ftrace_lock);
> /*
> * new_filter_hash is dup-ed, so we need to release it anyway,
> * old_filter_hash either stays on error or is already released
* Re: [PATCHv6 bpf-next 9/9] bpf,x86: Use single ftrace_ops for direct calls
2026-02-27 21:24 ` Jiri Olsa
2026-02-27 22:00 ` Ihor Solodrai
@ 2026-02-28 20:39 ` Steven Rostedt
2026-03-02 8:08 ` Jiri Olsa
1 sibling, 1 reply; 27+ messages in thread
From: Steven Rostedt @ 2026-02-28 20:39 UTC (permalink / raw)
To: Jiri Olsa
Cc: Ihor Solodrai, Florent Revest, Mark Rutland, bpf, linux-kernel,
linux-trace-kernel, linux-arm-kernel, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Menglong Dong, Song Liu,
Kumar Kartikeya Dwivedi
On Fri, 27 Feb 2026 22:24:37 +0100
Jiri Olsa <olsajiri@gmail.com> wrote:
> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index 827fb9a0bf0d..e333749a5896 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -6404,7 +6404,9 @@ int update_ftrace_direct_add(struct ftrace_ops *ops, struct ftrace_hash *hash)
> new_filter_hash = old_filter_hash;
> }
> } else {
As this looks to fix the issue, just add:
guard(mutex)(&ftrace_lock);
> + mutex_lock(&ftrace_lock);
> err = ftrace_update_ops(ops, new_filter_hash, EMPTY_HASH);
> + mutex_unlock(&ftrace_lock);
> /*
> * new_filter_hash is dup-ed, so we need to release it anyway,
> * old_filter_hash either stays on error or is already released
> @@ -6530,7 +6532,9 @@ int update_ftrace_direct_del(struct ftrace_ops *ops, struct ftrace_hash *hash)
> ops->func_hash->filter_hash = NULL;
> }
> } else {
And here too.
As there's nothing after the comment and before the end of the block.
-- Steve
> + mutex_lock(&ftrace_lock);
> err = ftrace_update_ops(ops, new_filter_hash, EMPTY_HASH);
> + mutex_unlock(&ftrace_lock);
> /*
> * new_filter_hash is dup-ed, so we need to release it anyway,
> * old_filter_hash either stays on error or is already released
-- Steve
* Re: [PATCHv6 bpf-next 9/9] bpf,x86: Use single ftrace_ops for direct calls
2026-02-28 20:39 ` Steven Rostedt
@ 2026-03-02 8:08 ` Jiri Olsa
2026-03-02 15:10 ` Steven Rostedt
0 siblings, 1 reply; 27+ messages in thread
From: Jiri Olsa @ 2026-03-02 8:08 UTC (permalink / raw)
To: Steven Rostedt
Cc: Jiri Olsa, Ihor Solodrai, Florent Revest, Mark Rutland, bpf,
linux-kernel, linux-trace-kernel, linux-arm-kernel,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Menglong Dong, Song Liu, Kumar Kartikeya Dwivedi
On Sat, Feb 28, 2026 at 03:39:21PM -0500, Steven Rostedt wrote:
> On Fri, 27 Feb 2026 22:24:37 +0100
> Jiri Olsa <olsajiri@gmail.com> wrote:
>
> > diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> > index 827fb9a0bf0d..e333749a5896 100644
> > --- a/kernel/trace/ftrace.c
> > +++ b/kernel/trace/ftrace.c
> > @@ -6404,7 +6404,9 @@ int update_ftrace_direct_add(struct ftrace_ops *ops, struct ftrace_hash *hash)
> > new_filter_hash = old_filter_hash;
> > }
> > } else {
>
> As this looks to fix the issue, just add:
>
> guard(mutex)(&ftrace_lock);
>
> > + mutex_lock(&ftrace_lock);
> > err = ftrace_update_ops(ops, new_filter_hash, EMPTY_HASH);
> > + mutex_unlock(&ftrace_lock);
> > /*
> > * new_filter_hash is dup-ed, so we need to release it anyway,
> > * old_filter_hash either stays on error or is already released
> > @@ -6530,7 +6532,9 @@ int update_ftrace_direct_del(struct ftrace_ops *ops, struct ftrace_hash *hash)
> > ops->func_hash->filter_hash = NULL;
> > }
> > } else {
>
> And here too.
>
> As there's nothing after the comment and before the end of the block.
ok, will do.. the original changes:
05dc5e9c1fe1 ("ftrace: Add update_ftrace_direct_add function")
8d2c1233f371 ("ftrace: Add update_ftrace_direct_del function")
went through bpf tree, so I'll send the fix the same way,
please let me know otherwise
thanks,
jirka
>
> -- Steve
>
> > + mutex_lock(&ftrace_lock);
> > err = ftrace_update_ops(ops, new_filter_hash, EMPTY_HASH);
> > + mutex_unlock(&ftrace_lock);
> > /*
> > * new_filter_hash is dup-ed, so we need to release it anyway,
> > * old_filter_hash either stays on error or is already released
>
>
>
> -- Steve
* Re: [PATCHv6 bpf-next 9/9] bpf,x86: Use single ftrace_ops for direct calls
2026-03-02 8:08 ` Jiri Olsa
@ 2026-03-02 15:10 ` Steven Rostedt
0 siblings, 0 replies; 27+ messages in thread
From: Steven Rostedt @ 2026-03-02 15:10 UTC (permalink / raw)
To: Jiri Olsa
Cc: Ihor Solodrai, Florent Revest, Mark Rutland, bpf, linux-kernel,
linux-trace-kernel, linux-arm-kernel, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Menglong Dong, Song Liu,
Kumar Kartikeya Dwivedi
On Mon, 2 Mar 2026 09:08:25 +0100
Jiri Olsa <olsajiri@gmail.com> wrote:
> > As there's nothing after the comment and before the end of the block.
>
> ok, will do.. the original changes:
>
> 05dc5e9c1fe1 ("ftrace: Add update_ftrace_direct_add function")
> 8d2c1233f371 ("ftrace: Add update_ftrace_direct_del function")
>
> went through bpf tree, so I'll send the fix the same way,
> please let me know otherwise
As long as I give a reviewed-by tag.
Thanks,
-- Steve
end of thread, other threads:[~2026-03-02 15:10 UTC | newest]
Thread overview: 27+ messages
2025-12-30 14:50 [PATCHv6 bpf-next 0/9] ftrace,bpf: Use single direct ops for bpf trampolines Jiri Olsa
2025-12-30 14:50 ` [PATCHv6 bpf-next 1/9] ftrace,bpf: Remove FTRACE_OPS_FL_JMP ftrace_ops flag Jiri Olsa
2026-01-10 0:36 ` Andrii Nakryiko
2025-12-30 14:50 ` [PATCHv6 bpf-next 2/9] ftrace: Make alloc_and_copy_ftrace_hash direct friendly Jiri Olsa
2025-12-30 14:50 ` [PATCHv6 bpf-next 3/9] ftrace: Export some of hash related functions Jiri Olsa
2025-12-30 14:50 ` [PATCHv6 bpf-next 4/9] ftrace: Add update_ftrace_direct_add function Jiri Olsa
2025-12-30 14:50 ` [PATCHv6 bpf-next 5/9] ftrace: Add update_ftrace_direct_del function Jiri Olsa
2025-12-30 14:50 ` [PATCHv6 bpf-next 6/9] ftrace: Add update_ftrace_direct_mod function Jiri Olsa
2025-12-30 14:50 ` [PATCHv6 bpf-next 7/9] bpf: Add trampoline ip hash table Jiri Olsa
2026-01-10 0:36 ` Andrii Nakryiko
2026-01-12 21:27 ` Jiri Olsa
2026-01-13 11:02 ` Alan Maguire
2026-01-13 11:58 ` Jiri Olsa
2025-12-30 14:50 ` [PATCHv6 bpf-next 8/9] ftrace: Factor ftrace_ops ops_func interface Jiri Olsa
2025-12-30 14:50 ` [PATCHv6 bpf-next 9/9] bpf,x86: Use single ftrace_ops for direct calls Jiri Olsa
2026-01-10 0:36 ` Andrii Nakryiko
2026-02-27 17:40 ` Ihor Solodrai
2026-02-27 20:37 ` Jiri Olsa
2026-02-27 21:24 ` Jiri Olsa
2026-02-27 22:00 ` Ihor Solodrai
2026-02-28 20:39 ` Steven Rostedt
2026-03-02 8:08 ` Jiri Olsa
2026-03-02 15:10 ` Steven Rostedt
2026-01-15 18:54 ` [PATCHv6 bpf-next 0/9] ftrace,bpf: Use single direct ops for bpf trampolines Andrii Nakryiko
2026-01-26 9:48 ` Jiri Olsa
2026-01-28 14:48 ` Steven Rostedt
2026-01-28 20:00 ` patchwork-bot+netdevbpf