* [PATCH bpf-next v3 0/6] bpf trampoline support "jmp" mode
@ 2025-11-18 12:36 Menglong Dong
2025-11-18 12:36 ` [PATCH bpf-next v3 1/6] ftrace: introduce FTRACE_OPS_FL_JMP Menglong Dong
` (7 more replies)
0 siblings, 8 replies; 22+ messages in thread
From: Menglong Dong @ 2025-11-18 12:36 UTC (permalink / raw)
To: ast, rostedt
Cc: daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
yonghong.song, kpsingh, sdf, haoluo, jolsa, mhiramat,
mark.rutland, mathieu.desnoyers, jiang.biao, bpf, linux-kernel,
linux-trace-kernel
For now, the bpf trampoline is called by the "call" instruction. However,
it breaks the RSB and introduces extra overhead on x86_64.
For example, if we hook the function "foo" with fexit, the call and return
logic looks like this:
call foo -> call trampoline -> call foo-body ->
return foo-body -> return foo
As we can see above, there are 3 calls but only 2 returns, which breaks the
RSB balance. We could add a pseudo "return" here, but it's not the best
choice, as it still causes one RSB miss:
call foo -> call trampoline -> call foo-body ->
return foo-body -> return dummy -> return foo
The "return dummy" doesn't pair the "call trampoline", which can also
cause the RSB miss.
Therefore, we introduce the "jmp" mode for the bpf trampoline, as advised by
Alexei in [1]. The logic then becomes:
call foo -> jmp trampoline -> call foo-body ->
return foo-body -> return foo
As we can see above, the RSB is totally balanced after this series.
In this series, we introduce FTRACE_OPS_FL_JMP for ftrace to make it use the
"jmp" instruction instead of "call".
We also adjust bpf_arch_text_poke() to allow specifying both the old and the
new poke_type.
For the BPF_TRAMP_F_SHARE_IPMODIFY case, we fall back to the "call" mode, as
it needs to get the function address from the stack, which is not
supported in "jmp" mode.
Before this series, we have the following performance with the bpf
benchmark:
$ cd tools/testing/selftests/bpf
$ ./benchs/run_bench_trigger.sh
usermode-count : 890.171 ± 1.522M/s
kernel-count : 409.184 ± 0.330M/s
syscall-count : 26.792 ± 0.010M/s
fentry : 171.242 ± 0.322M/s
fexit : 80.544 ± 0.045M/s
fmodret : 78.301 ± 0.065M/s
rawtp : 192.906 ± 0.900M/s
tp : 81.883 ± 0.209M/s
kprobe : 52.029 ± 0.113M/s
kprobe-multi : 62.237 ± 0.060M/s
kprobe-multi-all: 4.761 ± 0.014M/s
kretprobe : 23.779 ± 0.046M/s
kretprobe-multi: 29.134 ± 0.012M/s
kretprobe-multi-all: 3.822 ± 0.003M/s
And after this series, we have the following performance:
usermode-count : 890.443 ± 0.307M/s
kernel-count : 416.139 ± 0.055M/s
syscall-count : 31.037 ± 0.813M/s
fentry : 169.549 ± 0.519M/s
fexit : 136.540 ± 0.518M/s
fmodret : 159.248 ± 0.188M/s
rawtp : 194.475 ± 0.144M/s
tp : 84.505 ± 0.041M/s
kprobe : 59.951 ± 0.071M/s
kprobe-multi : 63.153 ± 0.177M/s
kprobe-multi-all: 4.699 ± 0.012M/s
kretprobe : 23.740 ± 0.015M/s
kretprobe-multi: 29.301 ± 0.022M/s
kretprobe-multi-all: 3.869 ± 0.005M/s
As we can see above, the performance of fexit increases from 80.544M/s to
136.540M/s, and fmodret increases from 78.301M/s to 159.248M/s.
Link: https://lore.kernel.org/bpf/20251117034906.32036-1-dongml2@chinatelecom.cn/
Changes since v2:
* reject if the addr is already "jmp" in register_ftrace_direct() and
__modify_ftrace_direct() in the 1st patch.
* fix compile error in powerpc in the 5th patch.
* changes in the 6th patch:
- fix the compile error by wrapping the write to tr->fops->flags with
CONFIG_DYNAMIC_FTRACE_WITH_JMP
- reset BPF_TRAMP_F_SKIP_FRAME when the second try of modify_fentry in
bpf_trampoline_update()
Link: https://lore.kernel.org/bpf/20251114092450.172024-1-dongml2@chinatelecom.cn/
Changes since v1:
* change the bool parameter that we add to save_args() to "u32 flags"
* rename bpf_trampoline_need_jmp() to bpf_trampoline_use_jmp()
* add a new function parameter to bpf_arch_text_poke instead of introducing
bpf_arch_text_poke_type()
* rename bpf_text_poke to bpf_trampoline_update_fentry
* remove the BPF_TRAMP_F_JMPED and check the current mode with the origin
flags instead.
Link: https://lore.kernel.org/bpf/CAADnVQLX54sVi1oaHrkSiLqjJaJdm3TQjoVrgU-LZimK6iDcSA@mail.gmail.com/ [1]
Menglong Dong (6):
ftrace: introduce FTRACE_OPS_FL_JMP
x86/ftrace: implement DYNAMIC_FTRACE_WITH_JMP
bpf: fix the usage of BPF_TRAMP_F_SKIP_FRAME
bpf,x86: adjust the "jmp" mode for bpf trampoline
bpf: specify the old and new poke_type for bpf_arch_text_poke
bpf: implement "jmp" mode for trampoline
arch/arm64/net/bpf_jit_comp.c | 14 +++---
arch/loongarch/net/bpf_jit.c | 9 ++--
arch/powerpc/net/bpf_jit_comp.c | 10 +++--
arch/riscv/net/bpf_jit_comp64.c | 11 +++--
arch/s390/net/bpf_jit_comp.c | 7 +--
arch/x86/Kconfig | 1 +
arch/x86/kernel/ftrace.c | 7 ++-
arch/x86/kernel/ftrace_64.S | 12 +++++-
arch/x86/net/bpf_jit_comp.c | 55 ++++++++++++++----------
include/linux/bpf.h | 18 +++++++-
include/linux/ftrace.h | 33 ++++++++++++++
kernel/bpf/core.c | 5 ++-
kernel/bpf/trampoline.c | 76 ++++++++++++++++++++++++++-------
kernel/trace/Kconfig | 12 ++++++
kernel/trace/ftrace.c | 14 +++++-
15 files changed, 219 insertions(+), 65 deletions(-)
--
2.51.2
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH bpf-next v3 1/6] ftrace: introduce FTRACE_OPS_FL_JMP
2025-11-18 12:36 [PATCH bpf-next v3 0/6] bpf trampoline support "jmp" mode Menglong Dong
@ 2025-11-18 12:36 ` Menglong Dong
2025-11-18 13:25 ` bot+bpf-ci
2025-11-18 12:36 ` [PATCH bpf-next v3 2/6] x86/ftrace: implement DYNAMIC_FTRACE_WITH_JMP Menglong Dong
` (6 subsequent siblings)
7 siblings, 1 reply; 22+ messages in thread
From: Menglong Dong @ 2025-11-18 12:36 UTC (permalink / raw)
To: ast, rostedt
Cc: daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
yonghong.song, kpsingh, sdf, haoluo, jolsa, mhiramat,
mark.rutland, mathieu.desnoyers, jiang.biao, bpf, linux-kernel,
linux-trace-kernel
For now, the "nop" will be replaced with a "call" instruction when a
function is hooked by the ftrace. However, sometimes the "call" can break
the RSB and introduce extra overhead. Therefore, introduce the flag
FTRACE_OPS_FL_JMP, which indicate that the ftrace_ops should be called
with a "jmp" instead of "call". For now, it is only used by the direct
call case.
When a direct ftrace_ops is marked with FTRACE_OPS_FL_JMP, the last bit of
the ops->direct_call will be set to 1. Therefore, we can tell if we should
use "jmp" for the callback in ftrace_call_replace().
Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
---
v3:
- reject if the addr is already "jmp" in register_ftrace_direct() and
__modify_ftrace_direct()
---
include/linux/ftrace.h | 33 +++++++++++++++++++++++++++++++++
kernel/trace/Kconfig | 12 ++++++++++++
kernel/trace/ftrace.c | 17 ++++++++++++++++-
3 files changed, 61 insertions(+), 1 deletion(-)
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 07f8c309e432..015dd1049bea 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -359,6 +359,7 @@ enum {
FTRACE_OPS_FL_DIRECT = BIT(17),
FTRACE_OPS_FL_SUBOP = BIT(18),
FTRACE_OPS_FL_GRAPH = BIT(19),
+ FTRACE_OPS_FL_JMP = BIT(20),
};
#ifndef CONFIG_DYNAMIC_FTRACE_WITH_ARGS
@@ -577,6 +578,38 @@ static inline void arch_ftrace_set_direct_caller(struct ftrace_regs *fregs,
unsigned long addr) { }
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_JMP
+static inline bool ftrace_is_jmp(unsigned long addr)
+{
+ return addr & 1;
+}
+
+static inline unsigned long ftrace_jmp_set(unsigned long addr)
+{
+ return addr | 1UL;
+}
+
+static inline unsigned long ftrace_jmp_get(unsigned long addr)
+{
+ return addr & ~1UL;
+}
+#else
+static inline bool ftrace_is_jmp(unsigned long addr)
+{
+ return false;
+}
+
+static inline unsigned long ftrace_jmp_set(unsigned long addr)
+{
+ return addr;
+}
+
+static inline unsigned long ftrace_jmp_get(unsigned long addr)
+{
+ return addr;
+}
+#endif /* CONFIG_DYNAMIC_FTRACE_WITH_JMP */
+
#ifdef CONFIG_STACK_TRACER
int stack_trace_sysctl(const struct ctl_table *table, int write, void *buffer,
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index d2c79da81e4f..4661b9e606e0 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -80,6 +80,12 @@ config HAVE_DYNAMIC_FTRACE_NO_PATCHABLE
If the architecture generates __patchable_function_entries sections
but does not want them included in the ftrace locations.
+config HAVE_DYNAMIC_FTRACE_WITH_JMP
+ bool
+ help
+ If the architecture supports to replace the __fentry__ with a
+ "jmp" instruction.
+
config HAVE_SYSCALL_TRACEPOINTS
bool
help
@@ -330,6 +336,12 @@ config DYNAMIC_FTRACE_WITH_ARGS
depends on DYNAMIC_FTRACE
depends on HAVE_DYNAMIC_FTRACE_WITH_ARGS
+config DYNAMIC_FTRACE_WITH_JMP
+ def_bool y
+ depends on DYNAMIC_FTRACE
+ depends on DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+ depends on HAVE_DYNAMIC_FTRACE_WITH_JMP
+
config FPROBE
bool "Kernel Function Probe (fprobe)"
depends on HAVE_FUNCTION_GRAPH_FREGS && HAVE_FTRACE_GRAPH_FUNC
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 59cfacb8a5bb..bbb37c0f8c6c 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -5951,7 +5951,8 @@ static void remove_direct_functions_hash(struct ftrace_hash *hash, unsigned long
for (i = 0; i < size; i++) {
hlist_for_each_entry(entry, &hash->buckets[i], hlist) {
del = __ftrace_lookup_ip(direct_functions, entry->ip);
- if (del && del->direct == addr) {
+ if (del && ftrace_jmp_get(del->direct) ==
+ ftrace_jmp_get(addr)) {
remove_hash_entry(direct_functions, del);
kfree(del);
}
@@ -6016,8 +6017,15 @@ int register_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
if (ftrace_hash_empty(hash))
return -EINVAL;
+ /* This is a "raw" address, and this should never happen. */
+ if (WARN_ON_ONCE(ftrace_is_jmp(addr)))
+ return -EINVAL;
+
mutex_lock(&direct_mutex);
+ if (ops->flags & FTRACE_OPS_FL_JMP)
+ addr = ftrace_jmp_set(addr);
+
/* Make sure requested entries are not already registered.. */
size = 1 << hash->size_bits;
for (i = 0; i < size; i++) {
@@ -6138,6 +6146,13 @@ __modify_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
lockdep_assert_held_once(&direct_mutex);
+ /* This is a "raw" address, and this should never happen. */
+ if (WARN_ON_ONCE(ftrace_is_jmp(addr)))
+ return -EINVAL;
+
+ if (ops->flags & FTRACE_OPS_FL_JMP)
+ addr = ftrace_jmp_set(addr);
+
/* Enable the tmp_ops to have the same functions as the direct ops */
ftrace_ops_init(&tmp_ops);
tmp_ops.func_hash = ops->func_hash;
--
2.51.2
^ permalink raw reply related [flat|nested] 22+ messages in thread
* [PATCH bpf-next v3 2/6] x86/ftrace: implement DYNAMIC_FTRACE_WITH_JMP
2025-11-18 12:36 [PATCH bpf-next v3 0/6] bpf trampoline support "jmp" mode Menglong Dong
2025-11-18 12:36 ` [PATCH bpf-next v3 1/6] ftrace: introduce FTRACE_OPS_FL_JMP Menglong Dong
@ 2025-11-18 12:36 ` Menglong Dong
2025-11-18 22:01 ` Jiri Olsa
2025-11-18 12:36 ` [PATCH bpf-next v3 3/6] bpf: fix the usage of BPF_TRAMP_F_SKIP_FRAME Menglong Dong
` (5 subsequent siblings)
7 siblings, 1 reply; 22+ messages in thread
From: Menglong Dong @ 2025-11-18 12:36 UTC (permalink / raw)
To: ast, rostedt
Cc: daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
yonghong.song, kpsingh, sdf, haoluo, jolsa, mhiramat,
mark.rutland, mathieu.desnoyers, jiang.biao, bpf, linux-kernel,
linux-trace-kernel
Implement DYNAMIC_FTRACE_WITH_JMP for x86_64. In ftrace_call_replace(), use
JMP32_INSN_OPCODE instead of CALL_INSN_OPCODE if the address should be
reached with a "jmp".
Meanwhile, adjust the direct call handling in ftrace_regs_caller. The RSB is
balanced in "jmp" mode. Take the function "foo" as an example:
original_caller:
call foo -> foo:
call fentry -> fentry:
[do ftrace callbacks ]
move tramp_addr to stack
RET -> tramp_addr
tramp_addr:
[..]
call foo_body -> foo_body:
[..]
RET -> back to tramp_addr
[..]
RET -> back to original_caller
Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
---
arch/x86/Kconfig | 1 +
arch/x86/kernel/ftrace.c | 7 ++++++-
arch/x86/kernel/ftrace_64.S | 12 +++++++++++-
3 files changed, 18 insertions(+), 2 deletions(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index fa3b616af03a..462250a20311 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -230,6 +230,7 @@ config X86
select HAVE_DYNAMIC_FTRACE_WITH_ARGS if X86_64
select HAVE_FTRACE_REGS_HAVING_PT_REGS if X86_64
select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+ select HAVE_DYNAMIC_FTRACE_WITH_JMP if X86_64
select HAVE_SAMPLE_FTRACE_DIRECT if X86_64
select HAVE_SAMPLE_FTRACE_DIRECT_MULTI if X86_64
select HAVE_EBPF_JIT
diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 4450acec9390..0543b57f54ee 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -74,7 +74,12 @@ static const char *ftrace_call_replace(unsigned long ip, unsigned long addr)
* No need to translate into a callthunk. The trampoline does
* the depth accounting itself.
*/
- return text_gen_insn(CALL_INSN_OPCODE, (void *)ip, (void *)addr);
+ if (ftrace_is_jmp(addr)) {
+ addr = ftrace_jmp_get(addr);
+ return text_gen_insn(JMP32_INSN_OPCODE, (void *)ip, (void *)addr);
+ } else {
+ return text_gen_insn(CALL_INSN_OPCODE, (void *)ip, (void *)addr);
+ }
}
static int ftrace_verify_code(unsigned long ip, const char *old_code)
diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S
index 823dbdd0eb41..a132608265f6 100644
--- a/arch/x86/kernel/ftrace_64.S
+++ b/arch/x86/kernel/ftrace_64.S
@@ -285,8 +285,18 @@ SYM_INNER_LABEL(ftrace_regs_caller_end, SYM_L_GLOBAL)
ANNOTATE_NOENDBR
RET
+1:
+ testb $1, %al
+ jz 2f
+ andq $0xfffffffffffffffe, %rax
+ movq %rax, MCOUNT_REG_SIZE+8(%rsp)
+ restore_mcount_regs
+ /* Restore flags */
+ popfq
+ RET
+
/* Swap the flags with orig_rax */
-1: movq MCOUNT_REG_SIZE(%rsp), %rdi
+2: movq MCOUNT_REG_SIZE(%rsp), %rdi
movq %rdi, MCOUNT_REG_SIZE-8(%rsp)
movq %rax, MCOUNT_REG_SIZE(%rsp)
--
2.51.2
^ permalink raw reply related [flat|nested] 22+ messages in thread
* [PATCH bpf-next v3 3/6] bpf: fix the usage of BPF_TRAMP_F_SKIP_FRAME
2025-11-18 12:36 [PATCH bpf-next v3 0/6] bpf trampoline support "jmp" mode Menglong Dong
2025-11-18 12:36 ` [PATCH bpf-next v3 1/6] ftrace: introduce FTRACE_OPS_FL_JMP Menglong Dong
2025-11-18 12:36 ` [PATCH bpf-next v3 2/6] x86/ftrace: implement DYNAMIC_FTRACE_WITH_JMP Menglong Dong
@ 2025-11-18 12:36 ` Menglong Dong
2025-11-18 12:36 ` [PATCH bpf-next v3 4/6] bpf,x86: adjust the "jmp" mode for bpf trampoline Menglong Dong
` (4 subsequent siblings)
7 siblings, 0 replies; 22+ messages in thread
From: Menglong Dong @ 2025-11-18 12:36 UTC (permalink / raw)
To: ast, rostedt
Cc: daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
yonghong.song, kpsingh, sdf, haoluo, jolsa, mhiramat,
mark.rutland, mathieu.desnoyers, jiang.biao, bpf, linux-kernel,
linux-trace-kernel
Some places calculate orig_call by checking if BPF_TRAMP_F_SKIP_FRAME is
set. However, BPF_TRAMP_F_ORIG_STACK should be used for this purpose. Just
fix them.
Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
Acked-by: Alexei Starovoitov <ast@kernel.org>
---
arch/riscv/net/bpf_jit_comp64.c | 2 +-
arch/x86/net/bpf_jit_comp.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
index 45cbc7c6fe49..21c70ae3296b 100644
--- a/arch/riscv/net/bpf_jit_comp64.c
+++ b/arch/riscv/net/bpf_jit_comp64.c
@@ -1131,7 +1131,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
store_args(nr_arg_slots, args_off, ctx);
/* skip to actual body of traced function */
- if (flags & BPF_TRAMP_F_SKIP_FRAME)
+ if (flags & BPF_TRAMP_F_ORIG_STACK)
orig_call += RV_FENTRY_NINSNS * 4;
if (flags & BPF_TRAMP_F_CALL_ORIG) {
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 36a0d4db9f68..808d4343f6cf 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -3289,7 +3289,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
arg_stack_off = stack_size;
- if (flags & BPF_TRAMP_F_SKIP_FRAME) {
+ if (flags & BPF_TRAMP_F_CALL_ORIG) {
/* skip patched call instruction and point orig_call to actual
* body of the kernel function.
*/
--
2.51.2
^ permalink raw reply related [flat|nested] 22+ messages in thread
* [PATCH bpf-next v3 4/6] bpf,x86: adjust the "jmp" mode for bpf trampoline
2025-11-18 12:36 [PATCH bpf-next v3 0/6] bpf trampoline support "jmp" mode Menglong Dong
` (2 preceding siblings ...)
2025-11-18 12:36 ` [PATCH bpf-next v3 3/6] bpf: fix the usage of BPF_TRAMP_F_SKIP_FRAME Menglong Dong
@ 2025-11-18 12:36 ` Menglong Dong
2025-11-18 12:36 ` [PATCH bpf-next v3 5/6] bpf: specify the old and new poke_type for bpf_arch_text_poke Menglong Dong
` (3 subsequent siblings)
7 siblings, 0 replies; 22+ messages in thread
From: Menglong Dong @ 2025-11-18 12:36 UTC (permalink / raw)
To: ast, rostedt
Cc: daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
yonghong.song, kpsingh, sdf, haoluo, jolsa, mhiramat,
mark.rutland, mathieu.desnoyers, jiang.biao, bpf, linux-kernel,
linux-trace-kernel
In the origin call case, if BPF_TRAMP_F_SKIP_FRAME is not set, it means
that the trampoline is not reached with a "call", but with a "jmp".
Introduce the function bpf_trampoline_use_jmp() to check if the trampoline
is in "jmp" mode.
Make some adjustments to the x86_64 JIT for the "jmp" mode. The main
adjustment is for the stack parameter passing case, as the stack alignment
logic changes in "jmp" mode without the "rip" on the stack. What's more,
the location of the parameters on the stack also changes.
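As a worked example of the alignment tweak (a sketch that restates the hunk
below; the numbers assume stack_size is already 8-byte aligned, as the
surrounding code guarantees):

  /* With "call", the return address (rip) is already on the stack:
   *   stack_size % 16 == 8  ->  add 0
   *   stack_size % 16 == 0  ->  add 8
   * With "jmp" there is no rip, so the adjustment is inverted:
   *   stack_size % 16 == 8  ->  add 8
   *   stack_size % 16 == 0  ->  add 0
   * e.g. stack_size == 24 gets +0 in "call" mode but +8 in "jmp" mode.
   */
  if (bpf_trampoline_use_jmp(flags))
          stack_size += (stack_size % 16) ? 8 : 0;   /* no rip */
  else
          stack_size += (stack_size % 16) ? 0 : 8;   /* rip on stack */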
Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
---
v2:
- rename bpf_trampoline_need_jmp() to bpf_trampoline_use_jmp()
---
arch/x86/net/bpf_jit_comp.c | 16 +++++++++++-----
include/linux/bpf.h | 12 ++++++++++++
2 files changed, 23 insertions(+), 5 deletions(-)
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 808d4343f6cf..632a83381c2d 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -2847,9 +2847,10 @@ static int get_nr_used_regs(const struct btf_func_model *m)
}
static void save_args(const struct btf_func_model *m, u8 **prog,
- int stack_size, bool for_call_origin)
+ int stack_size, bool for_call_origin, u32 flags)
{
int arg_regs, first_off = 0, nr_regs = 0, nr_stack_slots = 0;
+ bool use_jmp = bpf_trampoline_use_jmp(flags);
int i, j;
/* Store function arguments to stack.
@@ -2890,7 +2891,7 @@ static void save_args(const struct btf_func_model *m, u8 **prog,
*/
for (j = 0; j < arg_regs; j++) {
emit_ldx(prog, BPF_DW, BPF_REG_0, BPF_REG_FP,
- nr_stack_slots * 8 + 0x18);
+ nr_stack_slots * 8 + 16 + (!use_jmp) * 8);
emit_stx(prog, BPF_DW, BPF_REG_FP, BPF_REG_0,
-stack_size);
@@ -3284,7 +3285,12 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
* should be 16-byte aligned. Following code depend on
* that stack_size is already 8-byte aligned.
*/
- stack_size += (stack_size % 16) ? 0 : 8;
+ if (bpf_trampoline_use_jmp(flags)) {
+ /* no rip in the "jmp" case */
+ stack_size += (stack_size % 16) ? 8 : 0;
+ } else {
+ stack_size += (stack_size % 16) ? 0 : 8;
+ }
}
arg_stack_off = stack_size;
@@ -3344,7 +3350,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -ip_off);
}
- save_args(m, &prog, regs_off, false);
+ save_args(m, &prog, regs_off, false, flags);
if (flags & BPF_TRAMP_F_CALL_ORIG) {
/* arg1: mov rdi, im */
@@ -3377,7 +3383,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
if (flags & BPF_TRAMP_F_CALL_ORIG) {
restore_regs(m, &prog, regs_off);
- save_args(m, &prog, arg_stack_off, true);
+ save_args(m, &prog, arg_stack_off, true, flags);
if (flags & BPF_TRAMP_F_TAIL_CALL_CTX) {
/* Before calling the original function, load the
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 09d5dc541d1c..4187b7578580 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1264,6 +1264,18 @@ typedef void (*bpf_trampoline_exit_t)(struct bpf_prog *prog, u64 start,
bpf_trampoline_enter_t bpf_trampoline_enter(const struct bpf_prog *prog);
bpf_trampoline_exit_t bpf_trampoline_exit(const struct bpf_prog *prog);
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_JMP
+static inline bool bpf_trampoline_use_jmp(u64 flags)
+{
+ return flags & BPF_TRAMP_F_CALL_ORIG && !(flags & BPF_TRAMP_F_SKIP_FRAME);
+}
+#else
+static inline bool bpf_trampoline_use_jmp(u64 flags)
+{
+ return false;
+}
+#endif
+
struct bpf_ksym {
unsigned long start;
unsigned long end;
--
2.51.2
^ permalink raw reply related [flat|nested] 22+ messages in thread
* [PATCH bpf-next v3 5/6] bpf: specify the old and new poke_type for bpf_arch_text_poke
2025-11-18 12:36 [PATCH bpf-next v3 0/6] bpf trampoline support "jmp" mode Menglong Dong
` (3 preceding siblings ...)
2025-11-18 12:36 ` [PATCH bpf-next v3 4/6] bpf,x86: adjust the "jmp" mode for bpf trampoline Menglong Dong
@ 2025-11-18 12:36 ` Menglong Dong
2025-11-18 12:36 ` [PATCH bpf-next v3 6/6] bpf: implement "jmp" mode for trampoline Menglong Dong
` (2 subsequent siblings)
7 siblings, 0 replies; 22+ messages in thread
From: Menglong Dong @ 2025-11-18 12:36 UTC (permalink / raw)
To: ast, rostedt
Cc: daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
yonghong.song, kpsingh, sdf, haoluo, jolsa, mhiramat,
mark.rutland, mathieu.desnoyers, jiang.biao, bpf, linux-kernel,
linux-trace-kernel
In the original logic, bpf_arch_text_poke() assumes that the old and new
instructions have the same opcode. However, they can have different opcodes
if we want to replace a "call" insn with a "jmp" insn.
Therefore, add the new function parameter "old_t" along with "new_t", which
indicate the old and new poke types. Meanwhile, adjust the implementation of
bpf_arch_text_poke() for all the archs.
"BPF_MOD_NOP" is added to make the code more readable. In
bpf_arch_text_poke(), we still check whether the new and old addresses are
NULL to determine if a nop insn should be used, which I think is safer.
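A usage sketch of the new signature (hypothetical call sites, not taken from
this patch): attach a trampoline at a nop'd fentry site, then later convert
the "call" into a "jmp" without changing the target:

  /* nop -> call: first attachment of the trampoline */
  err = bpf_arch_text_poke(ip, BPF_MOD_NOP, BPF_MOD_CALL, NULL, tramp);

  /* call -> jmp: same target address, only the opcode changes */
  err = bpf_arch_text_poke(ip, BPF_MOD_CALL, BPF_MOD_JUMP, tramp, tramp);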
Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
---
v3:
- fix compile error in powerpc
v2:
- add a new function parameter to bpf_arch_text_poke instead of introducing
bpf_arch_text_poke_type()
---
arch/arm64/net/bpf_jit_comp.c | 14 ++++++-------
arch/loongarch/net/bpf_jit.c | 9 +++++---
arch/powerpc/net/bpf_jit_comp.c | 10 +++++----
arch/riscv/net/bpf_jit_comp64.c | 9 +++++---
arch/s390/net/bpf_jit_comp.c | 7 ++++---
arch/x86/net/bpf_jit_comp.c | 37 +++++++++++++++++++--------------
include/linux/bpf.h | 6 ++++--
kernel/bpf/core.c | 5 +++--
kernel/bpf/trampoline.c | 20 ++++++++++++------
9 files changed, 71 insertions(+), 46 deletions(-)
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 0c9a50a1e73e..c64df579b7e0 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -2923,8 +2923,9 @@ static int gen_branch_or_nop(enum aarch64_insn_branch_type type, void *ip,
* The dummy_tramp is used to prevent another CPU from jumping to unknown
* locations during the patching process, making the patching process easier.
*/
-int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type poke_type,
- void *old_addr, void *new_addr)
+int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type old_t,
+ enum bpf_text_poke_type new_t, void *old_addr,
+ void *new_addr)
{
int ret;
u32 old_insn;
@@ -2968,14 +2969,13 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type poke_type,
!poking_bpf_entry))
return -EINVAL;
- if (poke_type == BPF_MOD_CALL)
- branch_type = AARCH64_INSN_BRANCH_LINK;
- else
- branch_type = AARCH64_INSN_BRANCH_NOLINK;
-
+ branch_type = old_t == BPF_MOD_CALL ? AARCH64_INSN_BRANCH_LINK :
+ AARCH64_INSN_BRANCH_NOLINK;
if (gen_branch_or_nop(branch_type, ip, old_addr, plt, &old_insn) < 0)
return -EFAULT;
+ branch_type = new_t == BPF_MOD_CALL ? AARCH64_INSN_BRANCH_LINK :
+ AARCH64_INSN_BRANCH_NOLINK;
if (gen_branch_or_nop(branch_type, ip, new_addr, plt, &new_insn) < 0)
return -EFAULT;
diff --git a/arch/loongarch/net/bpf_jit.c b/arch/loongarch/net/bpf_jit.c
index cbe53d0b7fb0..2e7dacbbef5c 100644
--- a/arch/loongarch/net/bpf_jit.c
+++ b/arch/loongarch/net/bpf_jit.c
@@ -1284,11 +1284,12 @@ void *bpf_arch_text_copy(void *dst, void *src, size_t len)
return ret ? ERR_PTR(-EINVAL) : dst;
}
-int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type poke_type,
- void *old_addr, void *new_addr)
+int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type old_t,
+ enum bpf_text_poke_type new_t, void *old_addr,
+ void *new_addr)
{
int ret;
- bool is_call = (poke_type == BPF_MOD_CALL);
+ bool is_call;
u32 old_insns[LOONGARCH_LONG_JUMP_NINSNS] = {[0 ... 4] = INSN_NOP};
u32 new_insns[LOONGARCH_LONG_JUMP_NINSNS] = {[0 ... 4] = INSN_NOP};
@@ -1298,6 +1299,7 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type poke_type,
if (!is_bpf_text_address((unsigned long)ip))
return -ENOTSUPP;
+ is_call = old_t == BPF_MOD_CALL;
ret = emit_jump_or_nops(old_addr, ip, old_insns, is_call);
if (ret)
return ret;
@@ -1305,6 +1307,7 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type poke_type,
if (memcmp(ip, old_insns, LOONGARCH_LONG_JUMP_NBYTES))
return -EFAULT;
+ is_call = new_t == BPF_MOD_CALL;
ret = emit_jump_or_nops(new_addr, ip, new_insns, is_call);
if (ret)
return ret;
diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index 88ad5ba7b87f..5e976730b2f5 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -1107,8 +1107,9 @@ static void do_isync(void *info __maybe_unused)
* execute isync (or some CSI) so that they don't go back into the
* trampoline again.
*/
-int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type poke_type,
- void *old_addr, void *new_addr)
+int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type old_t,
+ enum bpf_text_poke_type new_t, void *old_addr,
+ void *new_addr)
{
unsigned long bpf_func, bpf_func_end, size, offset;
ppc_inst_t old_inst, new_inst;
@@ -1119,7 +1120,6 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type poke_type,
return -EOPNOTSUPP;
bpf_func = (unsigned long)ip;
- branch_flags = poke_type == BPF_MOD_CALL ? BRANCH_SET_LINK : 0;
/* We currently only support poking bpf programs */
if (!__bpf_address_lookup(bpf_func, &size, &offset, name)) {
@@ -1132,7 +1132,7 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type poke_type,
* an unconditional branch instruction at im->ip_after_call
*/
if (offset) {
- if (poke_type != BPF_MOD_JUMP) {
+ if (old_t == BPF_MOD_CALL || new_t == BPF_MOD_CALL) {
pr_err("%s (0x%lx): calls are not supported in bpf prog body\n", __func__,
bpf_func);
return -EOPNOTSUPP;
@@ -1166,6 +1166,7 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type poke_type,
}
old_inst = ppc_inst(PPC_RAW_NOP());
+ branch_flags = old_t == BPF_MOD_CALL ? BRANCH_SET_LINK : 0;
if (old_addr) {
if (is_offset_in_branch_range(ip - old_addr))
create_branch(&old_inst, ip, (unsigned long)old_addr, branch_flags);
@@ -1174,6 +1175,7 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type poke_type,
branch_flags);
}
new_inst = ppc_inst(PPC_RAW_NOP());
+ branch_flags = new_t == BPF_MOD_CALL ? BRANCH_SET_LINK : 0;
if (new_addr) {
if (is_offset_in_branch_range(ip - new_addr))
create_branch(&new_inst, ip, (unsigned long)new_addr, branch_flags);
diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
index 21c70ae3296b..5f9457e910e8 100644
--- a/arch/riscv/net/bpf_jit_comp64.c
+++ b/arch/riscv/net/bpf_jit_comp64.c
@@ -852,17 +852,19 @@ static int gen_jump_or_nops(void *target, void *ip, u32 *insns, bool is_call)
return emit_jump_and_link(is_call ? RV_REG_T0 : RV_REG_ZERO, rvoff, false, &ctx);
}
-int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type poke_type,
- void *old_addr, void *new_addr)
+int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type old_t,
+ enum bpf_text_poke_type new_t, void *old_addr,
+ void *new_addr)
{
u32 old_insns[RV_FENTRY_NINSNS], new_insns[RV_FENTRY_NINSNS];
- bool is_call = poke_type == BPF_MOD_CALL;
+ bool is_call;
int ret;
if (!is_kernel_text((unsigned long)ip) &&
!is_bpf_text_address((unsigned long)ip))
return -ENOTSUPP;
+ is_call = old_t == BPF_MOD_CALL;
ret = gen_jump_or_nops(old_addr, ip, old_insns, is_call);
if (ret)
return ret;
@@ -870,6 +872,7 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type poke_type,
if (memcmp(ip, old_insns, RV_FENTRY_NBYTES))
return -EFAULT;
+ is_call = new_t == BPF_MOD_CALL;
ret = gen_jump_or_nops(new_addr, ip, new_insns, is_call);
if (ret)
return ret;
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index cf461d76e9da..a2072cabba76 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -2413,8 +2413,9 @@ bool bpf_jit_supports_far_kfunc_call(void)
return true;
}
-int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
- void *old_addr, void *new_addr)
+int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type old_t,
+ enum bpf_text_poke_type new_t, void *old_addr,
+ void *new_addr)
{
struct bpf_plt expected_plt, current_plt, new_plt, *plt;
struct {
@@ -2431,7 +2432,7 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
if (insn.opc != (0xc004 | (old_addr ? 0xf0 : 0)))
return -EINVAL;
- if (t == BPF_MOD_JUMP &&
+ if ((new_t == BPF_MOD_JUMP || old_t == BPF_MOD_JUMP) &&
insn.disp == ((char *)new_addr - (char *)ip) >> 1) {
/*
* The branch already points to the destination,
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 632a83381c2d..b69dc7194e2c 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -597,7 +597,8 @@ static int emit_jump(u8 **pprog, void *func, void *ip)
return emit_patch(pprog, func, ip, 0xE9);
}
-static int __bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
+static int __bpf_arch_text_poke(void *ip, enum bpf_text_poke_type old_t,
+ enum bpf_text_poke_type new_t,
void *old_addr, void *new_addr)
{
const u8 *nop_insn = x86_nops[5];
@@ -607,9 +608,9 @@ static int __bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
int ret;
memcpy(old_insn, nop_insn, X86_PATCH_SIZE);
- if (old_addr) {
+ if (old_t != BPF_MOD_NOP && old_addr) {
prog = old_insn;
- ret = t == BPF_MOD_CALL ?
+ ret = old_t == BPF_MOD_CALL ?
emit_call(&prog, old_addr, ip) :
emit_jump(&prog, old_addr, ip);
if (ret)
@@ -617,9 +618,9 @@ static int __bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
}
memcpy(new_insn, nop_insn, X86_PATCH_SIZE);
- if (new_addr) {
+ if (new_t != BPF_MOD_NOP && new_addr) {
prog = new_insn;
- ret = t == BPF_MOD_CALL ?
+ ret = new_t == BPF_MOD_CALL ?
emit_call(&prog, new_addr, ip) :
emit_jump(&prog, new_addr, ip);
if (ret)
@@ -640,8 +641,9 @@ static int __bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
return ret;
}
-int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
- void *old_addr, void *new_addr)
+int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type old_t,
+ enum bpf_text_poke_type new_t, void *old_addr,
+ void *new_addr)
{
if (!is_kernel_text((long)ip) &&
!is_bpf_text_address((long)ip))
@@ -655,7 +657,7 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
if (is_endbr(ip))
ip += ENDBR_INSN_SIZE;
- return __bpf_arch_text_poke(ip, t, old_addr, new_addr);
+ return __bpf_arch_text_poke(ip, old_t, new_t, old_addr, new_addr);
}
#define EMIT_LFENCE() EMIT3(0x0F, 0xAE, 0xE8)
@@ -897,12 +899,13 @@ static void bpf_tail_call_direct_fixup(struct bpf_prog *prog)
target = array->ptrs[poke->tail_call.key];
if (target) {
ret = __bpf_arch_text_poke(poke->tailcall_target,
- BPF_MOD_JUMP, NULL,
+ BPF_MOD_NOP, BPF_MOD_JUMP,
+ NULL,
(u8 *)target->bpf_func +
poke->adj_off);
BUG_ON(ret < 0);
ret = __bpf_arch_text_poke(poke->tailcall_bypass,
- BPF_MOD_JUMP,
+ BPF_MOD_JUMP, BPF_MOD_NOP,
(u8 *)poke->tailcall_target +
X86_PATCH_SIZE, NULL);
BUG_ON(ret < 0);
@@ -3985,6 +3988,7 @@ void bpf_arch_poke_desc_update(struct bpf_jit_poke_descriptor *poke,
struct bpf_prog *new, struct bpf_prog *old)
{
u8 *old_addr, *new_addr, *old_bypass_addr;
+ enum bpf_text_poke_type t;
int ret;
old_bypass_addr = old ? NULL : poke->bypass_addr;
@@ -3997,21 +4001,22 @@ void bpf_arch_poke_desc_update(struct bpf_jit_poke_descriptor *poke,
* the kallsyms check.
*/
if (new) {
+ t = old_addr ? BPF_MOD_JUMP : BPF_MOD_NOP;
ret = __bpf_arch_text_poke(poke->tailcall_target,
- BPF_MOD_JUMP,
+ t, BPF_MOD_JUMP,
old_addr, new_addr);
BUG_ON(ret < 0);
if (!old) {
ret = __bpf_arch_text_poke(poke->tailcall_bypass,
- BPF_MOD_JUMP,
+ BPF_MOD_JUMP, BPF_MOD_NOP,
poke->bypass_addr,
NULL);
BUG_ON(ret < 0);
}
} else {
+ t = old_bypass_addr ? BPF_MOD_JUMP : BPF_MOD_NOP;
ret = __bpf_arch_text_poke(poke->tailcall_bypass,
- BPF_MOD_JUMP,
- old_bypass_addr,
+ t, BPF_MOD_JUMP, old_bypass_addr,
poke->bypass_addr);
BUG_ON(ret < 0);
/* let other CPUs finish the execution of program
@@ -4020,9 +4025,9 @@ void bpf_arch_poke_desc_update(struct bpf_jit_poke_descriptor *poke,
*/
if (!ret)
synchronize_rcu();
+ t = old_addr ? BPF_MOD_JUMP : BPF_MOD_NOP;
ret = __bpf_arch_text_poke(poke->tailcall_target,
- BPF_MOD_JUMP,
- old_addr, NULL);
+ t, BPF_MOD_NOP, old_addr, NULL);
BUG_ON(ret < 0);
}
}
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 4187b7578580..d5e2af29c7c8 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -3708,12 +3708,14 @@ static inline u32 bpf_xdp_sock_convert_ctx_access(enum bpf_access_type type,
#endif /* CONFIG_INET */
enum bpf_text_poke_type {
+ BPF_MOD_NOP,
BPF_MOD_CALL,
BPF_MOD_JUMP,
};
-int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
- void *addr1, void *addr2);
+int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type old_t,
+ enum bpf_text_poke_type new_t, void *old_addr,
+ void *new_addr);
void bpf_arch_poke_desc_update(struct bpf_jit_poke_descriptor *poke,
struct bpf_prog *new, struct bpf_prog *old);
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index ef4448f18aad..c8ae6ab31651 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -3150,8 +3150,9 @@ int __weak skb_copy_bits(const struct sk_buff *skb, int offset, void *to,
return -EFAULT;
}
-int __weak bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
- void *addr1, void *addr2)
+int __weak bpf_arch_text_poke(void *ip, enum bpf_text_poke_type old_t,
+ enum bpf_text_poke_type new_t, void *old_addr,
+ void *new_addr)
{
return -ENOTSUPP;
}
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 04104397c432..0230ad19533e 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -183,7 +183,8 @@ static int unregister_fentry(struct bpf_trampoline *tr, void *old_addr)
if (tr->func.ftrace_managed)
ret = unregister_ftrace_direct(tr->fops, (long)old_addr, false);
else
- ret = bpf_arch_text_poke(ip, BPF_MOD_CALL, old_addr, NULL);
+ ret = bpf_arch_text_poke(ip, BPF_MOD_CALL, BPF_MOD_NOP,
+ old_addr, NULL);
return ret;
}
@@ -200,7 +201,10 @@ static int modify_fentry(struct bpf_trampoline *tr, void *old_addr, void *new_ad
else
ret = modify_ftrace_direct_nolock(tr->fops, (long)new_addr);
} else {
- ret = bpf_arch_text_poke(ip, BPF_MOD_CALL, old_addr, new_addr);
+ ret = bpf_arch_text_poke(ip,
+ old_addr ? BPF_MOD_CALL : BPF_MOD_NOP,
+ new_addr ? BPF_MOD_CALL : BPF_MOD_NOP,
+ old_addr, new_addr);
}
return ret;
}
@@ -225,7 +229,8 @@ static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
return ret;
ret = register_ftrace_direct(tr->fops, (long)new_addr);
} else {
- ret = bpf_arch_text_poke(ip, BPF_MOD_CALL, NULL, new_addr);
+ ret = bpf_arch_text_poke(ip, BPF_MOD_NOP, BPF_MOD_CALL,
+ NULL, new_addr);
}
return ret;
@@ -336,8 +341,9 @@ static void bpf_tramp_image_put(struct bpf_tramp_image *im)
* call_rcu_tasks() is not necessary.
*/
if (im->ip_after_call) {
- int err = bpf_arch_text_poke(im->ip_after_call, BPF_MOD_JUMP,
- NULL, im->ip_epilogue);
+ int err = bpf_arch_text_poke(im->ip_after_call, BPF_MOD_NOP,
+ BPF_MOD_JUMP, NULL,
+ im->ip_epilogue);
WARN_ON(err);
if (IS_ENABLED(CONFIG_TASKS_RCU))
call_rcu_tasks(&im->rcu, __bpf_tramp_image_put_rcu_tasks);
@@ -570,7 +576,8 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link,
if (err)
return err;
tr->extension_prog = link->link.prog;
- return bpf_arch_text_poke(tr->func.addr, BPF_MOD_JUMP, NULL,
+ return bpf_arch_text_poke(tr->func.addr, BPF_MOD_NOP,
+ BPF_MOD_JUMP, NULL,
link->link.prog->bpf_func);
}
if (cnt >= BPF_MAX_TRAMP_LINKS)
@@ -618,6 +625,7 @@ static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
if (kind == BPF_TRAMP_REPLACE) {
WARN_ON_ONCE(!tr->extension_prog);
err = bpf_arch_text_poke(tr->func.addr, BPF_MOD_JUMP,
+ BPF_MOD_NOP,
tr->extension_prog->bpf_func, NULL);
tr->extension_prog = NULL;
guard(mutex)(&tgt_prog->aux->ext_mutex);
--
2.51.2
^ permalink raw reply related [flat|nested] 22+ messages in thread
* [PATCH bpf-next v3 6/6] bpf: implement "jmp" mode for trampoline
2025-11-18 12:36 [PATCH bpf-next v3 0/6] bpf trampoline support "jmp" mode Menglong Dong
` (4 preceding siblings ...)
2025-11-18 12:36 ` [PATCH bpf-next v3 5/6] bpf: specify the old and new poke_type for bpf_arch_text_poke Menglong Dong
@ 2025-11-18 12:36 ` Menglong Dong
2025-11-19 0:59 ` Alexei Starovoitov
2025-11-19 0:28 ` [PATCH bpf-next v3 0/6] bpf trampoline support "jmp" mode Alexei Starovoitov
2025-11-24 18:00 ` patchwork-bot+netdevbpf
7 siblings, 1 reply; 22+ messages in thread
From: Menglong Dong @ 2025-11-18 12:36 UTC (permalink / raw)
To: ast, rostedt
Cc: daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
yonghong.song, kpsingh, sdf, haoluo, jolsa, mhiramat,
mark.rutland, mathieu.desnoyers, jiang.biao, bpf, linux-kernel,
linux-trace-kernel
Implement the "jmp" mode for the bpf trampoline. For the ftrace_managed
case, we need only to set the FTRACE_OPS_FL_JMP on the tr->fops if "jmp"
is needed.
For the bpf poke case, we will check the origin poke type with the
"origin_flags", and current poke type with "tr->flags". The function
bpf_trampoline_update_fentry() is introduced to do the job.
The "jmp" mode will only be enabled with CONFIG_DYNAMIC_FTRACE_WITH_JMP
enabled and BPF_TRAMP_F_SHARE_IPMODIFY is not set. With
BPF_TRAMP_F_SHARE_IPMODIFY, we need to get the origin call ip from the
stack, so we can't use the "jmp" mode.
Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
---
v3:
- wrap the write to tr->fops->flags with CONFIG_DYNAMIC_FTRACE_WITH_JMP
- reset BPF_TRAMP_F_SKIP_FRAME when the second try of modify_fentry in
bpf_trampoline_update()
v2:
- rename bpf_text_poke to bpf_trampoline_update_fentry
- remove the BPF_TRAMP_F_JMPED and check the current mode with the origin
flags instead.
---
kernel/bpf/trampoline.c | 75 +++++++++++++++++++++++++++++++----------
1 file changed, 58 insertions(+), 17 deletions(-)
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 0230ad19533e..976d89011b15 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -175,24 +175,42 @@ static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
return tr;
}
-static int unregister_fentry(struct bpf_trampoline *tr, void *old_addr)
+static int bpf_trampoline_update_fentry(struct bpf_trampoline *tr, u32 orig_flags,
+ void *old_addr, void *new_addr)
{
+ enum bpf_text_poke_type new_t = BPF_MOD_CALL, old_t = BPF_MOD_CALL;
void *ip = tr->func.addr;
+
+ if (!new_addr)
+ new_t = BPF_MOD_NOP;
+ else if (bpf_trampoline_use_jmp(tr->flags))
+ new_t = BPF_MOD_JUMP;
+
+ if (!old_addr)
+ old_t = BPF_MOD_NOP;
+ else if (bpf_trampoline_use_jmp(orig_flags))
+ old_t = BPF_MOD_JUMP;
+
+ return bpf_arch_text_poke(ip, old_t, new_t, old_addr, new_addr);
+}
+
+static int unregister_fentry(struct bpf_trampoline *tr, u32 orig_flags,
+ void *old_addr)
+{
int ret;
if (tr->func.ftrace_managed)
ret = unregister_ftrace_direct(tr->fops, (long)old_addr, false);
else
- ret = bpf_arch_text_poke(ip, BPF_MOD_CALL, BPF_MOD_NOP,
- old_addr, NULL);
+ ret = bpf_trampoline_update_fentry(tr, orig_flags, old_addr, NULL);
return ret;
}
-static int modify_fentry(struct bpf_trampoline *tr, void *old_addr, void *new_addr,
+static int modify_fentry(struct bpf_trampoline *tr, u32 orig_flags,
+ void *old_addr, void *new_addr,
bool lock_direct_mutex)
{
- void *ip = tr->func.addr;
int ret;
if (tr->func.ftrace_managed) {
@@ -201,10 +219,8 @@ static int modify_fentry(struct bpf_trampoline *tr, void *old_addr, void *new_ad
else
ret = modify_ftrace_direct_nolock(tr->fops, (long)new_addr);
} else {
- ret = bpf_arch_text_poke(ip,
- old_addr ? BPF_MOD_CALL : BPF_MOD_NOP,
- new_addr ? BPF_MOD_CALL : BPF_MOD_NOP,
- old_addr, new_addr);
+ ret = bpf_trampoline_update_fentry(tr, orig_flags, old_addr,
+ new_addr);
}
return ret;
}
@@ -229,8 +245,7 @@ static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
return ret;
ret = register_ftrace_direct(tr->fops, (long)new_addr);
} else {
- ret = bpf_arch_text_poke(ip, BPF_MOD_NOP, BPF_MOD_CALL,
- NULL, new_addr);
+ ret = bpf_trampoline_update_fentry(tr, 0, NULL, new_addr);
}
return ret;
@@ -416,7 +431,7 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
return PTR_ERR(tlinks);
if (total == 0) {
- err = unregister_fentry(tr, tr->cur_image->image);
+ err = unregister_fentry(tr, orig_flags, tr->cur_image->image);
bpf_tramp_image_put(tr->cur_image);
tr->cur_image = NULL;
goto out;
@@ -440,9 +455,20 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
again:
- if ((tr->flags & BPF_TRAMP_F_SHARE_IPMODIFY) &&
- (tr->flags & BPF_TRAMP_F_CALL_ORIG))
- tr->flags |= BPF_TRAMP_F_ORIG_STACK;
+ if (tr->flags & BPF_TRAMP_F_CALL_ORIG) {
+ if (tr->flags & BPF_TRAMP_F_SHARE_IPMODIFY) {
+ /* The BPF_TRAMP_F_SKIP_FRAME can be cleared in the
+ * first try, reset it in the second try.
+ */
+ tr->flags |= BPF_TRAMP_F_ORIG_STACK | BPF_TRAMP_F_SKIP_FRAME;
+ } else if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_JMP)) {
+ /* Use "jmp" instead of "call" for the trampoline
+ * in the origin call case, and we don't need to
+ * skip the frame.
+ */
+ tr->flags &= ~BPF_TRAMP_F_SKIP_FRAME;
+ }
+ }
#endif
size = arch_bpf_trampoline_size(&tr->func.model, tr->flags,
@@ -473,10 +499,18 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
if (err)
goto out_free;
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_JMP
+ if (bpf_trampoline_use_jmp(tr->flags))
+ tr->fops->flags |= FTRACE_OPS_FL_JMP;
+ else
+ tr->fops->flags &= ~FTRACE_OPS_FL_JMP;
+#endif
+
WARN_ON(tr->cur_image && total == 0);
if (tr->cur_image)
/* progs already running at this address */
- err = modify_fentry(tr, tr->cur_image->image, im->image, lock_direct_mutex);
+ err = modify_fentry(tr, orig_flags, tr->cur_image->image,
+ im->image, lock_direct_mutex);
else
/* first time registering */
err = register_fentry(tr, im->image);
@@ -499,8 +533,15 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
tr->cur_image = im;
out:
/* If any error happens, restore previous flags */
- if (err)
+ if (err) {
tr->flags = orig_flags;
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_JMP
+ if (bpf_trampoline_use_jmp(tr->flags))
+ tr->fops->flags |= FTRACE_OPS_FL_JMP;
+ else
+ tr->fops->flags &= ~FTRACE_OPS_FL_JMP;
+#endif
+ }
kfree(tlinks);
return err;
--
2.51.2
^ permalink raw reply related [flat|nested] 22+ messages in thread
* Re: [PATCH bpf-next v3 1/6] ftrace: introduce FTRACE_OPS_FL_JMP
2025-11-18 12:36 ` [PATCH bpf-next v3 1/6] ftrace: introduce FTRACE_OPS_FL_JMP Menglong Dong
@ 2025-11-18 13:25 ` bot+bpf-ci
2025-11-18 13:51 ` Steven Rostedt
0 siblings, 1 reply; 22+ messages in thread
From: bot+bpf-ci @ 2025-11-18 13:25 UTC (permalink / raw)
To: menglong8.dong, ast, rostedt
Cc: daniel, john.fastabend, andrii, martin.lau, eddyz87, song,
yonghong.song, kpsingh, sdf, haoluo, jolsa, mhiramat,
mark.rutland, mathieu.desnoyers, jiang.biao, bpf, linux-kernel,
linux-trace-kernel, ast, andrii, daniel, martin.lau, eddyz87,
yonghong.song, clm, ihor.solodrai
[-- Attachment #1: Type: text/plain, Size: 5151 bytes --]
> diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
> index 07f8c309e..015dd1049 100644
> --- a/include/linux/ftrace.h
> +++ b/include/linux/ftrace.h
[ ... ]
> @@ -577,6 +578,38 @@ static inline void arch_ftrace_set_direct_caller(struct ftrace_regs *fregs,
> unsigned long addr) { }
> #endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
>
> +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_JMP
> +static inline bool ftrace_is_jmp(unsigned long addr)
> +{
> + return addr & 1;
> +}
> +
> +static inline unsigned long ftrace_jmp_set(unsigned long addr)
> +{
> + return addr | 1UL;
> +}
> +
> +static inline unsigned long ftrace_jmp_get(unsigned long addr)
> +{
> + return addr & ~1UL;
> +}
> +#else
> +static inline bool ftrace_is_jmp(unsigned long addr)
> +{
> + return false;
> +}
> +
> +static inline unsigned long ftrace_jmp_set(unsigned long addr)
> +{
> + return addr;
> +}
> +
> +static inline unsigned long ftrace_jmp_get(unsigned long addr)
> +{
> + return addr;
> +}
> +#endif /* CONFIG_DYNAMIC_FTRACE_WITH_JMP */
[ ... ]
> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index 59cfacb8a..bbb37c0f8 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
[ ... ]
> @@ -5951,7 +5951,8 @@ static void remove_direct_functions_hash(struct ftrace_hash *hash, unsigned long
> for (i = 0; i < size; i++) {
> hlist_for_each_entry(entry, &hash->buckets[i], hlist) {
> del = __ftrace_lookup_ip(direct_functions, entry->ip);
> - if (del && del->direct == addr) {
> + if (del && ftrace_jmp_get(del->direct) ==
> + ftrace_jmp_get(addr)) {
> remove_hash_entry(direct_functions, del);
> kfree(del);
> }
[ ... ]
> @@ -6016,8 +6017,15 @@ int register_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
> if (ftrace_hash_empty(hash))
> return -EINVAL;
>
> + /* This is a "raw" address, and this should never happen. */
> + if (WARN_ON_ONCE(ftrace_is_jmp(addr)))
> + return -EINVAL;
> +
> mutex_lock(&direct_mutex);
>
> + if (ops->flags & FTRACE_OPS_FL_JMP)
> + addr = ftrace_jmp_set(addr);
> +
> /* Make sure requested entries are not already registered.. */
[ ... ]
> /* ... and add the new entries */
> size = 1 << hash->size_bits;
> for (i = 0; i < size; i++) {
> hlist_for_each_entry(entry, &hash->buckets[i], hlist) {
> new = add_hash_entry(new_hash, entry->ip);
> if (!new)
> goto out_unlock;
> /* Update both the copy and the hash entry */
> new->direct = addr;
> entry->direct = addr;
> }
> }
After addr is potentially encoded with ftrace_jmp_set(), it gets stored in
entry->direct. However, ftrace_find_rec_direct() returns entry->direct
without masking:
unsigned long ftrace_find_rec_direct(unsigned long ip)
{
struct ftrace_func_entry *entry;
entry = __ftrace_lookup_ip(direct_functions, ip);
if (!entry)
return 0;
return entry->direct;
}
This encoded address flows to two consumers:
1. ftrace_get_addr_new() and ftrace_get_addr_curr() return it directly,
which eventually reaches architecture-specific ftrace_call_replace().
Can the architecture code handle addresses with LSB set?
2. The debug output at kernel/trace/ftrace.c in ftrace_enabled_show():
seq_printf(m, "\n\tdirect-->%pS", (void *)direct);
Won't this print the wrong symbol due to the LSB being set?
> ops->func = call_direct_funcs;
> ops->flags = MULTI_FLAGS;
> ops->trampoline = FTRACE_REGS_ADDR;
> ops->direct_call = addr;
Similarly, ops->direct_call stores the encoded address. In
call_direct_funcs():
static void call_direct_funcs(unsigned long ip, unsigned long pip,
struct ftrace_ops *ops, struct ftrace_regs *fregs)
{
unsigned long addr = READ_ONCE(ops->direct_call);
if (!addr)
return;
arch_ftrace_set_direct_caller(fregs, addr);
}
The encoded address is passed directly to arch_ftrace_set_direct_caller()
without masking. Looking at arch implementations like x86:
static inline void
__arch_ftrace_set_direct_caller(struct pt_regs *regs, unsigned long addr)
{
/* Emulate a call */
regs->orig_ax = addr;
}
Does arch_ftrace_set_direct_caller() expect addresses with the LSB set?
The implementations directly store the address into registers without any
masking.
The commit message says "we can tell if we should use 'jmp' for the
callback in ftrace_call_replace()", but no architecture code is updated
to check the LSB. Should ftrace_find_rec_direct() and call_direct_funcs()
mask the JMP bit before returning addresses to architecture code?
> @@ -6136,8 +6146,13 @@ __modify_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
>
> lockdep_assert_held_once(&direct_mutex);
>
> + /* This is a "raw" address, and this should never happen. */
> + if (WARN_ON_ONCE(ftrace_is_jmp(addr)))
> + return -EINVAL;
> +
> + if (ops->flags & FTRACE_OPS_FL_JMP)
> + addr = ftrace_jmp_set(addr);
Same encoding happens here with the same flow issues.
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/19466635856
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH bpf-next v3 1/6] ftrace: introduce FTRACE_OPS_FL_JMP
2025-11-18 13:25 ` bot+bpf-ci
@ 2025-11-18 13:51 ` Steven Rostedt
0 siblings, 0 replies; 22+ messages in thread
From: Steven Rostedt @ 2025-11-18 13:51 UTC (permalink / raw)
To: bot+bpf-ci
Cc: menglong8.dong, ast, daniel, john.fastabend, andrii, martin.lau,
eddyz87, song, yonghong.song, kpsingh, sdf, haoluo, jolsa,
mhiramat, mark.rutland, mathieu.desnoyers, jiang.biao, bpf,
linux-kernel, linux-trace-kernel, martin.lau, clm, ihor.solodrai
On Tue, 18 Nov 2025 13:25:04 +0000 (UTC)
bot+bpf-ci@kernel.org wrote:
> The commit message says "we can tell if we should use 'jmp' for the
> callback in ftrace_call_replace()", but no architecture code is updated
> to check the LSB. Should ftrace_find_rec_direct() and call_direct_funcs()
> mask the JMP bit before returning addresses to architecture code?
I guess AI isn't smart enough to know about kernel config options yet.
> +config DYNAMIC_FTRACE_WITH_JMP
> + def_bool y
> + depends on DYNAMIC_FTRACE
> + depends on DYNAMIC_FTRACE_WITH_DIRECT_CALLS
> + depends on HAVE_DYNAMIC_FTRACE_WITH_JMP
Where this code is only implemented when HAVE_DYNAMIC_FTRACE_WITH_JMP is
set.
-- Steve
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH bpf-next v3 2/6] x86/ftrace: implement DYNAMIC_FTRACE_WITH_JMP
2025-11-18 12:36 ` [PATCH bpf-next v3 2/6] x86/ftrace: implement DYNAMIC_FTRACE_WITH_JMP Menglong Dong
@ 2025-11-18 22:01 ` Jiri Olsa
2025-11-19 1:05 ` Menglong Dong
0 siblings, 1 reply; 22+ messages in thread
From: Jiri Olsa @ 2025-11-18 22:01 UTC (permalink / raw)
To: Menglong Dong
Cc: ast, rostedt, daniel, john.fastabend, andrii, martin.lau, eddyz87,
song, yonghong.song, kpsingh, sdf, haoluo, mhiramat, mark.rutland,
mathieu.desnoyers, jiang.biao, bpf, linux-kernel,
linux-trace-kernel
On Tue, Nov 18, 2025 at 08:36:30PM +0800, Menglong Dong wrote:
> Implement the DYNAMIC_FTRACE_WITH_JMP for x86_64. In ftrace_call_replace,
> we will use JMP32_INSN_OPCODE instead of CALL_INSN_OPCODE if the address
> should use "jmp".
>
> Meanwhile, adjust the direct call in the ftrace_regs_caller. The RSB is
> balanced in the "jmp" mode. Take the function "foo" for example:
>
> original_caller:
> call foo -> foo:
> call fentry -> fentry:
> [do ftrace callbacks ]
> move tramp_addr to stack
> RET -> tramp_addr
> tramp_addr:
> [..]
> call foo_body -> foo_body:
> [..]
> RET -> back to tramp_addr
> [..]
> RET -> back to original_caller
>
> Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
> ---
> arch/x86/Kconfig | 1 +
> arch/x86/kernel/ftrace.c | 7 ++++++-
> arch/x86/kernel/ftrace_64.S | 12 +++++++++++-
> 3 files changed, 18 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index fa3b616af03a..462250a20311 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -230,6 +230,7 @@ config X86
> select HAVE_DYNAMIC_FTRACE_WITH_ARGS if X86_64
> select HAVE_FTRACE_REGS_HAVING_PT_REGS if X86_64
> select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
> + select HAVE_DYNAMIC_FTRACE_WITH_JMP if X86_64
> select HAVE_SAMPLE_FTRACE_DIRECT if X86_64
> select HAVE_SAMPLE_FTRACE_DIRECT_MULTI if X86_64
> select HAVE_EBPF_JIT
> diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
> index 4450acec9390..0543b57f54ee 100644
> --- a/arch/x86/kernel/ftrace.c
> +++ b/arch/x86/kernel/ftrace.c
> @@ -74,7 +74,12 @@ static const char *ftrace_call_replace(unsigned long ip, unsigned long addr)
> * No need to translate into a callthunk. The trampoline does
> * the depth accounting itself.
> */
> - return text_gen_insn(CALL_INSN_OPCODE, (void *)ip, (void *)addr);
> + if (ftrace_is_jmp(addr)) {
> + addr = ftrace_jmp_get(addr);
> + return text_gen_insn(JMP32_INSN_OPCODE, (void *)ip, (void *)addr);
> + } else {
> + return text_gen_insn(CALL_INSN_OPCODE, (void *)ip, (void *)addr);
> + }
> }
>
> static int ftrace_verify_code(unsigned long ip, const char *old_code)
> diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S
> index 823dbdd0eb41..a132608265f6 100644
> --- a/arch/x86/kernel/ftrace_64.S
> +++ b/arch/x86/kernel/ftrace_64.S
> @@ -285,8 +285,18 @@ SYM_INNER_LABEL(ftrace_regs_caller_end, SYM_L_GLOBAL)
> ANNOTATE_NOENDBR
> RET
>
> +1:
> + testb $1, %al
> + jz 2f
> + andq $0xfffffffffffffffe, %rax
> + movq %rax, MCOUNT_REG_SIZE+8(%rsp)
> + restore_mcount_regs
> + /* Restore flags */
> + popfq
> + RET
is this hunk the reason for the 0x1 jmp-bit you set in the address?
I wonder, if we introduced a new flag in dyn_ftrace::flags for this,
then we'd need to have an extra ftrace trampoline for jmp ftrace_ops
jirka
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH bpf-next v3 0/6] bpf trampoline support "jmp" mode
2025-11-18 12:36 [PATCH bpf-next v3 0/6] bpf trampoline support "jmp" mode Menglong Dong
` (5 preceding siblings ...)
2025-11-18 12:36 ` [PATCH bpf-next v3 6/6] bpf: implement "jmp" mode for trampoline Menglong Dong
@ 2025-11-19 0:28 ` Alexei Starovoitov
2025-11-19 2:47 ` Menglong Dong
2025-11-24 18:00 ` patchwork-bot+netdevbpf
7 siblings, 1 reply; 22+ messages in thread
From: Alexei Starovoitov @ 2025-11-19 0:28 UTC (permalink / raw)
To: Menglong Dong
Cc: Alexei Starovoitov, Steven Rostedt, Daniel Borkmann,
John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Eduard,
Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo,
Jiri Olsa, Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers,
jiang.biao, bpf, LKML, linux-trace-kernel
On Tue, Nov 18, 2025 at 4:36 AM Menglong Dong <menglong8.dong@gmail.com> wrote:
>
> As we can see above, the performance of fexit increase from 80.544M/s to
> 136.540M/s, and the "fmodret" increase from 78.301M/s to 159.248M/s.
Nice! Now we're talking.
I think arm64 CPUs have a similar RSB-like return address predictor.
Do we need to do something similar there?
The question is not targeted to you, Menglong,
just wondering.
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH bpf-next v3 6/6] bpf: implement "jmp" mode for trampoline
2025-11-18 12:36 ` [PATCH bpf-next v3 6/6] bpf: implement "jmp" mode for trampoline Menglong Dong
@ 2025-11-19 0:59 ` Alexei Starovoitov
2025-11-19 1:03 ` Steven Rostedt
0 siblings, 1 reply; 22+ messages in thread
From: Alexei Starovoitov @ 2025-11-19 0:59 UTC (permalink / raw)
To: Menglong Dong
Cc: Alexei Starovoitov, Steven Rostedt, Daniel Borkmann,
John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Eduard,
Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo,
Jiri Olsa, Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers,
jiang.biao, bpf, LKML, linux-trace-kernel
On Tue, Nov 18, 2025 at 4:37 AM Menglong Dong <menglong8.dong@gmail.com> wrote:
>
> Implement the "jmp" mode for the bpf trampoline. For the ftrace_managed
> case, we need only to set the FTRACE_OPS_FL_JMP on the tr->fops if "jmp"
> is needed.
>
> For the bpf poke case, we will check the origin poke type with the
> "origin_flags", and current poke type with "tr->flags". The function
> bpf_trampoline_update_fentry() is introduced to do the job.
>
> The "jmp" mode will only be enabled with CONFIG_DYNAMIC_FTRACE_WITH_JMP
> enabled and BPF_TRAMP_F_SHARE_IPMODIFY is not set. With
> BPF_TRAMP_F_SHARE_IPMODIFY, we need to get the origin call ip from the
> stack, so we can't use the "jmp" mode.
>
> Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
> ---
> v3:
> - wrap the write to tr->fops->flags with CONFIG_DYNAMIC_FTRACE_WITH_JMP
> - reset BPF_TRAMP_F_SKIP_FRAME when the second try of modify_fentry in
> bpf_trampoline_update()
All looks good to me.
Steven,
are you happy with patch 1?
Can you pls Ack?
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH bpf-next v3 6/6] bpf: implement "jmp" mode for trampoline
2025-11-19 0:59 ` Alexei Starovoitov
@ 2025-11-19 1:03 ` Steven Rostedt
2025-11-22 2:37 ` Alexei Starovoitov
0 siblings, 1 reply; 22+ messages in thread
From: Steven Rostedt @ 2025-11-19 1:03 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Menglong Dong, Alexei Starovoitov, Daniel Borkmann,
John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Eduard,
Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo,
Jiri Olsa, Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers,
jiang.biao, bpf, LKML, linux-trace-kernel
On Tue, 18 Nov 2025 16:59:59 -0800
Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:
> Steven,
> are you happy with patch 1?
> Can you pls Ack?
Let me run patch 1 and 2 through my tests.
-- Steve
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH bpf-next v3 2/6] x86/ftrace: implement DYNAMIC_FTRACE_WITH_JMP
2025-11-18 22:01 ` Jiri Olsa
@ 2025-11-19 1:05 ` Menglong Dong
0 siblings, 0 replies; 22+ messages in thread
From: Menglong Dong @ 2025-11-19 1:05 UTC (permalink / raw)
To: Menglong Dong, Jiri Olsa
Cc: ast, rostedt, daniel, john.fastabend, andrii, martin.lau, eddyz87,
song, yonghong.song, kpsingh, sdf, haoluo, mhiramat, mark.rutland,
mathieu.desnoyers, jiang.biao, bpf, linux-kernel,
linux-trace-kernel
On 2025/11/19 06:01, Jiri Olsa wrote:
> On Tue, Nov 18, 2025 at 08:36:30PM +0800, Menglong Dong wrote:
> > Implement the DYNAMIC_FTRACE_WITH_JMP for x86_64. In ftrace_call_replace,
> > we will use JMP32_INSN_OPCODE instead of CALL_INSN_OPCODE if the address
> > should use "jmp".
> >
> > Meanwhile, adjust the direct call in the ftrace_regs_caller. The RSB is
> > balanced in the "jmp" mode. Take the function "foo" for example:
> >
> > original_caller:
> > call foo -> foo:
> > call fentry -> fentry:
> > [do ftrace callbacks ]
> > move tramp_addr to stack
> > RET -> tramp_addr
> > tramp_addr:
> > [..]
> > call foo_body -> foo_body:
> > [..]
> > RET -> back to tramp_addr
> > [..]
> > RET -> back to original_caller
> >
> > Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
> > ---
> > arch/x86/Kconfig | 1 +
> > arch/x86/kernel/ftrace.c | 7 ++++++-
> > arch/x86/kernel/ftrace_64.S | 12 +++++++++++-
> > 3 files changed, 18 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> > index fa3b616af03a..462250a20311 100644
> > --- a/arch/x86/Kconfig
> > +++ b/arch/x86/Kconfig
> > @@ -230,6 +230,7 @@ config X86
> > select HAVE_DYNAMIC_FTRACE_WITH_ARGS if X86_64
> > select HAVE_FTRACE_REGS_HAVING_PT_REGS if X86_64
> > select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
> > + select HAVE_DYNAMIC_FTRACE_WITH_JMP if X86_64
> > select HAVE_SAMPLE_FTRACE_DIRECT if X86_64
> > select HAVE_SAMPLE_FTRACE_DIRECT_MULTI if X86_64
> > select HAVE_EBPF_JIT
> > diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
> > index 4450acec9390..0543b57f54ee 100644
> > --- a/arch/x86/kernel/ftrace.c
> > +++ b/arch/x86/kernel/ftrace.c
> > @@ -74,7 +74,12 @@ static const char *ftrace_call_replace(unsigned long ip, unsigned long addr)
> > * No need to translate into a callthunk. The trampoline does
> > * the depth accounting itself.
> > */
> > - return text_gen_insn(CALL_INSN_OPCODE, (void *)ip, (void *)addr);
> > + if (ftrace_is_jmp(addr)) {
> > + addr = ftrace_jmp_get(addr);
> > + return text_gen_insn(JMP32_INSN_OPCODE, (void *)ip, (void *)addr);
> > + } else {
> > + return text_gen_insn(CALL_INSN_OPCODE, (void *)ip, (void *)addr);
> > + }
> > }
> >
> > static int ftrace_verify_code(unsigned long ip, const char *old_code)
> > diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S
> > index 823dbdd0eb41..a132608265f6 100644
> > --- a/arch/x86/kernel/ftrace_64.S
> > +++ b/arch/x86/kernel/ftrace_64.S
> > @@ -285,8 +285,18 @@ SYM_INNER_LABEL(ftrace_regs_caller_end, SYM_L_GLOBAL)
> > ANNOTATE_NOENDBR
> > RET
> >
> > +1:
> > + testb $1, %al
> > + jz 2f
> > + andq $0xfffffffffffffffe, %rax
> > + movq %rax, MCOUNT_REG_SIZE+8(%rsp)
> > + restore_mcount_regs
> > + /* Restore flags */
> > + popfq
> > + RET
>
> is this hunk the reason for the 0x1 jmp-bit you set in the address?
Exactly!
>
> I wonder if we introduced new flag in dyn_ftrace::flags for this,
> then we'd need to have extra ftrace trampoline for jmp ftrace_ops
We don't introduce a new dyn_ftrace::flags flag. I tried to introduce
FTRACE_FL_JMP and FTRACE_FL_JMP_EN for this purpose before I
added the jmp-bit to the address, but it's hard to do it that way.
First, we would need to introduce a ftrace_regs_jmp_caller, which would
be used for the "jmp" mode. However, it's difficult when we need
to change a call address to a jmp address in __modify_ftrace_direct(),
as it modifies "entry->direct" directly. And maybe we would need to
reconstruct the direct-call handling to implement it that way.
I had almost given up before I thought of the jmp-bit, which allows
us to update the address from call mode to jmp mode atomically.
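For illustration, a minimal sketch of how such low-bit tagging could look
(the helper names follow the ftrace_is_jmp()/ftrace_jmp_get() calls in the
quoted hunk; ftrace_jmp_set() and the exact bodies are assumptions, so the
in-tree definitions may differ):

#include <linux/types.h>

#define FTRACE_ADDR_JMP_BIT	0x1UL

/* tag a direct-call address so ftrace emits "jmp" instead of "call" */
static inline unsigned long ftrace_jmp_set(unsigned long addr)
{
	return addr | FTRACE_ADDR_JMP_BIT;
}

/* the bit tested by "testb $1, %al" in the ftrace_regs_caller hunk above */
static inline bool ftrace_is_jmp(unsigned long addr)
{
	return addr & FTRACE_ADDR_JMP_BIT;
}

/* strip the tag to recover the real trampoline address */
static inline unsigned long ftrace_jmp_get(unsigned long addr)
{
	return addr & ~FTRACE_ADDR_JMP_BIT;
}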
Thanks!
Menglong Dong
>
> jirka
>
>
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH bpf-next v3 0/6] bpf trampoline support "jmp" mode
2025-11-19 0:28 ` [PATCH bpf-next v3 0/6] bpf trampoline support "jmp" mode Alexei Starovoitov
@ 2025-11-19 2:47 ` Menglong Dong
2025-11-19 2:55 ` Leon Hwang
0 siblings, 1 reply; 22+ messages in thread
From: Menglong Dong @ 2025-11-19 2:47 UTC (permalink / raw)
To: Menglong Dong, Alexei Starovoitov, Leon Hwang
Cc: Alexei Starovoitov, Steven Rostedt, Daniel Borkmann,
John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Eduard,
Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo,
Jiri Olsa, Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers,
jiang.biao, bpf, LKML, linux-trace-kernel
On 2025/11/19 08:28, Alexei Starovoitov wrote:
> On Tue, Nov 18, 2025 at 4:36 AM Menglong Dong <menglong8.dong@gmail.com> wrote:
> >
> > As we can see above, the performance of fexit increase from 80.544M/s to
> > 136.540M/s, and the "fmodret" increase from 78.301M/s to 159.248M/s.
>
> Nice! Now we're talking.
>
> I think arm64 CPUs have a similar RSB-like return address predictor.
> Do we need to do something similar there?
> The question is not targeted to you, Menglong,
> just wondering.
I did some research before, and I found that most arches
have such RSB-like mechanisms. I'll have a look at loongarch
later (maybe after LPC, as I'm focusing on practicing my English),
and Leon is following up on arm64.
For the other arches, we don't have the machines, and I think
someone else will need to help with them.
Thanks!
Menglong Dong
>
>
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH bpf-next v3 0/6] bpf trampoline support "jmp" mode
2025-11-19 2:47 ` Menglong Dong
@ 2025-11-19 2:55 ` Leon Hwang
2025-11-19 12:36 ` Xu Kuohai
0 siblings, 1 reply; 22+ messages in thread
From: Leon Hwang @ 2025-11-19 2:55 UTC (permalink / raw)
To: Menglong Dong, Menglong Dong, Alexei Starovoitov, Leon Hwang
Cc: Alexei Starovoitov, Steven Rostedt, Daniel Borkmann,
John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Eduard,
Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo,
Jiri Olsa, Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers,
jiang.biao, bpf, LKML, linux-trace-kernel
On 19/11/25 10:47, Menglong Dong wrote:
> On 2025/11/19 08:28, Alexei Starovoitov wrote:
>> On Tue, Nov 18, 2025 at 4:36 AM Menglong Dong <menglong8.dong@gmail.com> wrote:
>>>
>>> As we can see above, the performance of fexit increase from 80.544M/s to
>>> 136.540M/s, and the "fmodret" increase from 78.301M/s to 159.248M/s.
>>
>> Nice! Now we're talking.
>>
>> I think arm64 CPUs have a similar RSB-like return address predictor.
>> Do we need to do something similar there?
>> The question is not targeted to you, Menglong,
>> just wondering.
>
> I did some research before, and I find that most arch
> have such RSB-like stuff. I'll have a look at the loongarch
> later(maybe after the LPC, as I'm forcing on the English practice),
> and Leon is following the arm64.
Yep, happy to take this on.
I'm reviewing the arm64 JIT code now and will experiment with possible
approaches to handle this as well.
Thanks,
Leon
>
> For the other arch, we don't have the machine, and I think
> it needs some else help.
>
> Thanks!
> Menglong Dong
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH bpf-next v3 0/6] bpf trampoline support "jmp" mode
2025-11-19 2:55 ` Leon Hwang
@ 2025-11-19 12:36 ` Xu Kuohai
2025-11-20 2:07 ` Leon Hwang
0 siblings, 1 reply; 22+ messages in thread
From: Xu Kuohai @ 2025-11-19 12:36 UTC (permalink / raw)
To: Leon Hwang, Menglong Dong, Menglong Dong, Alexei Starovoitov
Cc: Alexei Starovoitov, Steven Rostedt, Daniel Borkmann,
John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Eduard,
Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo,
Jiri Olsa, Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers,
jiang.biao, bpf, LKML, linux-trace-kernel
On 11/19/2025 10:55 AM, Leon Hwang wrote:
>
>
> On 19/11/25 10:47, Menglong Dong wrote:
>> On 2025/11/19 08:28, Alexei Starovoitov wrote:
>>> On Tue, Nov 18, 2025 at 4:36 AM Menglong Dong <menglong8.dong@gmail.com> wrote:
>>>>
>>>> As we can see above, the performance of fexit increase from 80.544M/s to
>>>> 136.540M/s, and the "fmodret" increase from 78.301M/s to 159.248M/s.
>>>
>>> Nice! Now we're talking.
>>>
>>> I think arm64 CPUs have a similar RSB-like return address predictor.
>>> Do we need to do something similar there?
>>> The question is not targeted to you, Menglong,
>>> just wondering.
>>
>> I did some research before, and I find that most arch
>> have such RSB-like stuff. I'll have a look at the loongarch
>> later(maybe after the LPC, as I'm forcing on the English practice),
>> and Leon is following the arm64.
>
> Yep, happy to take this on.
>
> I'm reviewing the arm64 JIT code now and will experiment with possible
> approaches to handle this as well.
>
Unfortunately, the arm64 trampoline uses a tricky approach to bypass BTI
by using a ret instruction to invoke the patched function. This conflicts
with the current approach, and it seems there is no straightforward solution.
> Thanks,
> Leon
>
>>
>> For the other arch, we don't have the machine, and I think
>> it needs some else help.
>>
>> Thanks!
>> Menglong Dong
>
>
>
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH bpf-next v3 0/6] bpf trampoline support "jmp" mode
2025-11-19 12:36 ` Xu Kuohai
@ 2025-11-20 2:07 ` Leon Hwang
2025-11-20 3:24 ` Xu Kuohai
0 siblings, 1 reply; 22+ messages in thread
From: Leon Hwang @ 2025-11-20 2:07 UTC (permalink / raw)
To: Xu Kuohai, Menglong Dong, Menglong Dong, Alexei Starovoitov
Cc: Alexei Starovoitov, Steven Rostedt, Daniel Borkmann,
John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Eduard,
Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo,
Jiri Olsa, Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers,
jiang.biao, bpf, LKML, linux-trace-kernel
On 19/11/25 20:36, Xu Kuohai wrote:
> On 11/19/2025 10:55 AM, Leon Hwang wrote:
>>
>>
>> On 19/11/25 10:47, Menglong Dong wrote:
>>> On 2025/11/19 08:28, Alexei Starovoitov wrote:
>>>> On Tue, Nov 18, 2025 at 4:36 AM Menglong Dong
>>>> <menglong8.dong@gmail.com> wrote:
>>>>>
>>>>> As we can see above, the performance of fexit increase from
>>>>> 80.544M/s to
>>>>> 136.540M/s, and the "fmodret" increase from 78.301M/s to 159.248M/s.
>>>>
>>>> Nice! Now we're talking.
>>>>
>>>> I think arm64 CPUs have a similar RSB-like return address predictor.
>>>> Do we need to do something similar there?
>>>> The question is not targeted to you, Menglong,
>>>> just wondering.
>>>
>>> I did some research before, and I find that most arch
>>> have such RSB-like stuff. I'll have a look at the loongarch
>>> later(maybe after the LPC, as I'm forcing on the English practice),
>>> and Leon is following the arm64.
>>
>> Yep, happy to take this on.
>>
>> I'm reviewing the arm64 JIT code now and will experiment with possible
>> approaches to handle this as well.
>>
>
> Unfortunately, the arm64 trampoline uses a tricky approach to bypass BTI
> by using ret instruction to invoke the patched function. This conflicts
> with the current approach, and seems there is no straightforward solution.
>
Hi Kuohai,
Thanks for the explanation.
Do you recall the original reason for using a ret instruction to bypass
BTI in the arm64 trampoline? I'm trying to understand whether that
constraint is fundamental or historical.
I'm wondering if we could structure the control flow like this:
foo "bl" bar -> bar:
bar "br" trampoline -> trampoline:
trampoline "bl" -> bar func body:
bar func body "ret" -> trampoline
trampoline "ret" -> foo
This would introduce two "bl"s and two "ret"s, keeping the RAS balanced
in a way similar to the x86 approach.
With this structure, we could also shrink the frame layout:
* SP + retaddr_off [ self ip ]
* [ FP ]
And then store the "self" return address elsewhere on the stack.
Do you think something along these lines could work?
Thanks,
Leon
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH bpf-next v3 0/6] bpf trampoline support "jmp" mode
2025-11-20 2:07 ` Leon Hwang
@ 2025-11-20 3:24 ` Xu Kuohai
0 siblings, 0 replies; 22+ messages in thread
From: Xu Kuohai @ 2025-11-20 3:24 UTC (permalink / raw)
To: Leon Hwang, Menglong Dong, Menglong Dong, Alexei Starovoitov
Cc: Alexei Starovoitov, Steven Rostedt, Daniel Borkmann,
John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Eduard,
Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo,
Jiri Olsa, Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers,
jiang.biao, bpf, LKML, linux-trace-kernel
On 11/20/2025 10:07 AM, Leon Hwang wrote:
>
>
> On 19/11/25 20:36, Xu Kuohai wrote:
>> On 11/19/2025 10:55 AM, Leon Hwang wrote:
>>>
>>>
>>> On 19/11/25 10:47, Menglong Dong wrote:
>>>> On 2025/11/19 08:28, Alexei Starovoitov wrote:
>>>>> On Tue, Nov 18, 2025 at 4:36 AM Menglong Dong
>>>>> <menglong8.dong@gmail.com> wrote:
>>>>>>
>>>>>> As we can see above, the performance of fexit increase from
>>>>>> 80.544M/s to
>>>>>> 136.540M/s, and the "fmodret" increase from 78.301M/s to 159.248M/s.
>>>>>
>>>>> Nice! Now we're talking.
>>>>>
>>>>> I think arm64 CPUs have a similar RSB-like return address predictor.
>>>>> Do we need to do something similar there?
>>>>> The question is not targeted to you, Menglong,
>>>>> just wondering.
>>>>
>>>> I did some research before, and I find that most arch
>>>> have such RSB-like stuff. I'll have a look at the loongarch
>>>> later(maybe after the LPC, as I'm forcing on the English practice),
>>>> and Leon is following the arm64.
>>>
>>> Yep, happy to take this on.
>>>
>>> I'm reviewing the arm64 JIT code now and will experiment with possible
>>> approaches to handle this as well.
>>>
>>
>> Unfortunately, the arm64 trampoline uses a tricky approach to bypass BTI
>> by using ret instruction to invoke the patched function. This conflicts
>> with the current approach, and seems there is no straightforward solution.
>>
> Hi Kuohai,
>
> Thanks for the explanation.
>
> Do you recall the original reason for using a ret instruction to bypass
> BTI in the arm64 trampoline? I'm trying to understand whether that
> constraint is fundamental or historical.
arm64 direct jump instructions (b and bl) support only a ±128 MB range.
But the distance between the trampoline and the patched function may
exceed this range. So an indirect jump is required.
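As a rough illustration of that range check (a sketch only, not the
in-tree helper; b/bl encode a signed 26-bit word offset, roughly +/-128 MB
from the branch instruction):

#include <linux/types.h>

#define A64_DIRECT_BRANCH_RANGE	(128UL * 1024 * 1024)	/* +/-128 MB */

/* can a direct arm64 b/bl at @pc reach @target? */
static bool branch_in_direct_range(unsigned long pc, unsigned long target)
{
	long offset = (long)target - (long)pc;

	return offset >= -(long)A64_DIRECT_BRANCH_RANGE &&
	       offset <  (long)A64_DIRECT_BRANCH_RANGE;
}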
With BTI enabled, indirect jump instructions (br and blr) require a landing
pad at the jump target. The target is the instruction immediately after
the call site in the patched function. It may be any instruction, including
non-landing-pad instructions. If it is not a landing pad, a BTI exception
occurs when the trampoline jumps back using BR/BLR.
Since the RET instruction does not require a landing pad, it is chosen to
return to the patched function.
See [1] for reference.
[1] https://lore.kernel.org/bpf/20230401234144.3719742-1-xukuohai@huaweicloud.com/
> I'm wondering if we could structure the control flow like this:
>
> foo "bl" bar -> bar:
> bar "br" trampoline -> trampoline:
> trampoline "bl" -> bar func body:
As mentioned above, the problem is that the bl may be out of range.
If the blr instruction is used instead, the target instruction must be a landing
pad when BTI is enabled. One approach is to reserve an extra nop at the call
site and patch it into a bti instruction at runtime when needed.
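A hedged sketch of that nop-reservation idea (the call-site layout and
patch_insn() here are assumptions for illustration, not an existing kernel
interface; only the instruction encodings are fixed by the architecture):

#include <linux/types.h>

#define A64_NOP		0xd503201fU	/* HINT #0 */
#define A64_BTI_C	0xd503245fU	/* HINT #34, a valid br/blr landing pad */

/*
 * Assumed call-site layout, with one extra slot reserved at build time:
 *
 *	bl	<fentry>	- patched by ftrace as usual
 *	nop			- reserved; rewritten to "bti c" when a
 *				  trampoline comes back here via br/blr
 */
static void patch_insn(u32 *slot, u32 insn)
{
	/* stand-in for the arch text-poking primitive; real code must
	 * also synchronize the instruction cache */
	*(volatile u32 *)slot = insn;
}

static void callsite_set_landing_pad(u32 *slot, bool need_bti)
{
	patch_insn(slot, need_bti ? A64_BTI_C : A64_NOP);
}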
> bar func body "ret" -> trampoline
> trampoline "ret" -> foo
>
> This would introduce two "bl"s and two "ret"s, keeping the RAS balanced
> in a way similar to the x86 approach.
>
> With this structure, we could also shrink the frame layout:
>
> * SP + retaddr_off [ self ip ]
> * [ FP ]
>
> And then store the "self" return address elsewhere on the stack.
>
> Do you think something along these lines could work?
>
> Thanks,
> Leon
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH bpf-next v3 6/6] bpf: implement "jmp" mode for trampoline
2025-11-19 1:03 ` Steven Rostedt
@ 2025-11-22 2:37 ` Alexei Starovoitov
2025-11-24 14:50 ` Steven Rostedt
0 siblings, 1 reply; 22+ messages in thread
From: Alexei Starovoitov @ 2025-11-22 2:37 UTC (permalink / raw)
To: Steven Rostedt
Cc: Menglong Dong, Alexei Starovoitov, Daniel Borkmann,
John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Eduard,
Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo,
Jiri Olsa, Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers,
jiang.biao, bpf, LKML, linux-trace-kernel
On Tue, Nov 18, 2025 at 5:02 PM Steven Rostedt <rostedt@goodmis.org> wrote:
>
> On Tue, 18 Nov 2025 16:59:59 -0800
> Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:
>
> > Steven,
> > are you happy with patch 1?
> > Can you pls Ack?
>
> Let me run patch 1 and 2 through my tests.
gentle ping.
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH bpf-next v3 6/6] bpf: implement "jmp" mode for trampoline
2025-11-22 2:37 ` Alexei Starovoitov
@ 2025-11-24 14:50 ` Steven Rostedt
0 siblings, 0 replies; 22+ messages in thread
From: Steven Rostedt @ 2025-11-24 14:50 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Menglong Dong, Alexei Starovoitov, Daniel Borkmann,
John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Eduard,
Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo,
Jiri Olsa, Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers,
jiang.biao, bpf, LKML, linux-trace-kernel
On Fri, 21 Nov 2025 18:37:02 -0800
Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:
> On Tue, Nov 18, 2025 at 5:02 PM Steven Rostedt <rostedt@goodmis.org> wrote:
> >
> > On Tue, 18 Nov 2025 16:59:59 -0800
> > Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:
> >
> > > Steven,
> > > are you happy with patch 1?
> > > Can you pls Ack?
> >
> > Let me run patch 1 and 2 through my tests.
>
> gentle ping.
Yeah, sorry forgot to reply. They did pass.
Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org>
-- Steve
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH bpf-next v3 0/6] bpf trampoline support "jmp" mode
2025-11-18 12:36 [PATCH bpf-next v3 0/6] bpf trampoline support "jmp" mode Menglong Dong
` (6 preceding siblings ...)
2025-11-19 0:28 ` [PATCH bpf-next v3 0/6] bpf trampoline support "jmp" mode Alexei Starovoitov
@ 2025-11-24 18:00 ` patchwork-bot+netdevbpf
7 siblings, 0 replies; 22+ messages in thread
From: patchwork-bot+netdevbpf @ 2025-11-24 18:00 UTC (permalink / raw)
To: Menglong Dong
Cc: ast, rostedt, daniel, john.fastabend, andrii, martin.lau, eddyz87,
song, yonghong.song, kpsingh, sdf, haoluo, jolsa, mhiramat,
mark.rutland, mathieu.desnoyers, jiang.biao, bpf, linux-kernel,
linux-trace-kernel
Hello:
This series was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:
On Tue, 18 Nov 2025 20:36:28 +0800 you wrote:
> For now, the bpf trampoline is called by the "call" instruction. However,
> it break the RSB and introduce extra overhead in x86_64 arch.
>
> For example, we hook the function "foo" with fexit, the call and return
> logic will be like this:
> call foo -> call trampoline -> call foo-body ->
> return foo-body -> return foo
>
> [...]
Here is the summary with links:
- [bpf-next,v3,1/6] ftrace: introduce FTRACE_OPS_FL_JMP
https://git.kernel.org/bpf/bpf-next/c/25e4e3565d45
- [bpf-next,v3,2/6] x86/ftrace: implement DYNAMIC_FTRACE_WITH_JMP
https://git.kernel.org/bpf/bpf-next/c/0c3772a8db1f
- [bpf-next,v3,3/6] bpf: fix the usage of BPF_TRAMP_F_SKIP_FRAME
https://git.kernel.org/bpf/bpf-next/c/47c9214dcbea
- [bpf-next,v3,4/6] bpf,x86: adjust the "jmp" mode for bpf trampoline
https://git.kernel.org/bpf/bpf-next/c/373f2f44c300
- [bpf-next,v3,5/6] bpf: specify the old and new poke_type for bpf_arch_text_poke
https://git.kernel.org/bpf/bpf-next/c/ae4a3160d19c
- [bpf-next,v3,6/6] bpf: implement "jmp" mode for trampoline
https://git.kernel.org/bpf/bpf-next/c/402e44b31e9d
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
^ permalink raw reply [flat|nested] 22+ messages in thread
end of thread, other threads:[~2025-11-24 18:00 UTC | newest]
Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-11-18 12:36 [PATCH bpf-next v3 0/6] bpf trampoline support "jmp" mode Menglong Dong
2025-11-18 12:36 ` [PATCH bpf-next v3 1/6] ftrace: introduce FTRACE_OPS_FL_JMP Menglong Dong
2025-11-18 13:25 ` bot+bpf-ci
2025-11-18 13:51 ` Steven Rostedt
2025-11-18 12:36 ` [PATCH bpf-next v3 2/6] x86/ftrace: implement DYNAMIC_FTRACE_WITH_JMP Menglong Dong
2025-11-18 22:01 ` Jiri Olsa
2025-11-19 1:05 ` Menglong Dong
2025-11-18 12:36 ` [PATCH bpf-next v3 3/6] bpf: fix the usage of BPF_TRAMP_F_SKIP_FRAME Menglong Dong
2025-11-18 12:36 ` [PATCH bpf-next v3 4/6] bpf,x86: adjust the "jmp" mode for bpf trampoline Menglong Dong
2025-11-18 12:36 ` [PATCH bpf-next v3 5/6] bpf: specify the old and new poke_type for bpf_arch_text_poke Menglong Dong
2025-11-18 12:36 ` [PATCH bpf-next v3 6/6] bpf: implement "jmp" mode for trampoline Menglong Dong
2025-11-19 0:59 ` Alexei Starovoitov
2025-11-19 1:03 ` Steven Rostedt
2025-11-22 2:37 ` Alexei Starovoitov
2025-11-24 14:50 ` Steven Rostedt
2025-11-19 0:28 ` [PATCH bpf-next v3 0/6] bpf trampoline support "jmp" mode Alexei Starovoitov
2025-11-19 2:47 ` Menglong Dong
2025-11-19 2:55 ` Leon Hwang
2025-11-19 12:36 ` Xu Kuohai
2025-11-20 2:07 ` Leon Hwang
2025-11-20 3:24 ` Xu Kuohai
2025-11-24 18:00 ` patchwork-bot+netdevbpf
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).