* [PATCH 0/7] uprobes/x86: Fix red zone issue for optimized uprobes
@ 2026-05-14 13:53 Jiri Olsa
2026-05-14 13:53 ` [PATCH 1/7] uprobes/x86: Move optimized uprobe from nop5 to nop10 Jiri Olsa
` (6 more replies)
0 siblings, 7 replies; 15+ messages in thread
From: Jiri Olsa @ 2026-05-14 13:53 UTC (permalink / raw)
To: Oleg Nesterov, Peter Zijlstra, Ingo Molnar, Masami Hiramatsu,
Andrii Nakryiko
Cc: bpf, linux-trace-kernel
hi,
Andrii reported an issue with optimized uprobes [1]: the call instruction
stores its return address on the stack and can clobber the red zone area,
where user code may keep temporary data without adjusting rsp.
Fixing this by moving the optimized uprobes on top of a 10-byte nop
instruction, so we can squeeze in another instruction to escape the
red zone area before doing the call.
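The patched probe site then looks like this (from patch 1):

  lea -0x80(%rsp), %rsp
  call tramp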
Note we need an upstream update (github.com/libbpf/usdt) first for
patch 3, if we decide to take this change.
thanks,
jirka
[1] https://lore.kernel.org/bpf/20260509003146.976844-1-andrii@kernel.org/
---
Andrii Nakryiko (1):
selftests/bpf: Add tests for uprobe nop10 red zone clobbering
Jiri Olsa (6):
uprobes/x86: Move optimized uprobe from nop5 to nop10
libbpf: Change has_nop_combo to work on top of nop10
selftests/bpf: Emit nop,nop10 instructions combo for x86_64 arch
selftests/bpf: Change uprobe syscall tests to use nop10
selftests/bpf: Change uprobe/usdt trigger bench code to use nop10
selftests/bpf: Add reattach tests for uprobe syscall
arch/x86/kernel/uprobes.c | 121 ++++++++++++++++++++++++++++------------
tools/lib/bpf/usdt.c | 16 +++---
tools/testing/selftests/bpf/bench.c | 20 +++----
tools/testing/selftests/bpf/benchs/bench_trigger.c | 38 ++++++-------
tools/testing/selftests/bpf/benchs/run_bench_uprobes.sh | 2 +-
tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c | 217 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--------
tools/testing/selftests/bpf/prog_tests/usdt.c | 74 +++++++++++++++++++++----
tools/testing/selftests/bpf/progs/test_usdt.c | 25 +++++++++
tools/testing/selftests/bpf/usdt.h | 2 +-
tools/testing/selftests/bpf/usdt_2.c | 15 ++++-
10 files changed, 423 insertions(+), 107 deletions(-)
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH 1/7] uprobes/x86: Move optimized uprobe from nop5 to nop10
2026-05-14 13:53 [PATCH 0/7] uprobes/x86: Fix red zone issue for optimized uprobes Jiri Olsa
@ 2026-05-14 13:53 ` Jiri Olsa
2026-05-14 16:54 ` Jakub Sitnicki
2026-05-14 20:05 ` sashiko-bot
2026-05-14 13:53 ` [PATCH 2/7] libbpf: Change has_nop_combo to work on top of nop10 Jiri Olsa
` (5 subsequent siblings)
6 siblings, 2 replies; 15+ messages in thread
From: Jiri Olsa @ 2026-05-14 13:53 UTC (permalink / raw)
To: Oleg Nesterov, Peter Zijlstra, Ingo Molnar, Masami Hiramatsu,
Andrii Nakryiko
Cc: bpf, linux-trace-kernel
Andrii reported an issue with optimized uprobes [1]: the call instruction
stores its return address on the stack and can clobber the red zone area,
where user code may keep temporary data without adjusting rsp.
Fixing this by moving the optimized uprobes on top of a 10-byte nop
instruction, so we can squeeze in another instruction to escape the
red zone area before doing the call, like:
lea -0x80(%rsp), %rsp
call tramp
Note the lea instruction is used to adjust the rsp register without
changing the flags.
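For comparison (not part of the patch), the obvious alternative sub
would both clobber EFLAGS and not fit the 5-byte slot, because
subtracting 128 does not fit the sign-extended imm8 form:

  48 8d 64 24 80           lea -0x80(%rsp), %rsp   /* 5 bytes, flags untouched */
  48 81 ec 80 00 00 00     sub $0x80, %rsp         /* 7 bytes, clobbers flags */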
The optimized uprobe performance stays the same as with nop5:
uprobe-nop : 3.129 ± 0.013M/s
uprobe-push : 3.045 ± 0.006M/s
uprobe-ret : 1.095 ± 0.004M/s
--> uprobe-nop10 : 7.170 ± 0.020M/s
uretprobe-nop : 2.143 ± 0.021M/s
uretprobe-push : 2.090 ± 0.000M/s
uretprobe-ret : 0.942 ± 0.000M/s
--> uretprobe-nop10: 3.381 ± 0.003M/s
usdt-nop : 3.245 ± 0.004M/s
--> usdt-nop10 : 7.256 ± 0.023M/s
[1] https://lore.kernel.org/bpf/20260509003146.976844-1-andrii@kernel.org/
Reported-by: Andrii Nakryiko <andrii@kernel.org>
Closes: https://lore.kernel.org/bpf/20260509003146.976844-1-andrii@kernel.org/
Fixes: ba2bfc97b462 ("uprobes/x86: Add support to optimize uprobes")
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
arch/x86/kernel/uprobes.c | 121 +++++++++++++++++++++++++++-----------
1 file changed, 86 insertions(+), 35 deletions(-)
diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c
index ebb1baf1eb1d..f7c4101a4039 100644
--- a/arch/x86/kernel/uprobes.c
+++ b/arch/x86/kernel/uprobes.c
@@ -636,9 +636,21 @@ struct uprobe_trampoline {
unsigned long vaddr;
};
+#define LEA_INSN_SIZE 5
+#define OPT_INSN_SIZE (LEA_INSN_SIZE + CALL_INSN_SIZE)
+#define OPT_JMP8_OFFSET (OPT_INSN_SIZE - JMP8_INSN_SIZE)
+#define REDZONE_SIZE 0x80
+
+static const u8 lea_rsp[] = { 0x48, 0x8d, 0x64, 0x24, 0x80 }; /* lea -0x80(%rsp), %rsp */
+
+static bool is_lea_insn(const uprobe_opcode_t *insn)
+{
+ return !memcmp(insn, lea_rsp, LEA_INSN_SIZE);
+}
+
static bool is_reachable_by_call(unsigned long vtramp, unsigned long vaddr)
{
- long delta = (long)(vaddr + 5 - vtramp);
+ long delta = (long)(vaddr + OPT_INSN_SIZE - vtramp);
return delta >= INT_MIN && delta <= INT_MAX;
}
@@ -651,7 +663,7 @@ static unsigned long find_nearest_trampoline(unsigned long vaddr)
};
unsigned long low_limit, high_limit;
unsigned long low_tramp, high_tramp;
- unsigned long call_end = vaddr + 5;
+ unsigned long call_end = vaddr + OPT_INSN_SIZE;
if (check_add_overflow(call_end, INT_MIN, &low_limit))
low_limit = PAGE_SIZE;
@@ -826,8 +838,8 @@ SYSCALL_DEFINE0(uprobe)
regs->ax = args.ax;
regs->r11 = args.r11;
regs->cx = args.cx;
- regs->ip = args.retaddr - 5;
- regs->sp += sizeof(args);
+ regs->ip = args.retaddr - OPT_INSN_SIZE;
+ regs->sp += sizeof(args) + REDZONE_SIZE;
regs->orig_ax = -1;
sp = regs->sp;
@@ -844,12 +856,12 @@ SYSCALL_DEFINE0(uprobe)
*/
if (regs->sp != sp) {
/* skip the trampoline call */
- if (args.retaddr - 5 == regs->ip)
- regs->ip += 5;
+ if (args.retaddr - OPT_INSN_SIZE == regs->ip)
+ regs->ip += OPT_INSN_SIZE;
return regs->ax;
}
- regs->sp -= sizeof(args);
+ regs->sp -= sizeof(args) + REDZONE_SIZE;
/* for the case uprobe_consumer has changed ax/r11/cx */
args.ax = regs->ax;
@@ -857,7 +869,7 @@ SYSCALL_DEFINE0(uprobe)
args.cx = regs->cx;
/* keep return address unless we are instructed otherwise */
- if (args.retaddr - 5 != regs->ip)
+ if (args.retaddr - OPT_INSN_SIZE != regs->ip)
args.retaddr = regs->ip;
if (shstk_push(args.retaddr) == -EFAULT)
@@ -891,7 +903,7 @@ asm (
"pop %rax\n"
"pop %r11\n"
"pop %rcx\n"
- "ret\n"
+ "ret $" __stringify(REDZONE_SIZE) "\n"
"int3\n"
".balign " __stringify(PAGE_SIZE) "\n"
".popsection\n"
@@ -909,7 +921,7 @@ late_initcall(arch_uprobes_init);
enum {
EXPECT_SWBP,
- EXPECT_CALL,
+ EXPECT_OPTIMIZED,
};
struct write_opcode_ctx {
@@ -930,17 +942,18 @@ static int verify_insn(struct page *page, unsigned long vaddr, uprobe_opcode_t *
int nbytes, void *data)
{
struct write_opcode_ctx *ctx = data;
- uprobe_opcode_t old_opcode[5];
+ uprobe_opcode_t old_opcode[OPT_INSN_SIZE];
- uprobe_copy_from_page(page, ctx->base, (uprobe_opcode_t *) &old_opcode, 5);
+ uprobe_copy_from_page(page, ctx->base, old_opcode, OPT_INSN_SIZE);
switch (ctx->expect) {
case EXPECT_SWBP:
if (is_swbp_insn(&old_opcode[0]))
return 1;
break;
- case EXPECT_CALL:
- if (is_call_insn(&old_opcode[0]))
+ case EXPECT_OPTIMIZED:
+ if (is_lea_insn(&old_opcode[0]) &&
+ is_call_insn(&old_opcode[LEA_INSN_SIZE]))
return 1;
break;
}
@@ -963,7 +976,7 @@ static int verify_insn(struct page *page, unsigned long vaddr, uprobe_opcode_t *
* - SMP sync all CPUs
*/
static int int3_update(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
- unsigned long vaddr, char *insn, bool optimize)
+ unsigned long vaddr, char *insn, int size, bool optimize)
{
uprobe_opcode_t int3 = UPROBE_SWBP_INSN;
struct write_opcode_ctx ctx = {
@@ -978,7 +991,7 @@ static int int3_update(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
* so we can skip this step for optimize == true.
*/
if (!optimize) {
- ctx.expect = EXPECT_CALL;
+ ctx.expect = EXPECT_OPTIMIZED;
err = uprobe_write(auprobe, vma, vaddr, &int3, 1, verify_insn,
true /* is_register */, false /* do_update_ref_ctr */,
&ctx);
@@ -990,7 +1003,7 @@ static int int3_update(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
/* Write all but the first byte of the patched range. */
ctx.expect = EXPECT_SWBP;
- err = uprobe_write(auprobe, vma, vaddr + 1, insn + 1, 4, verify_insn,
+ err = uprobe_write(auprobe, vma, vaddr + 1, insn + 1, size - 1, verify_insn,
true /* is_register */, false /* do_update_ref_ctr */,
&ctx);
if (err)
@@ -1017,17 +1030,32 @@ static int int3_update(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
static int swbp_optimize(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
unsigned long vaddr, unsigned long tramp)
{
- u8 call[5];
+ u8 insn[OPT_INSN_SIZE], *call = &insn[LEA_INSN_SIZE];
- __text_gen_insn(call, CALL_INSN_OPCODE, (const void *) vaddr,
+ /*
+ * We have a nop10 instruction (with its first byte overwritten to int3),
+ * changing it to:
+ * lea -0x80(%rsp), %rsp
+ * call tramp
+ */
+ memcpy(insn, lea_rsp, LEA_INSN_SIZE);
+ __text_gen_insn(call, CALL_INSN_OPCODE,
+ (const void *) (vaddr + LEA_INSN_SIZE),
(const void *) tramp, CALL_INSN_SIZE);
- return int3_update(auprobe, vma, vaddr, call, true /* optimize */);
+ return int3_update(auprobe, vma, vaddr, insn, OPT_INSN_SIZE, true /* optimize */);
}
static int swbp_unoptimize(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
unsigned long vaddr)
{
- return int3_update(auprobe, vma, vaddr, auprobe->insn, false /* optimize */);
+ /*
+ * We have an optimized nop10 (lea, call), changing it to 'jmp rel8' to
+ * the end of the 10-byte slot instead of restoring the original nop10,
+ * because a thread could already be inside the lea/call sequence.
+ */
+ u8 jmp[OPT_INSN_SIZE] = { JMP8_INSN_OPCODE, OPT_JMP8_OFFSET };
+
+ return int3_update(auprobe, vma, vaddr, jmp, JMP8_INSN_SIZE, false /* optimize */);
}
static int copy_from_vaddr(struct mm_struct *mm, unsigned long vaddr, void *dst, int len)
@@ -1049,19 +1077,21 @@ static bool __is_optimized(uprobe_opcode_t *insn, unsigned long vaddr)
struct __packed __arch_relative_insn {
u8 op;
s32 raddr;
- } *call = (struct __arch_relative_insn *) insn;
+ } *call = (struct __arch_relative_insn *)(insn + LEA_INSN_SIZE);
- if (!is_call_insn(insn))
+ if (!is_lea_insn(insn))
+ return false;
+ if (!is_call_insn(insn + LEA_INSN_SIZE))
return false;
- return __in_uprobe_trampoline(vaddr + 5 + call->raddr);
+ return __in_uprobe_trampoline(vaddr + OPT_INSN_SIZE + call->raddr);
}
static int is_optimized(struct mm_struct *mm, unsigned long vaddr)
{
- uprobe_opcode_t insn[5];
+ uprobe_opcode_t insn[OPT_INSN_SIZE];
int err;
- err = copy_from_vaddr(mm, vaddr, &insn, 5);
+ err = copy_from_vaddr(mm, vaddr, &insn, OPT_INSN_SIZE);
if (err)
return err;
return __is_optimized((uprobe_opcode_t *)&insn, vaddr);
@@ -1095,14 +1125,25 @@ int set_orig_insn(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
unsigned long vaddr)
{
if (test_bit(ARCH_UPROBE_FLAG_CAN_OPTIMIZE, &auprobe->flags)) {
- int ret = is_optimized(vma->vm_mm, vaddr);
- if (ret < 0)
+ uprobe_opcode_t insn[OPT_INSN_SIZE];
+ int ret;
+
+ ret = copy_from_vaddr(vma->vm_mm, vaddr, &insn, OPT_INSN_SIZE);
+ if (ret)
return ret;
- if (ret) {
+ if (__is_optimized((uprobe_opcode_t *)&insn, vaddr)) {
ret = swbp_unoptimize(auprobe, vma, vaddr);
WARN_ON_ONCE(ret);
return ret;
}
+ /*
+ * We can have a re-attached probe on top of the jmp8 instruction,
+ * which did not get optimized. We need to restore the jmp8
+ * instruction instead of the original instruction (nop10).
+ */
+ if (is_swbp_insn(&insn[0]) && insn[1] == OPT_JMP8_OFFSET)
+ return uprobe_write_opcode(auprobe, vma, vaddr, JMP8_INSN_OPCODE,
+ false /* is_register */);
}
return uprobe_write_opcode(auprobe, vma, vaddr, *(uprobe_opcode_t *)&auprobe->insn,
false /* is_register */);
@@ -1131,7 +1172,7 @@ static int __arch_uprobe_optimize(struct arch_uprobe *auprobe, struct mm_struct
void arch_uprobe_optimize(struct arch_uprobe *auprobe, unsigned long vaddr)
{
struct mm_struct *mm = current->mm;
- uprobe_opcode_t insn[5];
+ uprobe_opcode_t insn[OPT_INSN_SIZE];
if (!should_optimize(auprobe))
return;
@@ -1142,7 +1183,7 @@ void arch_uprobe_optimize(struct arch_uprobe *auprobe, unsigned long vaddr)
* Check if some other thread already optimized the uprobe for us,
* if it's the case just go away silently.
*/
- if (copy_from_vaddr(mm, vaddr, &insn, 5))
+ if (copy_from_vaddr(mm, vaddr, &insn, OPT_INSN_SIZE))
goto unlock;
if (!is_swbp_insn((uprobe_opcode_t*) &insn))
goto unlock;
@@ -1160,14 +1201,24 @@ void arch_uprobe_optimize(struct arch_uprobe *auprobe, unsigned long vaddr)
static bool can_optimize(struct insn *insn, unsigned long vaddr)
{
- if (!insn->x86_64 || insn->length != 5)
+ if (!insn->x86_64)
return false;
- if (!insn_is_nop(insn))
+ /* We can't do cross page atomic writes yet. */
+ if (PAGE_SIZE - (vaddr & ~PAGE_MASK) < OPT_INSN_SIZE)
return false;
- /* We can't do cross page atomic writes yet. */
- return PAGE_SIZE - (vaddr & ~PAGE_MASK) >= 5;
+ /* We can optimize on top of nop10.. */
+ if (insn->length == OPT_INSN_SIZE && insn_is_nop(insn))
+ return true;
+
+ /* .. and jmp8 to the end of the slot, see swbp_unoptimize(). */
+ if (insn->length == 2 &&
+ insn->opcode.bytes[0] == JMP8_INSN_OPCODE &&
+ insn->immediate.value == OPT_JMP8_OFFSET)
+ return true;
+
+ return false;
}
#else /* 32-bit: */
/*
--
2.53.0
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH 2/7] libbpf: Change has_nop_combo to work on top of nop10
2026-05-14 13:53 [PATCH 0/7] uprobes/x86: Fix red zone issue for optimized uprobes Jiri Olsa
2026-05-14 13:53 ` [PATCH 1/7] uprobes/x86: Move optimized uprobe from nop5 to nop10 Jiri Olsa
@ 2026-05-14 13:53 ` Jiri Olsa
2026-05-14 14:55 ` bot+bpf-ci
2026-05-14 13:53 ` [PATCH 3/7] selftests/bpf: Emit nop,nop10 instructions combo for x86_64 arch Jiri Olsa
` (4 subsequent siblings)
6 siblings, 1 reply; 15+ messages in thread
From: Jiri Olsa @ 2026-05-14 13:53 UTC (permalink / raw)
To: Oleg Nesterov, Peter Zijlstra, Ingo Molnar, Masami Hiramatsu,
Andrii Nakryiko
Cc: bpf, linux-trace-kernel
We now expect the nop combo with a 10-byte nop instead of a 5-byte nop,
so fix has_nop_combo to reflect that.
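For reference, the new 11-byte combo disassembles as:

  90                               nop
  66 66 0f 1f 84 00 00 00 00 00    data16 nopw 0x0(%rax,%rax,1)   /* nop10 */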
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
tools/lib/bpf/usdt.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/tools/lib/bpf/usdt.c b/tools/lib/bpf/usdt.c
index e3710933fd52..7e62e4d5bedd 100644
--- a/tools/lib/bpf/usdt.c
+++ b/tools/lib/bpf/usdt.c
@@ -305,7 +305,7 @@ struct usdt_manager *usdt_manager_new(struct bpf_object *obj)
/*
* Detect kernel support for uprobe() syscall, its presence means we can
- * take advantage of faster nop5 uprobe handling.
+ * take advantage of faster nop10 uprobe handling.
* Added in: 56101b69c919 ("uprobes/x86: Add uprobe syscall to speed up uprobe")
*/
man->has_uprobe_syscall = kernel_supports(obj, FEAT_UPROBE_SYSCALL);
@@ -596,14 +596,14 @@ static int parse_usdt_spec(struct usdt_spec *spec, const struct usdt_note *note,
#if defined(__x86_64__)
static bool has_nop_combo(int fd, long off)
{
- unsigned char nop_combo[6] = {
- 0x90, 0x0f, 0x1f, 0x44, 0x00, 0x00 /* nop,nop5 */
+ unsigned char nop_combo[11] = {
+ 0x90, 0x66, 0x66, 0x0f, 0x1f, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00, /* nop,nop10 */
};
- unsigned char buf[6];
+ unsigned char buf[11];
- if (pread(fd, buf, 6, off) != 6)
+ if (pread(fd, buf, 11, off) != 11)
return false;
- return memcmp(buf, nop_combo, 6) == 0;
+ return memcmp(buf, nop_combo, 11) == 0;
}
#else
static bool has_nop_combo(int fd, long off)
@@ -814,8 +814,8 @@ static int collect_usdt_targets(struct usdt_manager *man, struct elf_fd *elf_fd,
memset(target, 0, sizeof(*target));
/*
- * We have uprobe syscall and usdt with nop,nop5 instructions combo,
- * so we can place the uprobe directly on nop5 (+1) and get this probe
+ * We have uprobe syscall and usdt with nop,nop10 instructions combo,
+ * so we can place the uprobe directly on nop10 (+1) and get this probe
* optimized.
*/
if (man->has_uprobe_syscall && has_nop_combo(elf_fd->fd, usdt_rel_ip)) {
--
2.53.0
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH 3/7] selftests/bpf: Emit nop,nop10 instructions combo for x86_64 arch
2026-05-14 13:53 [PATCH 0/7] uprobes/x86: Fix red zone issue for optimized uprobes Jiri Olsa
2026-05-14 13:53 ` [PATCH 1/7] uprobes/x86: Move optimized uprobe from nop5 to nop10 Jiri Olsa
2026-05-14 13:53 ` [PATCH 2/7] libbpf: Change has_nop_combo to work on top of nop10 Jiri Olsa
@ 2026-05-14 13:53 ` Jiri Olsa
2026-05-14 20:44 ` sashiko-bot
2026-05-14 13:53 ` [PATCH 4/7] selftests/bpf: Change uprobe syscall tests to use nop10 Jiri Olsa
` (3 subsequent siblings)
6 siblings, 1 reply; 15+ messages in thread
From: Jiri Olsa @ 2026-05-14 13:53 UTC (permalink / raw)
To: Oleg Nesterov, Peter Zijlstra, Ingo Molnar, Masami Hiramatsu,
Andrii Nakryiko
Cc: bpf, linux-trace-kernel
Syncing the latest usdt.h change [1].
Now that we have nop10 optimization support in the kernel, let's emit
the nop,nop10 combo for the usdt probe. We leave it up to the library
to pick the desirable nop instruction.
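Roughly, the combo gives the library two attach choices for the same
probe (see the has_nop_combo handling in patch 2):

  ip + 0:  90             plain (unoptimized) uprobe on the 1-byte nop
  ip + 1:  66 66 0f ..    uprobe on the nop10, which the kernel can optimize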
[1] TBD
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
tools/testing/selftests/bpf/usdt.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/bpf/usdt.h b/tools/testing/selftests/bpf/usdt.h
index c71e21df38b3..d359663b9c32 100644
--- a/tools/testing/selftests/bpf/usdt.h
+++ b/tools/testing/selftests/bpf/usdt.h
@@ -313,7 +313,7 @@ struct usdt_sema { volatile unsigned short active; };
#if defined(__ia64__) || defined(__s390__) || defined(__s390x__)
#define USDT_NOP nop 0
#elif defined(__x86_64__)
-#define USDT_NOP .byte 0x90, 0x0f, 0x1f, 0x44, 0x00, 0x0 /* nop, nop5 */
+#define USDT_NOP .byte 0x90, 0x66, 0x66, 0x0f, 0x1f, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00 /* nop, nop10 */
#else
#define USDT_NOP nop
#endif
--
2.53.0
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH 4/7] selftests/bpf: Change uprobe syscall tests to use nop10
2026-05-14 13:53 [PATCH 0/7] uprobes/x86: Fix red zone issue for optimized uprobes Jiri Olsa
` (2 preceding siblings ...)
2026-05-14 13:53 ` [PATCH 3/7] selftests/bpf: Emit nop,nop10 instructions combo for x86_64 arch Jiri Olsa
@ 2026-05-14 13:53 ` Jiri Olsa
2026-05-14 20:51 ` sashiko-bot
2026-05-14 13:53 ` [PATCH 5/7] selftests/bpf: Change uprobe/usdt trigger bench code " Jiri Olsa
` (2 subsequent siblings)
6 siblings, 1 reply; 15+ messages in thread
From: Jiri Olsa @ 2026-05-14 13:53 UTC (permalink / raw)
To: Oleg Nesterov, Peter Zijlstra, Ingo Molnar, Masami Hiramatsu,
Andrii Nakryiko
Cc: bpf, linux-trace-kernel
Optimized uprobes now sit on top of 10-byte nop instructions, so
reflect that in the existing tests.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
.../selftests/bpf/benchs/bench_trigger.c | 2 +-
.../selftests/bpf/prog_tests/uprobe_syscall.c | 29 ++++++++++---------
tools/testing/selftests/bpf/prog_tests/usdt.c | 25 +++++++++-------
tools/testing/selftests/bpf/usdt_2.c | 2 +-
4 files changed, 33 insertions(+), 25 deletions(-)
diff --git a/tools/testing/selftests/bpf/benchs/bench_trigger.c b/tools/testing/selftests/bpf/benchs/bench_trigger.c
index 2f22ec61667b..bcc4820c802e 100644
--- a/tools/testing/selftests/bpf/benchs/bench_trigger.c
+++ b/tools/testing/selftests/bpf/benchs/bench_trigger.c
@@ -398,7 +398,7 @@ static void *uprobe_producer_ret(void *input)
#ifdef __x86_64__
__nocf_check __weak void uprobe_target_nop5(void)
{
- asm volatile (".byte 0x0f, 0x1f, 0x44, 0x00, 0x00");
+ asm volatile (".byte 0x66, 0x66, 0x0f, 0x1f, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00");
}
static void *uprobe_producer_nop5(void *input)
diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c b/tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c
index 955a37751b52..c2e9e549c737 100644
--- a/tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c
+++ b/tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c
@@ -17,7 +17,7 @@
#include "uprobe_syscall_executed.skel.h"
#include "bpf/libbpf_internal.h"
-#define USDT_NOP .byte 0x0f, 0x1f, 0x44, 0x00, 0x00
+#define USDT_NOP .byte 0x66, 0x66, 0x0f, 0x1f, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00
#include "usdt.h"
#pragma GCC diagnostic ignored "-Wattributes"
@@ -26,7 +26,7 @@ __attribute__((aligned(16)))
__nocf_check __weak __naked unsigned long uprobe_regs_trigger(void)
{
asm volatile (
- ".byte 0x0f, 0x1f, 0x44, 0x00, 0x00\n" /* nop5 */
+ ".byte 0x66, 0x66, 0x0f, 0x1f, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00\n" /* nop10 */
"movq $0xdeadbeef, %rax\n"
"ret\n"
);
@@ -345,9 +345,9 @@ static void test_uretprobe_syscall_call(void)
__attribute__((aligned(16)))
__nocf_check __weak __naked void uprobe_test(void)
{
- asm volatile (" \n"
- ".byte 0x0f, 0x1f, 0x44, 0x00, 0x00 \n"
- "ret \n"
+ asm volatile (
+ ".byte 0x66, 0x66, 0x0f, 0x1f, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00\n" /* nop10 */
+ "ret\n"
);
}
@@ -388,14 +388,16 @@ static int find_uprobes_trampoline(void *tramp_addr)
return ret;
}
-static unsigned char nop5[5] = { 0x0f, 0x1f, 0x44, 0x00, 0x00 };
+static unsigned char jmp2B[2] = { 0xeb, 8 };
+static unsigned char nop10[10] = { 0x66, 0x66, 0x0f, 0x1f, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00 };
+static unsigned char lea_rsp[5] = { 0x48, 0x8d, 0x64, 0x24, 0x80 };
-static void *find_nop5(void *fn)
+static void *find_nop10(void *fn)
{
int i;
- for (i = 0; i < 10; i++) {
- if (!memcmp(nop5, fn + i, 5))
+ for (i = 0; i < 128; i++) {
+ if (!memcmp(nop10, fn + i, 10))
return fn + i;
}
return NULL;
@@ -420,7 +422,8 @@ static void *check_attach(struct uprobe_syscall_executed *skel, trigger_t trigge
ASSERT_EQ(skel->bss->executed, executed, "executed");
/* .. and check the trampoline is as expected. */
- call = (struct __arch_relative_insn *) addr;
+ ASSERT_OK(memcmp(addr, lea_rsp, 5), "lea_rsp");
+ call = (struct __arch_relative_insn *)(addr + 5);
tramp = (void *) (call + 1) + call->raddr;
ASSERT_EQ(call->op, 0xe8, "call");
ASSERT_OK(find_uprobes_trampoline(tramp), "uprobes_trampoline");
@@ -432,7 +435,7 @@ static void check_detach(void *addr, void *tramp)
{
/* [uprobes_trampoline] stays after detach */
ASSERT_OK(find_uprobes_trampoline(tramp), "uprobes_trampoline");
- ASSERT_OK(memcmp(addr, nop5, 5), "nop5");
+ ASSERT_OK(memcmp(addr, jmp2B, 2), "jmp2B");
}
static void check(struct uprobe_syscall_executed *skel, struct bpf_link *link,
@@ -568,8 +571,8 @@ static void test_uprobe_usdt(void)
void *addr;
errno = 0;
- addr = find_nop5(usdt_test);
- if (!ASSERT_OK_PTR(addr, "find_nop5"))
+ addr = find_nop10(usdt_test);
+ if (!ASSERT_OK_PTR(addr, "find_nop10"))
return;
skel = uprobe_syscall_executed__open_and_load();
diff --git a/tools/testing/selftests/bpf/prog_tests/usdt.c b/tools/testing/selftests/bpf/prog_tests/usdt.c
index 69759b27794d..be34c4087ff5 100644
--- a/tools/testing/selftests/bpf/prog_tests/usdt.c
+++ b/tools/testing/selftests/bpf/prog_tests/usdt.c
@@ -252,7 +252,7 @@ extern void usdt_1(void);
extern void usdt_2(void);
static unsigned char nop1[1] = { 0x90 };
-static unsigned char nop1_nop5_combo[6] = { 0x90, 0x0f, 0x1f, 0x44, 0x00, 0x00 };
+static unsigned char nop1_nop10_combo[11] = { 0x90, 0x66, 0x66, 0x0f, 0x1f, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00 };
static void *find_instr(void *fn, unsigned char *instr, size_t cnt)
{
@@ -271,17 +271,17 @@ static void subtest_optimized_attach(void)
__u8 *addr_1, *addr_2;
/* usdt_1 USDT probe has single nop instruction */
- addr_1 = find_instr(usdt_1, nop1_nop5_combo, 6);
- if (!ASSERT_NULL(addr_1, "usdt_1_find_nop1_nop5_combo"))
+ addr_1 = find_instr(usdt_1, nop1_nop10_combo, 11);
+ if (!ASSERT_NULL(addr_1, "usdt_1_find_nop1_nop10_combo"))
return;
addr_1 = find_instr(usdt_1, nop1, 1);
if (!ASSERT_OK_PTR(addr_1, "usdt_1_find_nop1"))
return;
- /* usdt_2 USDT probe has nop,nop5 instructions combo */
- addr_2 = find_instr(usdt_2, nop1_nop5_combo, 6);
- if (!ASSERT_OK_PTR(addr_2, "usdt_2_find_nop1_nop5_combo"))
+ /* usdt_2 USDT probe has nop,nop10 instructions combo */
+ addr_2 = find_instr(usdt_2, nop1_nop10_combo, 11);
+ if (!ASSERT_OK_PTR(addr_2, "usdt_2_find_nop1_nop10_combo"))
return;
skel = test_usdt__open_and_load();
@@ -309,12 +309,12 @@ static void subtest_optimized_attach(void)
bpf_link__destroy(skel->links.usdt_executed);
- /* we expect the nop5 ip */
+ /* we expect the nop10 ip */
skel->bss->expected_ip = (unsigned long) addr_2 + 1;
/*
* Attach program on top of usdt_2 which is probe defined on top
- * of nop1,nop5 combo, so the probe gets optimized on top of nop5.
+ * of nop1,nop10 combo, so the probe gets optimized on top of nop10.
*/
skel->links.usdt_executed = bpf_program__attach_usdt(skel->progs.usdt_executed,
0 /*self*/, "/proc/self/exe",
@@ -328,8 +328,13 @@ static void subtest_optimized_attach(void)
/* nop stays on addr_2 address */
ASSERT_EQ(*addr_2, 0x90, "nop");
- /* call is on addr_2 + 1 address */
- ASSERT_EQ(*(addr_2 + 1), 0xe8, "call");
+ /*
+ * lea -0x80(%rsp), %rsp
+ * call ...
+ */
+ static unsigned char expected[] = { 0x48, 0x8d, 0x64, 0x24, 0x80, 0xe8 };
+
+ ASSERT_MEMEQ(addr_2 + 1, expected, sizeof(expected), "lea_and_call");
ASSERT_EQ(skel->bss->executed, 4, "executed");
cleanup:
diff --git a/tools/testing/selftests/bpf/usdt_2.c b/tools/testing/selftests/bpf/usdt_2.c
index 789883aaca4c..b359b389f6c0 100644
--- a/tools/testing/selftests/bpf/usdt_2.c
+++ b/tools/testing/selftests/bpf/usdt_2.c
@@ -3,7 +3,7 @@
#if defined(__x86_64__)
/*
- * Include usdt.h with default nop,nop5 instructions combo.
+ * Include usdt.h with default nop,nop10 instructions combo.
*/
#include "usdt.h"
--
2.53.0
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH 5/7] selftests/bpf: Change uprobe/usdt trigger bench code to use nop10
2026-05-14 13:53 [PATCH 0/7] uprobes/x86: Fix red zone issue for optimized uprobes Jiri Olsa
` (3 preceding siblings ...)
2026-05-14 13:53 ` [PATCH 4/7] selftests/bpf: Change uprobe syscall tests to use nop10 Jiri Olsa
@ 2026-05-14 13:53 ` Jiri Olsa
2026-05-14 13:53 ` [PATCH 6/7] selftests/bpf: Add reattach tests for uprobe syscall Jiri Olsa
2026-05-14 13:53 ` [PATCH 7/7] selftests/bpf: Add tests for uprobe nop10 red zone clobbering Jiri Olsa
6 siblings, 0 replies; 15+ messages in thread
From: Jiri Olsa @ 2026-05-14 13:53 UTC (permalink / raw)
To: Oleg Nesterov, Peter Zijlstra, Ingo Molnar, Masami Hiramatsu,
Andrii Nakryiko
Cc: bpf, linux-trace-kernel
Changing the uprobe/usdt trigger bench code to use nop10 instead
of nop5. Also changing run_bench_uprobes.sh to use the nop10 triggers.
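For example, the renamed benchmarks can then be run via the updated
script (hypothetical run, throughput numbers from patch 1 shown for
illustration):

  $ ./benchs/run_bench_uprobes.sh
  ...
  uprobe-nop10   : 7.170 ± 0.020M/s
  usdt-nop10     : 7.256 ± 0.023M/s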
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
tools/testing/selftests/bpf/bench.c | 20 +++++------
.../selftests/bpf/benchs/bench_trigger.c | 36 +++++++++----------
.../selftests/bpf/benchs/run_bench_uprobes.sh | 2 +-
3 files changed, 29 insertions(+), 29 deletions(-)
diff --git a/tools/testing/selftests/bpf/bench.c b/tools/testing/selftests/bpf/bench.c
index 6155ce455c27..1252a1af2e84 100644
--- a/tools/testing/selftests/bpf/bench.c
+++ b/tools/testing/selftests/bpf/bench.c
@@ -539,12 +539,12 @@ extern const struct bench bench_trig_uretprobe_multi_push;
extern const struct bench bench_trig_uprobe_multi_ret;
extern const struct bench bench_trig_uretprobe_multi_ret;
#ifdef __x86_64__
-extern const struct bench bench_trig_uprobe_nop5;
-extern const struct bench bench_trig_uretprobe_nop5;
-extern const struct bench bench_trig_uprobe_multi_nop5;
-extern const struct bench bench_trig_uretprobe_multi_nop5;
+extern const struct bench bench_trig_uprobe_nop10;
+extern const struct bench bench_trig_uretprobe_nop10;
+extern const struct bench bench_trig_uprobe_multi_nop10;
+extern const struct bench bench_trig_uretprobe_multi_nop10;
extern const struct bench bench_trig_usdt_nop;
-extern const struct bench bench_trig_usdt_nop5;
+extern const struct bench bench_trig_usdt_nop10;
#endif
extern const struct bench bench_rb_libbpf;
@@ -619,12 +619,12 @@ static const struct bench *benchs[] = {
&bench_trig_uprobe_multi_ret,
&bench_trig_uretprobe_multi_ret,
#ifdef __x86_64__
- &bench_trig_uprobe_nop5,
- &bench_trig_uretprobe_nop5,
- &bench_trig_uprobe_multi_nop5,
- &bench_trig_uretprobe_multi_nop5,
+ &bench_trig_uprobe_nop10,
+ &bench_trig_uretprobe_nop10,
+ &bench_trig_uprobe_multi_nop10,
+ &bench_trig_uretprobe_multi_nop10,
&bench_trig_usdt_nop,
- &bench_trig_usdt_nop5,
+ &bench_trig_usdt_nop10,
#endif
/* ringbuf/perfbuf benchmarks */
&bench_rb_libbpf,
diff --git a/tools/testing/selftests/bpf/benchs/bench_trigger.c b/tools/testing/selftests/bpf/benchs/bench_trigger.c
index bcc4820c802e..3998ea8ff9aa 100644
--- a/tools/testing/selftests/bpf/benchs/bench_trigger.c
+++ b/tools/testing/selftests/bpf/benchs/bench_trigger.c
@@ -396,15 +396,15 @@ static void *uprobe_producer_ret(void *input)
}
#ifdef __x86_64__
-__nocf_check __weak void uprobe_target_nop5(void)
+__nocf_check __weak void uprobe_target_nop10(void)
{
asm volatile (".byte 0x66, 0x66, 0x0f, 0x1f, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00");
}
-static void *uprobe_producer_nop5(void *input)
+static void *uprobe_producer_nop10(void *input)
{
while (true)
- uprobe_target_nop5();
+ uprobe_target_nop10();
return NULL;
}
@@ -418,7 +418,7 @@ static void *uprobe_producer_usdt_nop(void *input)
return NULL;
}
-static void *uprobe_producer_usdt_nop5(void *input)
+static void *uprobe_producer_usdt_nop10(void *input)
{
while (true)
usdt_2();
@@ -542,24 +542,24 @@ static void uretprobe_multi_ret_setup(void)
}
#ifdef __x86_64__
-static void uprobe_nop5_setup(void)
+static void uprobe_nop10_setup(void)
{
- usetup(false, false /* !use_multi */, &uprobe_target_nop5);
+ usetup(false, false /* !use_multi */, &uprobe_target_nop10);
}
-static void uretprobe_nop5_setup(void)
+static void uretprobe_nop10_setup(void)
{
- usetup(true, false /* !use_multi */, &uprobe_target_nop5);
+ usetup(true, false /* !use_multi */, &uprobe_target_nop10);
}
-static void uprobe_multi_nop5_setup(void)
+static void uprobe_multi_nop10_setup(void)
{
- usetup(false, true /* use_multi */, &uprobe_target_nop5);
+ usetup(false, true /* use_multi */, &uprobe_target_nop10);
}
-static void uretprobe_multi_nop5_setup(void)
+static void uretprobe_multi_nop10_setup(void)
{
- usetup(true, true /* use_multi */, &uprobe_target_nop5);
+ usetup(true, true /* use_multi */, &uprobe_target_nop10);
}
static void usdt_setup(const char *name)
@@ -598,7 +598,7 @@ static void usdt_nop_setup(void)
usdt_setup("usdt_1");
}
-static void usdt_nop5_setup(void)
+static void usdt_nop10_setup(void)
{
usdt_setup("usdt_2");
}
@@ -665,10 +665,10 @@ BENCH_TRIG_USERMODE(uretprobe_multi_nop, nop, "uretprobe-multi-nop");
BENCH_TRIG_USERMODE(uretprobe_multi_push, push, "uretprobe-multi-push");
BENCH_TRIG_USERMODE(uretprobe_multi_ret, ret, "uretprobe-multi-ret");
#ifdef __x86_64__
-BENCH_TRIG_USERMODE(uprobe_nop5, nop5, "uprobe-nop5");
-BENCH_TRIG_USERMODE(uretprobe_nop5, nop5, "uretprobe-nop5");
-BENCH_TRIG_USERMODE(uprobe_multi_nop5, nop5, "uprobe-multi-nop5");
-BENCH_TRIG_USERMODE(uretprobe_multi_nop5, nop5, "uretprobe-multi-nop5");
+BENCH_TRIG_USERMODE(uprobe_nop10, nop10, "uprobe-nop10");
+BENCH_TRIG_USERMODE(uretprobe_nop10, nop10, "uretprobe-nop10");
+BENCH_TRIG_USERMODE(uprobe_multi_nop10, nop10, "uprobe-multi-nop10");
+BENCH_TRIG_USERMODE(uretprobe_multi_nop10, nop10, "uretprobe-multi-nop10");
BENCH_TRIG_USERMODE(usdt_nop, usdt_nop, "usdt-nop");
-BENCH_TRIG_USERMODE(usdt_nop5, usdt_nop5, "usdt-nop5");
+BENCH_TRIG_USERMODE(usdt_nop10, usdt_nop10, "usdt-nop10");
#endif
diff --git a/tools/testing/selftests/bpf/benchs/run_bench_uprobes.sh b/tools/testing/selftests/bpf/benchs/run_bench_uprobes.sh
index 9ec59423b949..e490b337e960 100755
--- a/tools/testing/selftests/bpf/benchs/run_bench_uprobes.sh
+++ b/tools/testing/selftests/bpf/benchs/run_bench_uprobes.sh
@@ -2,7 +2,7 @@
set -eufo pipefail
-for i in usermode-count syscall-count {uprobe,uretprobe}-{nop,push,ret,nop5} usdt-nop usdt-nop5
+for i in usermode-count syscall-count {uprobe,uretprobe}-{nop,push,ret,nop10} usdt-nop usdt-nop10
do
summary=$(sudo ./bench -w2 -d5 -a trig-$i | tail -n1 | cut -d'(' -f1 | cut -d' ' -f3-)
printf "%-15s: %s\n" $i "$summary"
--
2.53.0
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH 6/7] selftests/bpf: Add reattach tests for uprobe syscall
2026-05-14 13:53 [PATCH 0/7] uprobes/x86: Fix red zone issue for optimized uprobes Jiri Olsa
` (4 preceding siblings ...)
2026-05-14 13:53 ` [PATCH 5/7] selftests/bpf: Change uprobe/usdt trigger bench code " Jiri Olsa
@ 2026-05-14 13:53 ` Jiri Olsa
2026-05-14 13:53 ` [PATCH 7/7] selftests/bpf: Add tests for uprobe nop10 red zone clobbering Jiri Olsa
6 siblings, 0 replies; 15+ messages in thread
From: Jiri Olsa @ 2026-05-14 13:53 UTC (permalink / raw)
To: Oleg Nesterov, Peter Zijlstra, Ingo Molnar, Masami Hiramatsu,
Andrii Nakryiko
Cc: bpf, linux-trace-kernel
Adding reattach tests for the uprobe syscall tests to make sure we
can re-attach and optimize the same uprobe multiple times.
The reason is that an optimized uprobe does not restore the original
nop10 after detach, but instead leaves a 'jmp 8' instruction behind.
Making sure we can still install and optimize an uprobe on top of
that 'jmp 8' instruction.
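The instruction state at the probed address then cycles roughly like
this (byte values taken from patch 1):

  nop10 (original) :  66 66 0f 1f 84 00 00 00 00 00
  attached (swbp)  :  cc 66 0f 1f 84 00 00 00 00 00   /* int3 over first byte */
  optimized        :  48 8d 64 24 80 e8 <rel32>       /* lea + call */
  detached         :  eb 08 ..                        /* jmp8 to end of slot */
  re-attached      :  cc 08 ..                        /* int3 over jmp8 */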
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
.../selftests/bpf/prog_tests/uprobe_syscall.c | 115 ++++++++++++++++--
1 file changed, 105 insertions(+), 10 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c b/tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c
index c2e9e549c737..82b3c0ce9253 100644
--- a/tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c
+++ b/tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c
@@ -431,21 +431,27 @@ static void *check_attach(struct uprobe_syscall_executed *skel, trigger_t trigge
return tramp;
}
-static void check_detach(void *addr, void *tramp)
+static bool check_detach(void *addr, void *tramp)
{
+ bool ok = true;
+
/* [uprobes_trampoline] stays after detach */
- ASSERT_OK(find_uprobes_trampoline(tramp), "uprobes_trampoline");
- ASSERT_OK(memcmp(addr, jmp2B, 2), "jmp2B");
+ if (!ASSERT_OK(find_uprobes_trampoline(tramp), "uprobes_trampoline"))
+ ok = false;
+ if (!ASSERT_OK(memcmp(addr, jmp2B, 2), "jmp2B"))
+ ok = false;
+ return ok;
}
-static void check(struct uprobe_syscall_executed *skel, struct bpf_link *link,
- trigger_t trigger, void *addr, int executed)
+static void *check(struct uprobe_syscall_executed *skel, struct bpf_link *link,
+ trigger_t trigger, void *addr, int executed)
{
void *tramp;
tramp = check_attach(skel, trigger, addr, executed);
bpf_link__destroy(link);
check_detach(addr, tramp);
+ return tramp;
}
static void test_uprobe_legacy(void)
@@ -456,6 +462,7 @@ static void test_uprobe_legacy(void)
);
struct bpf_link *link;
unsigned long offset;
+ void *tramp;
offset = get_uprobe_offset(&uprobe_test);
if (!ASSERT_GE(offset, 0, "get_uprobe_offset"))
@@ -473,7 +480,28 @@ static void test_uprobe_legacy(void)
if (!ASSERT_OK_PTR(link, "bpf_program__attach_uprobe_opts"))
goto cleanup;
- check(skel, link, uprobe_test, uprobe_test, 2);
+ tramp = check(skel, link, uprobe_test, uprobe_test, 2);
+
+ /* reattach and detach without triggering optimization */
+ link = bpf_program__attach_uprobe_opts(skel->progs.test_uprobe,
+ 0, "/proc/self/exe", offset, NULL);
+ if (!ASSERT_OK_PTR(link, "bpf_program__attach_uprobe_opts"))
+ goto cleanup;
+
+ bpf_link__destroy(link);
+ if (!check_detach(uprobe_test, tramp))
+ goto cleanup;
+
+ uprobe_test();
+ ASSERT_EQ(skel->bss->executed, 2, "executed_no_probe");
+
+ /* reattach with triggering optimization */
+ link = bpf_program__attach_uprobe_opts(skel->progs.test_uprobe,
+ 0, "/proc/self/exe", offset, NULL);
+ if (!ASSERT_OK_PTR(link, "bpf_program__attach_uprobe_opts"))
+ goto cleanup;
+
+ check(skel, link, uprobe_test, uprobe_test, 4);
/* uretprobe */
skel->bss->executed = 0;
@@ -495,6 +523,7 @@ static void test_uprobe_multi(void)
LIBBPF_OPTS(bpf_uprobe_multi_opts, opts);
struct bpf_link *link;
unsigned long offset;
+ void *tramp;
offset = get_uprobe_offset(&uprobe_test);
if (!ASSERT_GE(offset, 0, "get_uprobe_offset"))
@@ -515,7 +544,28 @@ static void test_uprobe_multi(void)
if (!ASSERT_OK_PTR(link, "bpf_program__attach_uprobe_multi"))
goto cleanup;
- check(skel, link, uprobe_test, uprobe_test, 2);
+ tramp = check(skel, link, uprobe_test, uprobe_test, 2);
+
+ /* reattach and detach without triggering optimization */
+ link = bpf_program__attach_uprobe_multi(skel->progs.test_uprobe_multi,
+ 0, "/proc/self/exe", NULL, &opts);
+ if (!ASSERT_OK_PTR(link, "bpf_program__attach_uprobe_multi"))
+ goto cleanup;
+
+ bpf_link__destroy(link);
+ if (!check_detach(uprobe_test, tramp))
+ goto cleanup;
+
+ uprobe_test();
+ ASSERT_EQ(skel->bss->executed, 2, "executed_no_probe");
+
+ /* reattach with triggering optimization */
+ link = bpf_program__attach_uprobe_multi(skel->progs.test_uprobe_multi,
+ 0, "/proc/self/exe", NULL, &opts);
+ if (!ASSERT_OK_PTR(link, "bpf_program__attach_uprobe_multi"))
+ goto cleanup;
+
+ check(skel, link, uprobe_test, uprobe_test, 4);
/* uretprobe.multi */
skel->bss->executed = 0;
@@ -539,6 +589,7 @@ static void test_uprobe_session(void)
);
struct bpf_link *link;
unsigned long offset;
+ void *tramp;
offset = get_uprobe_offset(&uprobe_test);
if (!ASSERT_GE(offset, 0, "get_uprobe_offset"))
@@ -558,7 +609,28 @@ static void test_uprobe_session(void)
if (!ASSERT_OK_PTR(link, "bpf_program__attach_uprobe_multi"))
goto cleanup;
- check(skel, link, uprobe_test, uprobe_test, 4);
+ tramp = check(skel, link, uprobe_test, uprobe_test, 4);
+
+ /* reattach and detach without triggering optimization */
+ link = bpf_program__attach_uprobe_multi(skel->progs.test_uprobe_session,
+ 0, "/proc/self/exe", NULL, &opts);
+ if (!ASSERT_OK_PTR(link, "bpf_program__attach_uprobe_multi"))
+ goto cleanup;
+
+ bpf_link__destroy(link);
+ if (!check_detach(uprobe_test, tramp))
+ goto cleanup;
+
+ uprobe_test();
+ ASSERT_EQ(skel->bss->executed, 4, "executed_no_probe");
+
+ /* reattach with triggering optimization */
+ link = bpf_program__attach_uprobe_multi(skel->progs.test_uprobe_session,
+ 0, "/proc/self/exe", NULL, &opts);
+ if (!ASSERT_OK_PTR(link, "bpf_program__attach_uprobe_multi"))
+ goto cleanup;
+
+ check(skel, link, uprobe_test, uprobe_test, 8);
cleanup:
uprobe_syscall_executed__destroy(skel);
@@ -568,7 +640,7 @@ static void test_uprobe_usdt(void)
{
struct uprobe_syscall_executed *skel;
struct bpf_link *link;
- void *addr;
+ void *addr, *tramp;
errno = 0;
addr = find_nop10(usdt_test);
@@ -587,7 +659,30 @@ static void test_uprobe_usdt(void)
if (!ASSERT_OK_PTR(link, "bpf_program__attach_usdt"))
goto cleanup;
- check(skel, link, usdt_test, addr, 2);
+ tramp = check(skel, link, usdt_test, addr, 2);
+
+ /* reattach and detach without triggering optimization */
+ link = bpf_program__attach_usdt(skel->progs.test_usdt,
+ -1 /* all PIDs */, "/proc/self/exe",
+ "optimized_uprobe", "usdt", NULL);
+ if (!ASSERT_OK_PTR(link, "bpf_program__attach_usdt"))
+ goto cleanup;
+
+ bpf_link__destroy(link);
+ if (!check_detach(addr, tramp))
+ goto cleanup;
+
+ usdt_test();
+ ASSERT_EQ(skel->bss->executed, 2, "executed_no_probe");
+
+ /* reattach with triggering optimization */
+ link = bpf_program__attach_usdt(skel->progs.test_usdt,
+ -1 /* all PIDs */, "/proc/self/exe",
+ "optimized_uprobe", "usdt", NULL);
+ if (!ASSERT_OK_PTR(link, "bpf_program__attach_usdt"))
+ goto cleanup;
+
+ check(skel, link, usdt_test, addr, 4);
cleanup:
uprobe_syscall_executed__destroy(skel);
--
2.53.0
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH 7/7] selftests/bpf: Add tests for uprobe nop10 red zone clobbering
2026-05-14 13:53 [PATCH 0/7] uprobes/x86: Fix red zone issue for optimized uprobes Jiri Olsa
` (5 preceding siblings ...)
2026-05-14 13:53 ` [PATCH 6/7] selftests/bpf: Add reattach tests for uprobe syscall Jiri Olsa
@ 2026-05-14 13:53 ` Jiri Olsa
2026-05-14 14:55 ` bot+bpf-ci
2026-05-14 21:22 ` sashiko-bot
6 siblings, 2 replies; 15+ messages in thread
From: Jiri Olsa @ 2026-05-14 13:53 UTC (permalink / raw)
To: Oleg Nesterov, Peter Zijlstra, Ingo Molnar, Masami Hiramatsu,
Andrii Nakryiko
Cc: bpf, linux-trace-kernel
From: Andrii Nakryiko <andrii@kernel.org>
The uprobe nop5 optimization used to replace a 5-byte NOP with a 5-byte
CALL to a trampoline. The CALL pushes a return address onto the stack at
[rsp-8], clobbering whatever was stored there.
On x86-64, the red zone is the 128 bytes below rsp that user code may use
for temporary storage without adjusting rsp. Compilers can place USDT
argument operands there, generating specs like "8@-8(%rbp)" when rbp ==
rsp. With the CALL-based optimization, the return address overwrites that
argument before the BPF-side USDT argument fetch runs.
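A minimal sketch of the failure mode (hypothetical function and probe
names; assuming the compiler keeps 'v' spilled in the red zone of a
leaf function):

  void leaf(void)
  {
          long v = get_value();      /* assumption: lives at -8(%rbp), rbp == rsp */
          USDT(provider, probe, v);  /* spec then reads "8@-8(%rbp)" */
  }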
Add two tests for this case. The uprobe_syscall subtest stores known values
at -8(%rsp), -16(%rsp), and -24(%rsp), executes an optimized nop10 uprobe,
and verifies the red-zone data is still intact. The USDT subtest triggers a
probe in a function where the compiler places three USDT operands in the
red zone and verifies that all 10 optimized invocations deliver the expected
argument values to BPF.
On an unfixed kernel, the first hit goes through the INT3 path and later
hits use the optimized CALL path, so the red-zone checks fail after
optimization.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
[ updates to use nop10 ]
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
.../selftests/bpf/prog_tests/uprobe_syscall.c | 75 +++++++++++++++++++
tools/testing/selftests/bpf/prog_tests/usdt.c | 49 ++++++++++++
tools/testing/selftests/bpf/progs/test_usdt.c | 25 +++++++
tools/testing/selftests/bpf/usdt_2.c | 13 ++++
4 files changed, 162 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c b/tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c
index 82b3c0ce9253..d553485e7db5 100644
--- a/tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c
+++ b/tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c
@@ -357,6 +357,48 @@ __nocf_check __weak void usdt_test(void)
USDT(optimized_uprobe, usdt);
}
+/*
+ * Assembly-level red zone clobbering test. Stores known values in the
+ * red zone (below RSP), executes a nop10 (uprobe site), and checks that
+ * the values survived. Returns 0 if intact, 1 if clobbered.
+ *
+ * Because the nop5 optimization used CALL (which pushes a return address
+ * to [rsp-8]), the value at -8(%rsp) was overwritten. The nop10
+ * optimization should escape that by moving the stack pointer below the
+ * red zone before doing the CALL.
+ */
+__attribute__((aligned(16)))
+__nocf_check __weak __naked unsigned long uprobe_red_zone_test(void)
+{
+ asm volatile (
+ "movabs $0x1111111111111111, %%rax\n"
+ "movq %%rax, -8(%%rsp)\n"
+ "movabs $0x2222222222222222, %%rax\n"
+ "movq %%rax, -16(%%rsp)\n"
+ "movabs $0x3333333333333333, %%rax\n"
+ "movq %%rax, -24(%%rsp)\n"
+
+ ".byte 0x66, 0x66, 0x0f, 0x1f, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00\n" /* nop10: uprobe site */
+
+ "movabs $0x1111111111111111, %%rax\n"
+ "cmpq %%rax, -8(%%rsp)\n"
+ "jne 1f\n"
+ "movabs $0x2222222222222222, %%rax\n"
+ "cmpq %%rax, -16(%%rsp)\n"
+ "jne 1f\n"
+ "movabs $0x3333333333333333, %%rax\n"
+ "cmpq %%rax, -24(%%rsp)\n"
+ "jne 1f\n"
+
+ "xorl %%eax, %%eax\n"
+ "retq\n"
+ "1:\n"
+ "movl $1, %%eax\n"
+ "retq\n"
+ ::: "rax", "memory"
+ );
+}
+
static int find_uprobes_trampoline(void *tramp_addr)
{
void *start, *end;
@@ -855,6 +897,37 @@ static void test_uprobe_race(void)
#define __NR_uprobe 336
#endif
+static void test_uprobe_red_zone(void)
+{
+ struct uprobe_syscall_executed *skel;
+ struct bpf_link *link;
+ void *nop10_addr;
+ size_t offset;
+ int i;
+
+ nop10_addr = find_nop10(uprobe_red_zone_test);
+ if (!ASSERT_NEQ(nop10_addr, NULL, "find_nop10"))
+ return;
+
+ skel = uprobe_syscall_executed__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "open_and_load"))
+ return;
+
+ offset = get_uprobe_offset(nop10_addr);
+ link = bpf_program__attach_uprobe_opts(skel->progs.test_uprobe,
+ 0, "/proc/self/exe", offset, NULL);
+ if (!ASSERT_OK_PTR(link, "attach_uprobe"))
+ goto cleanup;
+
+ for (i = 0; i < 10; i++)
+ ASSERT_EQ(uprobe_red_zone_test(), 0, "red_zone_intact");
+
+ bpf_link__destroy(link);
+
+cleanup:
+ uprobe_syscall_executed__destroy(skel);
+}
+
static void test_uprobe_error(void)
{
long err = syscall(__NR_uprobe);
@@ -881,6 +954,8 @@ static void __test_uprobe_syscall(void)
test_uprobe_usdt();
if (test__start_subtest("uprobe_race"))
test_uprobe_race();
+ if (test__start_subtest("uprobe_red_zone"))
+ test_uprobe_red_zone();
if (test__start_subtest("uprobe_error"))
test_uprobe_error();
if (test__start_subtest("uprobe_regs_equal"))
diff --git a/tools/testing/selftests/bpf/prog_tests/usdt.c b/tools/testing/selftests/bpf/prog_tests/usdt.c
index be34c4087ff5..606601ccdc42 100644
--- a/tools/testing/selftests/bpf/prog_tests/usdt.c
+++ b/tools/testing/selftests/bpf/prog_tests/usdt.c
@@ -250,6 +250,7 @@ static void subtest_basic_usdt(bool optimized)
#ifdef __x86_64__
extern void usdt_1(void);
extern void usdt_2(void);
+extern void usdt_red_zone_trigger(void);
static unsigned char nop1[1] = { 0x90 };
static unsigned char nop1_nop10_combo[11] = { 0x90, 0x66, 0x66, 0x0f, 0x1f, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00 };
@@ -340,6 +341,52 @@ static void subtest_optimized_attach(void)
cleanup:
test_usdt__destroy(skel);
}
+
+/*
+ * Test that USDT arguments survive nop10 optimization in a function where
+ * the compiler places operands in the red zone.
+ *
+ * Signal handlers are prone to having the compiler place USDT argument
+ * operands in the red zone (below rsp).
+ *
+ * Because the nop5 optimization used CALL (which pushes a return address
+ * to [rsp-8]), the value at -8(%rsp) was overwritten. The nop10
+ * optimization should escape that by moving the stack pointer below the
+ * red zone before doing the CALL.
+ */
+static void subtest_optimized_red_zone(void)
+{
+ struct test_usdt *skel;
+ int i;
+
+ skel = test_usdt__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "open_and_load"))
+ return;
+
+ skel->bss->expected_arg[0] = 0xDEADBEEF;
+ skel->bss->expected_arg[1] = 0xCAFEBABE;
+ skel->bss->expected_arg[2] = 0xFEEDFACE;
+ skel->bss->expected_pid = getpid();
+
+ skel->links.usdt_check_arg = bpf_program__attach_usdt(
+ skel->progs.usdt_check_arg, 0, "/proc/self/exe",
+ "optimized_attach", "usdt_red_zone", NULL);
+ if (!ASSERT_OK_PTR(skel->links.usdt_check_arg, "attach_usdt_red_zone"))
+ goto cleanup;
+
+ for (i = 0; i < 10; i++)
+ usdt_red_zone_trigger();
+
+ ASSERT_EQ(skel->bss->arg_total, 10, "arg_total");
+ ASSERT_EQ(skel->bss->arg_bad, 0, "arg_bad");
+ ASSERT_EQ(skel->bss->arg_last[0], 0xDEADBEEF, "arg_last_1");
+ ASSERT_EQ(skel->bss->arg_last[1], 0xCAFEBABE, "arg_last_2");
+ ASSERT_EQ(skel->bss->arg_last[2], 0xFEEDFACE, "arg_last_3");
+
+cleanup:
+ test_usdt__destroy(skel);
+}
+
#endif
unsigned short test_usdt_100_semaphore SEC(".probes");
@@ -613,6 +660,8 @@ void test_usdt(void)
subtest_basic_usdt(true);
if (test__start_subtest("optimized_attach"))
subtest_optimized_attach();
+ if (test__start_subtest("optimized_red_zone"))
+ subtest_optimized_red_zone();
#endif
if (test__start_subtest("multispec"))
subtest_multispec_usdt();
diff --git a/tools/testing/selftests/bpf/progs/test_usdt.c b/tools/testing/selftests/bpf/progs/test_usdt.c
index f00cb52874e0..0ee78fb050a1 100644
--- a/tools/testing/selftests/bpf/progs/test_usdt.c
+++ b/tools/testing/selftests/bpf/progs/test_usdt.c
@@ -149,5 +149,30 @@ int usdt_executed(struct pt_regs *ctx)
executed++;
return 0;
}
+
+int arg_total;
+int arg_bad;
+long arg_last[3];
+long expected_arg[3];
+int expected_pid;
+
+SEC("usdt")
+int BPF_USDT(usdt_check_arg, long arg1, long arg2, long arg3)
+{
+ if (expected_pid != (bpf_get_current_pid_tgid() >> 32))
+ return 0;
+
+ __sync_fetch_and_add(&arg_total, 1);
+ arg_last[0] = arg1;
+ arg_last[1] = arg2;
+ arg_last[2] = arg3;
+
+ if (arg1 != expected_arg[0] ||
+ arg2 != expected_arg[1] ||
+ arg3 != expected_arg[2])
+ __sync_fetch_and_add(&arg_bad, 1);
+
+ return 0;
+}
#endif
char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/usdt_2.c b/tools/testing/selftests/bpf/usdt_2.c
index b359b389f6c0..5e38f8605b02 100644
--- a/tools/testing/selftests/bpf/usdt_2.c
+++ b/tools/testing/selftests/bpf/usdt_2.c
@@ -13,4 +13,17 @@ void usdt_2(void)
USDT(optimized_attach, usdt_2);
}
+static volatile unsigned long usdt_red_zone_arg1 = 0xDEADBEEF;
+static volatile unsigned long usdt_red_zone_arg2 = 0xCAFEBABE;
+static volatile unsigned long usdt_red_zone_arg3 = 0xFEEDFACE;
+
+void __attribute__((noinline)) usdt_red_zone_trigger(void)
+{
+ unsigned long a1 = usdt_red_zone_arg1;
+ unsigned long a2 = usdt_red_zone_arg2;
+ unsigned long a3 = usdt_red_zone_arg3;
+
+ USDT(optimized_attach, usdt_red_zone, a1, a2, a3);
+}
+
#endif
--
2.53.0
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [PATCH 2/7] libbpf: Change has_nop_combo to work on top of nop10
2026-05-14 13:53 ` [PATCH 2/7] libbpf: Change has_nop_combo to work on top of nop10 Jiri Olsa
@ 2026-05-14 14:55 ` bot+bpf-ci
0 siblings, 0 replies; 15+ messages in thread
From: bot+bpf-ci @ 2026-05-14 14:55 UTC (permalink / raw)
To: jolsa, oleg, peterz, mingo, mhiramat, andrii
Cc: bpf, linux-trace-kernel, ast, andrii, daniel, martin.lau, eddyz87,
yonghong.song, clm, ihor.solodrai
[-- Attachment #1: Type: text/plain, Size: 630 bytes --]
> libbpf: Change has_nop_combo to work on top of nop10
>
> We now expect nop combo with 10 bytes nop instead of 5 bytes nop,
> fixing has_nop_combo to reflect that.
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
This looks like a bug fix for kernel/libbpf integration. Should this
include:
Fixes: 9734c3ac2f23 ("uprobes/x86: Move optimized uprobe from nop5 to nop10")
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/25864669093
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 7/7] selftests/bpf: Add tests for uprobe nop10 red zone clobbering
2026-05-14 13:53 ` [PATCH 7/7] selftests/bpf: Add tests for uprobe nop10 red zone clobbering Jiri Olsa
@ 2026-05-14 14:55 ` bot+bpf-ci
2026-05-14 21:22 ` sashiko-bot
1 sibling, 0 replies; 15+ messages in thread
From: bot+bpf-ci @ 2026-05-14 14:55 UTC (permalink / raw)
To: jolsa, oleg, peterz, mingo, mhiramat, andrii
Cc: bpf, linux-trace-kernel, ast, andrii, daniel, martin.lau, eddyz87,
yonghong.song, clm, ihor.solodrai
[-- Attachment #1: Type: text/plain, Size: 2747 bytes --]
> diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c b/tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c
> index 82b3c0ce9253..d553485e7db5 100644
> --- a/tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c
> +++ b/tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c
[ ... ]
> diff --git a/tools/testing/selftests/bpf/prog_tests/usdt.c b/tools/testing/selftests/bpf/prog_tests/usdt.c
> index be34c4087ff5..606601ccdc42 100644
> --- a/tools/testing/selftests/bpf/prog_tests/usdt.c
> +++ b/tools/testing/selftests/bpf/prog_tests/usdt.c
[ ... ]
> diff --git a/tools/testing/selftests/bpf/progs/test_usdt.c b/tools/testing/selftests/bpf/progs/test_usdt.c
> index f00cb52874e0..0ee78fb050a1 100644
> --- a/tools/testing/selftests/bpf/progs/test_usdt.c
> +++ b/tools/testing/selftests/bpf/progs/test_usdt.c
[ ... ]
> diff --git a/tools/testing/selftests/bpf/usdt_2.c b/tools/testing/selftests/bpf/usdt_2.c
> index b359b389f6c0..5e38f8605b02 100644
> --- a/tools/testing/selftests/bpf/usdt_2.c
> +++ b/tools/testing/selftests/bpf/usdt_2.c
> @@ -13,4 +13,17 @@ void usdt_2(void)
> USDT(optimized_attach, usdt_2);
> }
>
> +static volatile unsigned long usdt_red_zone_arg1 = 0xDEADBEEF;
> +static volatile unsigned long usdt_red_zone_arg2 = 0xCAFEBABE;
> +static volatile unsigned long usdt_red_zone_arg3 = 0xFEEDFACE;
> +
> +void __attribute__((noinline)) usdt_red_zone_trigger(void)
> +{
> + unsigned long a1 = usdt_red_zone_arg1;
> + unsigned long a2 = usdt_red_zone_arg2;
> + unsigned long a3 = usdt_red_zone_arg3;
> +
> + USDT(optimized_attach, usdt_red_zone, a1, a2, a3);
> +}
> +
> #endif
Is this test reliable across different compiler versions and optimizations?
The USDT macro uses the "nor" inline assembly constraint, which allows the
compiler to allocate these operands to CPU registers. Because there is no
register pressure in usdt_red_zone_trigger(), modern compilers might allocate
a1, a2, and a3 directly to registers rather than the stack's red zone.
If the operands are placed in registers, the uprobe CALL optimization
clobbering [rsp-8] will not corrupt them. This could cause the test to pass
even on buggy kernels, creating a false positive.
Would it be safer to explicitly force these operands into the red zone using
inline assembly constraints, rather than depending on the compiler's register
allocator?
Reference: https://lore.kernel.org/bpf/20260509021238.AE98FC2BCB0@smtp.kernel.org/
Raised by: sashiko-bot@kernel.org on Sat, 09 May 2026
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/25864669093
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 1/7] uprobes/x86: Move optimized uprobe from nop5 to nop10
2026-05-14 13:53 ` [PATCH 1/7] uprobes/x86: Move optimized uprobe from nop5 to nop10 Jiri Olsa
@ 2026-05-14 16:54 ` Jakub Sitnicki
2026-05-14 20:05 ` sashiko-bot
1 sibling, 0 replies; 15+ messages in thread
From: Jakub Sitnicki @ 2026-05-14 16:54 UTC (permalink / raw)
To: Jiri Olsa
Cc: Oleg Nesterov, Peter Zijlstra, Ingo Molnar, Masami Hiramatsu,
Andrii Nakryiko, bpf, linux-trace-kernel
On Thu, May 14, 2026 at 03:53:36PM +0200, Jiri Olsa wrote:
> Andrii reported an issue with optimized uprobes [1] that can clobber
> redzone area with call instruction storing return address on stack
> where user code may keep temporary data without adjusting rsp.
>
> Fixing this by moving the optimized uprobes on top of 10-bytes nop
> instruction, so we can squeeze another instruction to escape the
> redzone area before doing the call, like:
>
> lea -0x80(%rsp), %rsp
> call tramp
>
> Note the lea instruction is used to adjust the rsp register without
> changing the flags.
>
> The optimized uprobe performance stays the same:
>
> uprobe-nop : 3.129 ± 0.013M/s
> uprobe-push : 3.045 ± 0.006M/s
> uprobe-ret : 1.095 ± 0.004M/s
> --> uprobe-nop10 : 7.170 ± 0.020M/s
> uretprobe-nop : 2.143 ± 0.021M/s
> uretprobe-push : 2.090 ± 0.000M/s
> uretprobe-ret : 0.942 ± 0.000M/s
> --> uretprobe-nop10: 3.381 ± 0.003M/s
> usdt-nop : 3.245 ± 0.004M/s
> --> usdt-nop10 : 7.256 ± 0.023M/s
>
> [1] https://lore.kernel.org/bpf/20260509003146.976844-1-andrii@kernel.org/
> Reported-by: Andrii Nakryiko <andrii@kernel.org>
> Closes: https://lore.kernel.org/bpf/20260509003146.976844-1-andrii@kernel.org/
> Fixes: ba2bfc97b462 ("uprobes/x86: Add support to optimize uprobes")
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
> arch/x86/kernel/uprobes.c | 121 +++++++++++++++++++++++++++-----------
> 1 file changed, 86 insertions(+), 35 deletions(-)
>
> diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c
> index ebb1baf1eb1d..f7c4101a4039 100644
> --- a/arch/x86/kernel/uprobes.c
> +++ b/arch/x86/kernel/uprobes.c
> @@ -636,9 +636,21 @@ struct uprobe_trampoline {
> unsigned long vaddr;
> };
>
> +#define LEA_INSN_SIZE 5
> +#define OPT_INSN_SIZE (LEA_INSN_SIZE + CALL_INSN_SIZE)
> +#define OPT_JMP8_OFFSET (OPT_INSN_SIZE - JMP8_INSN_SIZE)
> +#define REDZONE_SIZE 0x80
> +
> +static const u8 lea_rsp[] = { 0x48, 0x8d, 0x64, 0x24, 0x80 };
> +
> +static bool is_lea_insn(const uprobe_opcode_t *insn)
> +{
> + return !memcmp(insn, lea_rsp, LEA_INSN_SIZE);
> +}
> +
Just a thought. See if below maybe reads better when plugged in.
is_call_insn can then be removed, I think.
static bool is_call_past_redzone_insns(const uprobe_opcode_t *insn)
{
static const u8 lea_rsp_call[] = {
0x48, 0x8d, 0x64, 0x24, REDZONE_SIZE, /* lea -0x80(%rsp), %rsp */
CALL_INSN_OPCODE
};
return !memcmp(insn, lea_rsp_call, ARRAY_SIZE(lea_rsp_call));
}
[...]
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 1/7] uprobes/x86: Move optimized uprobe from nop5 to nop10
2026-05-14 13:53 ` [PATCH 1/7] uprobes/x86: Move optimized uprobe from nop5 to nop10 Jiri Olsa
2026-05-14 16:54 ` Jakub Sitnicki
@ 2026-05-14 20:05 ` sashiko-bot
1 sibling, 0 replies; 15+ messages in thread
From: sashiko-bot @ 2026-05-14 20:05 UTC (permalink / raw)
To: Jiri Olsa; +Cc: bpf
Thank you for your contribution! Sashiko AI review found 2 potential issue(s) to consider:
- [High] set_orig_insn fails to unoptimize uprobes in foreign processes because __is_optimized relies on __in_uprobe_trampoline, which hardcodes current->mm for VMA lookups, resulting in silent stack pointer corruption.
- [High] set_orig_insn fails to unoptimize a forked child's uprobe due to breakpoint overwriting the lea instruction, leaving a corrupted instruction that truncates the stack pointer.
--
commit ecbc682eb0a0b0a7df1fd339bbdb6beeb8e1fc0a
Author: Jiri Olsa <jolsa@kernel.org>
uprobes/x86: Move optimized uprobe from nop5 to nop10
This commit fixes a redzone clobbering issue with optimized uprobes by
moving them to use a 10-byte nop instruction, allowing space for a stack
adjustment before the call.
Link: https://lore.kernel.org/bpf/20260509003146.976844-1-andrii@kernel.org/
> diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c
> --- a/arch/x86/kernel/uprobes.c
> +++ b/arch/x86/kernel/uprobes.c
[ ... ]
> @@ -1049,19 +1077,21 @@ static bool __is_optimized(uprobe_opcode_t *insn, unsigned long vaddr)
> struct __packed __arch_relative_insn {
> u8 op;
> s32 raddr;
> - } *call = (struct __arch_relative_insn *) insn;
> + } *call = (struct __arch_relative_insn *)(insn + LEA_INSN_SIZE);
>
> - if (!is_call_insn(insn))
> + if (!is_lea_insn(insn))
> + return false;
> + if (!is_call_insn(insn + LEA_INSN_SIZE))
> return false;
> - return __in_uprobe_trampoline(vaddr + 5 + call->raddr);
> + return __in_uprobe_trampoline(vaddr + OPT_INSN_SIZE + call->raddr);
> }
Does this sequence risk silent stack pointer corruption during uprobe
unregistration?
When a tracer unregisters a uprobe from another process, it runs in the
tracer's context. However, __is_optimized() calls __in_uprobe_trampoline(),
which hardcodes current->mm for the VMA lookup:
arch/x86/kernel/uprobes.c:__in_uprobe_trampoline() {
        struct vm_area_struct *vma = vma_lookup(current->mm, ip);
        return vma && vma_is_special_mapping(vma, &tramp_mapping);
}
Since current is the tracer, the lookup fails, causing __is_optimized()
to return false.
If __is_optimized() returns false, set_orig_insn() bypasses
swbp_unoptimize() and writes only the first byte of the original
instruction (e.g., 0x66 for nop10) over the 0x48 (lea) byte.
This leaves the instruction 66 8d 64 24 80 e8 ... in the tracee's memory,
which decodes as lea -0x80(%rsp), %sp: the 0x66 operand-size prefix makes
it a 16-bit lea. Could this clobber the low 16 bits of the stack pointer
and cause stack corruption when the tracee subsequently pushes or pops?
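For illustration, one possible shape of a fix (a sketch only; it assumes
the callers can be changed to pass the tracee's mm down, which
set_orig_insn() has at hand as vma->vm_mm):

        static bool __in_uprobe_trampoline(struct mm_struct *mm, unsigned long ip)
        {
                /* Look up the trampoline in the traced process's mm,
                 * not in whatever mm current happens to have.
                 */
                struct vm_area_struct *vma = vma_lookup(mm, ip);

                return vma && vma_is_special_mapping(vma, &tramp_mapping);
        }

        static bool __is_optimized(struct mm_struct *mm, uprobe_opcode_t *insn,
                                   unsigned long vaddr)
        {
                struct __packed __arch_relative_insn {
                        u8 op;
                        s32 raddr;
                } *call = (struct __arch_relative_insn *)(insn + LEA_INSN_SIZE);

                if (!is_lea_insn(insn))
                        return false;
                if (!is_call_insn(insn + LEA_INSN_SIZE))
                        return false;
                return __in_uprobe_trampoline(mm, vaddr + OPT_INSN_SIZE + call->raddr);
        }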
[ ... ]
> @@ -1095,14 +1125,25 @@ int set_orig_insn(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
> unsigned long vaddr)
> {
> if (test_bit(ARCH_UPROBE_FLAG_CAN_OPTIMIZE, &auprobe->flags)) {
> - int ret = is_optimized(vma->vm_mm, vaddr);
> - if (ret < 0)
> + uprobe_opcode_t insn[OPT_INSN_SIZE];
> + int ret;
> +
> + ret = copy_from_vaddr(vma->vm_mm, vaddr, &insn, OPT_INSN_SIZE);
> + if (ret)
> return ret;
> - if (ret) {
> + if (__is_optimized((uprobe_opcode_t *)&insn, vaddr)) {
> ret = swbp_unoptimize(auprobe, vma, vaddr);
> WARN_ON_ONCE(ret);
> return ret;
> }
> + /*
> + * We can have re-attached probe on top of jmp8 instruction,
> + * which did not get optimized. We need to restore the jmp8
> + * instruction, instead of the original instruction (nop10).
> + */
> + if (is_swbp_insn(&insn[0]) && insn[1] == OPT_JMP8_OFFSET)
> + return uprobe_write_opcode(auprobe, vma, vaddr, JMP8_INSN_OPCODE,
> + false /* is_register */);
> }
> return uprobe_write_opcode(auprobe, vma, vaddr, *(uprobe_opcode_t *)&auprobe->insn,
> false /* is_register */);
Is there a similar issue for forked children that inherit the optimized
10-byte sequence?
During fork, uprobe_mmap() installs a breakpoint in the child by writing
0xCC to the first byte, changing the instruction from 48 8d... to cc 8d...
If the uprobe is unregistered before the child hits the breakpoint and the
probe gets re-optimized,
__is_optimized() will return false because is_lea_insn() strictly expects
the first byte to be 0x48:
arch/x86/kernel/uprobes.c:is_lea_insn() {
        return !memcmp(insn, lea_rsp, LEA_INSN_SIZE);
}
The fallback check for the re-attached probe on top of jmp8 also fails
because insn[1] is 0x8d, not OPT_JMP8_OFFSET.
Could set_orig_insn() then fall back to writing just the first byte of
the original instruction over the 0xcc, again leaving 66 8d 64 24 80 e8 ...
and silently clobbering the low 16 bits of the child's stack pointer?
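For illustration only, a sketch of a more tolerant matcher; this is
hypothetical, and whether swbp_unoptimize() is then the right path for a
breakpointed child is a separate question:

        /* Tolerate an int3 over the first byte, so a breakpointed but
         * not-yet-optimized child sequence is still recognized during
         * unregistration (sketch; UPROBE_SWBP_INSN is 0xcc on x86).
         */
        static bool is_lea_insn(const uprobe_opcode_t *insn)
        {
                if (insn[0] != lea_rsp[0] && insn[0] != UPROBE_SWBP_INSN)
                        return false;
                return !memcmp(insn + 1, lea_rsp + 1, LEA_INSN_SIZE - 1);
        }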
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260514135342.22130-1-jolsa@kernel.org?part=1
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 3/7] selftests/bpf: Emit nop,nop10 instructions combo for x86_64 arch
2026-05-14 13:53 ` [PATCH 3/7] selftests/bpf: Emit nop,nop10 instructions combo for x86_64 arch Jiri Olsa
@ 2026-05-14 20:44 ` sashiko-bot
0 siblings, 0 replies; 15+ messages in thread
From: sashiko-bot @ 2026-05-14 20:44 UTC (permalink / raw)
To: Jiri Olsa; +Cc: bpf
Thank you for your contribution! Sashiko AI review found 1 potential issue to consider:
- [Medium] The commit modifies `USDT_NOP` in `usdt.h` to an 11-byte `nop,nop10` sequence but does not update `prog_tests/usdt.c`, causing the selftest to break and introducing a git bisect regression.
--
commit 600c1890b9793e001b5d8d9027629fb122af9024
Author: Jiri Olsa <jolsa@kernel.org>
selftests/bpf: Emit nop,nop10 instructions combo for x86_64 arch
This commit updates the USDT_NOP macro in usdt.h for x86_64 to emit an
11-byte nop, nop10 instruction sequence instead of the previous nop, nop5
sequence. It synchronizes with the latest usdt.h changes to utilize nop10
optimization support.
> diff --git a/tools/testing/selftests/bpf/usdt.h b/tools/testing/selftests/bpf/usdt.h
> --- a/tools/testing/selftests/bpf/usdt.h
> +++ b/tools/testing/selftests/bpf/usdt.h
[ ... ]
> @@ -313,7 +313,7 @@ struct usdt_sema { volatile unsigned short active; };
> #if defined(__ia64__) || defined(__s390__) || defined(__s390x__)
> #define USDT_NOP nop 0
> #elif defined(__x86_64__)
> -#define USDT_NOP .byte 0x90, 0x0f, 0x1f, 0x44, 0x00, 0x0 /* nop, nop5 */
> +#define USDT_NOP .byte 0x90, 0x66, 0x66, 0x0f, 0x1f, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00 /* nop, nop10 */
Does changing this instruction sequence from 6 bytes to 11 bytes without
updating prog_tests/usdt.c in the same commit cause a test breakage?
The selftest in prog_tests/usdt.c hardcodes the old 6-byte nop1_nop5_combo
and searches for it using find_instr(usdt_2, nop1_nop5_combo, 6). Because
the old sequence is no longer present in usdt_2.c, find_instr will return
NULL, causing the ASSERT_OK_PTR assertion to fail.
I see this is addressed later in the series in commit
ca88e95623000540d5b706ce364d5b5abf75b4c1. Would it be better to squash the
test update into this commit to avoid breaking git bisect?
Additionally, looking at that later commit, it updates nop1_nop10_combo to
11 bytes but appears to continue passing 6 as the size argument to
find_instr. Should that size argument be updated to 11 as well?
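For illustration, a sketch of the kind of update meant for
prog_tests/usdt.c, with the byte sequence taken from this patch and the
length derived via sizeof so it cannot go stale again:

        static const __u8 nop1_nop10_combo[] = {
                0x90,                                   /* nop   */
                0x66, 0x66, 0x0f, 0x1f, 0x84, 0x00,
                0x00, 0x00, 0x00, 0x00                  /* nop10 */
        };

        addr_2 = find_instr(usdt_2, nop1_nop10_combo, sizeof(nop1_nop10_combo));
        if (!ASSERT_OK_PTR(addr_2, "usdt_2_find_nop1_nop10_combo"))
                return;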
> #else
> #define USDT_NOP nop
> #endif
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260514135342.22130-1-jolsa@kernel.org?part=3
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 4/7] selftests/bpf: Change uprobe syscall tests to use nop10
2026-05-14 13:53 ` [PATCH 4/7] selftests/bpf: Change uprobe syscall tests to use nop10 Jiri Olsa
@ 2026-05-14 20:51 ` sashiko-bot
0 siblings, 0 replies; 15+ messages in thread
From: sashiko-bot @ 2026-05-14 20:51 UTC (permalink / raw)
To: Jiri Olsa; +Cc: bpf
Thank you for your contribution! Sashiko AI review found 1 potential issue(s) to consider:
- [Medium] Multiple test validation checks use hardcoded, incorrect byte lengths when comparing instruction sequences, leaving trailing bytes unverified.
--
commit ca88e95623000540d5b706ce364d5b5abf75b4c1
Author: Jiri Olsa <jolsa@kernel.org>
selftests/bpf: Change uprobe syscall tests to use nop10
This commit updates the uprobe syscall tests to reflect that optimized
uprobes are now placed on top of 10-byte nop instructions rather than
5-byte nop instructions.
> diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c b/tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c
> index 955a37751b52d..c2e9e549c737d 100644
> --- a/tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c
> +++ b/tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c
[ ... ]
> @@ -388,14 +388,16 @@ static int find_uprobes_trampoline(void *tramp_addr)
[ ... ]
> -static void *find_nop5(void *fn)
> +static void *find_nop10(void *fn)
> {
> int i;
>
> - for (i = 0; i < 10; i++) {
> - if (!memcmp(nop5, fn + i, 5))
> + for (i = 0; i < 128; i++) {
> + if (!memcmp(nop10, fn + i, 9))
Is this memcmp() checking enough bytes? The nop10 sequence is 10 bytes long,
but this only compares the first 9 bytes, leaving the final byte unverified.
> return fn + i;
> }
> return NULL;
> @@ -420,7 +422,8 @@ static void *check_attach(struct uprobe_syscall_executed *skel, trigger_t trigge
> ASSERT_EQ(skel->bss->executed, executed, "executed");
>
> /* .. and check the trampoline is as expected. */
> - call = (struct __arch_relative_insn *) addr;
> + ASSERT_OK(memcmp(addr, lea_rsp, 4), "lea_rsp");
Does this memcmp() skip verifying the displacement byte? The lea_rsp array
is 5 bytes long, and the 0x80 (-128) displacement is exactly the byte that
shows rsp is moved past the red zone. Comparing only 4 bytes leaves it
unchecked.
> + call = (struct __arch_relative_insn *)(addr + 5);
> tramp = (void *) (call + 1) + call->raddr;
> ASSERT_EQ(call->op, 0xe8, "call");
> ASSERT_OK(find_uprobes_trampoline(tramp), "uprobes_trampoline");
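For both spots above, a sketch of full-length comparisons, assuming nop10
and lea_rsp are declared as byte arrays so sizeof yields their length:

        if (!memcmp(nop10, fn + i, sizeof(nop10)))      /* all 10 bytes */
                return fn + i;

        /* 5 bytes, including the 0x80 red zone displacement */
        ASSERT_OK(memcmp(addr, lea_rsp, sizeof(lea_rsp)), "lea_rsp");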
[ ... ]
> diff --git a/tools/testing/selftests/bpf/prog_tests/usdt.c b/tools/testing/selftests/bpf/prog_tests/usdt.c
> index 69759b27794d7..be34c4087ff56 100644
> --- a/tools/testing/selftests/bpf/prog_tests/usdt.c
> +++ b/tools/testing/selftests/bpf/prog_tests/usdt.c
[ ... ]
> @@ -271,17 +271,17 @@ static void subtest_optimized_attach(void)
> __u8 *addr_1, *addr_2;
>
> /* usdt_1 USDT probe has single nop instruction */
> - addr_1 = find_instr(usdt_1, nop1_nop5_combo, 6);
> - if (!ASSERT_NULL(addr_1, "usdt_1_find_nop1_nop5_combo"))
> + addr_1 = find_instr(usdt_1, nop1_nop10_combo, 6);
Should this length parameter be updated to match the new array size? The
nop1_nop10_combo array was increased to 11 bytes, but find_instr() is still
being called with length 6.
> addr_1 = find_instr(usdt_1, nop1, 1);
> if (!ASSERT_OK_PTR(addr_1, "usdt_1_find_nop1"))
> return;
>
> - /* usdt_2 USDT probe has nop,nop5 instructions combo */
> - addr_2 = find_instr(usdt_2, nop1_nop5_combo, 6);
> - if (!ASSERT_OK_PTR(addr_2, "usdt_2_find_nop1_nop5_combo"))
> + /* usdt_2 USDT probe has nop,nop10 instructions combo */
> + addr_2 = find_instr(usdt_2, nop1_nop10_combo, 6);
Could this cause the test to miss verifying the full sequence? It looks like
it is still passing 6 instead of 11 for the updated nop1_nop10_combo array.
> + if (!ASSERT_OK_PTR(addr_2, "usdt_2_find_nop1_nop10_combo"))
> return;
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260514135342.22130-1-jolsa@kernel.org?part=4
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 7/7] selftests/bpf: Add tests for uprobe nop10 red zone clobbering
2026-05-14 13:53 ` [PATCH 7/7] selftests/bpf: Add tests for uprobe nop10 red zone clobbering Jiri Olsa
2026-05-14 14:55 ` bot+bpf-ci
@ 2026-05-14 21:22 ` sashiko-bot
1 sibling, 0 replies; 15+ messages in thread
From: sashiko-bot @ 2026-05-14 21:22 UTC (permalink / raw)
To: Jiri Olsa; +Cc: bpf
Thank you for your contribution! Sashiko AI review found 2 potential issues to consider:
- [High] The `optimized_red_zone` USDT subtest does not guarantee that its arguments are placed in the stack's red zone, rendering it ineffective as a regression test.
- [High] The uprobe regression tests loop too quickly to guarantee that the asynchronous uprobe optimization has completed, making them prone to falsely passing on unfixed kernels.
--
commit c8c0580fb53dfbfe85e78c3bfddc6256a6a88b56
Author: Andrii Nakryiko <andrii@kernel.org>
selftests/bpf: Add tests for uprobe nop10 red zone clobbering
This commit introduces tests to ensure the uprobe nop10 optimization avoids
clobbering the stack's red zone on x86-64. It adds an assembly-level test
that checks for red-zone data integrity and a USDT subtest checking for
correct argument delivery over 10 invocations.
> diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c b/tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c
> --- a/tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c
> +++ b/tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c
[ ... ]
> @@ -855,6 +897,37 @@ static void test_uprobe_race(void)
> #define __NR_uprobe 336
> #endif
>
> +static void test_uprobe_red_zone(void)
> +{
[ ... ]
> + for (i = 0; i < 10; i++)
> + ASSERT_EQ(uprobe_red_zone_test(), 0, "red_zone_intact");
Does this loop execute too quickly to guarantee the uprobe optimization has
completed?
Uprobe optimization is performed asynchronously by a worker thread. Since
this is a tight loop with no delays or blocking, the 10 iterations might
finish before the background worker patches the code.
If all hits execute on the unoptimized INT3 path, the test could falsely
pass on kernels where the CALL-based optimization bug is present.
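For illustration, a hypothetical helper the loop could wait on first; a
sketch only, assuming the test keeps a lea_rsp byte array around and that
the probed address is readable from user space:

        #include <stdbool.h>
        #include <string.h>
        #include <unistd.h>

        /* Poll the probed address until the background worker has
         * replaced the int3 with the optimized lea+call sequence
         * (hypothetical helper, not part of this patch).
         */
        static bool wait_for_optimization(const void *addr)
        {
                int i;

                for (i = 0; i < 1000; i++) {
                        if (!memcmp(addr, lea_rsp, sizeof(lea_rsp)))
                                return true;
                        usleep(1000);
                }
                return false;
        }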
> +
> + bpf_link__destroy(link);
> +
> +cleanup:
> + uprobe_syscall_executed__destroy(skel);
> +}
[ ... ]
> diff --git a/tools/testing/selftests/bpf/usdt_2.c b/tools/testing/selftests/bpf/usdt_2.c
> --- a/tools/testing/selftests/bpf/usdt_2.c
> +++ b/tools/testing/selftests/bpf/usdt_2.c
> @@ -13,4 +13,17 @@ void usdt_2(void)
> USDT(optimized_attach, usdt_2);
> }
>
> +static volatile unsigned long usdt_red_zone_arg1 = 0xDEADBEEF;
> +static volatile unsigned long usdt_red_zone_arg2 = 0xCAFEBABE;
> +static volatile unsigned long usdt_red_zone_arg3 = 0xFEEDFACE;
> +
> +void __attribute__((noinline)) usdt_red_zone_trigger(void)
> +{
> + unsigned long a1 = usdt_red_zone_arg1;
> + unsigned long a2 = usdt_red_zone_arg2;
> + unsigned long a3 = usdt_red_zone_arg3;
> +
> + USDT(optimized_attach, usdt_red_zone, a1, a2, a3);
> +}
Is it guaranteed that the compiler will place these variables in the
red zone?
On x86_64, the compiler will typically allocate these three arguments
to general-purpose registers rather than spilling them to memory.
If they are placed in registers or a standard stack frame above rsp, the
uprobe bug wouldn't clobber them, which might allow the test to falsely
pass on an unfixed kernel.
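For illustration, a hypothetical variant that plants a sentinel in the red
zone by hand instead of relying on compiler spill behaviour; a sketch only,
and the asm constraints would need care in a real test:

        volatile unsigned long usdt_red_zone_clobbered;

        void __attribute__((noinline)) usdt_red_zone_trigger(void)
        {
                unsigned long out;

                /* Plant a sentinel just below rsp, inside the red zone
                 * (0x1EADBEEF fits a sign-extended imm32).
                 */
                asm volatile("movq $0x1EADBEEF, -8(%%rsp)" ::: "memory");

                USDT(optimized_attach, usdt_red_zone, 1, 2, 3);

                /* A CALL emitted at the probe site would have pushed its
                 * return address right here and destroyed the sentinel.
                 */
                asm volatile("movq -8(%%rsp), %0" : "=r"(out) :: "memory");
                usdt_red_zone_clobbered = (out != 0x1EADBEEFUL);
        }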
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260514135342.22130-1-jolsa@kernel.org?part=7
^ permalink raw reply [flat|nested] 15+ messages in thread
end of thread, other threads:[~2026-05-14 21:22 UTC | newest]
Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-05-14 13:53 [PATCH 0/7] uprobes/x86: Fix red zone issue for optimized uprobes Jiri Olsa
2026-05-14 13:53 ` [PATCH 1/7] uprobes/x86: Move optimized uprobe from nop5 to nop10 Jiri Olsa
2026-05-14 16:54 ` Jakub Sitnicki
2026-05-14 20:05 ` sashiko-bot
2026-05-14 13:53 ` [PATCH 2/7] libbpf: Change has_nop_combo to work on top of nop10 Jiri Olsa
2026-05-14 14:55 ` bot+bpf-ci
2026-05-14 13:53 ` [PATCH 3/7] selftests/bpf: Emit nop,nop10 instructions combo for x86_64 arch Jiri Olsa
2026-05-14 20:44 ` sashiko-bot
2026-05-14 13:53 ` [PATCH 4/7] selftests/bpf: Change uprobe syscall tests to use nop10 Jiri Olsa
2026-05-14 20:51 ` sashiko-bot
2026-05-14 13:53 ` [PATCH 5/7] selftests/bpf: Change uprobe/usdt trigger bench code " Jiri Olsa
2026-05-14 13:53 ` [PATCH 6/7] selftests/bpf: Add reattach tests for uprobe syscall Jiri Olsa
2026-05-14 13:53 ` [PATCH 7/7] selftests/bpf: Add tests for uprobe nop10 red zone clobbering Jiri Olsa
2026-05-14 14:55 ` bot+bpf-ci
2026-05-14 21:22 ` sashiko-bot
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox