* [PATCHv2 perf/core 0/4] uprobe,bpf: Allow to change app registers from uprobe registers
@ 2025-09-08 12:13 Jiri Olsa
2025-09-08 12:13 ` [PATCHv2 perf/core 1/4] bpf: Allow uprobe program to change context registers Jiri Olsa
` (3 more replies)
0 siblings, 4 replies; 8+ messages in thread
From: Jiri Olsa @ 2025-09-08 12:13 UTC (permalink / raw)
To: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko
Cc: bpf, linux-kernel, linux-trace-kernel, x86, Song Liu,
Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
Ingo Molnar
hi,
we recently had several requests for tetragon to be able to change a
user application function's return value or to divert its execution
through an instruction pointer change.

This patchset adds support for uprobe programs to change the app's
registers, including the instruction pointer.
v2 changes:
- moving back to the original change without the unique/exclusive flag,
  as discussed in [1]
thanks,
jirka
[1] https://lore.kernel.org/bpf/CAEf4BzbxjRwxhJTLUgJNwR-vEbDybBpawNsRb+y+PiDsxzT=eA@mail.gmail.com/
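As an illustration of the intended usage, here is a minimal sketch modeled
on the selftests added later in the series (the program and variable names
are made up for the example, and the x86_64 ctx layout is assumed):

  /* divert execution of the probed function to another address */
  unsigned long new_ip; /* filled in from user space, e.g. via skeleton bss */

  SEC("uprobe")
  int BPF_UPROBE(divert_execution)
  {
          /* writing ctx->ip makes the uprobe skip the original instruction
           * and resume the task at the new address (see patch 2)
           */
          ctx->ip = new_ip;
          return 0;
  }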
---
Jiri Olsa (4):
bpf: Allow uprobe program to change context registers
uprobe: Do not emulate/sstep original instruction when ip is changed
selftests/bpf: Add uprobe context registers changes test
selftests/bpf: Add uprobe context ip register change test
include/linux/bpf.h | 1 +
kernel/events/core.c | 4 +++
kernel/events/uprobes.c | 7 +++++
kernel/trace/bpf_trace.c | 3 +-
tools/testing/selftests/bpf/prog_tests/uprobe.c | 156 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
tools/testing/selftests/bpf/progs/test_uprobe.c | 38 +++++++++++++++++++++++++
6 files changed, 206 insertions(+), 3 deletions(-)
* [PATCHv2 perf/core 1/4] bpf: Allow uprobe program to change context registers
2025-09-08 12:13 [PATCHv2 perf/core 0/4] uprobe,bpf: Allow to change app registers from uprobe registers Jiri Olsa
@ 2025-09-08 12:13 ` Jiri Olsa
2025-09-08 17:20 ` Alexei Starovoitov
2025-09-08 12:13 ` [PATCHv2 perf/core 2/4] uprobe: Do not emulate/sstep original instruction when ip is changed Jiri Olsa
` (2 subsequent siblings)
3 siblings, 1 reply; 8+ messages in thread
From: Jiri Olsa @ 2025-09-08 12:13 UTC (permalink / raw)
To: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko
Cc: bpf, linux-kernel, linux-trace-kernel, x86, Song Liu,
Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
Ingo Molnar
Currently a uprobe (BPF_PROG_TYPE_KPROBE) program can't write to the
context registers. While this makes sense for kprobe attachments, for
uprobe attachments it is useful to be able to change user space
registers to alter application execution.

Since uprobe and kprobe programs share the same type (BPF_PROG_TYPE_KPROBE),
we can't deny write access to the context at program load time; we have
to check at attachment time whether the program is going to be attached
as a kprobe or a uprobe.

Store the fact that the program writes to the context and check it
during attachment.
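For example (a sketch only, mirroring the selftest added in patch 3), a
uprobe program can now contain context writes like:

  SEC("uprobe")
  int BPF_UPROBE(change_regs)
  {
          ctx->ax = 0xc0ffe; /* any ctx write marks prog->aux->kprobe_write_ctx */
          return 0;
  }

Attaching such a program to anything other than a uprobe perf event then
fails with -EINVAL in __perf_event_set_bpf_prog().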
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
include/linux/bpf.h | 1 +
kernel/events/core.c | 4 ++++
kernel/trace/bpf_trace.c | 3 +--
3 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index cc700925b802..404a30cde84e 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1619,6 +1619,7 @@ struct bpf_prog_aux {
bool priv_stack_requested;
bool changes_pkt_data;
bool might_sleep;
+ bool kprobe_write_ctx;
u64 prog_array_member_cnt; /* counts how many times as member of prog_array */
struct mutex ext_mutex; /* mutex for is_extended and prog_array_member_cnt */
struct bpf_arena *arena;
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 28de3baff792..c3f37b266fc4 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -11238,6 +11238,10 @@ static int __perf_event_set_bpf_prog(struct perf_event *event,
if (prog->kprobe_override && !is_kprobe)
return -EINVAL;
+ /* Writing to context allowed only for uprobes. */
+ if (prog->aux->kprobe_write_ctx && !is_uprobe)
+ return -EINVAL;
+
if (is_tracepoint || is_syscall_tp) {
int off = trace_event_get_offsets(event->tp_event);
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 3ae52978cae6..467fd5ab4b79 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1521,8 +1521,6 @@ static bool kprobe_prog_is_valid_access(int off, int size, enum bpf_access_type
{
if (off < 0 || off >= sizeof(struct pt_regs))
return false;
- if (type != BPF_READ)
- return false;
if (off % size != 0)
return false;
/*
@@ -1532,6 +1530,7 @@ static bool kprobe_prog_is_valid_access(int off, int size, enum bpf_access_type
if (off + size > sizeof(struct pt_regs))
return false;
+ prog->aux->kprobe_write_ctx |= type == BPF_WRITE;
return true;
}
--
2.51.0
* [PATCHv2 perf/core 2/4] uprobe: Do not emulate/sstep original instruction when ip is changed
2025-09-08 12:13 [PATCHv2 perf/core 0/4] uprobe,bpf: Allow to change app registers from uprobe registers Jiri Olsa
2025-09-08 12:13 ` [PATCHv2 perf/core 1/4] bpf: Allow uprobe program to change context registers Jiri Olsa
@ 2025-09-08 12:13 ` Jiri Olsa
2025-09-08 12:17 ` Oleg Nesterov
2025-09-08 12:13 ` [PATCHv2 perf/core 3/4] selftests/bpf: Add uprobe context registers changes test Jiri Olsa
2025-09-08 12:13 ` [PATCHv2 perf/core 4/4] selftests/bpf: Add uprobe context ip register change test Jiri Olsa
3 siblings, 1 reply; 8+ messages in thread
From: Jiri Olsa @ 2025-09-08 12:13 UTC (permalink / raw)
To: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko
Cc: bpf, linux-kernel, linux-trace-kernel, x86, Song Liu,
Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
Ingo Molnar
If the uprobe handler changes the instruction pointer, we still
single-step or emulate the original instruction and increment the (new)
ip by its length.

This makes the new instruction pointer bogus and the application will
likely crash on an illegal instruction.

If the user decided to take execution elsewhere, it makes little sense
to execute the original instruction, so let's skip it.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
kernel/events/uprobes.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 996a81080d56..4f46018e507e 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -2768,6 +2768,13 @@ static void handle_swbp(struct pt_regs *regs)
/* Try to optimize after first hit. */
arch_uprobe_optimize(&uprobe->arch, bp_vaddr);
+ /*
+ * If user decided to take execution elsewhere, it makes little sense
+ * to execute the original instruction, so let's skip it.
+ */
+ if (instruction_pointer(regs) != bp_vaddr)
+ goto out;
+
if (arch_uprobe_skip_sstep(&uprobe->arch, regs))
goto out;
--
2.51.0
* [PATCHv2 perf/core 3/4] selftests/bpf: Add uprobe context registers changes test
2025-09-08 12:13 [PATCHv2 perf/core 0/4] uprobe,bpf: Allow to change app registers from uprobe registers Jiri Olsa
2025-09-08 12:13 ` [PATCHv2 perf/core 1/4] bpf: Allow uprobe program to change context registers Jiri Olsa
2025-09-08 12:13 ` [PATCHv2 perf/core 2/4] uprobe: Do not emulate/sstep original instruction when ip is changed Jiri Olsa
@ 2025-09-08 12:13 ` Jiri Olsa
2025-09-08 12:13 ` [PATCHv2 perf/core 4/4] selftests/bpf: Add uprobe context ip register change test Jiri Olsa
3 siblings, 0 replies; 8+ messages in thread
From: Jiri Olsa @ 2025-09-08 12:13 UTC (permalink / raw)
To: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko
Cc: bpf, linux-kernel, linux-trace-kernel, x86, Song Liu,
Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
Ingo Molnar
Add a test to check that we can change common register values through
a uprobe program.

This is an x86_64-specific test.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
.../testing/selftests/bpf/prog_tests/uprobe.c | 114 +++++++++++++++++-
.../testing/selftests/bpf/progs/test_uprobe.c | 24 ++++
2 files changed, 137 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe.c b/tools/testing/selftests/bpf/prog_tests/uprobe.c
index cf3e0e7a64fa..19dd900df188 100644
--- a/tools/testing/selftests/bpf/prog_tests/uprobe.c
+++ b/tools/testing/selftests/bpf/prog_tests/uprobe.c
@@ -2,6 +2,7 @@
/* Copyright (c) 2023 Hengqi Chen */
#include <test_progs.h>
+#include <asm/ptrace.h>
#include "test_uprobe.skel.h"
static FILE *urand_spawn(int *pid)
@@ -33,7 +34,7 @@ static int urand_trigger(FILE **urand_pipe)
return exit_code;
}
-void test_uprobe(void)
+static void test_uprobe_attach(void)
{
LIBBPF_OPTS(bpf_uprobe_opts, uprobe_opts);
struct test_uprobe *skel;
@@ -93,3 +94,114 @@ void test_uprobe(void)
pclose(urand_pipe);
test_uprobe__destroy(skel);
}
+
+#ifdef __x86_64__
+__naked __maybe_unused unsigned long uprobe_regs_change_trigger(void)
+{
+ asm volatile (
+ "ret\n"
+ );
+}
+
+static __naked void uprobe_regs_change(struct pt_regs *before, struct pt_regs *after)
+{
+ asm volatile (
+ "movq %r11, 48(%rdi)\n"
+ "movq %r10, 56(%rdi)\n"
+ "movq %r9, 64(%rdi)\n"
+ "movq %r8, 72(%rdi)\n"
+ "movq %rax, 80(%rdi)\n"
+ "movq %rcx, 88(%rdi)\n"
+ "movq %rdx, 96(%rdi)\n"
+ "movq %rsi, 104(%rdi)\n"
+ "movq %rdi, 112(%rdi)\n"
+
+ /* save 2nd argument */
+ "pushq %rsi\n"
+ "call uprobe_regs_change_trigger\n"
+
+ /* save return value and load 2nd argument pointer to rax */
+ "pushq %rax\n"
+ "movq 8(%rsp), %rax\n"
+
+ "movq %r11, 48(%rax)\n"
+ "movq %r10, 56(%rax)\n"
+ "movq %r9, 64(%rax)\n"
+ "movq %r8, 72(%rax)\n"
+ "movq %rcx, 88(%rax)\n"
+ "movq %rdx, 96(%rax)\n"
+ "movq %rsi, 104(%rax)\n"
+ "movq %rdi, 112(%rax)\n"
+
+ /* restore return value and 2nd argument */
+ "pop %rax\n"
+ "pop %rsi\n"
+
+ "movq %rax, 80(%rsi)\n"
+ "ret\n"
+ );
+}
+
+static void regs_common(void)
+{
+ struct pt_regs before = {}, after = {}, expected = {
+ .rax = 0xc0ffe,
+ .rcx = 0xbad,
+ .rdx = 0xdead,
+ .r8 = 0x8,
+ .r9 = 0x9,
+ .r10 = 0x10,
+ .r11 = 0x11,
+ .rdi = 0x12,
+ .rsi = 0x13,
+ };
+ LIBBPF_OPTS(bpf_uprobe_opts, uprobe_opts);
+ struct test_uprobe *skel;
+
+ skel = test_uprobe__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel_open"))
+ return;
+
+ skel->bss->my_pid = getpid();
+ skel->bss->regs = expected;
+
+ uprobe_opts.func_name = "uprobe_regs_change_trigger";
+ skel->links.test_regs_change = bpf_program__attach_uprobe_opts(skel->progs.test_regs_change,
+ -1,
+ "/proc/self/exe",
+ 0 /* offset */,
+ &uprobe_opts);
+ if (!ASSERT_OK_PTR(skel->links.test_regs_change, "bpf_program__attach_uprobe_opts"))
+ goto cleanup;
+
+ uprobe_regs_change(&before, &after);
+
+ ASSERT_EQ(after.rax, expected.rax, "ax");
+ ASSERT_EQ(after.rcx, expected.rcx, "cx");
+ ASSERT_EQ(after.rdx, expected.rdx, "dx");
+ ASSERT_EQ(after.r8, expected.r8, "r8");
+ ASSERT_EQ(after.r9, expected.r9, "r9");
+ ASSERT_EQ(after.r10, expected.r10, "r10");
+ ASSERT_EQ(after.r11, expected.r11, "r11");
+ ASSERT_EQ(after.rdi, expected.rdi, "rdi");
+ ASSERT_EQ(after.rsi, expected.rsi, "rsi");
+
+cleanup:
+ test_uprobe__destroy(skel);
+}
+
+static void test_uprobe_regs_change(void)
+{
+ if (test__start_subtest("regs_change_common"))
+ regs_common();
+}
+#else
+static void test_uprobe_regs_change(void) { }
+#endif
+
+void test_uprobe(void)
+{
+ if (test__start_subtest("attach"))
+ test_uprobe_attach();
+ test_uprobe_regs_change();
+}
diff --git a/tools/testing/selftests/bpf/progs/test_uprobe.c b/tools/testing/selftests/bpf/progs/test_uprobe.c
index 896c88a4960d..9437bd76a437 100644
--- a/tools/testing/selftests/bpf/progs/test_uprobe.c
+++ b/tools/testing/selftests/bpf/progs/test_uprobe.c
@@ -59,3 +59,27 @@ int BPF_UPROBE(test4)
test4_result = 1;
return 0;
}
+
+#if defined(__TARGET_ARCH_x86)
+struct pt_regs regs;
+
+SEC("uprobe")
+int BPF_UPROBE(test_regs_change)
+{
+ pid_t pid = bpf_get_current_pid_tgid() >> 32;
+
+ if (pid != my_pid)
+ return 0;
+
+ ctx->ax = regs.ax;
+ ctx->cx = regs.cx;
+ ctx->dx = regs.dx;
+ ctx->r8 = regs.r8;
+ ctx->r9 = regs.r9;
+ ctx->r10 = regs.r10;
+ ctx->r11 = regs.r11;
+ ctx->di = regs.di;
+ ctx->si = regs.si;
+ return 0;
+}
+#endif
--
2.51.0
* [PATCHv2 perf/core 4/4] selftests/bpf: Add uprobe context ip register change test
2025-09-08 12:13 [PATCHv2 perf/core 0/4] uprobe,bpf: Allow to change app registers from uprobe registers Jiri Olsa
` (2 preceding siblings ...)
2025-09-08 12:13 ` [PATCHv2 perf/core 3/4] selftests/bpf: Add uprobe context registers changes test Jiri Olsa
@ 2025-09-08 12:13 ` Jiri Olsa
3 siblings, 0 replies; 8+ messages in thread
From: Jiri Olsa @ 2025-09-08 12:13 UTC (permalink / raw)
To: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko
Cc: bpf, linux-kernel, linux-trace-kernel, x86, Song Liu,
Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
Ingo Molnar
Add a test to check that we can divert application execution by
changing the instruction pointer from a uprobe program.

This is an x86_64-specific test.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
.../testing/selftests/bpf/prog_tests/uprobe.c | 42 +++++++++++++++++++
.../testing/selftests/bpf/progs/test_uprobe.c | 14 +++++++
2 files changed, 56 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe.c b/tools/testing/selftests/bpf/prog_tests/uprobe.c
index 19dd900df188..86404476c1da 100644
--- a/tools/testing/selftests/bpf/prog_tests/uprobe.c
+++ b/tools/testing/selftests/bpf/prog_tests/uprobe.c
@@ -190,10 +190,52 @@ static void regs_common(void)
test_uprobe__destroy(skel);
}
+static noinline unsigned long uprobe_regs_change_ip_1(void)
+{
+ return 0xc0ffee;
+}
+
+static noinline unsigned long uprobe_regs_change_ip_2(void)
+{
+ return 0xdeadbeef;
+}
+
+static void regs_ip(void)
+{
+ LIBBPF_OPTS(bpf_uprobe_opts, uprobe_opts);
+ struct test_uprobe *skel;
+ unsigned long ret;
+
+ skel = test_uprobe__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel_open"))
+ return;
+
+ skel->bss->my_pid = getpid();
+ skel->bss->ip = (unsigned long) uprobe_regs_change_ip_2;
+
+ uprobe_opts.func_name = "uprobe_regs_change_ip_1";
+ skel->links.test_regs_change_ip = bpf_program__attach_uprobe_opts(
+ skel->progs.test_regs_change_ip,
+ -1,
+ "/proc/self/exe",
+ 0 /* offset */,
+ &uprobe_opts);
+ if (!ASSERT_OK_PTR(skel->links.test_regs_change_ip, "bpf_program__attach_uprobe_opts"))
+ goto cleanup;
+
+ ret = uprobe_regs_change_ip_1();
+ ASSERT_EQ(ret, 0xdeadbeef, "ret");
+
+cleanup:
+ test_uprobe__destroy(skel);
+}
+
static void test_uprobe_regs_change(void)
{
if (test__start_subtest("regs_change_common"))
regs_common();
+ if (test__start_subtest("regs_change_ip"))
+ regs_ip();
}
#else
static void test_uprobe_regs_change(void) { }
diff --git a/tools/testing/selftests/bpf/progs/test_uprobe.c b/tools/testing/selftests/bpf/progs/test_uprobe.c
index 9437bd76a437..12f4065fca20 100644
--- a/tools/testing/selftests/bpf/progs/test_uprobe.c
+++ b/tools/testing/selftests/bpf/progs/test_uprobe.c
@@ -82,4 +82,18 @@ int BPF_UPROBE(test_regs_change)
ctx->si = regs.si;
return 0;
}
+
+unsigned long ip;
+
+SEC("uprobe")
+int BPF_UPROBE(test_regs_change_ip)
+{
+ pid_t pid = bpf_get_current_pid_tgid() >> 32;
+
+ if (pid != my_pid)
+ return 0;
+
+ ctx->ip = ip;
+ return 0;
+}
#endif
--
2.51.0
* Re: [PATCHv2 perf/core 2/4] uprobe: Do not emulate/sstep original instruction when ip is changed
2025-09-08 12:13 ` [PATCHv2 perf/core 2/4] uprobe: Do not emulate/sstep original instruction when ip is changed Jiri Olsa
@ 2025-09-08 12:17 ` Oleg Nesterov
0 siblings, 0 replies; 8+ messages in thread
From: Oleg Nesterov @ 2025-09-08 12:17 UTC (permalink / raw)
To: Jiri Olsa
Cc: Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko, bpf,
linux-kernel, linux-trace-kernel, x86, Song Liu, Yonghong Song,
John Fastabend, Hao Luo, Steven Rostedt, Ingo Molnar
On 09/08, Jiri Olsa wrote:
>
> If user decided to take execution elsewhere, it makes little sense
> to execute the original instruction, so let's skip it.
...
> @@ -2768,6 +2768,13 @@ static void handle_swbp(struct pt_regs *regs)
> /* Try to optimize after first hit. */
> arch_uprobe_optimize(&uprobe->arch, bp_vaddr);
>
> + /*
> + * If user decided to take execution elsewhere, it makes little sense
> + * to execute the original instruction, so let's skip it.
> + */
> + if (instruction_pointer(regs) != bp_vaddr)
> + goto out;
> +
Acked-by: Oleg Nesterov <oleg@redhat.com>
* Re: [PATCHv2 perf/core 1/4] bpf: Allow uprobe program to change context registers
2025-09-08 12:13 ` [PATCHv2 perf/core 1/4] bpf: Allow uprobe program to change context registers Jiri Olsa
@ 2025-09-08 17:20 ` Alexei Starovoitov
2025-09-08 19:36 ` Jiri Olsa
0 siblings, 1 reply; 8+ messages in thread
From: Alexei Starovoitov @ 2025-09-08 17:20 UTC (permalink / raw)
To: Jiri Olsa
Cc: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko,
bpf, LKML, linux-trace-kernel, X86 ML, Song Liu, Yonghong Song,
John Fastabend, Hao Luo, Steven Rostedt, Ingo Molnar
On Mon, Sep 8, 2025 at 5:13 AM Jiri Olsa <jolsa@kernel.org> wrote:
>
> Currently uprobe (BPF_PROG_TYPE_KPROBE) program can't write to the
> context registers data. While this makes sense for kprobe attachments,
> for uprobe attachment it might make sense to be able to change user
> space registers to alter application execution.
>
> Since uprobe and kprobe programs share the same type (BPF_PROG_TYPE_KPROBE),
> we can't deny write access to context during the program load. We need
> to check on it during program attachment to see if it's going to be
> kprobe or uprobe.
>
> Storing the program's write attempt to context and checking on it
> during the attachment.
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
> include/linux/bpf.h | 1 +
> kernel/events/core.c | 4 ++++
> kernel/trace/bpf_trace.c | 3 +--
> 3 files changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index cc700925b802..404a30cde84e 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1619,6 +1619,7 @@ struct bpf_prog_aux {
> bool priv_stack_requested;
> bool changes_pkt_data;
> bool might_sleep;
> + bool kprobe_write_ctx;
> u64 prog_array_member_cnt; /* counts how many times as member of prog_array */
> struct mutex ext_mutex; /* mutex for is_extended and prog_array_member_cnt */
> struct bpf_arena *arena;
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 28de3baff792..c3f37b266fc4 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -11238,6 +11238,10 @@ static int __perf_event_set_bpf_prog(struct perf_event *event,
> if (prog->kprobe_override && !is_kprobe)
> return -EINVAL;
>
> + /* Writing to context allowed only for uprobes. */
> + if (prog->aux->kprobe_write_ctx && !is_uprobe)
> + return -EINVAL;
> +
> if (is_tracepoint || is_syscall_tp) {
> int off = trace_event_get_offsets(event->tp_event);
>
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 3ae52978cae6..467fd5ab4b79 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -1521,8 +1521,6 @@ static bool kprobe_prog_is_valid_access(int off, int size, enum bpf_access_type
> {
> if (off < 0 || off >= sizeof(struct pt_regs))
> return false;
> - if (type != BPF_READ)
> - return false;
> if (off % size != 0)
> return false;
> /*
> @@ -1532,6 +1530,7 @@ static bool kprobe_prog_is_valid_access(int off, int size, enum bpf_access_type
> if (off + size > sizeof(struct pt_regs))
> return false;
>
> + prog->aux->kprobe_write_ctx |= type == BPF_WRITE;
iirc the same function is used to validate [ku]probe.multi ctx access,
but attaching is not done via __perf_event_set_bpf_prog().
The check at attach time is missing?
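(For reference, a possible shape of the missing check -- a sketch only,
assuming bpf_kprobe_multi_link_attach() in kernel/trace/bpf_trace.c is the
right place; the uprobe_multi attach path would keep allowing the write:)

  /* sketch: reject ctx-writing programs on the kprobe_multi attach path */
  if (prog->aux->kprobe_write_ctx)
          return -EINVAL;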
* Re: [PATCHv2 perf/core 1/4] bpf: Allow uprobe program to change context registers
2025-09-08 17:20 ` Alexei Starovoitov
@ 2025-09-08 19:36 ` Jiri Olsa
0 siblings, 0 replies; 8+ messages in thread
From: Jiri Olsa @ 2025-09-08 19:36 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko,
bpf, LKML, linux-trace-kernel, X86 ML, Song Liu, Yonghong Song,
John Fastabend, Hao Luo, Steven Rostedt, Ingo Molnar
On Mon, Sep 08, 2025 at 10:20:55AM -0700, Alexei Starovoitov wrote:
> On Mon, Sep 8, 2025 at 5:13 AM Jiri Olsa <jolsa@kernel.org> wrote:
> >
> > Currently uprobe (BPF_PROG_TYPE_KPROBE) program can't write to the
> > context registers data. While this makes sense for kprobe attachments,
> > for uprobe attachment it might make sense to be able to change user
> > space registers to alter application execution.
> >
> > Since uprobe and kprobe programs share the same type (BPF_PROG_TYPE_KPROBE),
> > we can't deny write access to context during the program load. We need
> > to check on it during program attachment to see if it's going to be
> > kprobe or uprobe.
> >
> > Storing the program's write attempt to context and checking on it
> > during the attachment.
> >
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> > include/linux/bpf.h | 1 +
> > kernel/events/core.c | 4 ++++
> > kernel/trace/bpf_trace.c | 3 +--
> > 3 files changed, 6 insertions(+), 2 deletions(-)
> >
> > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > index cc700925b802..404a30cde84e 100644
> > --- a/include/linux/bpf.h
> > +++ b/include/linux/bpf.h
> > @@ -1619,6 +1619,7 @@ struct bpf_prog_aux {
> > bool priv_stack_requested;
> > bool changes_pkt_data;
> > bool might_sleep;
> > + bool kprobe_write_ctx;
> > u64 prog_array_member_cnt; /* counts how many times as member of prog_array */
> > struct mutex ext_mutex; /* mutex for is_extended and prog_array_member_cnt */
> > struct bpf_arena *arena;
> > diff --git a/kernel/events/core.c b/kernel/events/core.c
> > index 28de3baff792..c3f37b266fc4 100644
> > --- a/kernel/events/core.c
> > +++ b/kernel/events/core.c
> > @@ -11238,6 +11238,10 @@ static int __perf_event_set_bpf_prog(struct perf_event *event,
> > if (prog->kprobe_override && !is_kprobe)
> > return -EINVAL;
> >
> > + /* Writing to context allowed only for uprobes. */
> > + if (prog->aux->kprobe_write_ctx && !is_uprobe)
> > + return -EINVAL;
> > +
> > if (is_tracepoint || is_syscall_tp) {
> > int off = trace_event_get_offsets(event->tp_event);
> >
> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index 3ae52978cae6..467fd5ab4b79 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -1521,8 +1521,6 @@ static bool kprobe_prog_is_valid_access(int off, int size, enum bpf_access_type
> > {
> > if (off < 0 || off >= sizeof(struct pt_regs))
> > return false;
> > - if (type != BPF_READ)
> > - return false;
> > if (off % size != 0)
> > return false;
> > /*
> > @@ -1532,6 +1530,7 @@ static bool kprobe_prog_is_valid_access(int off, int size, enum bpf_access_type
> > if (off + size > sizeof(struct pt_regs))
> > return false;
> >
> > + prog->aux->kprobe_write_ctx |= type == BPF_WRITE;
>
> iirc the same function is used to validate [ku]probe.multi ctx access,
> but attaching is not done via __perf_event_set_bpf_prog().
> The check at attach time is missing?
argh, yes, missed that.. good catch, thanks
jirka
Thread overview: 8+ messages
2025-09-08 12:13 [PATCHv2 perf/core 0/4] uprobe,bpf: Allow to change app registers from uprobe registers Jiri Olsa
2025-09-08 12:13 ` [PATCHv2 perf/core 1/4] bpf: Allow uprobe program to change context registers Jiri Olsa
2025-09-08 17:20 ` Alexei Starovoitov
2025-09-08 19:36 ` Jiri Olsa
2025-09-08 12:13 ` [PATCHv2 perf/core 2/4] uprobe: Do not emulate/sstep original instruction when ip is changed Jiri Olsa
2025-09-08 12:17 ` Oleg Nesterov
2025-09-08 12:13 ` [PATCHv2 perf/core 3/4] selftests/bpf: Add uprobe context registers changes test Jiri Olsa
2025-09-08 12:13 ` [PATCHv2 perf/core 4/4] selftests/bpf: Add uprobe context ip register change test Jiri Olsa