linux-trace-kernel.vger.kernel.org archive mirror
* [PATCHv4 bpf-next 0/6] uprobe,bpf: Allow to change app registers from uprobe registers
@ 2025-09-16 21:52 Jiri Olsa
  2025-09-16 21:52 ` [PATCHv4 bpf-next 1/6] bpf: Allow uprobe program to change context registers Jiri Olsa
                   ` (6 more replies)
  0 siblings, 7 replies; 13+ messages in thread
From: Jiri Olsa @ 2025-09-16 21:52 UTC (permalink / raw)
  To: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko
  Cc: bpf, linux-kernel, linux-trace-kernel, x86, Song Liu,
	Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
	Ingo Molnar

hi,
we recently had several requests for tetragon to be able to change a
user application's function return value or divert its execution through
an instruction pointer change.

This patchset adds support for a uprobe program to change the app's
registers, including the instruction pointer.

v4 changes:
- rebased on bpf-next/master, we will handle the future simple conflict
  with tip/perf/core
- changed condition in kprobe_prog_is_valid_access [Andrii]
- added acks

thanks,
jirka


---
Jiri Olsa (6):
      bpf: Allow uprobe program to change context registers
      uprobe: Do not emulate/sstep original instruction when ip is changed
      selftests/bpf: Add uprobe context registers changes test
      selftests/bpf: Add uprobe context ip register change test
      selftests/bpf: Add kprobe write ctx attach test
      selftests/bpf: Add kprobe multi write ctx attach test

 include/linux/bpf.h                                        |   1 +
 kernel/events/core.c                                       |   4 +++
 kernel/events/uprobes.c                                    |   7 +++++
 kernel/trace/bpf_trace.c                                   |   9 ++++--
 tools/testing/selftests/bpf/prog_tests/attach_probe.c      |  28 +++++++++++++++++
 tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c |  27 ++++++++++++++++
 tools/testing/selftests/bpf/prog_tests/uprobe.c            | 156 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 tools/testing/selftests/bpf/progs/kprobe_write_ctx.c       |  22 +++++++++++++
 tools/testing/selftests/bpf/progs/test_uprobe.c            |  38 +++++++++++++++++++++++
 9 files changed, 289 insertions(+), 3 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/kprobe_write_ctx.c

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCHv4 bpf-next 1/6] bpf: Allow uprobe program to change context registers
  2025-09-16 21:52 [PATCHv4 bpf-next 0/6] uprobe,bpf: Allow to change app registers from uprobe registers Jiri Olsa
@ 2025-09-16 21:52 ` Jiri Olsa
  2025-09-16 21:52 ` [PATCHv4 bpf-next 2/6] uprobe: Do not emulate/sstep original instruction when ip is changed Jiri Olsa
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 13+ messages in thread
From: Jiri Olsa @ 2025-09-16 21:52 UTC (permalink / raw)
  To: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko
  Cc: bpf, linux-kernel, linux-trace-kernel, x86, Song Liu,
	Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
	Ingo Molnar

Currently a uprobe (BPF_PROG_TYPE_KPROBE) program can't write to the
context registers. While this makes sense for kprobe attachments, for
uprobe attachments it makes sense to be able to change user space
registers to alter application execution.

Since uprobe and kprobe programs share the same type (BPF_PROG_TYPE_KPROBE),
we can't deny write access to the context during program load. We need
to check at attach time whether the program is going to be attached as
a kprobe or a uprobe.

Store the program's write access to the context at load time and check
it during attachment.

Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/linux/bpf.h      | 1 +
 kernel/events/core.c     | 4 ++++
 kernel/trace/bpf_trace.c | 9 +++++++--
 3 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 41f776071ff5..8dfaf5e9f7fb 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1624,6 +1624,7 @@ struct bpf_prog_aux {
 	bool priv_stack_requested;
 	bool changes_pkt_data;
 	bool might_sleep;
+	bool kprobe_write_ctx;
 	u64 prog_array_member_cnt; /* counts how many times as member of prog_array */
 	struct mutex ext_mutex; /* mutex for is_extended and prog_array_member_cnt */
 	struct bpf_arena *arena;
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 820127536e62..1d354778dcd4 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -11232,6 +11232,10 @@ static int __perf_event_set_bpf_prog(struct perf_event *event,
 	if (prog->kprobe_override && !is_kprobe)
 		return -EINVAL;
 
+	/* Writing to context allowed only for uprobes. */
+	if (prog->aux->kprobe_write_ctx && !is_uprobe)
+		return -EINVAL;
+
 	if (is_tracepoint || is_syscall_tp) {
 		int off = trace_event_get_offsets(event->tp_event);
 
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 606007c387c5..b0deaeebb34a 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1521,8 +1521,6 @@ static bool kprobe_prog_is_valid_access(int off, int size, enum bpf_access_type
 {
 	if (off < 0 || off >= sizeof(struct pt_regs))
 		return false;
-	if (type != BPF_READ)
-		return false;
 	if (off % size != 0)
 		return false;
 	/*
@@ -1532,6 +1530,9 @@ static bool kprobe_prog_is_valid_access(int off, int size, enum bpf_access_type
 	if (off + size > sizeof(struct pt_regs))
 		return false;
 
+	if (type == BPF_WRITE)
+		prog->aux->kprobe_write_ctx = true;
+
 	return true;
 }
 
@@ -2918,6 +2919,10 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
 	if (!is_kprobe_multi(prog))
 		return -EINVAL;
 
+	/* Writing to context is not allowed for kprobes. */
+	if (prog->aux->kprobe_write_ctx)
+		return -EINVAL;
+
 	flags = attr->link_create.kprobe_multi.flags;
 	if (flags & ~BPF_F_KPROBE_MULTI_RETURN)
 		return -EINVAL;
-- 
2.51.0



* [PATCHv4 bpf-next 2/6] uprobe: Do not emulate/sstep original instruction when ip is changed
  2025-09-16 21:52 [PATCHv4 bpf-next 0/6] uprobe,bpf: Allow to change app registers from uprobe registers Jiri Olsa
  2025-09-16 21:52 ` [PATCHv4 bpf-next 1/6] bpf: Allow uprobe program to change context registers Jiri Olsa
@ 2025-09-16 21:52 ` Jiri Olsa
  2025-09-16 22:28   ` Andrii Nakryiko
  2025-09-16 21:52 ` [PATCHv4 bpf-next 3/6] selftests/bpf: Add uprobe context registers changes test Jiri Olsa
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 13+ messages in thread
From: Jiri Olsa @ 2025-09-16 21:52 UTC (permalink / raw)
  To: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko
  Cc: bpf, linux-kernel, linux-trace-kernel, x86, Song Liu,
	Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
	Ingo Molnar

If a uprobe handler changes the instruction pointer, we still execute
(single step) or emulate the original instruction and increment the
(new) ip with its length.

This makes the new instruction pointer bogus and the application will
likely crash on illegal instruction execution.

If user decided to take execution elsewhere, it makes little sense
to execute the original instruction, so let's skip it.

Acked-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/events/uprobes.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 7ca1940607bd..2b32c32bcb77 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -2741,6 +2741,13 @@ static void handle_swbp(struct pt_regs *regs)
 
 	handler_chain(uprobe, regs);
 
+	/*
+	 * If user decided to take execution elsewhere, it makes little sense
+	 * to execute the original instruction, so let's skip it.
+	 */
+	if (instruction_pointer(regs) != bp_vaddr)
+		goto out;
+
 	if (arch_uprobe_skip_sstep(&uprobe->arch, regs))
 		goto out;
 
-- 
2.51.0



* [PATCHv4 bpf-next 3/6] selftests/bpf: Add uprobe context registers changes test
  2025-09-16 21:52 [PATCHv4 bpf-next 0/6] uprobe,bpf: Allow to change app registers from uprobe registers Jiri Olsa
  2025-09-16 21:52 ` [PATCHv4 bpf-next 1/6] bpf: Allow uprobe program to change context registers Jiri Olsa
  2025-09-16 21:52 ` [PATCHv4 bpf-next 2/6] uprobe: Do not emulate/sstep original instruction when ip is changed Jiri Olsa
@ 2025-09-16 21:52 ` Jiri Olsa
  2025-09-16 21:52 ` [PATCHv4 bpf-next 4/6] selftests/bpf: Add uprobe context ip register change test Jiri Olsa
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 13+ messages in thread
From: Jiri Olsa @ 2025-09-16 21:52 UTC (permalink / raw)
  To: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko
  Cc: bpf, linux-kernel, linux-trace-kernel, x86, Song Liu,
	Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
	Ingo Molnar

Add a test to check that we can change common register values through
a uprobe program.

It's an x86_64-specific test.

Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 .../testing/selftests/bpf/prog_tests/uprobe.c | 114 +++++++++++++++++-
 .../testing/selftests/bpf/progs/test_uprobe.c |  24 ++++
 2 files changed, 137 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe.c b/tools/testing/selftests/bpf/prog_tests/uprobe.c
index cf3e0e7a64fa..19dd900df188 100644
--- a/tools/testing/selftests/bpf/prog_tests/uprobe.c
+++ b/tools/testing/selftests/bpf/prog_tests/uprobe.c
@@ -2,6 +2,7 @@
 /* Copyright (c) 2023 Hengqi Chen */
 
 #include <test_progs.h>
+#include <asm/ptrace.h>
 #include "test_uprobe.skel.h"
 
 static FILE *urand_spawn(int *pid)
@@ -33,7 +34,7 @@ static int urand_trigger(FILE **urand_pipe)
 	return exit_code;
 }
 
-void test_uprobe(void)
+static void test_uprobe_attach(void)
 {
 	LIBBPF_OPTS(bpf_uprobe_opts, uprobe_opts);
 	struct test_uprobe *skel;
@@ -93,3 +94,114 @@ void test_uprobe(void)
 		pclose(urand_pipe);
 	test_uprobe__destroy(skel);
 }
+
+#ifdef __x86_64__
+__naked __maybe_unused unsigned long uprobe_regs_change_trigger(void)
+{
+	asm volatile (
+		"ret\n"
+	);
+}
+
+static __naked void uprobe_regs_change(struct pt_regs *before, struct pt_regs *after)
+{
+	asm volatile (
+		"movq %r11,  48(%rdi)\n"
+		"movq %r10,  56(%rdi)\n"
+		"movq  %r9,  64(%rdi)\n"
+		"movq  %r8,  72(%rdi)\n"
+		"movq %rax,  80(%rdi)\n"
+		"movq %rcx,  88(%rdi)\n"
+		"movq %rdx,  96(%rdi)\n"
+		"movq %rsi, 104(%rdi)\n"
+		"movq %rdi, 112(%rdi)\n"
+
+		/* save 2nd argument */
+		"pushq %rsi\n"
+		"call uprobe_regs_change_trigger\n"
+
+		/* save return value and load 2nd argument pointer to rax */
+		"pushq %rax\n"
+		"movq 8(%rsp), %rax\n"
+
+		"movq %r11,  48(%rax)\n"
+		"movq %r10,  56(%rax)\n"
+		"movq  %r9,  64(%rax)\n"
+		"movq  %r8,  72(%rax)\n"
+		"movq %rcx,  88(%rax)\n"
+		"movq %rdx,  96(%rax)\n"
+		"movq %rsi, 104(%rax)\n"
+		"movq %rdi, 112(%rax)\n"
+
+		/* restore return value and 2nd argument */
+		"pop %rax\n"
+		"pop %rsi\n"
+
+		"movq %rax,  80(%rsi)\n"
+		"ret\n"
+	);
+}
+
+static void regs_common(void)
+{
+	struct pt_regs before = {}, after = {}, expected = {
+		.rax = 0xc0ffe,
+		.rcx = 0xbad,
+		.rdx = 0xdead,
+		.r8  = 0x8,
+		.r9  = 0x9,
+		.r10 = 0x10,
+		.r11 = 0x11,
+		.rdi = 0x12,
+		.rsi = 0x13,
+	};
+	LIBBPF_OPTS(bpf_uprobe_opts, uprobe_opts);
+	struct test_uprobe *skel;
+
+	skel = test_uprobe__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "skel_open"))
+		return;
+
+	skel->bss->my_pid = getpid();
+	skel->bss->regs = expected;
+
+	uprobe_opts.func_name = "uprobe_regs_change_trigger";
+	skel->links.test_regs_change = bpf_program__attach_uprobe_opts(skel->progs.test_regs_change,
+							    -1,
+							    "/proc/self/exe",
+							    0 /* offset */,
+							    &uprobe_opts);
+	if (!ASSERT_OK_PTR(skel->links.test_regs_change, "bpf_program__attach_uprobe_opts"))
+		goto cleanup;
+
+	uprobe_regs_change(&before, &after);
+
+	ASSERT_EQ(after.rax, expected.rax, "ax");
+	ASSERT_EQ(after.rcx, expected.rcx, "cx");
+	ASSERT_EQ(after.rdx, expected.rdx, "dx");
+	ASSERT_EQ(after.r8,  expected.r8,  "r8");
+	ASSERT_EQ(after.r9,  expected.r9,  "r9");
+	ASSERT_EQ(after.r10, expected.r10, "r10");
+	ASSERT_EQ(after.r11, expected.r11, "r11");
+	ASSERT_EQ(after.rdi, expected.rdi, "rdi");
+	ASSERT_EQ(after.rsi, expected.rsi, "rsi");
+
+cleanup:
+	test_uprobe__destroy(skel);
+}
+
+static void test_uprobe_regs_change(void)
+{
+	if (test__start_subtest("regs_change_common"))
+		regs_common();
+}
+#else
+static void test_uprobe_regs_change(void) { }
+#endif
+
+void test_uprobe(void)
+{
+	if (test__start_subtest("attach"))
+		test_uprobe_attach();
+	test_uprobe_regs_change();
+}
diff --git a/tools/testing/selftests/bpf/progs/test_uprobe.c b/tools/testing/selftests/bpf/progs/test_uprobe.c
index 896c88a4960d..9437bd76a437 100644
--- a/tools/testing/selftests/bpf/progs/test_uprobe.c
+++ b/tools/testing/selftests/bpf/progs/test_uprobe.c
@@ -59,3 +59,27 @@ int BPF_UPROBE(test4)
 	test4_result = 1;
 	return 0;
 }
+
+#if defined(__TARGET_ARCH_x86)
+struct pt_regs regs;
+
+SEC("uprobe")
+int BPF_UPROBE(test_regs_change)
+{
+	pid_t pid = bpf_get_current_pid_tgid() >> 32;
+
+	if (pid != my_pid)
+		return 0;
+
+	ctx->ax  = regs.ax;
+	ctx->cx  = regs.cx;
+	ctx->dx  = regs.dx;
+	ctx->r8  = regs.r8;
+	ctx->r9  = regs.r9;
+	ctx->r10 = regs.r10;
+	ctx->r11 = regs.r11;
+	ctx->di  = regs.di;
+	ctx->si  = regs.si;
+	return 0;
+}
+#endif
-- 
2.51.0



* [PATCHv4 bpf-next 4/6] selftests/bpf: Add uprobe context ip register change test
  2025-09-16 21:52 [PATCHv4 bpf-next 0/6] uprobe,bpf: Allow to change app registers from uprobe registers Jiri Olsa
                   ` (2 preceding siblings ...)
  2025-09-16 21:52 ` [PATCHv4 bpf-next 3/6] selftests/bpf: Add uprobe context registers changes test Jiri Olsa
@ 2025-09-16 21:52 ` Jiri Olsa
  2025-09-16 21:53 ` [PATCHv4 bpf-next 5/6] selftests/bpf: Add kprobe write ctx attach test Jiri Olsa
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 13+ messages in thread
From: Jiri Olsa @ 2025-09-16 21:52 UTC (permalink / raw)
  To: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko
  Cc: bpf, linux-kernel, linux-trace-kernel, x86, Song Liu,
	Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
	Ingo Molnar

Add a test to check that we can divert application execution through
an instruction pointer change from a uprobe program.

It's an x86_64-specific test.

Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 .../testing/selftests/bpf/prog_tests/uprobe.c | 42 +++++++++++++++++++
 .../testing/selftests/bpf/progs/test_uprobe.c | 14 +++++++
 2 files changed, 56 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe.c b/tools/testing/selftests/bpf/prog_tests/uprobe.c
index 19dd900df188..86404476c1da 100644
--- a/tools/testing/selftests/bpf/prog_tests/uprobe.c
+++ b/tools/testing/selftests/bpf/prog_tests/uprobe.c
@@ -190,10 +190,52 @@ static void regs_common(void)
 	test_uprobe__destroy(skel);
 }
 
+static noinline unsigned long uprobe_regs_change_ip_1(void)
+{
+	return 0xc0ffee;
+}
+
+static noinline unsigned long uprobe_regs_change_ip_2(void)
+{
+	return 0xdeadbeef;
+}
+
+static void regs_ip(void)
+{
+	LIBBPF_OPTS(bpf_uprobe_opts, uprobe_opts);
+	struct test_uprobe *skel;
+	unsigned long ret;
+
+	skel = test_uprobe__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "skel_open"))
+		return;
+
+	skel->bss->my_pid = getpid();
+	skel->bss->ip = (unsigned long) uprobe_regs_change_ip_2;
+
+	uprobe_opts.func_name = "uprobe_regs_change_ip_1";
+	skel->links.test_regs_change_ip = bpf_program__attach_uprobe_opts(
+						skel->progs.test_regs_change_ip,
+						-1,
+						"/proc/self/exe",
+						0 /* offset */,
+						&uprobe_opts);
+	if (!ASSERT_OK_PTR(skel->links.test_regs_change_ip, "bpf_program__attach_uprobe_opts"))
+		goto cleanup;
+
+	ret = uprobe_regs_change_ip_1();
+	ASSERT_EQ(ret, 0xdeadbeef, "ret");
+
+cleanup:
+	test_uprobe__destroy(skel);
+}
+
 static void test_uprobe_regs_change(void)
 {
 	if (test__start_subtest("regs_change_common"))
 		regs_common();
+	if (test__start_subtest("regs_change_ip"))
+		regs_ip();
 }
 #else
 static void test_uprobe_regs_change(void) { }
diff --git a/tools/testing/selftests/bpf/progs/test_uprobe.c b/tools/testing/selftests/bpf/progs/test_uprobe.c
index 9437bd76a437..12f4065fca20 100644
--- a/tools/testing/selftests/bpf/progs/test_uprobe.c
+++ b/tools/testing/selftests/bpf/progs/test_uprobe.c
@@ -82,4 +82,18 @@ int BPF_UPROBE(test_regs_change)
 	ctx->si  = regs.si;
 	return 0;
 }
+
+unsigned long ip;
+
+SEC("uprobe")
+int BPF_UPROBE(test_regs_change_ip)
+{
+	pid_t pid = bpf_get_current_pid_tgid() >> 32;
+
+	if (pid != my_pid)
+		return 0;
+
+	ctx->ip = ip;
+	return 0;
+}
 #endif
-- 
2.51.0



* [PATCHv4 bpf-next 5/6] selftests/bpf: Add kprobe write ctx attach test
  2025-09-16 21:52 [PATCHv4 bpf-next 0/6] uprobe,bpf: Allow to change app registers from uprobe registers Jiri Olsa
                   ` (3 preceding siblings ...)
  2025-09-16 21:52 ` [PATCHv4 bpf-next 4/6] selftests/bpf: Add uprobe context ip register change test Jiri Olsa
@ 2025-09-16 21:53 ` Jiri Olsa
  2025-09-16 21:53 ` [PATCHv4 bpf-next 6/6] selftests/bpf: Add kprobe multi " Jiri Olsa
  2025-09-24  9:50 ` [PATCHv4 bpf-next 0/6] uprobe,bpf: Allow to change app registers from uprobe registers patchwork-bot+netdevbpf
  6 siblings, 0 replies; 13+ messages in thread
From: Jiri Olsa @ 2025-09-16 21:53 UTC (permalink / raw)
  To: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko
  Cc: bpf, linux-kernel, linux-trace-kernel, x86, Song Liu,
	Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
	Ingo Molnar

Add a test to check that we can't attach a standard kprobe program
that writes to the context.

It's an x86_64-specific test.

Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 .../selftests/bpf/prog_tests/attach_probe.c   | 28 +++++++++++++++++++
 .../selftests/bpf/progs/kprobe_write_ctx.c    | 15 ++++++++++
 2 files changed, 43 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/kprobe_write_ctx.c

diff --git a/tools/testing/selftests/bpf/prog_tests/attach_probe.c b/tools/testing/selftests/bpf/prog_tests/attach_probe.c
index cabc51c2ca6b..9e77e5da7097 100644
--- a/tools/testing/selftests/bpf/prog_tests/attach_probe.c
+++ b/tools/testing/selftests/bpf/prog_tests/attach_probe.c
@@ -3,6 +3,7 @@
 #include "test_attach_kprobe_sleepable.skel.h"
 #include "test_attach_probe_manual.skel.h"
 #include "test_attach_probe.skel.h"
+#include "kprobe_write_ctx.skel.h"
 
 /* this is how USDT semaphore is actually defined, except volatile modifier */
 volatile unsigned short uprobe_ref_ctr __attribute__((unused)) __attribute((section(".probes")));
@@ -201,6 +202,31 @@ static void test_attach_kprobe_long_event_name(void)
 	test_attach_probe_manual__destroy(skel);
 }
 
+#ifdef __x86_64__
+/* check that a kprobe program writing to the context fails to attach */
+static void test_attach_kprobe_write_ctx(void)
+{
+	struct kprobe_write_ctx *skel = NULL;
+	struct bpf_link *link = NULL;
+
+	skel = kprobe_write_ctx__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "kprobe_write_ctx__open_and_load"))
+		return;
+
+	link = bpf_program__attach_kprobe_opts(skel->progs.kprobe_write_ctx,
+					     "bpf_fentry_test1", NULL);
+	if (!ASSERT_ERR_PTR(link, "bpf_program__attach_kprobe_opts"))
+		bpf_link__destroy(link);
+
+	kprobe_write_ctx__destroy(skel);
+}
+#else
+static void test_attach_kprobe_write_ctx(void)
+{
+	test__skip();
+}
+#endif
+
 static void test_attach_probe_auto(struct test_attach_probe *skel)
 {
 	struct bpf_link *uprobe_err_link;
@@ -406,6 +432,8 @@ void test_attach_probe(void)
 		test_attach_uprobe_long_event_name();
 	if (test__start_subtest("kprobe-long_name"))
 		test_attach_kprobe_long_event_name();
+	if (test__start_subtest("kprobe-write-ctx"))
+		test_attach_kprobe_write_ctx();
 
 cleanup:
 	test_attach_probe__destroy(skel);
diff --git a/tools/testing/selftests/bpf/progs/kprobe_write_ctx.c b/tools/testing/selftests/bpf/progs/kprobe_write_ctx.c
new file mode 100644
index 000000000000..4621a5bef4e2
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/kprobe_write_ctx.c
@@ -0,0 +1,15 @@
+// SPDX-License-Identifier: GPL-2.0
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+#if defined(__TARGET_ARCH_x86)
+SEC("kprobe")
+int kprobe_write_ctx(struct pt_regs *ctx)
+{
+	ctx->ax = 0;
+	return 0;
+}
+#endif
-- 
2.51.0



* [PATCHv4 bpf-next 6/6] selftests/bpf: Add kprobe multi write ctx attach test
  2025-09-16 21:52 [PATCHv4 bpf-next 0/6] uprobe,bpf: Allow to change app registers from uprobe registers Jiri Olsa
                   ` (4 preceding siblings ...)
  2025-09-16 21:53 ` [PATCHv4 bpf-next 5/6] selftests/bpf: Add kprobe write ctx attach test Jiri Olsa
@ 2025-09-16 21:53 ` Jiri Olsa
  2025-09-24  9:50 ` [PATCHv4 bpf-next 0/6] uprobe,bpf: Allow to change app registers from uprobe registers patchwork-bot+netdevbpf
  6 siblings, 0 replies; 13+ messages in thread
From: Jiri Olsa @ 2025-09-16 21:53 UTC (permalink / raw)
  To: Oleg Nesterov, Masami Hiramatsu, Peter Zijlstra, Andrii Nakryiko
  Cc: bpf, linux-kernel, linux-trace-kernel, x86, Song Liu,
	Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt,
	Ingo Molnar

Add a test to check that we can't attach a kprobe multi program
that writes to the context.

It's an x86_64-specific test.

Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 .../bpf/prog_tests/kprobe_multi_test.c        | 27 +++++++++++++++++++
 .../selftests/bpf/progs/kprobe_write_ctx.c    |  7 +++++
 2 files changed, 34 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c b/tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c
index 171706e78da8..6cfaa978bc9a 100644
--- a/tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c
+++ b/tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c
@@ -7,6 +7,7 @@
 #include "kprobe_multi_session.skel.h"
 #include "kprobe_multi_session_cookie.skel.h"
 #include "kprobe_multi_verifier.skel.h"
+#include "kprobe_write_ctx.skel.h"
 #include "bpf/libbpf_internal.h"
 #include "bpf/hashmap.h"
 
@@ -539,6 +540,30 @@ static void test_attach_override(void)
 	kprobe_multi_override__destroy(skel);
 }
 
+#ifdef __x86_64__
+static void test_attach_write_ctx(void)
+{
+	struct kprobe_write_ctx *skel = NULL;
+	struct bpf_link *link = NULL;
+
+	skel = kprobe_write_ctx__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "kprobe_write_ctx__open_and_load"))
+		return;
+
+	link = bpf_program__attach_kprobe_opts(skel->progs.kprobe_multi_write_ctx,
+						     "bpf_fentry_test1", NULL);
+	if (!ASSERT_ERR_PTR(link, "bpf_program__attach_kprobe_opts"))
+		bpf_link__destroy(link);
+
+	kprobe_write_ctx__destroy(skel);
+}
+#else
+static void test_attach_write_ctx(void)
+{
+	test__skip();
+}
+#endif
+
 void serial_test_kprobe_multi_bench_attach(void)
 {
 	if (test__start_subtest("kernel"))
@@ -578,5 +603,7 @@ void test_kprobe_multi_test(void)
 		test_session_cookie_skel_api();
 	if (test__start_subtest("unique_match"))
 		test_unique_match();
+	if (test__start_subtest("attach_write_ctx"))
+		test_attach_write_ctx();
 	RUN_TESTS(kprobe_multi_verifier);
 }
diff --git a/tools/testing/selftests/bpf/progs/kprobe_write_ctx.c b/tools/testing/selftests/bpf/progs/kprobe_write_ctx.c
index 4621a5bef4e2..f77aef0474d3 100644
--- a/tools/testing/selftests/bpf/progs/kprobe_write_ctx.c
+++ b/tools/testing/selftests/bpf/progs/kprobe_write_ctx.c
@@ -12,4 +12,11 @@ int kprobe_write_ctx(struct pt_regs *ctx)
 	ctx->ax = 0;
 	return 0;
 }
+
+SEC("kprobe.multi")
+int kprobe_multi_write_ctx(struct pt_regs *ctx)
+{
+	ctx->ax = 0;
+	return 0;
+}
 #endif
-- 
2.51.0



* Re: [PATCHv4 bpf-next 2/6] uprobe: Do not emulate/sstep original instruction when ip is changed
  2025-09-16 21:52 ` [PATCHv4 bpf-next 2/6] uprobe: Do not emulate/sstep original instruction when ip is changed Jiri Olsa
@ 2025-09-16 22:28   ` Andrii Nakryiko
  2025-09-22 20:47     ` Andrii Nakryiko
  2025-09-24  8:49     ` Peter Zijlstra
  0 siblings, 2 replies; 13+ messages in thread
From: Andrii Nakryiko @ 2025-09-16 22:28 UTC (permalink / raw)
  To: Jiri Olsa, Ingo Molnar, Peter Zijlstra
  Cc: Oleg Nesterov, Masami Hiramatsu, Andrii Nakryiko, bpf,
	linux-kernel, linux-trace-kernel, x86, Song Liu, Yonghong Song,
	John Fastabend, Hao Luo, Steven Rostedt

On Tue, Sep 16, 2025 at 2:53 PM Jiri Olsa <jolsa@kernel.org> wrote:
>
> If a uprobe handler changes the instruction pointer, we still execute
> (single step) or emulate the original instruction and increment the (new) ip
> with its length.
>
> This makes the new instruction pointer bogus and application will
> likely crash on illegal instruction execution.
>
> If user decided to take execution elsewhere, it makes little sense
> to execute the original instruction, so let's skip it.
>
> Acked-by: Oleg Nesterov <oleg@redhat.com>
> Acked-by: Andrii Nakryiko <andrii@kernel.org>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  kernel/events/uprobes.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> index 7ca1940607bd..2b32c32bcb77 100644
> --- a/kernel/events/uprobes.c
> +++ b/kernel/events/uprobes.c
> @@ -2741,6 +2741,13 @@ static void handle_swbp(struct pt_regs *regs)
>
>         handler_chain(uprobe, regs);
>
> +       /*
> +        * If user decided to take execution elsewhere, it makes little sense
> +        * to execute the original instruction, so let's skip it.
> +        */
> +       if (instruction_pointer(regs) != bp_vaddr)
> +               goto out;
> +

Peter, Ingo,

Are you guys ok with us routing this through the bpf-next tree? We'll
have a tiny conflict because in perf/core branch there is
arch_uprobe_optimize() call added after handler_chain(), so git merge
will be a bit confused, probably. But it should be trivially
resolvable.

>         if (arch_uprobe_skip_sstep(&uprobe->arch, regs))
>                 goto out;
>
> --
> 2.51.0
>


* Re: [PATCHv4 bpf-next 2/6] uprobe: Do not emulate/sstep original instruction when ip is changed
  2025-09-16 22:28   ` Andrii Nakryiko
@ 2025-09-22 20:47     ` Andrii Nakryiko
  2025-09-24  8:49     ` Peter Zijlstra
  1 sibling, 0 replies; 13+ messages in thread
From: Andrii Nakryiko @ 2025-09-22 20:47 UTC (permalink / raw)
  To: Jiri Olsa, Ingo Molnar, Peter Zijlstra
  Cc: Oleg Nesterov, Masami Hiramatsu, Andrii Nakryiko, bpf,
	linux-kernel, linux-trace-kernel, x86, Song Liu, Yonghong Song,
	John Fastabend, Hao Luo, Steven Rostedt

On Tue, Sep 16, 2025 at 3:28 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Tue, Sep 16, 2025 at 2:53 PM Jiri Olsa <jolsa@kernel.org> wrote:
> >
> > If a uprobe handler changes the instruction pointer, we still execute
> > (single step) or emulate the original instruction and increment the (new) ip
> > with its length.
> >
> > This makes the new instruction pointer bogus and application will
> > likely crash on illegal instruction execution.
> >
> > If user decided to take execution elsewhere, it makes little sense
> > to execute the original instruction, so let's skip it.
> >
> > Acked-by: Oleg Nesterov <oleg@redhat.com>
> > Acked-by: Andrii Nakryiko <andrii@kernel.org>
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  kernel/events/uprobes.c | 7 +++++++
> >  1 file changed, 7 insertions(+)
> >
> > diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> > index 7ca1940607bd..2b32c32bcb77 100644
> > --- a/kernel/events/uprobes.c
> > +++ b/kernel/events/uprobes.c
> > @@ -2741,6 +2741,13 @@ static void handle_swbp(struct pt_regs *regs)
> >
> >         handler_chain(uprobe, regs);
> >
> > +       /*
> > +        * If user decided to take execution elsewhere, it makes little sense
> > +        * to execute the original instruction, so let's skip it.
> > +        */
> > +       if (instruction_pointer(regs) != bp_vaddr)
> > +               goto out;
> > +
>
> Peter, Ingo,
>
> Are you guys ok with us routing this through the bpf-next tree? We'll
> have a tiny conflict because in perf/core branch there is
> arch_uprobe_optimize() call added after handler_chain(), so git merge
> will be a bit confused, probably. But it should be trivially
> resolvable.

Ping. Any objections for landing this patch in bpf-next?

>
> >         if (arch_uprobe_skip_sstep(&uprobe->arch, regs))
> >                 goto out;
> >
> > --
> > 2.51.0
> >


* Re: [PATCHv4 bpf-next 2/6] uprobe: Do not emulate/sstep original instruction when ip is changed
  2025-09-16 22:28   ` Andrii Nakryiko
  2025-09-22 20:47     ` Andrii Nakryiko
@ 2025-09-24  8:49     ` Peter Zijlstra
  2025-09-24  9:47       ` Alexei Starovoitov
  1 sibling, 1 reply; 13+ messages in thread
From: Peter Zijlstra @ 2025-09-24  8:49 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Jiri Olsa, Ingo Molnar, Oleg Nesterov, Masami Hiramatsu,
	Andrii Nakryiko, bpf, linux-kernel, linux-trace-kernel, x86,
	Song Liu, Yonghong Song, John Fastabend, Hao Luo, Steven Rostedt

On Tue, Sep 16, 2025 at 03:28:52PM -0700, Andrii Nakryiko wrote:
> On Tue, Sep 16, 2025 at 2:53 PM Jiri Olsa <jolsa@kernel.org> wrote:
> >
> > If a uprobe handler changes the instruction pointer, we still execute
> > (single step) or emulate the original instruction and increment the (new) ip
> > with its length.
> >
> > This makes the new instruction pointer bogus and application will
> > likely crash on illegal instruction execution.
> >
> > If user decided to take execution elsewhere, it makes little sense
> > to execute the original instruction, so let's skip it.
> >
> > Acked-by: Oleg Nesterov <oleg@redhat.com>
> > Acked-by: Andrii Nakryiko <andrii@kernel.org>
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  kernel/events/uprobes.c | 7 +++++++
> >  1 file changed, 7 insertions(+)
> >
> > diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> > index 7ca1940607bd..2b32c32bcb77 100644
> > --- a/kernel/events/uprobes.c
> > +++ b/kernel/events/uprobes.c
> > @@ -2741,6 +2741,13 @@ static void handle_swbp(struct pt_regs *regs)
> >
> >         handler_chain(uprobe, regs);
> >
> > +       /*
> > +        * If user decided to take execution elsewhere, it makes little sense
> > +        * to execute the original instruction, so let's skip it.
> > +        */
> > +       if (instruction_pointer(regs) != bp_vaddr)
> > +               goto out;
> > +
> 
> Peter, Ingo,
> 
> Are you guys ok with us routing this through the bpf-next tree? We'll
> have a tiny conflict because in perf/core branch there is
> arch_uprobe_optimize() call added after handler_chain(), so git merge
> will be a bit confused, probably. But it should be trivially
> resolvable.

Nah, I suppose that'll be fine. Thanks!


* Re: [PATCHv4 bpf-next 2/6] uprobe: Do not emulate/sstep original instruction when ip is changed
  2025-09-24  8:49     ` Peter Zijlstra
@ 2025-09-24  9:47       ` Alexei Starovoitov
  2025-09-24 10:23         ` Jiri Olsa
  0 siblings, 1 reply; 13+ messages in thread
From: Alexei Starovoitov @ 2025-09-24  9:47 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Andrii Nakryiko, Jiri Olsa, Ingo Molnar, Oleg Nesterov,
	Masami Hiramatsu, Andrii Nakryiko, bpf, LKML, linux-trace-kernel,
	X86 ML, Song Liu, Yonghong Song, John Fastabend, Hao Luo,
	Steven Rostedt

On Wed, Sep 24, 2025 at 11:15 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Tue, Sep 16, 2025 at 03:28:52PM -0700, Andrii Nakryiko wrote:
> > On Tue, Sep 16, 2025 at 2:53 PM Jiri Olsa <jolsa@kernel.org> wrote:
> > >
> > > If uprobe handler changes instruction pointer we still execute (single
> > > step) or emulate the original instruction and increment the (new) ip
> > > with its length.
> > >
> > > This makes the new instruction pointer bogus and application will
> > > likely crash on illegal instruction execution.
> > >
> > > If user decided to take execution elsewhere, it makes little sense
> > > to execute the original instruction, so let's skip it.
> > >
> > > Acked-by: Oleg Nesterov <oleg@redhat.com>
> > > Acked-by: Andrii Nakryiko <andrii@kernel.org>
> > > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > > ---
> > >  kernel/events/uprobes.c | 7 +++++++
> > >  1 file changed, 7 insertions(+)
> > >
> > > diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> > > index 7ca1940607bd..2b32c32bcb77 100644
> > > --- a/kernel/events/uprobes.c
> > > +++ b/kernel/events/uprobes.c
> > > @@ -2741,6 +2741,13 @@ static void handle_swbp(struct pt_regs *regs)
> > >
> > >         handler_chain(uprobe, regs);
> > >
> > > +       /*
> > > +        * If user decided to take execution elsewhere, it makes little sense
> > > +        * to execute the original instruction, so let's skip it.
> > > +        */
> > > +       if (instruction_pointer(regs) != bp_vaddr)
> > > +               goto out;
> > > +
> >
> > Peter, Ingo,
> >
> > Are you guys ok with us routing this through the bpf-next tree? We'll
> > have a tiny conflict because in perf/core branch there is
> > arch_uprobe_optimize() call added after handler_chain(), so git merge
> > will be a bit confused, probably. But it should be trivially
> > resolvable.
>
> Nah, I suppose that'll be fine. Thanks!

Thanks! Applied.

Jiri,
in the future, please keep the whole history in the cover letter.
v1->v2, v2->v3. Just v4 changes are nice, but pls copy paste
previous cover letters and expand them.
Also please always include links to previous versions in the cover.
Search on lore sucks. Links in the cover are a much better
way to preserve the history.


* Re: [PATCHv4 bpf-next 0/6] uprobe,bpf: Allow to change app registers from uprobe registers
  2025-09-16 21:52 [PATCHv4 bpf-next 0/6] uprobe,bpf: Allow to change app registers from uprobe registers Jiri Olsa
                   ` (5 preceding siblings ...)
  2025-09-16 21:53 ` [PATCHv4 bpf-next 6/6] selftests/bpf: Add kprobe multi " Jiri Olsa
@ 2025-09-24  9:50 ` patchwork-bot+netdevbpf
  6 siblings, 0 replies; 13+ messages in thread
From: patchwork-bot+netdevbpf @ 2025-09-24  9:50 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: oleg, mhiramat, peterz, andrii, bpf, linux-kernel,
	linux-trace-kernel, x86, songliubraving, yhs, john.fastabend,
	haoluo, rostedt, mingo

Hello:

This series was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:

On Tue, 16 Sep 2025 23:52:55 +0200 you wrote:
> hi,
> we recently had several requests for tetragon to be able to change
> user application function return value or divert its execution through
> instruction pointer change.
> 
> This patchset adds support for uprobe program to change app's registers
> including instruction pointer.
> 
> [...]

Here is the summary with links:
  - [PATCHv4,bpf-next,1/6] bpf: Allow uprobe program to change context registers
    https://git.kernel.org/bpf/bpf-next/c/7384893d970e
  - [PATCHv4,bpf-next,2/6] uprobe: Do not emulate/sstep original instruction when ip is changed
    https://git.kernel.org/bpf/bpf-next/c/4363264111e1
  - [PATCHv4,bpf-next,3/6] selftests/bpf: Add uprobe context registers changes test
    https://git.kernel.org/bpf/bpf-next/c/7f8a05c5d388
  - [PATCHv4,bpf-next,4/6] selftests/bpf: Add uprobe context ip register change test
    https://git.kernel.org/bpf/bpf-next/c/6a4ea0d1cb44
  - [PATCHv4,bpf-next,5/6] selftests/bpf: Add kprobe write ctx attach test
    https://git.kernel.org/bpf/bpf-next/c/1b881ee294b2
  - [PATCHv4,bpf-next,6/6] selftests/bpf: Add kprobe multi write ctx attach test
    https://git.kernel.org/bpf/bpf-next/c/3d237467a444

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html




* Re: [PATCHv4 bpf-next 2/6] uprobe: Do not emulate/sstep original instruction when ip is changed
  2025-09-24  9:47       ` Alexei Starovoitov
@ 2025-09-24 10:23         ` Jiri Olsa
  0 siblings, 0 replies; 13+ messages in thread
From: Jiri Olsa @ 2025-09-24 10:23 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Peter Zijlstra, Andrii Nakryiko, Ingo Molnar, Oleg Nesterov,
	Masami Hiramatsu, Andrii Nakryiko, bpf, LKML, linux-trace-kernel,
	X86 ML, Song Liu, Yonghong Song, John Fastabend, Hao Luo,
	Steven Rostedt

On Wed, Sep 24, 2025 at 11:47:42AM +0200, Alexei Starovoitov wrote:
> On Wed, Sep 24, 2025 at 11:15 AM Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > On Tue, Sep 16, 2025 at 03:28:52PM -0700, Andrii Nakryiko wrote:
> > > On Tue, Sep 16, 2025 at 2:53 PM Jiri Olsa <jolsa@kernel.org> wrote:
> > > >
> > > > If uprobe handler changes instruction pointer we still execute (single
> > > > step) or emulate the original instruction and increment the (new) ip
> > > > with its length.
> > > >
> > > > This makes the new instruction pointer bogus and application will
> > > > likely crash on illegal instruction execution.
> > > >
> > > > If user decided to take execution elsewhere, it makes little sense
> > > > to execute the original instruction, so let's skip it.
> > > >
> > > > Acked-by: Oleg Nesterov <oleg@redhat.com>
> > > > Acked-by: Andrii Nakryiko <andrii@kernel.org>
> > > > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > > > ---
> > > >  kernel/events/uprobes.c | 7 +++++++
> > > >  1 file changed, 7 insertions(+)
> > > >
> > > > diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> > > > index 7ca1940607bd..2b32c32bcb77 100644
> > > > --- a/kernel/events/uprobes.c
> > > > +++ b/kernel/events/uprobes.c
> > > > @@ -2741,6 +2741,13 @@ static void handle_swbp(struct pt_regs *regs)
> > > >
> > > >         handler_chain(uprobe, regs);
> > > >
> > > > +       /*
> > > > +        * If user decided to take execution elsewhere, it makes little sense
> > > > +        * to execute the original instruction, so let's skip it.
> > > > +        */
> > > > +       if (instruction_pointer(regs) != bp_vaddr)
> > > > +               goto out;
> > > > +
> > >
> > > Peter, Ingo,
> > >
> > > Are you guys ok with us routing this through the bpf-next tree? We'll
> > > have a tiny conflict because in perf/core branch there is
> > > arch_uprobe_optimize() call added after handler_chain(), so git merge
> > > will be a bit confused, probably. But it should be trivially
> > > resolvable.
> >
> > Nah, I suppose that'll be fine. Thanks!
> 
> Thanks! Applied.
> 
> Jiri,
> in the future, please keep the whole history in the cover letter.
> v1->v2, v2->v3. Just v4 changes are nice, but pls copy paste
> previous cover letters and expand them.

ok

> Also please always include links to previous versions in the cover.
> Search on lore sucks. Links in the cover are a much better
> way to preserve the history.

will add them in future, thanks

jirka


end of thread, other threads:[~2025-09-24 10:23 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-09-16 21:52 [PATCHv4 bpf-next 0/6] uprobe,bpf: Allow to change app registers from uprobe registers Jiri Olsa
2025-09-16 21:52 ` [PATCHv4 bpf-next 1/6] bpf: Allow uprobe program to change context registers Jiri Olsa
2025-09-16 21:52 ` [PATCHv4 bpf-next 2/6] uprobe: Do not emulate/sstep original instruction when ip is changed Jiri Olsa
2025-09-16 22:28   ` Andrii Nakryiko
2025-09-22 20:47     ` Andrii Nakryiko
2025-09-24  8:49     ` Peter Zijlstra
2025-09-24  9:47       ` Alexei Starovoitov
2025-09-24 10:23         ` Jiri Olsa
2025-09-16 21:52 ` [PATCHv4 bpf-next 3/6] selftests/bpf: Add uprobe context registers changes test Jiri Olsa
2025-09-16 21:52 ` [PATCHv4 bpf-next 4/6] selftests/bpf: Add uprobe context ip register change test Jiri Olsa
2025-09-16 21:53 ` [PATCHv4 bpf-next 5/6] selftests/bpf: Add kprobe write ctx attach test Jiri Olsa
2025-09-16 21:53 ` [PATCHv4 bpf-next 6/6] selftests/bpf: Add kprobe multi " Jiri Olsa
2025-09-24  9:50 ` [PATCHv4 bpf-next 0/6] uprobe,bpf: Allow to change app registers from uprobe registers patchwork-bot+netdevbpf
