linux-trace-kernel.vger.kernel.org archive mirror
* [PATCHv2 0/4] x86/fgraph,bpf: Fix ORC stack unwind from return probe
@ 2025-11-03 22:09 Jiri Olsa
  2025-11-03 22:09 ` [PATCHv2 1/4] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()" Jiri Olsa
                   ` (3 more replies)
  0 siblings, 4 replies; 12+ messages in thread
From: Jiri Olsa @ 2025-11-03 22:09 UTC (permalink / raw)
  To: Masami Hiramatsu, Steven Rostedt, Josh Poimboeuf
  Cc: Peter Zijlstra, bpf, linux-trace-kernel, x86, Yonghong Song,
	Song Liu, Andrii Nakryiko

hi,
sending a fix for the ORC stack unwind issue reported in [1], where
the ORC unwinder won't go past the return_to_handler function and
we get no stacktrace.

Sending the fix together with an unrelated stacktrace fix (patch 1),
so the attached test can work properly.

It's based on:
  https://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
  probes/for-next

v1: https://lore.kernel.org/bpf/20251027131354.1984006-1-jolsa@kernel.org/

v2 changes:
- added ack for patch 2 [Masami]
- added test for raw tp unwind

thanks,
jirka


[1] https://lore.kernel.org/bpf/aObSyt3qOnS_BMcy@krava/
---
Jiri Olsa (4):
      Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()"
      x86/fgraph,bpf: Fix stack ORC unwind from kprobe_multi return probe
      selftests/bpf: Add stacktrace ips test for kprobe_multi/kretprobe_multi
      selftests/bpf: Add stacktrace ips test for raw_tp

 arch/x86/events/core.c                                  |  10 +++----
 arch/x86/include/asm/ftrace.h                           |   5 ++++
 arch/x86/kernel/ftrace_64.S                             |   8 +++++-
 include/linux/ftrace.h                                  |  10 ++++++-
 tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c | 150 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 tools/testing/selftests/bpf/progs/stacktrace_ips.c      |  49 +++++++++++++++++++++++++++++++
 tools/testing/selftests/bpf/test_kmods/bpf_testmod.c    |  26 +++++++++++++++++
 7 files changed, 251 insertions(+), 7 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c
 create mode 100644 tools/testing/selftests/bpf/progs/stacktrace_ips.c

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCHv2 1/4] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()"
  2025-11-03 22:09 [PATCHv2 0/4] x86/fgraph,bpf: Fix ORC stack unwind from return probe Jiri Olsa
@ 2025-11-03 22:09 ` Jiri Olsa
  2025-11-06 12:27   ` Peter Zijlstra
  2025-11-03 22:09 ` [PATCHv2 2/4] x86/fgraph,bpf: Fix stack ORC unwind from kprobe_multi return probe Jiri Olsa
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 12+ messages in thread
From: Jiri Olsa @ 2025-11-03 22:09 UTC (permalink / raw)
  To: Masami Hiramatsu, Steven Rostedt, Josh Poimboeuf
  Cc: Song Liu, Peter Zijlstra, bpf, linux-trace-kernel, x86,
	Yonghong Song, Song Liu, Andrii Nakryiko

This reverts commit 83f44ae0f8afcc9da659799db8693f74847e66b3.

Currently we store the initial stacktrace entry twice for non-HW pt_regs,
i.e. for callers that fail the perf_hw_regs(regs) condition in
perf_callchain_kernel.

It's easy to reproduce with this bpftrace command:

  # bpftrace -e 'tracepoint:sched:sched_process_exec { print(kstack()); }'
  Attaching 1 probe...

        bprm_execve+1767
        bprm_execve+1767
        do_execveat_common.isra.0+425
        __x64_sys_execve+56
        do_syscall_64+133
        entry_SYSCALL_64_after_hwframe+118

When perf_callchain_kernel calls unwind_start with first_frame, AFAICS
we do not skip regs->ip; it's added as part of the unwind process.
Hence reverting the extra perf_callchain_store in the non-HW regs leg.

I was not able to bisect this, so I'm not really sure why this was needed
in v5.2 and why it's not working anymore, but I could see double entries
as far back as v5.10.

I tested both ORC and frame pointer unwind with and without this fix,
and except for the initial entry the stacktraces are the same.

Acked-by: Song Liu <song@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 arch/x86/events/core.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 745caa6c15a3..fa6c47b50989 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2789,13 +2789,13 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
 		return;
 	}
 
-	if (perf_callchain_store(entry, regs->ip))
-		return;
-
-	if (perf_hw_regs(regs))
+	if (perf_hw_regs(regs)) {
+		if (perf_callchain_store(entry, regs->ip))
+			return;
 		unwind_start(&state, current, regs, NULL);
-	else
+	} else {
 		unwind_start(&state, current, NULL, (void *)regs->sp);
+	}
 
 	for (; !unwind_done(&state); unwind_next_frame(&state)) {
 		addr = unwind_get_return_address(&state);
-- 
2.51.0



* [PATCHv2 2/4] x86/fgraph,bpf: Fix stack ORC unwind from kprobe_multi return probe
  2025-11-03 22:09 [PATCHv2 0/4] x86/fgraph,bpf: Fix ORC stack unwind from return probe Jiri Olsa
  2025-11-03 22:09 ` [PATCHv2 1/4] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()" Jiri Olsa
@ 2025-11-03 22:09 ` Jiri Olsa
  2025-11-03 23:47   ` bot+bpf-ci
  2025-11-06 12:29   ` Peter Zijlstra
  2025-11-03 22:09 ` [PATCHv2 3/4] selftests/bpf: Add stacktrace ips test for kprobe_multi/kretprobe_multi Jiri Olsa
  2025-11-03 22:09 ` [PATCHv2 4/4] selftests/bpf: Add stacktrace ips test for raw_tp Jiri Olsa
  3 siblings, 2 replies; 12+ messages in thread
From: Jiri Olsa @ 2025-11-03 22:09 UTC (permalink / raw)
  To: Masami Hiramatsu, Steven Rostedt, Josh Poimboeuf
  Cc: Peter Zijlstra, bpf, linux-trace-kernel, x86, Yonghong Song,
	Song Liu, Andrii Nakryiko

Currently we don't get a stack trace via the ORC unwinder on top of the
fgraph exit handler. We can see that when generating a stacktrace from a
kretprobe_multi bpf program, which is based on fprobe/fgraph.

The reason is that the ORC unwind code won't get past the return_to_handler
callback installed by the fgraph return probe machinery.

Solving this by creating a stack frame in return_to_handler that the
ftrace_graph_ret_addr function expects, in order to recover the original
return address and continue with the unwind.

Also updating the pt_regs data with cs/flags/rsp, which are needed for
successful stack retrieval from the bpf_get_stackid helper:
 - in get_perf_callchain we check user_mode(regs), so CS has to be set
 - in perf_callchain_kernel we call perf_hw_regs(regs), so EFLAGS/FIXED
   has to be unset

Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 arch/x86/include/asm/ftrace.h |  5 +++++
 arch/x86/kernel/ftrace_64.S   |  8 +++++++-
 include/linux/ftrace.h        | 10 +++++++++-
 3 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h
index 93156ac4ffe0..b08c95872eed 100644
--- a/arch/x86/include/asm/ftrace.h
+++ b/arch/x86/include/asm/ftrace.h
@@ -56,6 +56,11 @@ arch_ftrace_get_regs(struct ftrace_regs *fregs)
 	return &arch_ftrace_regs(fregs)->regs;
 }
 
+#define arch_ftrace_partial_regs(regs) do {	\
+	regs->flags &= ~X86_EFLAGS_FIXED;	\
+	regs->cs = __KERNEL_CS;			\
+} while (0)
+
 #define arch_ftrace_fill_perf_regs(fregs, _regs) do {	\
 		(_regs)->ip = arch_ftrace_regs(fregs)->regs.ip;		\
 		(_regs)->sp = arch_ftrace_regs(fregs)->regs.sp;		\
diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S
index 367da3638167..823dbdd0eb41 100644
--- a/arch/x86/kernel/ftrace_64.S
+++ b/arch/x86/kernel/ftrace_64.S
@@ -354,12 +354,17 @@ SYM_CODE_START(return_to_handler)
 	UNWIND_HINT_UNDEFINED
 	ANNOTATE_NOENDBR
 
+	/* Restore return_to_handler value that got eaten by previous ret instruction. */
+	subq $8, %rsp
+	UNWIND_HINT_FUNC
+
 	/* Save ftrace_regs for function exit context  */
 	subq $(FRAME_SIZE), %rsp
 
 	movq %rax, RAX(%rsp)
 	movq %rdx, RDX(%rsp)
 	movq %rbp, RBP(%rsp)
+	movq %rsp, RSP(%rsp)
 	movq %rsp, %rdi
 
 	call ftrace_return_to_handler
@@ -368,7 +373,8 @@ SYM_CODE_START(return_to_handler)
 	movq RDX(%rsp), %rdx
 	movq RAX(%rsp), %rax
 
-	addq $(FRAME_SIZE), %rsp
+	addq $(FRAME_SIZE) + 8, %rsp
+
 	/*
 	 * Jump back to the old return address. This cannot be JMP_NOSPEC rdi
 	 * since IBT would demand that contain ENDBR, which simply isn't so for
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 7ded7df6e9b5..07f8c309e432 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -193,6 +193,10 @@ static __always_inline struct pt_regs *ftrace_get_regs(struct ftrace_regs *fregs
 #if !defined(CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS) || \
 	defined(CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS)
 
+#ifndef arch_ftrace_partial_regs
+#define arch_ftrace_partial_regs(regs) do {} while (0)
+#endif
+
 static __always_inline struct pt_regs *
 ftrace_partial_regs(struct ftrace_regs *fregs, struct pt_regs *regs)
 {
@@ -202,7 +206,11 @@ ftrace_partial_regs(struct ftrace_regs *fregs, struct pt_regs *regs)
 	 * Since arch_ftrace_get_regs() will check some members and may return
 	 * NULL, we can not use it.
 	 */
-	return &arch_ftrace_regs(fregs)->regs;
+	regs = &arch_ftrace_regs(fregs)->regs;
+
+	/* Allow arch specific updates to regs. */
+	arch_ftrace_partial_regs(regs);
+	return regs;
 }
 
 #endif /* !CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS || CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS */
-- 
2.51.0



* [PATCHv2 3/4] selftests/bpf: Add stacktrace ips test for kprobe_multi/kretprobe_multi
  2025-11-03 22:09 [PATCHv2 0/4] x86/fgraph,bpf: Fix ORC stack unwind from return probe Jiri Olsa
  2025-11-03 22:09 ` [PATCHv2 1/4] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()" Jiri Olsa
  2025-11-03 22:09 ` [PATCHv2 2/4] x86/fgraph,bpf: Fix stack ORC unwind from kprobe_multi return probe Jiri Olsa
@ 2025-11-03 22:09 ` Jiri Olsa
  2025-11-03 22:09 ` [PATCHv2 4/4] selftests/bpf: Add stacktrace ips test for raw_tp Jiri Olsa
  3 siblings, 0 replies; 12+ messages in thread
From: Jiri Olsa @ 2025-11-03 22:09 UTC (permalink / raw)
  To: Masami Hiramatsu, Steven Rostedt, Josh Poimboeuf
  Cc: Peter Zijlstra, bpf, linux-trace-kernel, x86, Yonghong Song,
	Song Liu, Andrii Nakryiko

Adding a test that attaches kprobe_multi/kretprobe_multi and verifies
the ORC stacktrace matches the expected functions.

Adding a bpf_testmod_stacktrace_test function to the bpf_testmod kernel
module, which is called through several nested functions so we get a
reliable call path for the stacktrace.

The test runs only with the ORC unwinder, to keep it simple.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 .../selftests/bpf/prog_tests/stacktrace_ips.c | 104 ++++++++++++++++++
 .../selftests/bpf/progs/stacktrace_ips.c      |  41 +++++++
 .../selftests/bpf/test_kmods/bpf_testmod.c    |  26 +++++
 3 files changed, 171 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c
 create mode 100644 tools/testing/selftests/bpf/progs/stacktrace_ips.c

diff --git a/tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c b/tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c
new file mode 100644
index 000000000000..6fca459ba550
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c
@@ -0,0 +1,104 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include "stacktrace_ips.skel.h"
+
+#ifdef __x86_64__
+static int check_stacktrace_ips(int fd, __u32 key, int cnt, ...)
+{
+	__u64 ips[PERF_MAX_STACK_DEPTH];
+	struct ksyms *ksyms = NULL;
+	int i, err = 0;
+	va_list args;
+
+	/* sorted by addr */
+	ksyms = load_kallsyms_local();
+	if (!ASSERT_OK_PTR(ksyms, "load_kallsyms_local"))
+		return -1;
+
+	/* unlikely, but... */
+	if (!ASSERT_LT(cnt, PERF_MAX_STACK_DEPTH, "check_max"))
+		return -1;
+
+	err = bpf_map_lookup_elem(fd, &key, ips);
+	if (err)
+		goto out;
+
+	/*
+	 * Compare all symbols provided via arguments with stacktrace ips,
+	 * and their related symbol addresses.
+	 */
+	va_start(args, cnt);
+
+	for (i = 0; i < cnt; i++) {
+		unsigned long val;
+		struct ksym *ksym;
+
+		val = va_arg(args, unsigned long);
+		ksym = ksym_search_local(ksyms, ips[i]);
+		if (!ASSERT_OK_PTR(ksym, "ksym_search_local"))
+			break;
+		ASSERT_EQ(ksym->addr, val, "stack_cmp");
+	}
+
+	va_end(args);
+
+out:
+	free_kallsyms_local(ksyms);
+	return err;
+}
+
+static void test_stacktrace_ips_kprobe_multi(bool retprobe)
+{
+	LIBBPF_OPTS(bpf_kprobe_multi_opts, opts,
+		.retprobe = retprobe
+	);
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+	struct stacktrace_ips *skel;
+
+	skel = stacktrace_ips__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "stacktrace_ips__open_and_load"))
+		return;
+
+	if (!skel->kconfig->CONFIG_UNWINDER_ORC) {
+		test__skip();
+		goto cleanup;
+	}
+
+	skel->links.kprobe_multi_test = bpf_program__attach_kprobe_multi_opts(
+							skel->progs.kprobe_multi_test,
+							"bpf_testmod_stacktrace_test", &opts);
+	if (!ASSERT_OK_PTR(skel->links.kprobe_multi_test, "bpf_program__attach_kprobe_multi_opts"))
+		goto cleanup;
+
+	trigger_module_test_read(1);
+
+	load_kallsyms();
+
+	check_stacktrace_ips(bpf_map__fd(skel->maps.stackmap), skel->bss->stack_key, 4,
+			     ksym_get_addr("bpf_testmod_stacktrace_test_3"),
+			     ksym_get_addr("bpf_testmod_stacktrace_test_2"),
+			     ksym_get_addr("bpf_testmod_stacktrace_test_1"),
+			     ksym_get_addr("bpf_testmod_test_read"));
+
+cleanup:
+	stacktrace_ips__destroy(skel);
+}
+
+static void __test_stacktrace_ips(void)
+{
+	if (test__start_subtest("kprobe_multi"))
+		test_stacktrace_ips_kprobe_multi(false);
+	if (test__start_subtest("kretprobe_multi"))
+		test_stacktrace_ips_kprobe_multi(true);
+}
+#else
+static void __test_stacktrace_ips(void)
+{
+	test__skip();
+}
+#endif
+
+void test_stacktrace_ips(void)
+{
+	__test_stacktrace_ips();
+}
diff --git a/tools/testing/selftests/bpf/progs/stacktrace_ips.c b/tools/testing/selftests/bpf/progs/stacktrace_ips.c
new file mode 100644
index 000000000000..e2eb30945c1b
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/stacktrace_ips.c
@@ -0,0 +1,41 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2018 Facebook
+
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+#ifndef PERF_MAX_STACK_DEPTH
+#define PERF_MAX_STACK_DEPTH         127
+#endif
+
+typedef __u64 stack_trace_t[PERF_MAX_STACK_DEPTH];
+
+struct {
+	__uint(type, BPF_MAP_TYPE_STACK_TRACE);
+	__uint(max_entries, 16384);
+	__type(key, __u32);
+	__type(value, stack_trace_t);
+} stackmap SEC(".maps");
+
+extern bool CONFIG_UNWINDER_ORC __kconfig __weak;
+
+/*
+ * This function is here to have CONFIG_UNWINDER_ORC
+ * used and added to object BTF.
+ */
+int unused(void)
+{
+	return CONFIG_UNWINDER_ORC ? 0 : 1;
+}
+
+__u32 stack_key;
+
+SEC("kprobe.multi")
+int kprobe_multi_test(struct pt_regs *ctx)
+{
+	stack_key = bpf_get_stackid(ctx, &stackmap, 0);
+	return 0;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
index 8074bc5f6f20..ed0a4721d8fd 100644
--- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
@@ -417,6 +417,30 @@ noinline int bpf_testmod_fentry_test11(u64 a, void *b, short c, int d,
 	return a + (long)b + c + d + (long)e + f + g + h + i + j + k;
 }
 
+noinline void bpf_testmod_stacktrace_test(void)
+{
+	/* used for stacktrace test as attach function */
+	asm volatile ("");
+}
+
+noinline void bpf_testmod_stacktrace_test_3(void)
+{
+	bpf_testmod_stacktrace_test();
+	asm volatile ("");
+}
+
+noinline void bpf_testmod_stacktrace_test_2(void)
+{
+	bpf_testmod_stacktrace_test_3();
+	asm volatile ("");
+}
+
+noinline void bpf_testmod_stacktrace_test_1(void)
+{
+	bpf_testmod_stacktrace_test_2();
+	asm volatile ("");
+}
+
 int bpf_testmod_fentry_ok;
 
 noinline ssize_t
@@ -497,6 +521,8 @@ bpf_testmod_test_read(struct file *file, struct kobject *kobj,
 			21, 22, 23, 24, 25, 26) != 231)
 		goto out;
 
+	bpf_testmod_stacktrace_test_1();
+
 	bpf_testmod_fentry_ok = 1;
 out:
 	return -EIO; /* always fail */
-- 
2.51.0



* [PATCHv2 4/4] selftests/bpf: Add stacktrace ips test for raw_tp
  2025-11-03 22:09 [PATCHv2 0/4] x86/fgraph,bpf: Fix ORC stack unwind from return probe Jiri Olsa
                   ` (2 preceding siblings ...)
  2025-11-03 22:09 ` [PATCHv2 3/4] selftests/bpf: Add stacktrace ips test for kprobe_multi/kretprobe_multi Jiri Olsa
@ 2025-11-03 22:09 ` Jiri Olsa
  2025-11-03 23:47   ` bot+bpf-ci
  3 siblings, 1 reply; 12+ messages in thread
From: Jiri Olsa @ 2025-11-03 22:09 UTC (permalink / raw)
  To: Masami Hiramatsu, Steven Rostedt, Josh Poimboeuf
  Cc: Peter Zijlstra, bpf, linux-trace-kernel, x86, Yonghong Song,
	Song Liu, Andrii Nakryiko

Adding a test that verifies we get the expected initial 2 entries in the
stacktrace for a raw_tp probe via ORC unwind.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 .../selftests/bpf/prog_tests/stacktrace_ips.c | 46 +++++++++++++++++++
 .../selftests/bpf/progs/stacktrace_ips.c      |  8 ++++
 2 files changed, 54 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c b/tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c
index 6fca459ba550..282a068d2064 100644
--- a/tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c
+++ b/tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c
@@ -84,12 +84,58 @@ static void test_stacktrace_ips_kprobe_multi(bool retprobe)
 	stacktrace_ips__destroy(skel);
 }
 
+static void test_stacktrace_ips_raw_tp(void)
+{
+	__u32 info_len = sizeof(struct bpf_prog_info);
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+	struct bpf_prog_info info = {};
+	struct stacktrace_ips *skel;
+	__u64 bpf_prog_ksym = 0;
+	int err;
+
+	skel = stacktrace_ips__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "stacktrace_ips__open_and_load"))
+		return;
+
+	if (!skel->kconfig->CONFIG_UNWINDER_ORC) {
+		test__skip();
+		goto cleanup;
+	}
+
+	skel->links.rawtp_test = bpf_program__attach_raw_tracepoint(
+							skel->progs.rawtp_test,
+							"bpf_testmod_test_read");
+	if (!ASSERT_OK_PTR(skel->links.rawtp_test, "bpf_program__attach_raw_tracepoint"))
+		goto cleanup;
+
+	/* get bpf program address */
+	info.jited_ksyms = ptr_to_u64(&bpf_prog_ksym);
+	info.nr_jited_ksyms = 1;
+	err = bpf_prog_get_info_by_fd(bpf_program__fd(skel->progs.rawtp_test),
+				      &info, &info_len);
+	if (ASSERT_OK(err, "bpf_prog_get_info_by_fd"))
+		goto cleanup;
+
+	trigger_module_test_read(1);
+
+	load_kallsyms();
+
+	check_stacktrace_ips(bpf_map__fd(skel->maps.stackmap), skel->bss->stack_key, 2,
+			     bpf_prog_ksym,
+			     ksym_get_addr("bpf_trace_run2"));
+
+cleanup:
+	stacktrace_ips__destroy(skel);
+}
+
 static void __test_stacktrace_ips(void)
 {
 	if (test__start_subtest("kprobe_multi"))
 		test_stacktrace_ips_kprobe_multi(false);
 	if (test__start_subtest("kretprobe_multi"))
 		test_stacktrace_ips_kprobe_multi(true);
+	if (test__start_subtest("raw_tp"))
+		test_stacktrace_ips_raw_tp();
 }
 #else
 static void __test_stacktrace_ips(void)
diff --git a/tools/testing/selftests/bpf/progs/stacktrace_ips.c b/tools/testing/selftests/bpf/progs/stacktrace_ips.c
index e2eb30945c1b..a96c8150d7f5 100644
--- a/tools/testing/selftests/bpf/progs/stacktrace_ips.c
+++ b/tools/testing/selftests/bpf/progs/stacktrace_ips.c
@@ -38,4 +38,12 @@ int kprobe_multi_test(struct pt_regs *ctx)
 	return 0;
 }
 
+SEC("raw_tp/bpf_testmod_test_read")
+int rawtp_test(void *ctx)
+{
+	/* Skip ebpf program entry in the stack. */
+	stack_key = bpf_get_stackid(ctx, &stackmap, 0);
+	return 0;
+}
+
 char _license[] SEC("license") = "GPL";
-- 
2.51.0



* Re: [PATCHv2 4/4] selftests/bpf: Add stacktrace ips test for raw_tp
  2025-11-03 22:09 ` [PATCHv2 4/4] selftests/bpf: Add stacktrace ips test for raw_tp Jiri Olsa
@ 2025-11-03 23:47   ` bot+bpf-ci
  2025-11-04  8:35     ` Jiri Olsa
  0 siblings, 1 reply; 12+ messages in thread
From: bot+bpf-ci @ 2025-11-03 23:47 UTC (permalink / raw)
  To: jolsa, mhiramat, rostedt, jpoimboe
  Cc: peterz, bpf, linux-trace-kernel, x86, yhs, songliubraving, andrii,
	ast, andrii, daniel, martin.lau, eddyz87, yonghong.song, clm,
	ihor.solodrai


> diff --git a/tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c b/tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c
> index 6fca459ba..282a068d2 100644
> --- a/tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c
> +++ b/tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c
> @@ -84,12 +84,58 @@ static void test_stacktrace_ips_kprobe_multi(bool retprobe)
>  	stacktrace_ips__destroy(skel);
>  }
>
> +static void test_stacktrace_ips_raw_tp(void)
> +{

[ ... ]

> +	/* get bpf program address */
> +	info.jited_ksyms = ptr_to_u64(&bpf_prog_ksym);
> +	info.nr_jited_ksyms = 1;
> +	err = bpf_prog_get_info_by_fd(bpf_program__fd(skel->progs.rawtp_test),
> +				      &info, &info_len);
> +	if (ASSERT_OK(err, "bpf_prog_get_info_by_fd"))
> +		goto cleanup;
             ^^^^^^^^^^^^^^

Is the error check inverted in test_stacktrace_ips_raw_tp()?

The ASSERT_OK macro returns true when err == 0 (success), so the code
jumps to cleanup on success and continues on failure. Comparing with
test_stacktrace_ips_kprobe_multi() at line 70, the pattern should be:

  if (!ASSERT_OK_PTR(...))
      goto cleanup;

With the current code, bpf_prog_get_info_by_fd() success skips the test
(trigger_module_test_read and check_stacktrace_ips), while failure
continues with invalid bpf_prog_ksym data.

> +
> +	trigger_module_test_read(1);
> +
> +	load_kallsyms();
> +
> +	check_stacktrace_ips(bpf_map__fd(skel->maps.stackmap), skel->bss->stack_key, 2,
> +			     bpf_prog_ksym,
> +			     ksym_get_addr("bpf_trace_run2"));
> +
> +cleanup:
> +	stacktrace_ips__destroy(skel);
> +}
> +

[ ... ]


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/19051288274


* Re: [PATCHv2 2/4] x86/fgraph,bpf: Fix stack ORC unwind from kprobe_multi return probe
  2025-11-03 22:09 ` [PATCHv2 2/4] x86/fgraph,bpf: Fix stack ORC unwind from kprobe_multi return probe Jiri Olsa
@ 2025-11-03 23:47   ` bot+bpf-ci
  2025-11-04  9:27     ` Jiri Olsa
  2025-11-06 12:29   ` Peter Zijlstra
  1 sibling, 1 reply; 12+ messages in thread
From: bot+bpf-ci @ 2025-11-03 23:47 UTC (permalink / raw)
  To: jolsa, mhiramat, rostedt, jpoimboe
  Cc: peterz, bpf, linux-trace-kernel, x86, yhs, songliubraving, andrii,
	ast, andrii, daniel, martin.lau, eddyz87, yonghong.song, clm,
	ihor.solodrai


> diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h
> index 93156ac4f..b08c95872 100644
> --- a/arch/x86/include/asm/ftrace.h
> +++ b/arch/x86/include/asm/ftrace.h
> @@ -56,6 +56,11 @@ arch_ftrace_get_regs(struct ftrace_regs *fregs)
>  	return &arch_ftrace_regs(fregs)->regs;
>  }
>
> +#define arch_ftrace_partial_regs(regs) do {	\
> +	regs->flags &= ~X86_EFLAGS_FIXED;	\
> +	regs->cs = __KERNEL_CS;			\
> +} while (0)
> +

Can arch_ftrace_partial_regs read uninitialized regs->flags?

In return_to_handler, the ftrace_regs structure is allocated on the
stack but only a few fields are initialized (RAX, RDX, RBP, RSP).
The EFLAGS/flags field is never written before
arch_ftrace_partial_regs reads it:

arch/x86/kernel/ftrace_64.S return_to_handler:
    subq $(FRAME_SIZE), %rsp    # Allocate ftrace_regs on stack
    movq %rax, RAX(%rsp)
    movq %rdx, RDX(%rsp)
    movq %rbp, RBP(%rsp)
    movq %rsp, RSP(%rsp)
    # No initialization of EFLAGS field

Then ftrace_partial_regs()->arch_ftrace_partial_regs() executes:
    regs->flags &= ~X86_EFLAGS_FIXED;

This reads the uninitialized flags field. Stack allocations contain
garbage, so regs->flags will have whatever data was previously on
the stack. The &= operation produces undefined results when operating
on uninitialized memory.

For comparison, ftrace_regs_caller explicitly initializes EFLAGS:
    movq MCOUNT_REG_SIZE(%rsp), %rcx
    movq %rcx, EFLAGS(%rsp)

Should return_to_handler initialize regs->flags to 0 (or another
appropriate value) before arch_ftrace_partial_regs modifies it?

[ ... ]

---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/19051288274


* Re: [PATCHv2 4/4] selftests/bpf: Add stacktrace ips test for raw_tp
  2025-11-03 23:47   ` bot+bpf-ci
@ 2025-11-04  8:35     ` Jiri Olsa
  0 siblings, 0 replies; 12+ messages in thread
From: Jiri Olsa @ 2025-11-04  8:35 UTC (permalink / raw)
  To: bot+bpf-ci
  Cc: mhiramat, rostedt, jpoimboe, peterz, bpf, linux-trace-kernel, x86,
	yhs, songliubraving, andrii, ast, daniel, martin.lau, eddyz87,
	yonghong.song, clm, ihor.solodrai

On Mon, Nov 03, 2025 at 11:47:36PM +0000, bot+bpf-ci@kernel.org wrote:
> > diff --git a/tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c b/tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c
> > index 6fca459ba..282a068d2 100644
> > --- a/tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c
> > +++ b/tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c
> > @@ -84,12 +84,58 @@ static void test_stacktrace_ips_kprobe_multi(bool retprobe)
> >  	stacktrace_ips__destroy(skel);
> >  }
> >
> > +static void test_stacktrace_ips_raw_tp(void)
> > +{
> 
> [ ... ]
> 
> > +	/* get bpf program address */
> > +	info.jited_ksyms = ptr_to_u64(&bpf_prog_ksym);
> > +	info.nr_jited_ksyms = 1;
> > +	err = bpf_prog_get_info_by_fd(bpf_program__fd(skel->progs.rawtp_test),
> > +				      &info, &info_len);
> > +	if (ASSERT_OK(err, "bpf_prog_get_info_by_fd"))
> > +		goto cleanup;
>              ^^^^^^^^^^^^^^
> 
> Is the error check inverted in test_stacktrace_ips_raw_tp()?

ugh, it is.. will fix it in next version

thanks,
jirka

> 
> The ASSERT_OK macro returns true when err == 0 (success), so the code
> jumps to cleanup on success and continues on failure. Comparing with
> test_stacktrace_ips_kprobe_multi() at line 70, the pattern should be:
> 
>   if (!ASSERT_OK_PTR(...))
>       goto cleanup;
> 
> With the current code, bpf_prog_get_info_by_fd() success skips the test
> (trigger_module_test_read and check_stacktrace_ips), while failure
> continues with invalid bpf_prog_ksym data.
> 
> > +
> > +	trigger_module_test_read(1);
> > +
> > +	load_kallsyms();
> > +
> > +	check_stacktrace_ips(bpf_map__fd(skel->maps.stackmap), skel->bss->stack_key, 2,
> > +			     bpf_prog_ksym,
> > +			     ksym_get_addr("bpf_trace_run2"));
> > +
> > +cleanup:
> > +	stacktrace_ips__destroy(skel);
> > +}
> > +
> 
> [ ... ]
> 
> 
> ---
> AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
> See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
> 
> CI run summary: https://github.com/kernel-patches/bpf/actions/runs/19051288274



* Re: [PATCHv2 2/4] x86/fgraph,bpf: Fix stack ORC unwind from kprobe_multi return probe
  2025-11-03 23:47   ` bot+bpf-ci
@ 2025-11-04  9:27     ` Jiri Olsa
  0 siblings, 0 replies; 12+ messages in thread
From: Jiri Olsa @ 2025-11-04  9:27 UTC (permalink / raw)
  To: bot+bpf-ci
  Cc: mhiramat, rostedt, jpoimboe, peterz, bpf, linux-trace-kernel, x86,
	yhs, songliubraving, andrii, ast, daniel, martin.lau, eddyz87,
	yonghong.song, clm, ihor.solodrai

On Mon, Nov 03, 2025 at 11:47:37PM +0000, bot+bpf-ci@kernel.org wrote:
> > diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h
> > index 93156ac4f..b08c95872 100644
> > --- a/arch/x86/include/asm/ftrace.h
> > +++ b/arch/x86/include/asm/ftrace.h
> > @@ -56,6 +56,11 @@ arch_ftrace_get_regs(struct ftrace_regs *fregs)
> >  	return &arch_ftrace_regs(fregs)->regs;
> >  }
> >
> > +#define arch_ftrace_partial_regs(regs) do {	\
> > +	regs->flags &= ~X86_EFLAGS_FIXED;	\
> > +	regs->cs = __KERNEL_CS;			\
> > +} while (0)
> > +
> 
> Can arch_ftrace_partial_regs read uninitialized regs->flags?
> 
> In return_to_handler, the ftrace_regs structure is allocated on the
> stack but only a few fields are initialized (RAX, RDX, RBP, RSP).
> The EFLAGS/flags field is never written before
> arch_ftrace_partial_regs reads it:
> 
> arch/x86/kernel/ftrace_64.S return_to_handler:
>     subq $(FRAME_SIZE), %rsp    # Allocate ftrace_regs on stack
>     movq %rax, RAX(%rsp)
>     movq %rdx, RDX(%rsp)
>     movq %rbp, RBP(%rsp)
>     movq %rsp, RSP(%rsp)
>     # No initialization of EFLAGS field
> 
> Then ftrace_partial_regs()->arch_ftrace_partial_regs() executes:
>     regs->flags &= ~X86_EFLAGS_FIXED;
> 
> This reads the uninitialized flags field. Stack allocations contain
> garbage, so regs->flags will have whatever data was previously on
> the stack. The &= operation produces undefined results when operating
> on uninitialized memory.
> 
> For comparison, ftrace_regs_caller explicitly initializes EFLAGS:
>     movq MCOUNT_REG_SIZE(%rsp), %rcx
>     movq %rcx, EFLAGS(%rsp)
> 
> Should return_to_handler initialize regs->flags to 0 (or another
> appropriate value) before arch_ftrace_partial_regs modifies it?

yes, that was in the initial version [1], but Steven did not like that,
so I moved the flags init later, where it needs to take into account
other callers (that have meaningful flags) and just zeroes X86_EFLAGS_FIXED,
so we go through the !perf_hw_regs leg in perf_callchain_kernel

jirka


[1] https://lore.kernel.org/bpf/aObSyt3qOnS_BMcy@krava/


* Re: [PATCHv2 1/4] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()"
  2025-11-03 22:09 ` [PATCHv2 1/4] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()" Jiri Olsa
@ 2025-11-06 12:27   ` Peter Zijlstra
  0 siblings, 0 replies; 12+ messages in thread
From: Peter Zijlstra @ 2025-11-06 12:27 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Masami Hiramatsu, Steven Rostedt, Josh Poimboeuf, Song Liu, bpf,
	linux-trace-kernel, x86, Yonghong Song, Song Liu, Andrii Nakryiko

On Mon, Nov 03, 2025 at 11:09:21PM +0100, Jiri Olsa wrote:
> This reverts commit 83f44ae0f8afcc9da659799db8693f74847e66b3.
> 
> Currently we store the initial stacktrace entry twice for non-HW pt_regs,
> i.e. for callers that fail the perf_hw_regs(regs) condition in
> perf_callchain_kernel.
> 
> It's easy to reproduce with this bpftrace command:
> 
>   # bpftrace -e 'tracepoint:sched:sched_process_exec { print(kstack()); }'
>   Attaching 1 probe...
> 
>         bprm_execve+1767
>         bprm_execve+1767
>         do_execveat_common.isra.0+425
>         __x64_sys_execve+56
>         do_syscall_64+133
>         entry_SYSCALL_64_after_hwframe+118
> 
> When perf_callchain_kernel calls unwind_start with first_frame, AFAICS
> we do not skip regs->ip; it's added as part of the unwind process.
> Hence reverting the extra perf_callchain_store in the non-HW regs leg.
> 
> I was not able to bisect this, so I'm not really sure why this was needed
> in v5.2 and why it's not working anymore, but I could see double entries
> as far back as v5.10.

Probably some ftrace/bpf glue code that doesn't adhere to the contract
set by perf_hw_regs(); as you find in the next patch.


* Re: [PATCHv2 2/4] x86/fgraph,bpf: Fix stack ORC unwind from kprobe_multi return probe
  2025-11-03 22:09 ` [PATCHv2 2/4] x86/fgraph,bpf: Fix stack ORC unwind from kprobe_multi return probe Jiri Olsa
  2025-11-03 23:47   ` bot+bpf-ci
@ 2025-11-06 12:29   ` Peter Zijlstra
  2025-11-07 22:22     ` Alexei Starovoitov
  1 sibling, 1 reply; 12+ messages in thread
From: Peter Zijlstra @ 2025-11-06 12:29 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Masami Hiramatsu, Steven Rostedt, Josh Poimboeuf, bpf,
	linux-trace-kernel, x86, Yonghong Song, Song Liu, Andrii Nakryiko

On Mon, Nov 03, 2025 at 11:09:22PM +0100, Jiri Olsa wrote:
> Currently we don't get a stack trace via the ORC unwinder on top of the
> fgraph exit handler. We can see that when generating a stacktrace from a
> kretprobe_multi bpf program, which is based on fprobe/fgraph.
> 
> The reason is that the ORC unwind code won't get past the return_to_handler
> callback installed by the fgraph return probe machinery.
> 
> Solve this by creating the stack frame in return_to_handler that the
> ftrace_graph_ret_addr function expects, so it can recover the original
> return address and continue with the unwind.
> 
> Also update the pt_regs data with cs/flags/rsp, which are needed for
> successful stack retrieval from the bpf_get_stackid helper:
>  - in get_perf_callchain we check user_mode(regs), so CS has to be set
>  - in perf_callchain_kernel we call perf_hw_regs(regs), so EFLAGS/FIXED
>    has to be unset
> 
> Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  arch/x86/include/asm/ftrace.h |  5 +++++
>  arch/x86/kernel/ftrace_64.S   |  8 +++++++-
>  include/linux/ftrace.h        | 10 +++++++++-
>  3 files changed, 21 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h
> index 93156ac4ffe0..b08c95872eed 100644
> --- a/arch/x86/include/asm/ftrace.h
> +++ b/arch/x86/include/asm/ftrace.h
> @@ -56,6 +56,11 @@ arch_ftrace_get_regs(struct ftrace_regs *fregs)
>  	return &arch_ftrace_regs(fregs)->regs;
>  }
>  
> +#define arch_ftrace_partial_regs(regs) do {	\
> +	regs->flags &= ~X86_EFLAGS_FIXED;	\
> +	regs->cs = __KERNEL_CS;			\
> +} while (0)

I lost the plot. Why here and not in asm below?

>  #define arch_ftrace_fill_perf_regs(fregs, _regs) do {	\
>  		(_regs)->ip = arch_ftrace_regs(fregs)->regs.ip;		\
>  		(_regs)->sp = arch_ftrace_regs(fregs)->regs.sp;		\
> diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S
> index 367da3638167..823dbdd0eb41 100644
> --- a/arch/x86/kernel/ftrace_64.S
> +++ b/arch/x86/kernel/ftrace_64.S
> @@ -354,12 +354,17 @@ SYM_CODE_START(return_to_handler)
>  	UNWIND_HINT_UNDEFINED
>  	ANNOTATE_NOENDBR
>  
> +	/* Restore return_to_handler value that got eaten by previous ret instruction. */
> +	subq $8, %rsp
> +	UNWIND_HINT_FUNC
> +
>  	/* Save ftrace_regs for function exit context  */
>  	subq $(FRAME_SIZE), %rsp
>  
>  	movq %rax, RAX(%rsp)
>  	movq %rdx, RDX(%rsp)
>  	movq %rbp, RBP(%rsp)
> +	movq %rsp, RSP(%rsp)
>  	movq %rsp, %rdi
>  
>  	call ftrace_return_to_handler
> @@ -368,7 +373,8 @@ SYM_CODE_START(return_to_handler)
>  	movq RDX(%rsp), %rdx
>  	movq RAX(%rsp), %rax
>  
> -	addq $(FRAME_SIZE), %rsp
> +	addq $(FRAME_SIZE) + 8, %rsp
> +
>  	/*
>  	 * Jump back to the old return address. This cannot be JMP_NOSPEC rdi
>  	 * since IBT would demand that contain ENDBR, which simply isn't so for


* Re: [PATCHv2 2/4] x86/fgraph,bpf: Fix stack ORC unwind from kprobe_multi return probe
  2025-11-06 12:29   ` Peter Zijlstra
@ 2025-11-07 22:22     ` Alexei Starovoitov
  0 siblings, 0 replies; 12+ messages in thread
From: Alexei Starovoitov @ 2025-11-07 22:22 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Jiri Olsa, Masami Hiramatsu, Steven Rostedt, Josh Poimboeuf, bpf,
	linux-trace-kernel, X86 ML, Yonghong Song, Song Liu,
	Andrii Nakryiko

On Thu, Nov 6, 2025 at 4:30 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Mon, Nov 03, 2025 at 11:09:22PM +0100, Jiri Olsa wrote:
> > [...]
> > diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h
> > index 93156ac4ffe0..b08c95872eed 100644
> > --- a/arch/x86/include/asm/ftrace.h
> > +++ b/arch/x86/include/asm/ftrace.h
> > @@ -56,6 +56,11 @@ arch_ftrace_get_regs(struct ftrace_regs *fregs)
> >       return &arch_ftrace_regs(fregs)->regs;
> >  }
> >
> > +#define arch_ftrace_partial_regs(regs) do {  \
> > +     regs->flags &= ~X86_EFLAGS_FIXED;       \
> > +     regs->cs = __KERNEL_CS;                 \
> > +} while (0)
>
> I lost the plot. Why here and not in asm below?

It was discussed during v1/v2:

Jiri:
"Note I feel like it'd be better to set those directly in return_to_handler,
 but Steven suggested setting them later in kprobe_multi code path.
"

Masami:
"Yeah, stacktrace is not always used from the return handler."

If there are any improvements possible it can be a follow up.


end of thread, other threads:[~2025-11-07 22:22 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-11-03 22:09 [PATCHv2 0/4] x86/fgraph,bpf: Fix ORC stack unwind from return probe Jiri Olsa
2025-11-03 22:09 ` [PATCHv2 1/4] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()" Jiri Olsa
2025-11-06 12:27   ` Peter Zijlstra
2025-11-03 22:09 ` [PATCHv2 2/4] x86/fgraph,bpf: Fix stack ORC unwind from kprobe_multi return probe Jiri Olsa
2025-11-03 23:47   ` bot+bpf-ci
2025-11-04  9:27     ` Jiri Olsa
2025-11-06 12:29   ` Peter Zijlstra
2025-11-07 22:22     ` Alexei Starovoitov
2025-11-03 22:09 ` [PATCHv2 3/4] selftests/bpf: Add stacktrace ips test for kprobe_multi/kretprobe_multi Jiri Olsa
2025-11-03 22:09 ` [PATCHv2 4/4] selftests/bpf: Add stacktrace ips test for raw_tp Jiri Olsa
2025-11-03 23:47   ` bot+bpf-ci
2025-11-04  8:35     ` Jiri Olsa
