linux-trace-kernel.vger.kernel.org archive mirror
* [PATCHv3 0/4] x86/fgraph,bpf: Fix ORC stack unwind from return probe
@ 2025-11-04 21:54 Jiri Olsa
  2025-11-04 21:54 ` [PATCHv3 1/4] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()" Jiri Olsa
                   ` (5 more replies)
  0 siblings, 6 replies; 10+ messages in thread
From: Jiri Olsa @ 2025-11-04 21:54 UTC (permalink / raw)
  To: Masami Hiramatsu, Steven Rostedt, Josh Poimboeuf
  Cc: Peter Zijlstra, bpf, linux-trace-kernel, x86, Yonghong Song,
	Song Liu, Andrii Nakryiko

hi,
sending a fix for the ORC stack unwind issue reported in [1], where
the ORC unwinder won't get past the return_to_handler function and
we get no stacktrace.

Sending the fix together with an unrelated stacktrace fix (patch 1),
so the attached test can work properly.

It's based on:
  https://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
  probes/for-next

v1: https://lore.kernel.org/bpf/20251027131354.1984006-1-jolsa@kernel.org/
v2: https://lore.kernel.org/bpf/20251103220924.36371-3-jolsa@kernel.org/

v3 changes:
- fix assert condition in test

thanks,
jirka


[1] https://lore.kernel.org/bpf/aObSyt3qOnS_BMcy@krava/
---
Jiri Olsa (4):
      Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()"
      x86/fgraph,bpf: Fix stack ORC unwind from kprobe_multi return probe
      selftests/bpf: Add stacktrace ips test for kprobe_multi/kretprobe_multi
      selftests/bpf: Add stacktrace ips test for raw_tp

 arch/x86/events/core.c                                  |  10 +++----
 arch/x86/include/asm/ftrace.h                           |   5 ++++
 arch/x86/kernel/ftrace_64.S                             |   8 +++++-
 include/linux/ftrace.h                                  |  10 ++++++-
 tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c | 150 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 tools/testing/selftests/bpf/progs/stacktrace_ips.c      |  49 +++++++++++++++++++++++++++++++
 tools/testing/selftests/bpf/test_kmods/bpf_testmod.c    |  26 +++++++++++++++++
 7 files changed, 251 insertions(+), 7 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c
 create mode 100644 tools/testing/selftests/bpf/progs/stacktrace_ips.c

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCHv3 1/4] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()"
  2025-11-04 21:54 [PATCHv3 0/4] x86/fgraph,bpf: Fix ORC stack unwind from return probe Jiri Olsa
@ 2025-11-04 21:54 ` Jiri Olsa
  2025-11-04 21:54 ` [PATCHv3 2/4] x86/fgraph,bpf: Fix stack ORC unwind from kprobe_multi return probe Jiri Olsa
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Jiri Olsa @ 2025-11-04 21:54 UTC (permalink / raw)
  To: Masami Hiramatsu, Steven Rostedt, Josh Poimboeuf
  Cc: Song Liu, Peter Zijlstra, bpf, linux-trace-kernel, x86,
	Yonghong Song, Song Liu, Andrii Nakryiko

This reverts commit 83f44ae0f8afcc9da659799db8693f74847e66b3.

Currently we store the initial stacktrace entry twice for non-HW pt_regs,
that is, for callers that fail the perf_hw_regs(regs) condition in
perf_callchain_kernel.

It's easy to reproduce this with bpftrace:

  # bpftrace -e 'tracepoint:sched:sched_process_exec { print(kstack()); }'
  Attaching 1 probe...

        bprm_execve+1767
        bprm_execve+1767
        do_execveat_common.isra.0+425
        __x64_sys_execve+56
        do_syscall_64+133
        entry_SYSCALL_64_after_hwframe+118

When perf_callchain_kernel calls unwind_start with first_frame, AFAICS
we do not skip regs->ip; it's added as part of the unwind process.
Hence reverting the extra perf_callchain_store for the non-HW regs leg.

I was not able to bisect this, so I'm not really sure why this was needed
in v5.2 and why it's not working anymore, but I could see double entries
as far back as v5.10.

I tested both ORC and frame-pointer unwinding with and without this fix,
and except for the initial entry the stacktraces are the same.

Acked-by: Song Liu <song@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 arch/x86/events/core.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 745caa6c15a3..fa6c47b50989 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2789,13 +2789,13 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
 		return;
 	}
 
-	if (perf_callchain_store(entry, regs->ip))
-		return;
-
-	if (perf_hw_regs(regs))
+	if (perf_hw_regs(regs)) {
+		if (perf_callchain_store(entry, regs->ip))
+			return;
 		unwind_start(&state, current, regs, NULL);
-	else
+	} else {
 		unwind_start(&state, current, NULL, (void *)regs->sp);
+	}
 
 	for (; !unwind_done(&state); unwind_next_frame(&state)) {
 		addr = unwind_get_return_address(&state);
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCHv3 2/4] x86/fgraph,bpf: Fix stack ORC unwind from kprobe_multi return probe
  2025-11-04 21:54 [PATCHv3 0/4] x86/fgraph,bpf: Fix ORC stack unwind from return probe Jiri Olsa
  2025-11-04 21:54 ` [PATCHv3 1/4] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()" Jiri Olsa
@ 2025-11-04 21:54 ` Jiri Olsa
  2025-11-04 21:54 ` [PATCHv3 3/4] selftests/bpf: Add stacktrace ips test for kprobe_multi/kretprobe_multi Jiri Olsa
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Jiri Olsa @ 2025-11-04 21:54 UTC (permalink / raw)
  To: Masami Hiramatsu, Steven Rostedt, Josh Poimboeuf
  Cc: Peter Zijlstra, bpf, linux-trace-kernel, x86, Yonghong Song,
	Song Liu, Andrii Nakryiko

Currently we don't get a stack trace via the ORC unwinder on top of the
fgraph exit handler. We can see that when generating a stacktrace from a
kretprobe_multi bpf program, which is based on fprobe/fgraph.

The reason is that the ORC unwind code won't get past the return_to_handler
callback installed by the fgraph return probe machinery.

Solving this by creating the stack frame in return_to_handler that the
ftrace_graph_ret_addr function expects, so it can recover the original
return address and continue with the unwind.

Also updating the pt_regs data with cs/flags/rsp, which are needed for
successful stack retrieval from the bpf_get_stackid helper:
 - in get_perf_callchain we check user_mode(regs), so CS has to be set
 - in perf_callchain_kernel we call perf_hw_regs(regs), so X86_EFLAGS_FIXED
   has to be unset

Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 arch/x86/include/asm/ftrace.h |  5 +++++
 arch/x86/kernel/ftrace_64.S   |  8 +++++++-
 include/linux/ftrace.h        | 10 +++++++++-
 3 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h
index 93156ac4ffe0..b08c95872eed 100644
--- a/arch/x86/include/asm/ftrace.h
+++ b/arch/x86/include/asm/ftrace.h
@@ -56,6 +56,11 @@ arch_ftrace_get_regs(struct ftrace_regs *fregs)
 	return &arch_ftrace_regs(fregs)->regs;
 }
 
+#define arch_ftrace_partial_regs(regs) do {	\
+	regs->flags &= ~X86_EFLAGS_FIXED;	\
+	regs->cs = __KERNEL_CS;			\
+} while (0)
+
 #define arch_ftrace_fill_perf_regs(fregs, _regs) do {	\
 		(_regs)->ip = arch_ftrace_regs(fregs)->regs.ip;		\
 		(_regs)->sp = arch_ftrace_regs(fregs)->regs.sp;		\
diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S
index 367da3638167..823dbdd0eb41 100644
--- a/arch/x86/kernel/ftrace_64.S
+++ b/arch/x86/kernel/ftrace_64.S
@@ -354,12 +354,17 @@ SYM_CODE_START(return_to_handler)
 	UNWIND_HINT_UNDEFINED
 	ANNOTATE_NOENDBR
 
+	/* Restore return_to_handler value that got eaten by previous ret instruction. */
+	subq $8, %rsp
+	UNWIND_HINT_FUNC
+
 	/* Save ftrace_regs for function exit context  */
 	subq $(FRAME_SIZE), %rsp
 
 	movq %rax, RAX(%rsp)
 	movq %rdx, RDX(%rsp)
 	movq %rbp, RBP(%rsp)
+	movq %rsp, RSP(%rsp)
 	movq %rsp, %rdi
 
 	call ftrace_return_to_handler
@@ -368,7 +373,8 @@ SYM_CODE_START(return_to_handler)
 	movq RDX(%rsp), %rdx
 	movq RAX(%rsp), %rax
 
-	addq $(FRAME_SIZE), %rsp
+	addq $(FRAME_SIZE) + 8, %rsp
+
 	/*
 	 * Jump back to the old return address. This cannot be JMP_NOSPEC rdi
 	 * since IBT would demand that contain ENDBR, which simply isn't so for
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 7ded7df6e9b5..07f8c309e432 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -193,6 +193,10 @@ static __always_inline struct pt_regs *ftrace_get_regs(struct ftrace_regs *fregs
 #if !defined(CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS) || \
 	defined(CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS)
 
+#ifndef arch_ftrace_partial_regs
+#define arch_ftrace_partial_regs(regs) do {} while (0)
+#endif
+
 static __always_inline struct pt_regs *
 ftrace_partial_regs(struct ftrace_regs *fregs, struct pt_regs *regs)
 {
@@ -202,7 +206,11 @@ ftrace_partial_regs(struct ftrace_regs *fregs, struct pt_regs *regs)
 	 * Since arch_ftrace_get_regs() will check some members and may return
 	 * NULL, we can not use it.
 	 */
-	return &arch_ftrace_regs(fregs)->regs;
+	regs = &arch_ftrace_regs(fregs)->regs;
+
+	/* Allow arch specific updates to regs. */
+	arch_ftrace_partial_regs(regs);
+	return regs;
 }
 
 #endif /* !CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS || CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS */
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCHv3 3/4] selftests/bpf: Add stacktrace ips test for kprobe_multi/kretprobe_multi
  2025-11-04 21:54 [PATCHv3 0/4] x86/fgraph,bpf: Fix ORC stack unwind from return probe Jiri Olsa
  2025-11-04 21:54 ` [PATCHv3 1/4] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()" Jiri Olsa
  2025-11-04 21:54 ` [PATCHv3 2/4] x86/fgraph,bpf: Fix stack ORC unwind from kprobe_multi return probe Jiri Olsa
@ 2025-11-04 21:54 ` Jiri Olsa
  2025-11-04 21:54 ` [PATCHv3 4/4] selftests/bpf: Add stacktrace ips test for raw_tp Jiri Olsa
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Jiri Olsa @ 2025-11-04 21:54 UTC (permalink / raw)
  To: Masami Hiramatsu, Steven Rostedt, Josh Poimboeuf
  Cc: Peter Zijlstra, bpf, linux-trace-kernel, x86, Yonghong Song,
	Song Liu, Andrii Nakryiko

Adding a test that attaches kprobe_multi/kretprobe_multi probes and
verifies that the ORC stacktrace matches the expected functions.

Adding a bpf_testmod_stacktrace_test function to the bpf_testmod kernel
module, called through several nested functions so we get a reliable
call path for the stacktrace.

The test runs only with the ORC unwinder, to keep it simple.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 .../selftests/bpf/prog_tests/stacktrace_ips.c | 104 ++++++++++++++++++
 .../selftests/bpf/progs/stacktrace_ips.c      |  41 +++++++
 .../selftests/bpf/test_kmods/bpf_testmod.c    |  26 +++++
 3 files changed, 171 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c
 create mode 100644 tools/testing/selftests/bpf/progs/stacktrace_ips.c

diff --git a/tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c b/tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c
new file mode 100644
index 000000000000..6fca459ba550
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c
@@ -0,0 +1,104 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include "stacktrace_ips.skel.h"
+
+#ifdef __x86_64__
+static int check_stacktrace_ips(int fd, __u32 key, int cnt, ...)
+{
+	__u64 ips[PERF_MAX_STACK_DEPTH];
+	struct ksyms *ksyms = NULL;
+	int i, err = 0;
+	va_list args;
+
+	/* sorted by addr */
+	ksyms = load_kallsyms_local();
+	if (!ASSERT_OK_PTR(ksyms, "load_kallsyms_local"))
+		return -1;
+
+	/* unlikely, but... */
+	if (!ASSERT_LT(cnt, PERF_MAX_STACK_DEPTH, "check_max"))
+		return -1;
+
+	err = bpf_map_lookup_elem(fd, &key, ips);
+	if (err)
+		goto out;
+
+	/*
+	 * Compare all symbols provided via arguments with stacktrace ips,
+	 * and their related symbol addresses.
+	 */
+	va_start(args, cnt);
+
+	for (i = 0; i < cnt; i++) {
+		unsigned long val;
+		struct ksym *ksym;
+
+		val = va_arg(args, unsigned long);
+		ksym = ksym_search_local(ksyms, ips[i]);
+		if (!ASSERT_OK_PTR(ksym, "ksym_search_local"))
+			break;
+		ASSERT_EQ(ksym->addr, val, "stack_cmp");
+	}
+
+	va_end(args);
+
+out:
+	free_kallsyms_local(ksyms);
+	return err;
+}
+
+static void test_stacktrace_ips_kprobe_multi(bool retprobe)
+{
+	LIBBPF_OPTS(bpf_kprobe_multi_opts, opts,
+		.retprobe = retprobe
+	);
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+	struct stacktrace_ips *skel;
+
+	skel = stacktrace_ips__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "stacktrace_ips__open_and_load"))
+		return;
+
+	if (!skel->kconfig->CONFIG_UNWINDER_ORC) {
+		test__skip();
+		goto cleanup;
+	}
+
+	skel->links.kprobe_multi_test = bpf_program__attach_kprobe_multi_opts(
+							skel->progs.kprobe_multi_test,
+							"bpf_testmod_stacktrace_test", &opts);
+	if (!ASSERT_OK_PTR(skel->links.kprobe_multi_test, "bpf_program__attach_kprobe_multi_opts"))
+		goto cleanup;
+
+	trigger_module_test_read(1);
+
+	load_kallsyms();
+
+	check_stacktrace_ips(bpf_map__fd(skel->maps.stackmap), skel->bss->stack_key, 4,
+			     ksym_get_addr("bpf_testmod_stacktrace_test_3"),
+			     ksym_get_addr("bpf_testmod_stacktrace_test_2"),
+			     ksym_get_addr("bpf_testmod_stacktrace_test_1"),
+			     ksym_get_addr("bpf_testmod_test_read"));
+
+cleanup:
+	stacktrace_ips__destroy(skel);
+}
+
+static void __test_stacktrace_ips(void)
+{
+	if (test__start_subtest("kprobe_multi"))
+		test_stacktrace_ips_kprobe_multi(false);
+	if (test__start_subtest("kretprobe_multi"))
+		test_stacktrace_ips_kprobe_multi(true);
+}
+#else
+static void __test_stacktrace_ips(void)
+{
+	test__skip();
+}
+#endif
+
+void test_stacktrace_ips(void)
+{
+	__test_stacktrace_ips();
+}
diff --git a/tools/testing/selftests/bpf/progs/stacktrace_ips.c b/tools/testing/selftests/bpf/progs/stacktrace_ips.c
new file mode 100644
index 000000000000..e2eb30945c1b
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/stacktrace_ips.c
@@ -0,0 +1,41 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2018 Facebook
+
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+#ifndef PERF_MAX_STACK_DEPTH
+#define PERF_MAX_STACK_DEPTH         127
+#endif
+
+typedef __u64 stack_trace_t[PERF_MAX_STACK_DEPTH];
+
+struct {
+	__uint(type, BPF_MAP_TYPE_STACK_TRACE);
+	__uint(max_entries, 16384);
+	__type(key, __u32);
+	__type(value, stack_trace_t);
+} stackmap SEC(".maps");
+
+extern bool CONFIG_UNWINDER_ORC __kconfig __weak;
+
+/*
+ * This function is here to have CONFIG_UNWINDER_ORC
+ * used and added to object BTF.
+ */
+int unused(void)
+{
+	return CONFIG_UNWINDER_ORC ? 0 : 1;
+}
+
+__u32 stack_key;
+
+SEC("kprobe.multi")
+int kprobe_multi_test(struct pt_regs *ctx)
+{
+	stack_key = bpf_get_stackid(ctx, &stackmap, 0);
+	return 0;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
index 8074bc5f6f20..ed0a4721d8fd 100644
--- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
@@ -417,6 +417,30 @@ noinline int bpf_testmod_fentry_test11(u64 a, void *b, short c, int d,
 	return a + (long)b + c + d + (long)e + f + g + h + i + j + k;
 }
 
+noinline void bpf_testmod_stacktrace_test(void)
+{
+	/* used for stacktrace test as attach function */
+	asm volatile ("");
+}
+
+noinline void bpf_testmod_stacktrace_test_3(void)
+{
+	bpf_testmod_stacktrace_test();
+	asm volatile ("");
+}
+
+noinline void bpf_testmod_stacktrace_test_2(void)
+{
+	bpf_testmod_stacktrace_test_3();
+	asm volatile ("");
+}
+
+noinline void bpf_testmod_stacktrace_test_1(void)
+{
+	bpf_testmod_stacktrace_test_2();
+	asm volatile ("");
+}
+
 int bpf_testmod_fentry_ok;
 
 noinline ssize_t
@@ -497,6 +521,8 @@ bpf_testmod_test_read(struct file *file, struct kobject *kobj,
 			21, 22, 23, 24, 25, 26) != 231)
 		goto out;
 
+	bpf_testmod_stacktrace_test_1();
+
 	bpf_testmod_fentry_ok = 1;
 out:
 	return -EIO; /* always fail */
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCHv3 4/4] selftests/bpf: Add stacktrace ips test for raw_tp
  2025-11-04 21:54 [PATCHv3 0/4] x86/fgraph,bpf: Fix ORC stack unwind from return probe Jiri Olsa
                   ` (2 preceding siblings ...)
  2025-11-04 21:54 ` [PATCHv3 3/4] selftests/bpf: Add stacktrace ips test for kprobe_multi/kretprobe_multi Jiri Olsa
@ 2025-11-04 21:54 ` Jiri Olsa
  2025-11-05  2:20 ` [PATCHv3 0/4] x86/fgraph,bpf: Fix ORC stack unwind from return probe Alexei Starovoitov
  2025-11-06  1:20 ` patchwork-bot+netdevbpf
  5 siblings, 0 replies; 10+ messages in thread
From: Jiri Olsa @ 2025-11-04 21:54 UTC (permalink / raw)
  To: Masami Hiramatsu, Steven Rostedt, Josh Poimboeuf
  Cc: Peter Zijlstra, bpf, linux-trace-kernel, x86, Yonghong Song,
	Song Liu, Andrii Nakryiko

Adding a test that verifies we get the expected initial 2 entries in the
stacktrace for a raw_tp probe, via ORC unwind.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 .../selftests/bpf/prog_tests/stacktrace_ips.c | 46 +++++++++++++++++++
 .../selftests/bpf/progs/stacktrace_ips.c      |  8 ++++
 2 files changed, 54 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c b/tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c
index 6fca459ba550..c9efdd2a5b18 100644
--- a/tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c
+++ b/tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c
@@ -84,12 +84,58 @@ static void test_stacktrace_ips_kprobe_multi(bool retprobe)
 	stacktrace_ips__destroy(skel);
 }
 
+static void test_stacktrace_ips_raw_tp(void)
+{
+	__u32 info_len = sizeof(struct bpf_prog_info);
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+	struct bpf_prog_info info = {};
+	struct stacktrace_ips *skel;
+	__u64 bpf_prog_ksym = 0;
+	int err;
+
+	skel = stacktrace_ips__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "stacktrace_ips__open_and_load"))
+		return;
+
+	if (!skel->kconfig->CONFIG_UNWINDER_ORC) {
+		test__skip();
+		goto cleanup;
+	}
+
+	skel->links.rawtp_test = bpf_program__attach_raw_tracepoint(
+							skel->progs.rawtp_test,
+							"bpf_testmod_test_read");
+	if (!ASSERT_OK_PTR(skel->links.rawtp_test, "bpf_program__attach_raw_tracepoint"))
+		goto cleanup;
+
+	/* get bpf program address */
+	info.jited_ksyms = ptr_to_u64(&bpf_prog_ksym);
+	info.nr_jited_ksyms = 1;
+	err = bpf_prog_get_info_by_fd(bpf_program__fd(skel->progs.rawtp_test),
+				      &info, &info_len);
+	if (!ASSERT_OK(err, "bpf_prog_get_info_by_fd"))
+		goto cleanup;
+
+	trigger_module_test_read(1);
+
+	load_kallsyms();
+
+	check_stacktrace_ips(bpf_map__fd(skel->maps.stackmap), skel->bss->stack_key, 2,
+			     bpf_prog_ksym,
+			     ksym_get_addr("bpf_trace_run2"));
+
+cleanup:
+	stacktrace_ips__destroy(skel);
+}
+
 static void __test_stacktrace_ips(void)
 {
 	if (test__start_subtest("kprobe_multi"))
 		test_stacktrace_ips_kprobe_multi(false);
 	if (test__start_subtest("kretprobe_multi"))
 		test_stacktrace_ips_kprobe_multi(true);
+	if (test__start_subtest("raw_tp"))
+		test_stacktrace_ips_raw_tp();
 }
 #else
 static void __test_stacktrace_ips(void)
diff --git a/tools/testing/selftests/bpf/progs/stacktrace_ips.c b/tools/testing/selftests/bpf/progs/stacktrace_ips.c
index e2eb30945c1b..a96c8150d7f5 100644
--- a/tools/testing/selftests/bpf/progs/stacktrace_ips.c
+++ b/tools/testing/selftests/bpf/progs/stacktrace_ips.c
@@ -38,4 +38,12 @@ int kprobe_multi_test(struct pt_regs *ctx)
 	return 0;
 }
 
+SEC("raw_tp/bpf_testmod_test_read")
+int rawtp_test(void *ctx)
+{
+	/* Skip ebpf program entry in the stack. */
+	stack_key = bpf_get_stackid(ctx, &stackmap, 0);
+	return 0;
+}
+
 char _license[] SEC("license") = "GPL";
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCHv3 0/4] x86/fgraph,bpf: Fix ORC stack unwind from return probe
  2025-11-04 21:54 [PATCHv3 0/4] x86/fgraph,bpf: Fix ORC stack unwind from return probe Jiri Olsa
                   ` (3 preceding siblings ...)
  2025-11-04 21:54 ` [PATCHv3 4/4] selftests/bpf: Add stacktrace ips test for raw_tp Jiri Olsa
@ 2025-11-05  2:20 ` Alexei Starovoitov
  2025-11-05 15:03   ` Steven Rostedt
  2025-11-06  1:20 ` patchwork-bot+netdevbpf
  5 siblings, 1 reply; 10+ messages in thread
From: Alexei Starovoitov @ 2025-11-05  2:20 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Masami Hiramatsu, Steven Rostedt, Josh Poimboeuf, Peter Zijlstra,
	bpf, linux-trace-kernel, X86 ML, Yonghong Song, Song Liu,
	Andrii Nakryiko

On Tue, Nov 4, 2025 at 1:54 PM Jiri Olsa <jolsa@kernel.org> wrote:
>
> hi,
> sending a fix for the ORC stack unwind issue reported in [1], where
> the ORC unwinder won't get past the return_to_handler function and
> we get no stacktrace.
>
> Sending the fix together with an unrelated stacktrace fix (patch 1),
> so the attached test can work properly.
>
> It's based on:
>   https://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
>   probes/for-next
>
> v1: https://lore.kernel.org/bpf/20251027131354.1984006-1-jolsa@kernel.org/
> v2: https://lore.kernel.org/bpf/20251103220924.36371-3-jolsa@kernel.org/
>
> v3 changes:
> - fix assert condition in test
>
> thanks,
> jirka
>
>
> [1] https://lore.kernel.org/bpf/aObSyt3qOnS_BMcy@krava/
> ---
> Jiri Olsa (4):
>       Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()"
>       x86/fgraph,bpf: Fix stack ORC unwind from kprobe_multi return probe
>       selftests/bpf: Add stacktrace ips test for kprobe_multi/kretprobe_multi
>       selftests/bpf: Add stacktrace ips test for raw_tp
>
>  arch/x86/events/core.c                                  |  10 +++----
>  arch/x86/include/asm/ftrace.h                           |   5 ++++
>  arch/x86/kernel/ftrace_64.S                             |   8 +++++-
>  include/linux/ftrace.h                                  |  10 ++++++-
>  tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c | 150 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  tools/testing/selftests/bpf/progs/stacktrace_ips.c      |  49 +++++++++++++++++++++++++++++++
>  tools/testing/selftests/bpf/test_kmods/bpf_testmod.c    |  26 +++++++++++++++++
>  7 files changed, 251 insertions(+), 7 deletions(-)
>  create mode 100644 tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c
>  create mode 100644 tools/testing/selftests/bpf/progs/stacktrace_ips.c

Steven, Peter,

How should we route this?

If you take it into your tree, please send it to Linus right away,
so we can pull it into bpf/bpf-next trees.
The conflicts in selftests/bpf in the last merge window
were annoying. I don't want to see a repeat.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCHv3 0/4] x86/fgraph,bpf: Fix ORC stack unwind from return probe
  2025-11-05  2:20 ` [PATCHv3 0/4] x86/fgraph,bpf: Fix ORC stack unwind from return probe Alexei Starovoitov
@ 2025-11-05 15:03   ` Steven Rostedt
  2025-11-05 23:04     ` Steven Rostedt
  0 siblings, 1 reply; 10+ messages in thread
From: Steven Rostedt @ 2025-11-05 15:03 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Jiri Olsa, Masami Hiramatsu, Josh Poimboeuf, Peter Zijlstra, bpf,
	linux-trace-kernel, X86 ML, Yonghong Song, Song Liu,
	Andrii Nakryiko

On Tue, 4 Nov 2025 18:20:41 -0800
Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:

> Steven, Peter,
> 
> How should we route this?
> 
> If you take it into your tree, please send it to Linus right away,
> so we can pull it into bpf/bpf-next trees.
> The conflicts in selftests/bpf in the last merge window
> were annoying. I don't want to see a repeat.

Let me run this through my tests and then I'll give you an Acked-by and you
can take it through your tree.

-- Steve

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCHv3 0/4] x86/fgraph,bpf: Fix ORC stack unwind from return probe
  2025-11-05 15:03   ` Steven Rostedt
@ 2025-11-05 23:04     ` Steven Rostedt
  2025-11-06  1:15       ` Alexei Starovoitov
  0 siblings, 1 reply; 10+ messages in thread
From: Steven Rostedt @ 2025-11-05 23:04 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Jiri Olsa, Masami Hiramatsu, Josh Poimboeuf, Peter Zijlstra, bpf,
	linux-trace-kernel, X86 ML, Yonghong Song, Song Liu,
	Andrii Nakryiko

On Wed, 5 Nov 2025 10:03:59 -0500
Steven Rostedt <rostedt@goodmis.org> wrote:

> Let me run this through my tests and then I'll give you an Acked-by and you
> can take it through your tree.

It passed.

Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org>

-- Steve

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCHv3 0/4] x86/fgraph,bpf: Fix ORC stack unwind from return probe
  2025-11-05 23:04     ` Steven Rostedt
@ 2025-11-06  1:15       ` Alexei Starovoitov
  0 siblings, 0 replies; 10+ messages in thread
From: Alexei Starovoitov @ 2025-11-06  1:15 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Jiri Olsa, Masami Hiramatsu, Josh Poimboeuf, Peter Zijlstra, bpf,
	linux-trace-kernel, X86 ML, Yonghong Song, Song Liu,
	Andrii Nakryiko

On Wed, Nov 5, 2025 at 3:04 PM Steven Rostedt <rostedt@goodmis.org> wrote:
>
> On Wed, 5 Nov 2025 10:03:59 -0500
> Steven Rostedt <rostedt@goodmis.org> wrote:
>
> > Let me run this through my tests and then I'll give you an Acked-by and you
> > can take it through your tree.
>
> It passed.
>
> Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org>

Thanks. Applied.
Will send to Linus in a day or two.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCHv3 0/4] x86/fgraph,bpf: Fix ORC stack unwind from return probe
  2025-11-04 21:54 [PATCHv3 0/4] x86/fgraph,bpf: Fix ORC stack unwind from return probe Jiri Olsa
                   ` (4 preceding siblings ...)
  2025-11-05  2:20 ` [PATCHv3 0/4] x86/fgraph,bpf: Fix ORC stack unwind from return probe Alexei Starovoitov
@ 2025-11-06  1:20 ` patchwork-bot+netdevbpf
  5 siblings, 0 replies; 10+ messages in thread
From: patchwork-bot+netdevbpf @ 2025-11-06  1:20 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: mhiramat, rostedt, jpoimboe, peterz, bpf, linux-trace-kernel, x86,
	yhs, songliubraving, andrii

Hello:

This series was applied to bpf/bpf.git (master)
by Alexei Starovoitov <ast@kernel.org>:

On Tue,  4 Nov 2025 22:54:01 +0100 you wrote:
> hi,
> sending a fix for the ORC stack unwind issue reported in [1], where
> the ORC unwinder won't get past the return_to_handler function and
> we get no stacktrace.
> 
> Sending the fix together with an unrelated stacktrace fix (patch 1),
> so the attached test can work properly.
> 
> [...]

Here is the summary with links:
  - [PATCHv3,1/4] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()"
    https://git.kernel.org/bpf/bpf/c/6d08340d1e35
  - [PATCHv3,2/4] x86/fgraph,bpf: Fix stack ORC unwind from kprobe_multi return probe
    https://git.kernel.org/bpf/bpf/c/20a0bc10272f
  - [PATCHv3,3/4] selftests/bpf: Add stacktrace ips test for kprobe_multi/kretprobe_multi
    https://git.kernel.org/bpf/bpf/c/c9e208fa93cd
  - [PATCHv3,4/4] selftests/bpf: Add stacktrace ips test for raw_tp
    https://git.kernel.org/bpf/bpf/c/3490d29964bd

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html



^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2025-11-06  1:20 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2025-11-04 21:54 [PATCHv3 0/4] x86/fgraph,bpf: Fix ORC stack unwind from return probe Jiri Olsa
2025-11-04 21:54 ` [PATCHv3 1/4] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()" Jiri Olsa
2025-11-04 21:54 ` [PATCHv3 2/4] x86/fgraph,bpf: Fix stack ORC unwind from kprobe_multi return probe Jiri Olsa
2025-11-04 21:54 ` [PATCHv3 3/4] selftests/bpf: Add stacktrace ips test for kprobe_multi/kretprobe_multi Jiri Olsa
2025-11-04 21:54 ` [PATCHv3 4/4] selftests/bpf: Add stacktrace ips test for raw_tp Jiri Olsa
2025-11-05  2:20 ` [PATCHv3 0/4] x86/fgraph,bpf: Fix ORC stack unwind from return probe Alexei Starovoitov
2025-11-05 15:03   ` Steven Rostedt
2025-11-05 23:04     ` Steven Rostedt
2025-11-06  1:15       ` Alexei Starovoitov
2025-11-06  1:20 ` patchwork-bot+netdevbpf
