* [PATCH 0/3] x86/fgraph,bpf: Fix ORC stack unwind from return probe
@ 2025-10-27 13:13 Jiri Olsa
2025-10-27 13:13 ` [PATCH 1/3] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()" Jiri Olsa
` (2 more replies)
0 siblings, 3 replies; 13+ messages in thread
From: Jiri Olsa @ 2025-10-27 13:13 UTC (permalink / raw)
To: Masami Hiramatsu, Steven Rostedt, Josh Poimboeuf
Cc: Peter Zijlstra, bpf, linux-trace-kernel, x86, Yonghong Song,
Song Liu, Andrii Nakryiko
hi,
sending a fix for the ORC stack unwind issue reported in [1], where
the ORC unwinder won't get past the return_to_handler function and
we get no stacktrace.
Sending the fix together with an unrelated stacktrace fix (patch 1),
so the attached test can work properly.
It's based on:
https://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
probes/core
thanks,
jirka
[1] https://lore.kernel.org/bpf/aObSyt3qOnS_BMcy@krava/
---
Jiri Olsa (3):
Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()"
x86/fgraph,bpf: Fix stack ORC unwind from kprobe_multi return probe
selftests/bpf: Add stacktrace ips test for kprobe_multi/kretprobe_multi
arch/x86/events/core.c | 10 ++++-----
arch/x86/include/asm/ftrace.h | 5 +++++
arch/x86/kernel/ftrace_64.S | 8 ++++++-
include/linux/ftrace.h | 10 ++++++++-
tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c | 108 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
tools/testing/selftests/bpf/progs/stacktrace_ips.c | 51 +++++++++++++++++++++++++++++++++++++++++++++
tools/testing/selftests/bpf/test_kmods/bpf_testmod.c | 26 +++++++++++++++++++++++
7 files changed, 211 insertions(+), 7 deletions(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c
create mode 100644 tools/testing/selftests/bpf/progs/stacktrace_ips.c
* [PATCH 1/3] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()"
2025-10-27 13:13 [PATCH 0/3] x86/fgraph,bpf: Fix ORC stack unwind from return probe Jiri Olsa
@ 2025-10-27 13:13 ` Jiri Olsa
2025-10-27 13:52 ` bot+bpf-ci
2025-10-27 13:13 ` [PATCH 2/3] x86/fgraph,bpf: Fix stack ORC unwind from kprobe_multi return probe Jiri Olsa
2025-10-27 13:13 ` [PATCH 3/3] selftests/bpf: Add stacktrace ips test for kprobe_multi/kretprobe_multi Jiri Olsa
2 siblings, 1 reply; 13+ messages in thread
From: Jiri Olsa @ 2025-10-27 13:13 UTC (permalink / raw)
To: Masami Hiramatsu, Steven Rostedt, Josh Poimboeuf
Cc: Song Liu, Peter Zijlstra, bpf, linux-trace-kernel, x86,
Yonghong Song, Song Liu, Andrii Nakryiko
This reverts commit 83f44ae0f8afcc9da659799db8693f74847e66b3.
When perf_callchain_kernel calls unwind_start with first_frame, AFAICS
we do not skip regs->ip; it is added as part of the unwind process.
Hence revert the extra perf_callchain_store call on the non-HW regs leg.
Acked-by: Song Liu <song@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
arch/x86/events/core.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 745caa6c15a3..fa6c47b50989 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2789,13 +2789,13 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
return;
}
- if (perf_callchain_store(entry, regs->ip))
- return;
-
- if (perf_hw_regs(regs))
+ if (perf_hw_regs(regs)) {
+ if (perf_callchain_store(entry, regs->ip))
+ return;
unwind_start(&state, current, regs, NULL);
- else
+ } else {
unwind_start(&state, current, NULL, (void *)regs->sp);
+ }
for (; !unwind_done(&state); unwind_next_frame(&state)) {
addr = unwind_get_return_address(&state);
--
2.51.0
* [PATCH 2/3] x86/fgraph,bpf: Fix stack ORC unwind from kprobe_multi return probe
2025-10-27 13:13 [PATCH 0/3] x86/fgraph,bpf: Fix ORC stack unwind from return probe Jiri Olsa
2025-10-27 13:13 ` [PATCH 1/3] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()" Jiri Olsa
@ 2025-10-27 13:13 ` Jiri Olsa
2025-10-28 7:02 ` Masami Hiramatsu
2025-10-27 13:13 ` [PATCH 3/3] selftests/bpf: Add stacktrace ips test for kprobe_multi/kretprobe_multi Jiri Olsa
2 siblings, 1 reply; 13+ messages in thread
From: Jiri Olsa @ 2025-10-27 13:13 UTC (permalink / raw)
To: Masami Hiramatsu, Steven Rostedt, Josh Poimboeuf
Cc: Peter Zijlstra, bpf, linux-trace-kernel, x86, Yonghong Song,
Song Liu, Andrii Nakryiko
Currently we don't get a stack trace via the ORC unwinder on top of the
fgraph exit handler. We can see that when generating a stacktrace from a
kretprobe_multi bpf program, which is based on fprobe/fgraph.
The reason is that the ORC unwind code won't get past the return_to_handler
callback installed by the fgraph return probe machinery.
Solve this by creating the stack frame in return_to_handler that the
ftrace_graph_ret_addr function expects in order to recover the original
return address and continue with the unwind.
Also update the pt_regs data with cs/flags/rsp, which are needed for
successful stack retrieval from the bpf_get_stackid helper:
- in get_perf_callchain we check user_mode(regs), so CS has to be set
- in perf_callchain_kernel we call perf_hw_regs(regs), so X86_EFLAGS_FIXED
has to be unset
Note I feel like it'd be better to set those directly in return_to_handler,
but Steven suggested setting them later in the kprobe_multi code path.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
arch/x86/include/asm/ftrace.h | 5 +++++
arch/x86/kernel/ftrace_64.S | 8 +++++++-
include/linux/ftrace.h | 10 +++++++++-
3 files changed, 21 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h
index 93156ac4ffe0..b08c95872eed 100644
--- a/arch/x86/include/asm/ftrace.h
+++ b/arch/x86/include/asm/ftrace.h
@@ -56,6 +56,11 @@ arch_ftrace_get_regs(struct ftrace_regs *fregs)
return &arch_ftrace_regs(fregs)->regs;
}
+#define arch_ftrace_partial_regs(regs) do { \
+ regs->flags &= ~X86_EFLAGS_FIXED; \
+ regs->cs = __KERNEL_CS; \
+} while (0)
+
#define arch_ftrace_fill_perf_regs(fregs, _regs) do { \
(_regs)->ip = arch_ftrace_regs(fregs)->regs.ip; \
(_regs)->sp = arch_ftrace_regs(fregs)->regs.sp; \
diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S
index 367da3638167..823dbdd0eb41 100644
--- a/arch/x86/kernel/ftrace_64.S
+++ b/arch/x86/kernel/ftrace_64.S
@@ -354,12 +354,17 @@ SYM_CODE_START(return_to_handler)
UNWIND_HINT_UNDEFINED
ANNOTATE_NOENDBR
+ /* Restore return_to_handler value that got eaten by previous ret instruction. */
+ subq $8, %rsp
+ UNWIND_HINT_FUNC
+
/* Save ftrace_regs for function exit context */
subq $(FRAME_SIZE), %rsp
movq %rax, RAX(%rsp)
movq %rdx, RDX(%rsp)
movq %rbp, RBP(%rsp)
+ movq %rsp, RSP(%rsp)
movq %rsp, %rdi
call ftrace_return_to_handler
@@ -368,7 +373,8 @@ SYM_CODE_START(return_to_handler)
movq RDX(%rsp), %rdx
movq RAX(%rsp), %rax
- addq $(FRAME_SIZE), %rsp
+ addq $(FRAME_SIZE) + 8, %rsp
+
/*
* Jump back to the old return address. This cannot be JMP_NOSPEC rdi
* since IBT would demand that contain ENDBR, which simply isn't so for
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 7ded7df6e9b5..07f8c309e432 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -193,6 +193,10 @@ static __always_inline struct pt_regs *ftrace_get_regs(struct ftrace_regs *fregs
#if !defined(CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS) || \
defined(CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS)
+#ifndef arch_ftrace_partial_regs
+#define arch_ftrace_partial_regs(regs) do {} while (0)
+#endif
+
static __always_inline struct pt_regs *
ftrace_partial_regs(struct ftrace_regs *fregs, struct pt_regs *regs)
{
@@ -202,7 +206,11 @@ ftrace_partial_regs(struct ftrace_regs *fregs, struct pt_regs *regs)
* Since arch_ftrace_get_regs() will check some members and may return
* NULL, we can not use it.
*/
- return &arch_ftrace_regs(fregs)->regs;
+ regs = &arch_ftrace_regs(fregs)->regs;
+
+ /* Allow arch specific updates to regs. */
+ arch_ftrace_partial_regs(regs);
+ return regs;
}
#endif /* !CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS || CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS */
--
2.51.0
* [PATCH 3/3] selftests/bpf: Add stacktrace ips test for kprobe_multi/kretprobe_multi
2025-10-27 13:13 [PATCH 0/3] x86/fgraph,bpf: Fix ORC stack unwind from return probe Jiri Olsa
2025-10-27 13:13 ` [PATCH 1/3] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()" Jiri Olsa
2025-10-27 13:13 ` [PATCH 2/3] x86/fgraph,bpf: Fix stack ORC unwind from kprobe_multi return probe Jiri Olsa
@ 2025-10-27 13:13 ` Jiri Olsa
2 siblings, 0 replies; 13+ messages in thread
From: Jiri Olsa @ 2025-10-27 13:13 UTC (permalink / raw)
To: Masami Hiramatsu, Steven Rostedt, Josh Poimboeuf
Cc: Peter Zijlstra, bpf, linux-trace-kernel, x86, Yonghong Song,
Song Liu, Andrii Nakryiko
Add a test that attaches kprobe_multi/kretprobe_multi probes and
verifies the ORC stacktrace matches the expected functions.
The test is skipped on kernels built with the frame pointer unwinder.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
.../selftests/bpf/prog_tests/stacktrace_ips.c | 108 ++++++++++++++++++
.../selftests/bpf/progs/stacktrace_ips.c | 51 +++++++++
.../selftests/bpf/test_kmods/bpf_testmod.c | 26 +++++
3 files changed, 185 insertions(+)
create mode 100644 tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c
create mode 100644 tools/testing/selftests/bpf/progs/stacktrace_ips.c
diff --git a/tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c b/tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c
new file mode 100644
index 000000000000..3dfa03be6cd0
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/stacktrace_ips.c
@@ -0,0 +1,108 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include "stacktrace_ips.skel.h"
+
+#ifdef __x86_64__
+static int check_stacktrace_ips(int fd, __u32 key, int cnt, ...)
+{
+ __u64 ips[PERF_MAX_STACK_DEPTH];
+ struct ksyms *ksyms = NULL;
+ int i, err = 0;
+ va_list args;
+
+ /* sorted by addr */
+ ksyms = load_kallsyms_local();
+ if (!ASSERT_OK_PTR(ksyms, "load_kallsyms_local"))
+ return -1;
+
+ /* unlikely, but... */
+ if (!ASSERT_LT(cnt, PERF_MAX_STACK_DEPTH, "check_max"))
+ return -1;
+
+ err = bpf_map_lookup_elem(fd, &key, ips);
+ if (err)
+ goto out;
+
+ va_start(args, cnt);
+
+ for (i = 0; i < cnt; i++) {
+ unsigned long val;
+ struct ksym *ksym;
+
+ if (!ASSERT_NEQ(ips[i], 0, "ip_not_zero"))
+ break;
+ val = va_arg(args, unsigned long);
+ ksym = ksym_search_local(ksyms, ips[i]);
+ if (!ASSERT_OK_PTR(ksym, "ksym_search_local"))
+ break;
+ ASSERT_EQ(ksym->addr, val, "stack_cmp");
+ }
+
+ va_end(args);
+
+out:
+ free_kallsyms_local(ksyms);
+ return err;
+}
+
+static void test_stacktrace_ips_kprobe_multi(bool retprobe)
+{
+ LIBBPF_OPTS(bpf_kprobe_multi_opts, opts,
+ .retprobe = retprobe
+ );
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+ struct stacktrace_ips *skel;
+ int prog_fd, err;
+
+ skel = stacktrace_ips__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "stacktrace_ips__open_and_load"))
+ return;
+
+ if (!skel->kconfig->CONFIG_UNWINDER_ORC) {
+ test__skip();
+ goto cleanup;
+ }
+
+ skel->links.kprobe_multi_stack_test = bpf_program__attach_kprobe_multi_opts(
+ skel->progs.kprobe_multi_stack_test,
+ "bpf_testmod_stacktrace_test", &opts);
+ if (!ASSERT_OK_PTR(skel->links.kprobe_multi_stack_test, "bpf_program__attach_kprobe_multi_opts"))
+ goto cleanup;
+
+ prog_fd = bpf_program__fd(skel->progs.trigger);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
+ ASSERT_EQ(topts.retval, 0, "test_run");
+
+ trigger_module_test_read(1);
+
+ load_kallsyms();
+
+ check_stacktrace_ips(bpf_map__fd(skel->maps.stackmap), skel->bss->stack_key, 4,
+ ksym_get_addr("bpf_testmod_stacktrace_test_3"),
+ ksym_get_addr("bpf_testmod_stacktrace_test_2"),
+ ksym_get_addr("bpf_testmod_stacktrace_test_1"),
+ ksym_get_addr("bpf_testmod_test_read"));
+
+cleanup:
+ stacktrace_ips__destroy(skel);
+}
+
+static void __test_stacktrace_ips(void)
+{
+ if (test__start_subtest("kprobe_multi"))
+ test_stacktrace_ips_kprobe_multi(false);
+ if (test__start_subtest("kretprobe_multi"))
+ test_stacktrace_ips_kprobe_multi(true);
+}
+#else
+static void __test_stacktrace_ips(void)
+{
+ test__skip();
+}
+#endif
+
+void test_stacktrace_ips(void)
+{
+ __test_stacktrace_ips();
+}
diff --git a/tools/testing/selftests/bpf/progs/stacktrace_ips.c b/tools/testing/selftests/bpf/progs/stacktrace_ips.c
new file mode 100644
index 000000000000..984b5b838885
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/stacktrace_ips.c
@@ -0,0 +1,51 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2018 Facebook
+
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+#ifndef PERF_MAX_STACK_DEPTH
+#define PERF_MAX_STACK_DEPTH 127
+#endif
+
+typedef __u64 stack_trace_t[PERF_MAX_STACK_DEPTH];
+
+struct {
+ __uint(type, BPF_MAP_TYPE_STACK_TRACE);
+ __uint(max_entries, 16384);
+ __type(key, __u32);
+ __type(value, stack_trace_t);
+} stackmap SEC(".maps");
+
+extern bool CONFIG_UNWINDER_ORC __kconfig __weak;
+
+/*
+ * This function is here to have CONFIG_UNWINDER_ORC
+ * used and added to object BTF.
+ */
+int unused(void)
+{
+ return CONFIG_UNWINDER_ORC ? 0 : 1;
+}
+
+__u32 stack_key;
+
+/*
+ * No tests in here, just to trigger 'bpf_fentry_test*'
+ * through tracing test_run.
+ */
+SEC("fentry/bpf_modify_return_test")
+int BPF_PROG(trigger)
+{
+ return 0;
+}
+
+SEC("kprobe.multi")
+int kprobe_multi_stack_test(struct pt_regs *ctx)
+{
+ stack_key = bpf_get_stackid(ctx, &stackmap, 0);
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
index 8074bc5f6f20..ed0a4721d8fd 100644
--- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
@@ -417,6 +417,30 @@ noinline int bpf_testmod_fentry_test11(u64 a, void *b, short c, int d,
return a + (long)b + c + d + (long)e + f + g + h + i + j + k;
}
+noinline void bpf_testmod_stacktrace_test(void)
+{
+ /* used for stacktrace test as attach function */
+ asm volatile ("");
+}
+
+noinline void bpf_testmod_stacktrace_test_3(void)
+{
+ bpf_testmod_stacktrace_test();
+ asm volatile ("");
+}
+
+noinline void bpf_testmod_stacktrace_test_2(void)
+{
+ bpf_testmod_stacktrace_test_3();
+ asm volatile ("");
+}
+
+noinline void bpf_testmod_stacktrace_test_1(void)
+{
+ bpf_testmod_stacktrace_test_2();
+ asm volatile ("");
+}
+
int bpf_testmod_fentry_ok;
noinline ssize_t
@@ -497,6 +521,8 @@ bpf_testmod_test_read(struct file *file, struct kobject *kobj,
21, 22, 23, 24, 25, 26) != 231)
goto out;
+ bpf_testmod_stacktrace_test_1();
+
bpf_testmod_fentry_ok = 1;
out:
return -EIO; /* always fail */
--
2.51.0
* Re: [PATCH 1/3] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()"
2025-10-27 13:13 ` [PATCH 1/3] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()" Jiri Olsa
@ 2025-10-27 13:52 ` bot+bpf-ci
2025-10-27 20:19 ` Josh Poimboeuf
2025-10-27 22:25 ` Jiri Olsa
0 siblings, 2 replies; 13+ messages in thread
From: bot+bpf-ci @ 2025-10-27 13:52 UTC (permalink / raw)
To: jolsa, mhiramat, rostedt, jpoimboe
Cc: song, peterz, bpf, linux-trace-kernel, x86, yhs, songliubraving,
andrii, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song,
clm, ihor.solodrai
```
commit 26b06def1956497a15eecf22787d5430fad2fed6
Author: Jiri Olsa <jolsa@kernel.org>
Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()"
This commit reverts a 2019 fix that ensured regs->ip is stored when
using non-HW regs (regs populated by perf_arch_fetch_caller_regs). The
revert claims the IP is captured during unwinding, but analysis suggests
otherwise.
> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> index 745caa6c1..fa6c47b50 100644
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -2789,13 +2789,13 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
> return;
> }
>
> - if (perf_callchain_store(entry, regs->ip))
> - return;
> -
> - if (perf_hw_regs(regs))
> + if (perf_hw_regs(regs)) {
> + if (perf_callchain_store(entry, regs->ip))
> + return;
> unwind_start(&state, current, regs, NULL);
> - else
> + } else {
> unwind_start(&state, current, NULL, (void *)regs->sp);
> + }
When perf_hw_regs(regs) returns false (which happens when regs comes from
perf_arch_fetch_caller_regs since it sets regs->flags=0), the else branch
calls unwind_start with regs=NULL. Can this lose the regs->ip value?
Looking at the code flow:
perf_arch_fetch_caller_regs() sets:
- regs->ip = CALLER_ADDR0 (the caller's instruction pointer)
- regs->flags = 0 (no X86_EFLAGS_FIXED)
perf_hw_regs() checks:
- returns (regs->flags & X86_EFLAGS_FIXED), which is false
The else branch executes:
- unwind_start(&state, current, NULL, (void *)regs->sp)
In __unwind_start() at arch/x86/kernel/unwind_orc.c, when regs=NULL:
- Takes the "else if (task == current)" path (line 711)
- Initializes state->ip via inline asm from current location
- This is __unwind_start's IP, not the original regs->ip
- Then advances to first_frame at regs->sp (lines 755-758)
The original regs->ip value (captured by perf_arch_fetch_caller_regs) is
never stored in the callchain. This was the exact issue fixed by commit
83f44ae0f8af in 2019, which stated:
"The stacktrace_map_raw_tp BPF selftest is failing because the RIP
saved by perf_arch_fetch_caller_regs() isn't getting saved by
perf_callchain_kernel()."
Does this revert re-introduce the BPF selftest failure that was fixed in
2019? The test tools/testing/selftests/bpf/prog_tests/stacktrace_map_raw_tp.c
still exists in the kernel tree.
>
> for (; !unwind_done(&state); unwind_next_frame(&state)) {
> addr = unwind_get_return_address(&state);
```
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
In-Reply-To-Subject: `Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()"`
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/18842579839
* Re: [PATCH 1/3] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()"
2025-10-27 13:52 ` bot+bpf-ci
@ 2025-10-27 20:19 ` Josh Poimboeuf
2025-10-27 22:38 ` Jiri Olsa
2025-10-27 22:25 ` Jiri Olsa
1 sibling, 1 reply; 13+ messages in thread
From: Josh Poimboeuf @ 2025-10-27 20:19 UTC (permalink / raw)
To: bot+bpf-ci
Cc: jolsa, mhiramat, rostedt, song, peterz, bpf, linux-trace-kernel,
x86, yhs, songliubraving, andrii, ast, daniel, martin.lau,
eddyz87, yonghong.song, clm, ihor.solodrai
On Mon, Oct 27, 2025 at 01:52:18PM +0000, bot+bpf-ci@kernel.org wrote:
> Does this revert re-introduce the BPF selftest failure that was fixed in
> 2019? The test tools/testing/selftests/bpf/prog_tests/stacktrace_map_raw_tp.c
> still exists in the kernel tree.
I have the same question. And note there may be subtle differences
between the frame pointer and ORC unwinders. The testcase would need to
pass for both.
--
Josh
* Re: [PATCH 1/3] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()"
2025-10-27 13:52 ` bot+bpf-ci
2025-10-27 20:19 ` Josh Poimboeuf
@ 2025-10-27 22:25 ` Jiri Olsa
1 sibling, 0 replies; 13+ messages in thread
From: Jiri Olsa @ 2025-10-27 22:25 UTC (permalink / raw)
To: bot+bpf-ci
Cc: mhiramat, rostedt, jpoimboe, song, peterz, bpf,
linux-trace-kernel, x86, yhs, songliubraving, andrii, ast, daniel,
martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai
On Mon, Oct 27, 2025 at 01:52:18PM +0000, bot+bpf-ci@kernel.org wrote:
>
> ```
> commit 26b06def1956497a15eecf22787d5430fad2fed6
> Author: Jiri Olsa <jolsa@kernel.org>
>
> Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()"
>
> This commit reverts a 2019 fix that ensured regs->ip is stored when
> using non-HW regs (regs populated by perf_arch_fetch_caller_regs). The
> revert claims the IP is captured during unwinding, but analysis suggests
> otherwise.
>
> > diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> > index 745caa6c1..fa6c47b50 100644
> > --- a/arch/x86/events/core.c
> > +++ b/arch/x86/events/core.c
> > @@ -2789,13 +2789,13 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
> > return;
> > }
> >
> > - if (perf_callchain_store(entry, regs->ip))
> > - return;
> > -
> > - if (perf_hw_regs(regs))
> > + if (perf_hw_regs(regs)) {
> > + if (perf_callchain_store(entry, regs->ip))
> > + return;
> > unwind_start(&state, current, regs, NULL);
> > - else
> > + } else {
> > unwind_start(&state, current, NULL, (void *)regs->sp);
> > + }
>
> When perf_hw_regs(regs) returns false (which happens when regs comes from
> perf_arch_fetch_caller_regs since it sets regs->flags=0), the else branch
> calls unwind_start with regs=NULL. Can this lose the regs->ip value?
>
> Looking at the code flow:
>
> perf_arch_fetch_caller_regs() sets:
> - regs->ip = CALLER_ADDR0 (the caller's instruction pointer)
> - regs->flags = 0 (no X86_EFLAGS_FIXED)
>
> perf_hw_regs() checks:
> - returns (regs->flags & X86_EFLAGS_FIXED), which is false
>
> The else branch executes:
> - unwind_start(&state, current, NULL, (void *)regs->sp)
>
> In __unwind_start() at arch/x86/kernel/unwind_orc.c, when regs=NULL:
> - Takes the "else if (task == current)" path (line 711)
> - Initializes state->ip via inline asm from current location
> - This is __unwind_start's IP, not the original regs->ip
> - Then advances to first_frame at regs->sp (lines 755-758)
>
> The original regs->ip value (captured by perf_arch_fetch_caller_regs) is
> never stored in the callchain. This was the exact issue fixed by commit
IIUC it is; the unwinder will eventually unwind to it
> 83f44ae0f8af in 2019, which stated:
>
> "The stacktrace_map_raw_tp BPF selftest is failing because the RIP
> saved by perf_arch_fetch_caller_regs() isn't getting saved by
> perf_callchain_kernel()."
>
> Does this revert re-introduce the BPF selftest failure that was fixed in
> 2019? The test tools/testing/selftests/bpf/prog_tests/stacktrace_map_raw_tp.c
> still exists in the kernel tree.
stacktrace_map_raw_tp does not check what the first ip of the
stacktrace is, and it passes with or without this change.
I can see double entries with the current code and just one with this change:
# bpftrace -e 'tracepoint:sched:sched_process_exec { print(kstack()); }'
Attaching 1 probe...
bprm_execve+1767
bprm_execve+1767
do_execveat_common.isra.0+425
__x64_sys_execve+56
do_syscall_64+133
entry_SYSCALL_64_after_hwframe+118
thanks,
jirka
* Re: [PATCH 1/3] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()"
2025-10-27 20:19 ` Josh Poimboeuf
@ 2025-10-27 22:38 ` Jiri Olsa
2025-10-29 3:39 ` Josh Poimboeuf
0 siblings, 1 reply; 13+ messages in thread
From: Jiri Olsa @ 2025-10-27 22:38 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: bot+bpf-ci, mhiramat, rostedt, song, peterz, bpf,
linux-trace-kernel, x86, yhs, songliubraving, andrii, ast, daniel,
martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai
On Mon, Oct 27, 2025 at 01:19:52PM -0700, Josh Poimboeuf wrote:
> On Mon, Oct 27, 2025 at 01:52:18PM +0000, bot+bpf-ci@kernel.org wrote:
> > Does this revert re-introduce the BPF selftest failure that was fixed in
> > 2019? The test tools/testing/selftests/bpf/prog_tests/stacktrace_map_raw_tp.c
> > still exists in the kernel tree.
>
> I have the same question. And note there may be subtle differences
> between the frame pointer and ORC unwinders. The testcase would need to
> pass for both.
as I wrote in the other email, that test does not check ips directly;
it just compares stacks taken from the bpf_get_stackid and bpf_get_stack
helpers, so it passes for both the ORC and frame pointer unwinders
thanks,
jirka
* Re: [PATCH 2/3] x86/fgraph,bpf: Fix stack ORC unwind from kprobe_multi return probe
2025-10-27 13:13 ` [PATCH 2/3] x86/fgraph,bpf: Fix stack ORC unwind from kprobe_multi return probe Jiri Olsa
@ 2025-10-28 7:02 ` Masami Hiramatsu
0 siblings, 0 replies; 13+ messages in thread
From: Masami Hiramatsu @ 2025-10-28 7:02 UTC (permalink / raw)
To: Jiri Olsa
Cc: Steven Rostedt, Josh Poimboeuf, Peter Zijlstra, bpf,
linux-trace-kernel, x86, Yonghong Song, Song Liu, Andrii Nakryiko
On Mon, 27 Oct 2025 14:13:53 +0100
Jiri Olsa <jolsa@kernel.org> wrote:
> Currently we don't get stack trace via ORC unwinder on top of fgraph exit
> handler. We can see that when generating stacktrace from kretprobe_multi
> bpf program which is based on fprobe/fgraph.
>
> The reason is that the ORC unwind code won't get pass the return_to_handler
> callback installed by fgraph return probe machinery.
>
> Solving this by creating stack frame in return_to_handler expected by
> ftrace_graph_ret_addr function to recover original return address and
> continue with the unwind.
>
> Also updating the pt_regs data with cs/flags/rsp which are needed for
> successful stack retrieval from ebpf bpf_get_stackid helper.
> - in get_perf_callchain we check user_mode(regs) so CS has to be set
> - in perf_callchain_kernel we call perf_hw_regs(regs), so EFLAGS/FIXED
> has to be unset
>
> Note I feel like it'd be better to set those directly in return_to_handler,
> but Steven suggested setting them later in kprobe_multi code path.
Yeah, stacktrace is not always used from the return handler.
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Thanks for the fix. This looks good to me.
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> ---
> arch/x86/include/asm/ftrace.h | 5 +++++
> arch/x86/kernel/ftrace_64.S | 8 +++++++-
> include/linux/ftrace.h | 10 +++++++++-
> 3 files changed, 21 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h
> index 93156ac4ffe0..b08c95872eed 100644
> --- a/arch/x86/include/asm/ftrace.h
> +++ b/arch/x86/include/asm/ftrace.h
> @@ -56,6 +56,11 @@ arch_ftrace_get_regs(struct ftrace_regs *fregs)
> return &arch_ftrace_regs(fregs)->regs;
> }
>
> +#define arch_ftrace_partial_regs(regs) do { \
> + regs->flags &= ~X86_EFLAGS_FIXED; \
> + regs->cs = __KERNEL_CS; \
> +} while (0)
> +
> #define arch_ftrace_fill_perf_regs(fregs, _regs) do { \
> (_regs)->ip = arch_ftrace_regs(fregs)->regs.ip; \
> (_regs)->sp = arch_ftrace_regs(fregs)->regs.sp; \
> diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S
> index 367da3638167..823dbdd0eb41 100644
> --- a/arch/x86/kernel/ftrace_64.S
> +++ b/arch/x86/kernel/ftrace_64.S
> @@ -354,12 +354,17 @@ SYM_CODE_START(return_to_handler)
> UNWIND_HINT_UNDEFINED
> ANNOTATE_NOENDBR
>
> + /* Restore return_to_handler value that got eaten by previous ret instruction. */
> + subq $8, %rsp
> + UNWIND_HINT_FUNC
> +
> /* Save ftrace_regs for function exit context */
> subq $(FRAME_SIZE), %rsp
>
> movq %rax, RAX(%rsp)
> movq %rdx, RDX(%rsp)
> movq %rbp, RBP(%rsp)
> + movq %rsp, RSP(%rsp)
> movq %rsp, %rdi
>
> call ftrace_return_to_handler
> @@ -368,7 +373,8 @@ SYM_CODE_START(return_to_handler)
> movq RDX(%rsp), %rdx
> movq RAX(%rsp), %rax
>
> - addq $(FRAME_SIZE), %rsp
> + addq $(FRAME_SIZE) + 8, %rsp
> +
> /*
> * Jump back to the old return address. This cannot be JMP_NOSPEC rdi
> * since IBT would demand that contain ENDBR, which simply isn't so for
> diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
> index 7ded7df6e9b5..07f8c309e432 100644
> --- a/include/linux/ftrace.h
> +++ b/include/linux/ftrace.h
> @@ -193,6 +193,10 @@ static __always_inline struct pt_regs *ftrace_get_regs(struct ftrace_regs *fregs
> #if !defined(CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS) || \
> defined(CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS)
>
> +#ifndef arch_ftrace_partial_regs
> +#define arch_ftrace_partial_regs(regs) do {} while (0)
> +#endif
> +
> static __always_inline struct pt_regs *
> ftrace_partial_regs(struct ftrace_regs *fregs, struct pt_regs *regs)
> {
> @@ -202,7 +206,11 @@ ftrace_partial_regs(struct ftrace_regs *fregs, struct pt_regs *regs)
> * Since arch_ftrace_get_regs() will check some members and may return
> * NULL, we can not use it.
> */
> - return &arch_ftrace_regs(fregs)->regs;
> + regs = &arch_ftrace_regs(fregs)->regs;
> +
> + /* Allow arch specific updates to regs. */
> + arch_ftrace_partial_regs(regs);
> + return regs;
> }
>
> #endif /* !CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS || CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS */
> --
> 2.51.0
>
--
Masami Hiramatsu (Google) <mhiramat@kernel.org>
* Re: [PATCH 1/3] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()"
2025-10-27 22:38 ` Jiri Olsa
@ 2025-10-29 3:39 ` Josh Poimboeuf
2025-10-29 7:17 ` Jiri Olsa
0 siblings, 1 reply; 13+ messages in thread
From: Josh Poimboeuf @ 2025-10-29 3:39 UTC (permalink / raw)
To: Jiri Olsa
Cc: bot+bpf-ci, mhiramat, rostedt, song, peterz, bpf,
linux-trace-kernel, x86, yhs, songliubraving, andrii, ast, daniel,
martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai
On Mon, Oct 27, 2025 at 11:38:50PM +0100, Jiri Olsa wrote:
> On Mon, Oct 27, 2025 at 01:19:52PM -0700, Josh Poimboeuf wrote:
> > On Mon, Oct 27, 2025 at 01:52:18PM +0000, bot+bpf-ci@kernel.org wrote:
> > > Does this revert re-introduce the BPF selftest failure that was fixed in
> > > 2019? The test tools/testing/selftests/bpf/prog_tests/stacktrace_map_raw_tp.c
> > > still exists in the kernel tree.
> >
> > I have the same question. And note there may be subtle differences
> > between the frame pointer and ORC unwinders. The testcase would need to
> > pass for both.
>
> as I wrote in the other email that test does not check ips directly,
> it just compare stacks taken from bpf_get_stackid and bpf_get_stack
> helpers.. so it passes for both orc and frame pointer unwinder
Ok. So the original fix wasn't actually a fix at all? It would be good
to understand that and mention it in the commit log. Otherwise it's not
clear why it's ok to revert a fix with no real explanation.
--
Josh
* Re: [PATCH 1/3] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()"
2025-10-29 3:39 ` Josh Poimboeuf
@ 2025-10-29 7:17 ` Jiri Olsa
2025-10-30 21:48 ` Jiri Olsa
0 siblings, 1 reply; 13+ messages in thread
From: Jiri Olsa @ 2025-10-29 7:17 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Jiri Olsa, bot+bpf-ci, mhiramat, rostedt, song, peterz, bpf,
linux-trace-kernel, x86, yhs, songliubraving, andrii, ast, daniel,
martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai
On Tue, Oct 28, 2025 at 08:39:33PM -0700, Josh Poimboeuf wrote:
> On Mon, Oct 27, 2025 at 11:38:50PM +0100, Jiri Olsa wrote:
> > On Mon, Oct 27, 2025 at 01:19:52PM -0700, Josh Poimboeuf wrote:
> > > On Mon, Oct 27, 2025 at 01:52:18PM +0000, bot+bpf-ci@kernel.org wrote:
> > > > Does this revert re-introduce the BPF selftest failure that was fixed in
> > > > 2019? The test tools/testing/selftests/bpf/prog_tests/stacktrace_map_raw_tp.c
> > > > still exists in the kernel tree.
> > >
> > > I have the same question. And note there may be subtle differences
> > > between the frame pointer and ORC unwinders. The testcase would need to
> > > pass for both.
> >
> > as I wrote in the other email that test does not check ips directly,
> > it just compare stacks taken from bpf_get_stackid and bpf_get_stack
> > helpers.. so it passes for both orc and frame pointer unwinder
>
> Ok. So the original fix wasn't actually a fix at all? It would be good
> to understand that and mention it in the commit log. Otherwise it's not
> clear why it's ok to revert a fix with no real explanation.
I think it was a fix when it was pushed 6 years ago, but some unwind
change since then made it redundant; I'll try to find what that
change was
jirka
* Re: [PATCH 1/3] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()"
2025-10-29 7:17 ` Jiri Olsa
@ 2025-10-30 21:48 ` Jiri Olsa
2025-10-31 1:37 ` Josh Poimboeuf
0 siblings, 1 reply; 13+ messages in thread
From: Jiri Olsa @ 2025-10-30 21:48 UTC (permalink / raw)
To: Jiri Olsa, Josh Poimboeuf
Cc: bot+bpf-ci, mhiramat, rostedt, song, peterz, bpf,
linux-trace-kernel, x86, yhs, songliubraving, andrii, ast, daniel,
martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai
On Wed, Oct 29, 2025 at 08:17:05AM +0100, Jiri Olsa wrote:
> On Tue, Oct 28, 2025 at 08:39:33PM -0700, Josh Poimboeuf wrote:
> > On Mon, Oct 27, 2025 at 11:38:50PM +0100, Jiri Olsa wrote:
> > > On Mon, Oct 27, 2025 at 01:19:52PM -0700, Josh Poimboeuf wrote:
> > > > On Mon, Oct 27, 2025 at 01:52:18PM +0000, bot+bpf-ci@kernel.org wrote:
> > > > > Does this revert re-introduce the BPF selftest failure that was fixed in
> > > > > 2019? The test tools/testing/selftests/bpf/prog_tests/stacktrace_map_raw_tp.c
> > > > > still exists in the kernel tree.
> > > >
> > > > I have the same question. And note there may be subtle differences
> > > > between the frame pointer and ORC unwinders. The testcase would need to
> > > > pass for both.
> > >
> > > as I wrote in the other email that test does not check ips directly,
> > > it just compare stacks taken from bpf_get_stackid and bpf_get_stack
> > > helpers.. so it passes for both orc and frame pointer unwinder
> >
> > Ok. So the original fix wasn't actually a fix at all? It would be good
> > to understand that and mention it in the commit log. Otherwise it's not
> > clear why it's ok to revert a fix with no real explanation.
>
> I think it was a fix when it was pushed 6 years ago, but some
> unwind change along that time made it redundant, I'll try to
> find what the change was
hum I can't tell what changed since v5.2 (the kernel version where [1] landed)
that undid the behaviour the [1] commit was fixing
I did the test for both orc and frame pointer unwind with and without the
fix (revert of [1]) and except for the initial entry it does not seem to
change the rest of the unwind ... though I'd expect the orc unwind to have
more entries
please check results below
any idea? thanks,
jirka
[1] 83f44ae0f8af perf/x86: Always store regs->ip in perf_callchain_kernel()
[2] ae6a45a08689 x86/unwind/orc: Fall back to using frame pointers for generated code
---
framepointer + fix:
bpf_prog_2beb79c650d605dd_rawtracepoint_bpf_testmod_test_read_1+324
bpf_trace_run2+216
__bpf_trace_bpf_testmod_test_read+13
bpf_testmod_test_read+1322
sysfs_kf_bin_read+103
kernfs_fop_read_iter+243
vfs_read+549
ksys_read+115
__x64_sys_read+29
x64_sys_call+6112
do_syscall_64+133
entry_SYSCALL_64_after_hwframe+118
framepointer without fix:
bpf_prog_2beb79c650d605dd_rawtracepoint_bpf_testmod_test_read_1+324
bpf_prog_2beb79c650d605dd_rawtracepoint_bpf_testmod_test_read_1+324
bpf_trace_run2+216
__bpf_trace_bpf_testmod_test_read+13
bpf_testmod_test_read+1322
sysfs_kf_bin_read+103
kernfs_fop_read_iter+243
vfs_read+549
ksys_read+115
__x64_sys_read+29
x64_sys_call+6112
do_syscall_64+133
entry_SYSCALL_64_after_hwframe+118
orc + fix:
bpf_prog_2beb79c650d605dd_rawtracepoint_bpf_testmod_test_read_1+324
bpf_trace_run2+214
bpf_testmod_test_read+1322
kernfs_fop_read_iter+228
vfs_read+550
ksys_read+112
do_syscall_64+133
entry_SYSCALL_64_after_hwframe+118
orc without fix:
bpf_prog_2beb79c650d605dd_rawtracepoint_bpf_testmod_test_read_1+324
bpf_prog_2beb79c650d605dd_rawtracepoint_bpf_testmod_test_read_1+324
bpf_trace_run2+214
bpf_testmod_test_read+1322
kernfs_fop_read_iter+228
vfs_read+550
ksys_read+112
do_syscall_64+133
entry_SYSCALL_64_after_hwframe+118
* Re: [PATCH 1/3] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()"
2025-10-30 21:48 ` Jiri Olsa
@ 2025-10-31 1:37 ` Josh Poimboeuf
0 siblings, 0 replies; 13+ messages in thread
From: Josh Poimboeuf @ 2025-10-31 1:37 UTC (permalink / raw)
To: Jiri Olsa
Cc: bot+bpf-ci, mhiramat, rostedt, song, peterz, bpf,
linux-trace-kernel, x86, yhs, songliubraving, andrii, ast, daniel,
martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai
On Thu, Oct 30, 2025 at 10:48:03PM +0100, Jiri Olsa wrote:
> On Wed, Oct 29, 2025 at 08:17:05AM +0100, Jiri Olsa wrote:
> > On Tue, Oct 28, 2025 at 08:39:33PM -0700, Josh Poimboeuf wrote:
> > > On Mon, Oct 27, 2025 at 11:38:50PM +0100, Jiri Olsa wrote:
> > > > On Mon, Oct 27, 2025 at 01:19:52PM -0700, Josh Poimboeuf wrote:
> > > > > On Mon, Oct 27, 2025 at 01:52:18PM +0000, bot+bpf-ci@kernel.org wrote:
> > > > > > Does this revert re-introduce the BPF selftest failure that was fixed in
> > > > > > 2019? The test tools/testing/selftests/bpf/prog_tests/stacktrace_map_raw_tp.c
> > > > > > still exists in the kernel tree.
> > > > >
> > > > > I have the same question. And note there may be subtle differences
> > > > > between the frame pointer and ORC unwinders. The testcase would need to
> > > > > pass for both.
> > > >
> > > > as I wrote in the other email that test does not check ips directly,
> > > > it just compare stacks taken from bpf_get_stackid and bpf_get_stack
> > > > helpers.. so it passes for both orc and frame pointer unwinder
> > >
> > > Ok. So the original fix wasn't actually a fix at all? It would be good
> > > to understand that and mention it in the commit log. Otherwise it's not
> > > clear why it's ok to revert a fix with no real explanation.
> >
> > I think it was a fix when it was pushed 6 years ago, but some
> > unwind change along that time made it redundant, I'll try to
> > find what the change was
>
> hum I can't tell what changed since v5.2 (kernel version when [1] landed)
> that reverted the behaviour which the [1] commit was fixing
>
> I did the test for both orc and framepointer unwind with and without the
> fix (revert of [1]) and except for the initial entry it does not seem to
> change the rest of the unwind ... though I'd expect orc unwind to have
> more entries
>
> please check results below
>
> any idea? thanks,
> jirka
The "missing" ORC entries are probably fine; they're likely caused by
the compiler generating more tail calls with FP disabled.
--
Josh
Thread overview: 13+ messages (newest: 2025-10-31 1:37 UTC)
2025-10-27 13:13 [PATCH 0/3] x86/fgraph,bpf: Fix ORC stack unwind from return probe Jiri Olsa
2025-10-27 13:13 ` [PATCH 1/3] Revert "perf/x86: Always store regs->ip in perf_callchain_kernel()" Jiri Olsa
2025-10-27 13:52 ` bot+bpf-ci
2025-10-27 20:19 ` Josh Poimboeuf
2025-10-27 22:38 ` Jiri Olsa
2025-10-29 3:39 ` Josh Poimboeuf
2025-10-29 7:17 ` Jiri Olsa
2025-10-30 21:48 ` Jiri Olsa
2025-10-31 1:37 ` Josh Poimboeuf
2025-10-27 22:25 ` Jiri Olsa
2025-10-27 13:13 ` [PATCH 2/3] x86/fgraph,bpf: Fix stack ORC unwind from kprobe_multi return probe Jiri Olsa
2025-10-28 7:02 ` Masami Hiramatsu
2025-10-27 13:13 ` [PATCH 3/3] selftests/bpf: Add stacktrace ips test for kprobe_multi/kretprobe_multi Jiri Olsa