* [PATCH v2 0/2] bpf, x86/unwind/orc: Support reliable unwinding through BPF stack frames
@ 2025-12-04 3:32 Josh Poimboeuf
2025-12-04 3:32 ` [PATCH v2 1/2] bpf: Add bpf_has_frame_pointer() Josh Poimboeuf
` (3 more replies)
0 siblings, 4 replies; 7+ messages in thread
From: Josh Poimboeuf @ 2025-12-04 3:32 UTC (permalink / raw)
To: x86
Cc: linux-kernel, live-patching, bpf, Andrey Grodzovsky, Petr Mladek,
Song Liu, Raja Khan, Miroslav Benes, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Peter Zijlstra
Fix livepatch stalls which may be seen when a task is blocked with BPF
JIT on its kernel stack.
Changes since v1 (https://lore.kernel.org/cover.1764699074.git.jpoimboe@kernel.org):
- fix NULL ptr deref in __arch_prepare_bpf_trampoline()
Josh Poimboeuf (2):
bpf: Add bpf_has_frame_pointer()
x86/unwind/orc: Support reliable unwinding through BPF stack frames
arch/x86/kernel/unwind_orc.c | 39 +++++++++++++++++++++++++-----------
arch/x86/net/bpf_jit_comp.c | 12 +++++++++++
include/linux/bpf.h | 3 +++
kernel/bpf/core.c | 16 +++++++++++++++
4 files changed, 58 insertions(+), 12 deletions(-)
--
2.51.1
* [PATCH v2 1/2] bpf: Add bpf_has_frame_pointer()
2025-12-04 3:32 [PATCH v2 0/2] bpf, x86/unwind/orc: Support reliable unwinding through BPF stack frames Josh Poimboeuf
@ 2025-12-04 3:32 ` Josh Poimboeuf
2025-12-04 14:04 ` Jiri Olsa
2025-12-04 3:32 ` [PATCH v2 2/2] x86/unwind/orc: Support reliable unwinding through BPF stack frames Josh Poimboeuf
` (2 subsequent siblings)
3 siblings, 1 reply; 7+ messages in thread
From: Josh Poimboeuf @ 2025-12-04 3:32 UTC (permalink / raw)
To: x86
Cc: linux-kernel, live-patching, bpf, Andrey Grodzovsky, Petr Mladek,
Song Liu, Raja Khan, Miroslav Benes, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Peter Zijlstra
Introduce a bpf_has_frame_pointer() helper that unwinders can call to
determine whether a given instruction pointer is within the valid frame
pointer region of a BPF JIT program or trampoline (i.e., after the
prologue, before the epilogue).
This will enable livepatch (with the ORC unwinder) to reliably unwind
through BPF JIT frames.
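As a rough illustration (not part of this patch), an unwinder is expected to wrap the call the way the ORC change in patch 2 does with orc_bpf_find() -- only trust rbp-based unwinding while the JITed frame pointer is actually live. Minimal sketch; the bpf_ip_has_live_frame() name is hypothetical:

  #include <linux/bpf.h>

  /*
   * Sketch only: report whether rbp-based unwinding can be trusted for
   * a BPF address, i.e. whether ip falls between the JITed prologue and
   * epilogue where the frame pointer is set up.
   */
  static bool bpf_ip_has_live_frame(unsigned long ip)
  {
  #ifdef CONFIG_BPF_JIT
          return bpf_has_frame_pointer(ip);
  #else
          return false;
  #endif
  }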
Acked-by: Song Liu <song@kernel.org>
Acked-and-tested-by: Andrey Grodzovsky <andrey.grodzovsky@crowdstrike.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
arch/x86/net/bpf_jit_comp.c | 12 ++++++++++++
include/linux/bpf.h | 3 +++
kernel/bpf/core.c | 16 ++++++++++++++++
3 files changed, 31 insertions(+)
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index de5083cb1d37..3ec4fa94086a 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1661,6 +1661,9 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
emit_prologue(&prog, image, stack_depth,
bpf_prog_was_classic(bpf_prog), tail_call_reachable,
bpf_is_subprog(bpf_prog), bpf_prog->aux->exception_cb);
+
+ bpf_prog->aux->ksym.fp_start = prog - temp;
+
/* Exception callback will clobber callee regs for its own use, and
* restore the original callee regs from main prog's stack frame.
*/
@@ -2716,6 +2719,8 @@ st: if (is_imm8(insn->off))
pop_r12(&prog);
}
EMIT1(0xC9); /* leave */
+ bpf_prog->aux->ksym.fp_end = prog - temp;
+
emit_return(&prog, image + addrs[i - 1] + (prog - temp));
break;
@@ -3299,6 +3304,9 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
}
EMIT1(0x55); /* push rbp */
EMIT3(0x48, 0x89, 0xE5); /* mov rbp, rsp */
+ if (im)
+ im->ksym.fp_start = prog - (u8 *)rw_image;
+
if (!is_imm8(stack_size)) {
/* sub rsp, stack_size */
EMIT3_off32(0x48, 0x81, 0xEC, stack_size);
@@ -3436,7 +3444,11 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
emit_ldx(&prog, BPF_DW, BPF_REG_6, BPF_REG_FP, -rbx_off);
+
EMIT1(0xC9); /* leave */
+ if (im)
+ im->ksym.fp_end = prog - (u8 *)rw_image;
+
if (flags & BPF_TRAMP_F_SKIP_FRAME) {
/* skip our return address and return to parent */
EMIT4(0x48, 0x83, 0xC4, 8); /* add rsp, 8 */
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index d808253f2e94..e3f56e8443da 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1257,6 +1257,8 @@ struct bpf_ksym {
struct list_head lnode;
struct latch_tree_node tnode;
bool prog;
+ u32 fp_start;
+ u32 fp_end;
};
enum bpf_tramp_prog_type {
@@ -1483,6 +1485,7 @@ void bpf_image_ksym_add(struct bpf_ksym *ksym);
void bpf_image_ksym_del(struct bpf_ksym *ksym);
void bpf_ksym_add(struct bpf_ksym *ksym);
void bpf_ksym_del(struct bpf_ksym *ksym);
+bool bpf_has_frame_pointer(unsigned long ip);
int bpf_jit_charge_modmem(u32 size);
void bpf_jit_uncharge_modmem(u32 size);
bool bpf_prog_has_trampoline(const struct bpf_prog *prog);
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index d595fe512498..7cd8382d1152 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -760,6 +760,22 @@ struct bpf_prog *bpf_prog_ksym_find(unsigned long addr)
NULL;
}
+bool bpf_has_frame_pointer(unsigned long ip)
+{
+ struct bpf_ksym *ksym;
+ unsigned long offset;
+
+ guard(rcu)();
+
+ ksym = bpf_ksym_find(ip);
+ if (!ksym || !ksym->fp_start || !ksym->fp_end)
+ return false;
+
+ offset = ip - ksym->start;
+
+ return offset >= ksym->fp_start && offset < ksym->fp_end;
+}
+
const struct exception_table_entry *search_bpf_extables(unsigned long addr)
{
const struct exception_table_entry *e = NULL;
--
2.51.1
* [PATCH v2 2/2] x86/unwind/orc: Support reliable unwinding through BPF stack frames
2025-12-04 3:32 [PATCH v2 0/2] bpf, x86/unwind/orc: Support reliable unwinding through BPF stack frames Josh Poimboeuf
2025-12-04 3:32 ` [PATCH v2 1/2] bpf: Add bpf_has_frame_pointer() Josh Poimboeuf
@ 2025-12-04 3:32 ` Josh Poimboeuf
2025-12-04 14:27 ` [PATCH v2 0/2] bpf, " Jiri Olsa
2025-12-10 7:40 ` patchwork-bot+netdevbpf
3 siblings, 0 replies; 7+ messages in thread
From: Josh Poimboeuf @ 2025-12-04 3:32 UTC (permalink / raw)
To: x86
Cc: linux-kernel, live-patching, bpf, Andrey Grodzovsky, Petr Mladek,
Song Liu, Raja Khan, Miroslav Benes, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Peter Zijlstra
BPF JIT programs and trampolines use a frame pointer, so the current ORC
unwinder strategy of falling back to frame pointers (when an ORC entry
is missing) usually works in practice when unwinding through BPF JIT
stack frames.
However, that frame pointer fallback is just a guess, so the unwind gets
marked unreliable for live patching, which can cause livepatch
transition stalls.
Make the common case reliable by calling the bpf_has_frame_pointer()
helper to detect the valid frame pointer region of BPF JIT programs and
trampolines.
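For reference, the fake frame-pointer ORC entry reused here (sp_reg = ORC_REG_BP, sp_offset = 16, bp_reg = ORC_REG_PREV_SP, bp_offset = -16) boils down to the usual rbp-chain step. A minimal sketch of that arithmetic, illustrative only -- the real unwinder reads the stack through deref_stack_reg() with bounds checking, and the struct/function names below are made up:

  /* Values recovered for the caller of a JITed frame. */
  struct fp_frame {
          unsigned long prev_sp;   /* caller's SP at the call site */
          unsigned long ret_addr;  /* next IP to unwind to */
          unsigned long prev_bp;   /* caller's saved RBP */
  };

  /*
   * Unwind one frame, given RBP inside a JITed program or trampoline,
   * i.e. after "push rbp; mov rbp, rsp" and before "leave".
   */
  static struct fp_frame unwind_one_fp_frame(unsigned long rbp)
  {
          struct fp_frame f;

          f.prev_sp  = rbp + 16;                           /* sp_reg = BP, sp_offset = 16 */
          f.ret_addr = *(unsigned long *)(f.prev_sp - 8);  /* return address just below prev SP */
          f.prev_bp  = *(unsigned long *)(f.prev_sp - 16); /* bp_reg = PREV_SP, bp_offset = -16 */
          return f;
  }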
Fixes: ee9f8fce9964 ("x86/unwind: Add the ORC unwinder")
Reported-by: Andrey Grodzovsky <andrey.grodzovsky@crowdstrike.com>
Closes: https://lore.kernel.org/0e555733-c670-4e84-b2e6-abb8b84ade38@crowdstrike.com
Acked-by: Song Liu <song@kernel.org>
Acked-and-tested-by: Andrey Grodzovsky <andrey.grodzovsky@crowdstrike.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
arch/x86/kernel/unwind_orc.c | 39 +++++++++++++++++++++++++-----------
1 file changed, 27 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
index 977ee75e047c..f610fde2d5c4 100644
--- a/arch/x86/kernel/unwind_orc.c
+++ b/arch/x86/kernel/unwind_orc.c
@@ -2,6 +2,7 @@
#include <linux/objtool.h>
#include <linux/module.h>
#include <linux/sort.h>
+#include <linux/bpf.h>
#include <asm/ptrace.h>
#include <asm/stacktrace.h>
#include <asm/unwind.h>
@@ -172,6 +173,25 @@ static struct orc_entry *orc_ftrace_find(unsigned long ip)
}
#endif
+/* Fake frame pointer entry -- used as a fallback for generated code */
+static struct orc_entry orc_fp_entry = {
+ .type = ORC_TYPE_CALL,
+ .sp_reg = ORC_REG_BP,
+ .sp_offset = 16,
+ .bp_reg = ORC_REG_PREV_SP,
+ .bp_offset = -16,
+};
+
+static struct orc_entry *orc_bpf_find(unsigned long ip)
+{
+#ifdef CONFIG_BPF_JIT
+ if (bpf_has_frame_pointer(ip))
+ return &orc_fp_entry;
+#endif
+
+ return NULL;
+}
+
/*
* If we crash with IP==0, the last successfully executed instruction
* was probably an indirect function call with a NULL function pointer,
@@ -186,15 +206,6 @@ static struct orc_entry null_orc_entry = {
.type = ORC_TYPE_CALL
};
-/* Fake frame pointer entry -- used as a fallback for generated code */
-static struct orc_entry orc_fp_entry = {
- .type = ORC_TYPE_CALL,
- .sp_reg = ORC_REG_BP,
- .sp_offset = 16,
- .bp_reg = ORC_REG_PREV_SP,
- .bp_offset = -16,
-};
-
static struct orc_entry *orc_find(unsigned long ip)
{
static struct orc_entry *orc;
@@ -238,6 +249,11 @@ static struct orc_entry *orc_find(unsigned long ip)
if (orc)
return orc;
+ /* BPF lookup: */
+ orc = orc_bpf_find(ip);
+ if (orc)
+ return orc;
+
return orc_ftrace_find(ip);
}
@@ -495,9 +511,8 @@ bool unwind_next_frame(struct unwind_state *state)
if (!orc) {
/*
* As a fallback, try to assume this code uses a frame pointer.
- * This is useful for generated code, like BPF, which ORC
- * doesn't know about. This is just a guess, so the rest of
- * the unwind is no longer considered reliable.
+ * This is just a guess, so the rest of the unwind is no longer
+ * considered reliable.
*/
orc = &orc_fp_entry;
state->error = true;
--
2.51.1
* Re: [PATCH v2 1/2] bpf: Add bpf_has_frame_pointer()
2025-12-04 3:32 ` [PATCH v2 1/2] bpf: Add bpf_has_frame_pointer() Josh Poimboeuf
@ 2025-12-04 14:04 ` Jiri Olsa
2025-12-04 17:02 ` Josh Poimboeuf
0 siblings, 1 reply; 7+ messages in thread
From: Jiri Olsa @ 2025-12-04 14:04 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: x86, linux-kernel, live-patching, bpf, Andrey Grodzovsky,
Petr Mladek, Song Liu, Raja Khan, Miroslav Benes,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Peter Zijlstra
On Wed, Dec 03, 2025 at 07:32:15PM -0800, Josh Poimboeuf wrote:
> Introduce a bpf_has_frame_pointer() helper that unwinders can call to
> determine whether a given instruction pointer is within the valid frame
> pointer region of a BPF JIT program or trampoline (i.e., after the
> prologue, before the epilogue).
>
> This will enable livepatch (with the ORC unwinder) to reliably unwind
> through BPF JIT frames.
>
> Acked-by: Song Liu <song@kernel.org>
> Acked-and-tested-by: Andrey Grodzovsky <andrey.grodzovsky@crowdstrike.com>
> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
> ---
> arch/x86/net/bpf_jit_comp.c | 12 ++++++++++++
> include/linux/bpf.h | 3 +++
> kernel/bpf/core.c | 16 ++++++++++++++++
> 3 files changed, 31 insertions(+)
>
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index de5083cb1d37..3ec4fa94086a 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -1661,6 +1661,9 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
> emit_prologue(&prog, image, stack_depth,
> bpf_prog_was_classic(bpf_prog), tail_call_reachable,
> bpf_is_subprog(bpf_prog), bpf_prog->aux->exception_cb);
> +
> + bpf_prog->aux->ksym.fp_start = prog - temp;
> +
> /* Exception callback will clobber callee regs for its own use, and
> * restore the original callee regs from main prog's stack frame.
> */
> @@ -2716,6 +2719,8 @@ st: if (is_imm8(insn->off))
> pop_r12(&prog);
> }
> EMIT1(0xC9); /* leave */
> + bpf_prog->aux->ksym.fp_end = prog - temp;
> +
> emit_return(&prog, image + addrs[i - 1] + (prog - temp));
> break;
>
> @@ -3299,6 +3304,9 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
> }
> EMIT1(0x55); /* push rbp */
> EMIT3(0x48, 0x89, 0xE5); /* mov rbp, rsp */
> + if (im)
> + im->ksym.fp_start = prog - (u8 *)rw_image;
> +
> if (!is_imm8(stack_size)) {
> /* sub rsp, stack_size */
> EMIT3_off32(0x48, 0x81, 0xEC, stack_size);
> @@ -3436,7 +3444,11 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
> emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
>
> emit_ldx(&prog, BPF_DW, BPF_REG_6, BPF_REG_FP, -rbx_off);
> +
> EMIT1(0xC9); /* leave */
> + if (im)
> + im->ksym.fp_end = prog - (u8 *)rw_image;
is the null check needed? there are other places in the function that
use 'im' without that
thanks,
jirka
> +
> if (flags & BPF_TRAMP_F_SKIP_FRAME) {
> /* skip our return address and return to parent */
> EMIT4(0x48, 0x83, 0xC4, 8); /* add rsp, 8 */
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index d808253f2e94..e3f56e8443da 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1257,6 +1257,8 @@ struct bpf_ksym {
> struct list_head lnode;
> struct latch_tree_node tnode;
> bool prog;
> + u32 fp_start;
> + u32 fp_end;
> };
>
> enum bpf_tramp_prog_type {
> @@ -1483,6 +1485,7 @@ void bpf_image_ksym_add(struct bpf_ksym *ksym);
> void bpf_image_ksym_del(struct bpf_ksym *ksym);
> void bpf_ksym_add(struct bpf_ksym *ksym);
> void bpf_ksym_del(struct bpf_ksym *ksym);
> +bool bpf_has_frame_pointer(unsigned long ip);
> int bpf_jit_charge_modmem(u32 size);
> void bpf_jit_uncharge_modmem(u32 size);
> bool bpf_prog_has_trampoline(const struct bpf_prog *prog);
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index d595fe512498..7cd8382d1152 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -760,6 +760,22 @@ struct bpf_prog *bpf_prog_ksym_find(unsigned long addr)
> NULL;
> }
>
> +bool bpf_has_frame_pointer(unsigned long ip)
> +{
> + struct bpf_ksym *ksym;
> + unsigned long offset;
> +
> + guard(rcu)();
> +
> + ksym = bpf_ksym_find(ip);
> + if (!ksym || !ksym->fp_start || !ksym->fp_end)
> + return false;
> +
> + offset = ip - ksym->start;
> +
> + return offset >= ksym->fp_start && offset < ksym->fp_end;
> +}
> +
> const struct exception_table_entry *search_bpf_extables(unsigned long addr)
> {
> const struct exception_table_entry *e = NULL;
> --
> 2.51.1
>
>
* Re: [PATCH v2 0/2] bpf, x86/unwind/orc: Support reliable unwinding through BPF stack frames
2025-12-04 3:32 [PATCH v2 0/2] bpf, x86/unwind/orc: Support reliable unwinding through BPF stack frames Josh Poimboeuf
2025-12-04 3:32 ` [PATCH v2 1/2] bpf: Add bpf_has_frame_pointer() Josh Poimboeuf
2025-12-04 3:32 ` [PATCH v2 2/2] x86/unwind/orc: Support reliable unwinding through BPF stack frames Josh Poimboeuf
@ 2025-12-04 14:27 ` Jiri Olsa
2025-12-10 7:40 ` patchwork-bot+netdevbpf
3 siblings, 0 replies; 7+ messages in thread
From: Jiri Olsa @ 2025-12-04 14:27 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: x86, linux-kernel, live-patching, bpf, Andrey Grodzovsky,
Petr Mladek, Song Liu, Raja Khan, Miroslav Benes,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Peter Zijlstra
On Wed, Dec 03, 2025 at 07:32:14PM -0800, Josh Poimboeuf wrote:
> Fix livepatch stalls which may be seen when a task is blocked with BPF
> JIT on its kernel stack.
>
> Changes since v1 (https://lore.kernel.org/cover.1764699074.git.jpoimboe@kernel.org):
> - fix NULL ptr deref in __arch_prepare_bpf_trampoline()
>
> Josh Poimboeuf (2):
> bpf: Add bpf_has_frame_pointer()
> x86/unwind/orc: Support reliable unwinding through BPF stack frames
>
tried with bpftrace and it seems to go over bpf_prog properly
in this case:
bpf_prog_2beb79c650d605dd_fentry_bpf_testmod_bpf_kfunc_common_test_1+320
bpf_trampoline_354334973728+60
bpf_kfunc_common_test+9
bpf_prog_f837cdd29a0519b9_test1+25
trace_call_bpf+345
kprobe_perf_func+76
aggr_pre_handler+72
kprobe_ftrace_handler+361
drm_core_init+202
bpf_fentry_test1+9
bpf_prog_test_run_tracing+357
__sys_bpf+2263
__x64_sys_bpf+33
do_syscall_64+134
entry_SYSCALL_64_after_hwframe+118
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
thanks,
jirka
* Re: [PATCH v2 1/2] bpf: Add bpf_has_frame_pointer()
2025-12-04 14:04 ` Jiri Olsa
@ 2025-12-04 17:02 ` Josh Poimboeuf
0 siblings, 0 replies; 7+ messages in thread
From: Josh Poimboeuf @ 2025-12-04 17:02 UTC (permalink / raw)
To: Jiri Olsa
Cc: x86, linux-kernel, live-patching, bpf, Andrey Grodzovsky,
Petr Mladek, Song Liu, Raja Khan, Miroslav Benes,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Peter Zijlstra
On Thu, Dec 04, 2025 at 03:04:23PM +0100, Jiri Olsa wrote:
> On Wed, Dec 03, 2025 at 07:32:15PM -0800, Josh Poimboeuf wrote:
> > EMIT1(0xC9); /* leave */
> > + if (im)
> > + im->ksym.fp_end = prog - (u8 *)rw_image;
>
> is the null check needed? there are other places in the function that
> use 'im' without that
That was a NULL pointer dereference found by BPF CI.
bpf_struct_ops_prepare_trampoline() calls arch_prepare_bpf_trampoline()
with NULL im.
--
Josh
* Re: [PATCH v2 0/2] bpf, x86/unwind/orc: Support reliable unwinding through BPF stack frames
2025-12-04 3:32 [PATCH v2 0/2] bpf, x86/unwind/orc: Support reliable unwinding through BPF stack frames Josh Poimboeuf
` (2 preceding siblings ...)
2025-12-04 14:27 ` [PATCH v2 0/2] bpf, " Jiri Olsa
@ 2025-12-10 7:40 ` patchwork-bot+netdevbpf
3 siblings, 0 replies; 7+ messages in thread
From: patchwork-bot+netdevbpf @ 2025-12-10 7:40 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: x86, linux-kernel, live-patching, bpf, andrey.grodzovsky, pmladek,
song, raja.khan, mbenes, ast, daniel, andrii, peterz
Hello:
This series was applied to bpf/bpf.git (master)
by Alexei Starovoitov <ast@kernel.org>:
On Wed, 3 Dec 2025 19:32:14 -0800 you wrote:
> Fix livepatch stalls which may be seen when a task is blocked with BPF
> JIT on its kernel stack.
>
> Changes since v1 (https://lore.kernel.org/cover.1764699074.git.jpoimboe@kernel.org):
> - fix NULL ptr deref in __arch_prepare_bpf_trampoline()
>
> Josh Poimboeuf (2):
> bpf: Add bpf_has_frame_pointer()
> x86/unwind/orc: Support reliable unwinding through BPF stack frames
>
> [...]
Here is the summary with links:
- [v2,1/2] bpf: Add bpf_has_frame_pointer()
https://git.kernel.org/bpf/bpf/c/ca45c84afb8c
- [v2,2/2] x86/unwind/orc: Support reliable unwinding through BPF stack frames
(no matching commit)
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html