BPF List
* BPF fentry/fexit trampolines stall livepatch transition due to missing ORC unwind metadata
@ 2025-11-19 15:41 Andrey Grodzovsky
  2025-11-20 12:15 ` Miroslav Benes
  0 siblings, 1 reply; 11+ messages in thread
From: Andrey Grodzovsky @ 2025-11-19 15:41 UTC (permalink / raw)
  To: bpf, live-patching
  Cc: DL Linux Open Source Team, Petr Mladek, Song Liu, andrii,
	Raja Khan

Hello BPF and livepatch teams,

This is somewhat a followup on 
https://lists.ubuntu.com/archives/kernel-team/2025-October/163881.html 
as we continue to encounter issues and conflicts between BPF and livepatch.

We've encountered an issue between BPF fentry/fexit trampolines and 
kernel livepatching (kpatch/livepatch) on x86_64 systems with the ORC 
unwinder enabled. I'm reaching out to understand whether this is a known 
limitation and to explore potential solutions. I assume it's known, as I 
see information along these lines in 
https://www.kernel.org/doc/Documentation/livepatch/reliable-stacktrace.rst

Problem Summary

When BPF programs attach to kernel functions using fentry/fexit hooks, 
the resulting JIT-compiled trampolines lack ORC unwind metadata. This 
causes the livepatch transition to stall when threads are blocked in hooked 
functions, as the stack becomes unreliable for unwinding purposes.

In our case the environment is:

- RHEL 9.6 (kernel 5.14.0-570.17.1.el9_6.x86_64)
- CONFIG_UNWINDER_ORC=y
- CONFIG_BPF_JIT_ALWAYS_ON=y
- BPF fentry/fexit hooks on inet_recvmsg()

Scenario:
1. BPF program attached to inet_recvmsg via fentry/fexit (creates BPF 
trampoline)
2. CIFS filesystem mounted (creates cifsd kernel thread)
3. cifsd thread blocks in inet_recvmsg → BPF trampoline is on the stack
4. Attempt to load kpatch module
5. Livepatch transition stalls indefinitely

Error Message (repeated every ~1 second):
livepatch: klp_try_switch_task: cifsd:2886 has an unreliable stack

Stack trace showing BPF trampoline:
cifsd           D  0  2886
Call Trace:
  wait_woken+0x50/0x60
  sk_wait_data+0x176/0x190
  tcp_recvmsg_locked+0x234/0x920
  tcp_recvmsg+0x78/0x210
  inet_recvmsg+0x5c/0x140
  bpf_trampoline_6442469985+0x89/0x130  ← NO ORC metadata
  sock_recvmsg+0x95/0xa0
  cifs_readv_from_socket+0x1ca/0x2d0 [cifs]
  ...

As far as I understand (and please correct me if I'm wrong):

The failure occurs in arch/x86/kernel/unwind_orc.c:

orc = orc_find(state->signal ? state->ip : state->ip - 1);
if (!orc) {
     /*
      * As a fallback, try to assume this code uses a frame pointer.
      * This is useful for generated code, like BPF, which ORC
      * doesn't know about.  This is just a guess, so the rest of
      * the unwind is no longer considered reliable.
      */
     orc = &orc_fp_entry;
     state->error = true;  // ← Marks stack as unreliable
}

When orc_find() returns NULL for the BPF trampoline address, the 
unwinder falls back to frame pointers and marks the stack unreliable. 
This causes arch_stack_walk_reliable() to fail, which in turn causes 
livepatch's klp_check_stack() to return -EINVAL before even checking if 
to-be-patched functions are on the stack.
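To make the failure mode concrete, the lookup-and-fallback logic can be modeled in user space. The following Python sketch is illustrative only (the addresses and ORC table are invented; this is not kernel code):

```python
# Toy model of the ORC lookup fallback in unwind_next_frame(): once a
# single frame's IP has no ORC coverage (e.g. a BPF JIT trampoline),
# the unwinder guesses frame pointers and flags the entire walk as
# unreliable, so arch_stack_walk_reliable() fails and klp_check_stack()
# bails out with -EINVAL.

class UnwindState:
    def __init__(self):
        self.error = False  # mirrors state->error in unwind_orc.c

def orc_find(ip, orc_table):
    """Return the ORC entry covering ip, or None for uncovered code."""
    for start, end, entry in orc_table:
        if start <= ip < end:
            return entry
    return None

def unwind_next_frame(state, ip, orc_table):
    orc = orc_find(ip, orc_table)
    if orc is None:
        orc = "orc_fp_entry"   # fallback: assume frame pointers
        state.error = True     # the rest of the unwind is a guess
    return orc

# Invented layout: vmlinux text is covered, the trampoline is not.
orc_table = [(0x1000, 0x2000, "vmlinux_orc")]
state = UnwindState()
for ip in (0x1100, 0x5089, 0x1200):  # 0x5089 ~ bpf_trampoline_...+0x89
    unwind_next_frame(state, ip, orc_table)

assert state.error  # one uncovered frame poisons the whole trace
```

The point is that reliability is all-or-nothing: a single uncovered frame such as a BPF trampoline forces the unwinder to guess, and livepatch refuses to trust a guessed stack.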

Key observations:
1. The kernel comment explicitly mentions "generated code, like BPF"
2. Documentation/livepatch/reliable-stacktrace.rst lists "Dynamically 
generated code (e.g. eBPF)" as causing unreliable stacks
3. Native kernel functions have ORC metadata from objtool during build
4. Ftrace trampolines have special ORC handling via orc_ftrace_find()
5. BPF JIT trampolines have no such handling. Is this correct?

Impact

This affects production systems where:
- Security/observability tools use BPF fentry/fexit hooks
- Live kernel patching is required for security updates
- Kernel threads may be blocked in hooked network/storage functions

The livepatch transition can stall for 60+ seconds before failing, 
blocking critical security patches.

Questions for the Community

1. Is this a known limitation (I assume yes)?
2. Runtime ORC generation? Could the BPF JIT generate ORC unwind entries 
for trampolines, similar to how ftrace trampolines are handled?
3. Trampoline registration? Could BPF trampolines register their address 
ranges with the ORC unwinder to avoid the "unreliable" marking?
4. Alternative unwinding? Could livepatch use an alternative unwinding 
method when BPF trampolines are detected (e.g., frame pointers with 
validation)?
5. Workarounds? I mention one below, and I would be happy to hear if 
anyone has a better idea to propose.

The only possible workaround I see is switching everything from 
trampoline-based hooks to kprobes, since I assume kprobes won't have 
this issue.

BPF kprobes use the ftrace infrastructure with kprobe_ftrace_handler, 
which has ORC metadata and special handling in the unwinder. The stack 
remains reliable:
inet_recvmsg+0x50/0x140  ← Has ORC metadata
kprobe_ftrace_handler+... ← Has ORC metadata

The obvious problem with kprobes is their performance penalty.

Additional Context

From arch/x86/net/bpf_jit_comp.c:3559:
bool bpf_jit_supports_exceptions(void)
{
     /* We unwind through both kernel frames (starting from within bpf_throw
      * call) and BPF frames. Therefore we require ORC unwinder to be 
enabled
      * to walk kernel frames and reach BPF frames in the stack trace.
      */
     return IS_ENABLED(CONFIG_UNWINDER_ORC);
}

This shows that BPF already has some integration with ORC for exception 
handling. Could this be extended to trampolines?

References

- Kernel: 5.14.0-570.17.1.el9_6.x86_64
- Code: arch/x86/kernel/unwind_orc.c:510-519
- Docs: Documentation/livepatch/reliable-stacktrace.rst lines 84-85, 111-112

I appreciate any guidance on whether this is something that could be 
addressed in the kernel, or if we should focus on user-space workarounds.

Thanks,
Andrey

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: BPF fentry/fexit trampolines stall livepatch transition due to missing ORC unwind metadata
  2025-11-19 15:41 BPF fentry/fexit trampolines stall livepatch transition due to missing ORC unwind metadata Andrey Grodzovsky
@ 2025-11-20 12:15 ` Miroslav Benes
  2025-11-22  0:56   ` Josh Poimboeuf
  0 siblings, 1 reply; 11+ messages in thread
From: Miroslav Benes @ 2025-11-20 12:15 UTC (permalink / raw)
  To: Andrey Grodzovsky
  Cc: bpf, live-patching, DL Linux Open Source Team, Petr Mladek,
	Song Liu, andrii, Raja Khan


Hello Andrey,

On Wed, 19 Nov 2025, Andrey Grodzovsky wrote:

> Hello BPF and livepatch teams,
> 
> This is somewhat a followup on
> https://lists.ubuntu.com/archives/kernel-team/2025-October/163881.html as we
> continue to encounter issues and conflicts between BPF and livepatch.
> 
> We've encountered an issue between BPF fentry/fexit trampolines and kernel
> livepatching (kpatch/livepatch) on x86_64 systems with ORC unwinder enabled.
> I'm reaching out to understand if this is a known limitation and to explore
> potential solutions. I assume it's known, as I see information along these lines
> in https://www.kernel.org/doc/Documentation/livepatch/reliable-stacktrace.rst
> 
> Problem Summary
> 
> When BPF programs attach to kernel functions using fentry/fexit hooks, the
> resulting JIT-compiled trampolines lack ORC unwind metadata. This causes
> livepatch transition stall when threads are blocked in hooked functions, as
> the stack becomes unreliable for unwinding purposes.
> 
> In our case the environment is
> 
> - RHEL 9.6 (kernel 5.14.0-570.17.1.el9_6.x86_64)
> - CONFIG_UNWINDER_ORC=y
> - CONFIG_BPF_JIT_ALWAYS_ON=y
> - BPF fentry/fexit hooks on inet_recvmsg()
> 
> Scenario:
> 1. BPF program attached to inet_recvmsg via fentry/fexit (creates BPF
> trampoline)
> 2. CIFS filesystem mounted (creates cifsd kernel thread)
> 3. cifsd thread blocks in inet_recvmsg → BPF trampoline is on the stack
> 4. Attempt to load kpatch module
> 5. Livepatch transition stalls indefinitely
> 
> Error Message (repeated every ~1 second):
> livepatch: klp_try_switch_task: cifsd:2886 has an unreliable stack
> 
> Stack trace showing BPF trampoline:
> cifsd           D  0  2886
> Call Trace:
>  wait_woken+0x50/0x60
>  sk_wait_data+0x176/0x190
>  tcp_recvmsg_locked+0x234/0x920
>  tcp_recvmsg+0x78/0x210
>  inet_recvmsg+0x5c/0x140
>  bpf_trampoline_6442469985+0x89/0x130  ← NO ORC metadata
>  sock_recvmsg+0x95/0xa0
>  cifs_readv_from_socket+0x1ca/0x2d0 [cifs]
>  ...
> 
> As far as I understand and please correct me if it's wrong -
> 
> The failure occurs in arch/x86/kernel/unwind_orc.c
> 
> orc = orc_find(state->signal ? state->ip : state->ip - 1);
> if (!orc) {
>     /*
>      * As a fallback, try to assume this code uses a frame pointer.
>      * This is useful for generated code, like BPF, which ORC
>      * doesn't know about.  This is just a guess, so the rest of
>      * the unwind is no longer considered reliable.
>      */
>     orc = &orc_fp_entry;
>     state->error = true;  // ← Marks stack as unreliable
> }
> 
> When orc_find() returns NULL for the BPF trampoline address, the unwinder
> falls back to frame pointers and marks the stack unreliable. This causes
> arch_stack_walk_reliable() to fail, which in turn causes livepatch's
> klp_check_stack() to return -EINVAL before even checking if to-be-patched
> functions are on the stack.
> 
> Key observations:
> 1. The kernel comment explicitly mentions "generated code, like BPF"
> 2. Documentation/livepatch/reliable-stacktrace.rst lists "Dynamically
> generated code (e.g. eBPF)" as causing unreliable stacks
> 3. Native kernel functions have ORC metadata from objtool during build
> 4. Ftrace trampolines have special ORC handling via orc_ftrace_find()
> 5. BPF JIT trampolines have no such handling - Is this correct ?

Yes, all your findings are correct and the above explains the situation 
really well. Thank you for summing it up.

> Impact
> 
> This affects production systems where:
> - Security/observability tools use BPF fentry/fexit hooks
> - Live kernel patching is required for security updates
> - Kernel threads may be blocked in hooked network/storage functions
> 
> The livepatch transition can stall for 60+ seconds before failing, blocking
> critical security patches.

Unfortunately yes.

> Questions for the Community
> 
> 1. Is this a known limitation (I assume yes) ?

Yes.

> 2. Runtime ORC generation? Could the BPF JIT generate ORC unwind entries for
> trampolines, similar to how ftrace trampolines are handled?
> 3. Trampoline registration? Could BPF trampolines register their address
> ranges with the ORC unwinder to avoid the "unreliable" marking?
> 4. Alternative unwinding? Could livepatch use an alternative unwinding method
> when BPF trampolines are detected (e.g., frame pointers with validation)?
> 5. Workarounds? I mention one below, and I would be happy to hear if anyone
> has a better idea to propose.

There is a parallel discussion going on under SFrame unwinding enablement 
for arm64. See this subthread: 
https://lore.kernel.org/all/CADBMgpwZ32+shSa0SwO8y4G-Zw14ae-FcoWreA_ptMf08Mu9dA@mail.gmail.com/T/#u

I would really welcome it if this is eventually solved, because it seems 
we will run into the described issue more and more often (Josh, I think 
this email shows that it happens in practice with existing BPF-based 
monitoring services).

Regards,
Miroslav


* Re: BPF fentry/fexit trampolines stall livepatch transition due to missing ORC unwind metadata
  2025-11-20 12:15 ` Miroslav Benes
@ 2025-11-22  0:56   ` Josh Poimboeuf
  2025-11-24 17:14     ` Alexei Starovoitov
  2025-11-24 22:06     ` [External] " Andrey Grodzovsky
  0 siblings, 2 replies; 11+ messages in thread
From: Josh Poimboeuf @ 2025-11-22  0:56 UTC (permalink / raw)
  To: Miroslav Benes
  Cc: Andrey Grodzovsky, bpf, live-patching, DL Linux Open Source Team,
	Petr Mladek, Song Liu, andrii, Raja Khan

On Thu, Nov 20, 2025 at 01:15:12PM +0100, Miroslav Benes wrote:
> > Impact
> > 
> > This affects production systems where:
> > - Security/observability tools use BPF fentry/fexit hooks
> > - Live kernel patching is required for security updates
> > - Kernel threads may be blocked in hooked network/storage functions
> > 
> > The livepatch transition can stall for 60+ seconds before failing, blocking
> > critical security patches.
> 
> Unfortunately yes.
> 
> > Questions for the Community
> > 
> > 1. Is this a known limitation (I assume yes) ?
> 
> Yes.
> 
> > 2. Runtime ORC generation? Could the BPF JIT generate ORC unwind entries for
> > trampolines, similar to how ftrace trampolines are handled?
> > 3. Trampoline registration? Could BPF trampolines register their address
> > ranges with the ORC unwinder to avoid the "unreliable" marking?
> > 4. Alternative unwinding? Could livepatch use an alternative unwinding method
> > when BPF trampolines are detected (e.g., frame pointers with validation)?
> > 5. Workarounds? I mention one below, and I would be happy to hear if anyone
> > has a better idea to propose.
> 
> There is a parallel discussion going on under SFrame unwinding enablement 
> for arm64. See this subthread 
> https://lore.kernel.org/all/CADBMgpwZ32+shSa0SwO8y4G-Zw14ae-FcoWreA_ptMf08Mu9dA@mail.gmail.com/T/#u
> 
> I would really welcome if it is solved eventually because it seems we will 
> meet the described issue more and more often (Josh, I think this email 
> shows that it happens in practice with the existing monitoring services 
> based on BPF).

Maybe we can take advantage of the fact that BPF uses frame pointers
unconditionally, and avoid the complexity of "dynamic ORC" for now, by
just having BPF keep track of where the frame pointer is valid (after
the prologue, before the epilogue).

Something like the below (completely untested).

Andrey, can you try this patch?

---8<---

diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
index 977ee75e047c..f610fde2d5c4 100644
--- a/arch/x86/kernel/unwind_orc.c
+++ b/arch/x86/kernel/unwind_orc.c
@@ -2,6 +2,7 @@
 #include <linux/objtool.h>
 #include <linux/module.h>
 #include <linux/sort.h>
+#include <linux/bpf.h>
 #include <asm/ptrace.h>
 #include <asm/stacktrace.h>
 #include <asm/unwind.h>
@@ -172,6 +173,25 @@ static struct orc_entry *orc_ftrace_find(unsigned long ip)
 }
 #endif
 
+/* Fake frame pointer entry -- used as a fallback for generated code */
+static struct orc_entry orc_fp_entry = {
+	.type		= ORC_TYPE_CALL,
+	.sp_reg		= ORC_REG_BP,
+	.sp_offset	= 16,
+	.bp_reg		= ORC_REG_PREV_SP,
+	.bp_offset	= -16,
+};
+
+static struct orc_entry *orc_bpf_find(unsigned long ip)
+{
+#ifdef CONFIG_BPF_JIT
+	if (bpf_has_frame_pointer(ip))
+		return &orc_fp_entry;
+#endif
+
+	return NULL;
+}
+
 /*
  * If we crash with IP==0, the last successfully executed instruction
  * was probably an indirect function call with a NULL function pointer,
@@ -186,15 +206,6 @@ static struct orc_entry null_orc_entry = {
 	.type = ORC_TYPE_CALL
 };
 
-/* Fake frame pointer entry -- used as a fallback for generated code */
-static struct orc_entry orc_fp_entry = {
-	.type		= ORC_TYPE_CALL,
-	.sp_reg		= ORC_REG_BP,
-	.sp_offset	= 16,
-	.bp_reg		= ORC_REG_PREV_SP,
-	.bp_offset	= -16,
-};
-
 static struct orc_entry *orc_find(unsigned long ip)
 {
 	static struct orc_entry *orc;
@@ -238,6 +249,11 @@ static struct orc_entry *orc_find(unsigned long ip)
 	if (orc)
 		return orc;
 
+	/* BPF lookup: */
+	orc = orc_bpf_find(ip);
+	if (orc)
+		return orc;
+
 	return orc_ftrace_find(ip);
 }
 
@@ -495,9 +511,8 @@ bool unwind_next_frame(struct unwind_state *state)
 	if (!orc) {
 		/*
 		 * As a fallback, try to assume this code uses a frame pointer.
-		 * This is useful for generated code, like BPF, which ORC
-		 * doesn't know about.  This is just a guess, so the rest of
-		 * the unwind is no longer considered reliable.
+		 * This is just a guess, so the rest of the unwind is no longer
+		 * considered reliable.
 		 */
 		orc = &orc_fp_entry;
 		state->error = true;
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index de5083cb1d37..510e3e62fd2f 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1661,6 +1661,9 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 	emit_prologue(&prog, image, stack_depth,
 		      bpf_prog_was_classic(bpf_prog), tail_call_reachable,
 		      bpf_is_subprog(bpf_prog), bpf_prog->aux->exception_cb);
+
+	bpf_prog->aux->ksym.fp_start = prog - temp;
+
 	/* Exception callback will clobber callee regs for its own use, and
 	 * restore the original callee regs from main prog's stack frame.
 	 */
@@ -2716,6 +2719,8 @@ st:			if (is_imm8(insn->off))
 					pop_r12(&prog);
 			}
 			EMIT1(0xC9);         /* leave */
+			bpf_prog->aux->ksym.fp_end = prog - temp;
+
 			emit_return(&prog, image + addrs[i - 1] + (prog - temp));
 			break;
 
@@ -3299,6 +3304,8 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
 	}
 	EMIT1(0x55);		 /* push rbp */
 	EMIT3(0x48, 0x89, 0xE5); /* mov rbp, rsp */
+	im->ksym.fp_start = prog - (u8 *)rw_image;
+
 	if (!is_imm8(stack_size)) {
 		/* sub rsp, stack_size */
 		EMIT3_off32(0x48, 0x81, 0xEC, stack_size);
@@ -3436,7 +3443,10 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
 		emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
 
 	emit_ldx(&prog, BPF_DW, BPF_REG_6, BPF_REG_FP, -rbx_off);
+
 	EMIT1(0xC9); /* leave */
+	im->ksym.fp_end = prog - (u8 *)rw_image;
+
 	if (flags & BPF_TRAMP_F_SKIP_FRAME) {
 		/* skip our return address and return to parent */
 		EMIT4(0x48, 0x83, 0xC4, 8); /* add rsp, 8 */
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index d808253f2e94..e3f56e8443da 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1257,6 +1257,8 @@ struct bpf_ksym {
 	struct list_head	 lnode;
 	struct latch_tree_node	 tnode;
 	bool			 prog;
+	u32			 fp_start;
+	u32			 fp_end;
 };
 
 enum bpf_tramp_prog_type {
@@ -1483,6 +1485,7 @@ void bpf_image_ksym_add(struct bpf_ksym *ksym);
 void bpf_image_ksym_del(struct bpf_ksym *ksym);
 void bpf_ksym_add(struct bpf_ksym *ksym);
 void bpf_ksym_del(struct bpf_ksym *ksym);
+bool bpf_has_frame_pointer(unsigned long ip);
 int bpf_jit_charge_modmem(u32 size);
 void bpf_jit_uncharge_modmem(u32 size);
 bool bpf_prog_has_trampoline(const struct bpf_prog *prog);
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index d595fe512498..7cd8382d1152 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -760,6 +760,22 @@ struct bpf_prog *bpf_prog_ksym_find(unsigned long addr)
 	       NULL;
 }
 
+bool bpf_has_frame_pointer(unsigned long ip)
+{
+	struct bpf_ksym *ksym;
+	unsigned long offset;
+
+	guard(rcu)();
+
+	ksym = bpf_ksym_find(ip);
+	if (!ksym || !ksym->fp_start || !ksym->fp_end)
+		return false;
+
+	offset = ip - ksym->start;
+
+	return offset >= ksym->fp_start && offset < ksym->fp_end;
+}
+
 const struct exception_table_entry *search_bpf_extables(unsigned long addr)
 {
 	const struct exception_table_entry *e = NULL;
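For readers reasoning about the patch, the essential check is the fp_start/fp_end window in the proposed bpf_has_frame_pointer(). Here is a user-space Python model of that window test; the offsets are invented for illustration and not taken from a real JIT image:

```python
# Model of the fp_start/fp_end window test from the proposed
# bpf_has_frame_pointer(): the frame pointer is only trusted for IPs
# between the end of the prologue (after "mov rbp, rsp") and the
# epilogue's "leave"; anywhere else the rbp chain may be stale.

def bpf_has_frame_pointer(ip, ksym):
    """ksym: symbol record with image start and fp window byte offsets."""
    if ksym is None or not ksym["fp_start"] or not ksym["fp_end"]:
        return False
    offset = ip - ksym["start"]
    return ksym["fp_start"] <= offset < ksym["fp_end"]

# Invented trampoline image: prologue ends at +0x10, "leave" at +0x120.
tramp = {"start": 0xffff0000, "fp_start": 0x10, "fp_end": 0x120}

assert bpf_has_frame_pointer(0xffff0000 + 0x89, tramp)       # in body
assert not bpf_has_frame_pointer(0xffff0000 + 0x04, tramp)   # prologue
assert not bpf_has_frame_pointer(0xffff0000 + 0x125, tramp)  # past leave
```

Offsets before fp_start or at/after fp_end are rejected because rbp is not (or is no longer) a valid frame pointer there.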


* Re: BPF fentry/fexit trampolines stall livepatch transition due to missing ORC unwind metadata
  2025-11-22  0:56   ` Josh Poimboeuf
@ 2025-11-24 17:14     ` Alexei Starovoitov
  2025-11-24 19:51       ` Josh Poimboeuf
  2025-11-24 22:06     ` [External] " Andrey Grodzovsky
  1 sibling, 1 reply; 11+ messages in thread
From: Alexei Starovoitov @ 2025-11-24 17:14 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Miroslav Benes, Andrey Grodzovsky, bpf, live-patching,
	DL Linux Open Source Team, Petr Mladek, Song Liu, Andrii Nakryiko,
	Raja Khan

On Fri, Nov 21, 2025 at 4:56 PM Josh Poimboeuf <jpoimboe@kernel.org> wrote:
>
>
> Maybe we can take advantage of the fact that BPF uses frame pointers
> unconditionally, and avoid the complexity of "dynamic ORC" for now, by
> just having BPF keep track of where the frame pointer is valid (after
> the prologue, before the epilogue).

...
>                         EMIT1(0xC9);         /* leave */
> +                       bpf_prog->aux->ksym.fp_end = prog - temp;
> +
>                         emit_return(&prog, image + addrs[i - 1] + (prog - temp));
>                         break;
>
> @@ -3299,6 +3304,8 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
>         }
>         EMIT1(0x55);             /* push rbp */
>         EMIT3(0x48, 0x89, 0xE5); /* mov rbp, rsp */
> +       im->ksym.fp_start = prog - (u8 *)rw_image;
> +

Overall makes sense to me, but do you have to skip the prologue/epilogue?
What happens if it's just bpf_ksym_find()?
Only an irq can interrupt this push/mov sequence, and it uses a different irq stack.


* Re: BPF fentry/fexit trampolines stall livepatch transition due to missing ORC unwind metadata
  2025-11-24 17:14     ` Alexei Starovoitov
@ 2025-11-24 19:51       ` Josh Poimboeuf
  0 siblings, 0 replies; 11+ messages in thread
From: Josh Poimboeuf @ 2025-11-24 19:51 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Miroslav Benes, Andrey Grodzovsky, bpf, live-patching,
	DL Linux Open Source Team, Petr Mladek, Song Liu, Andrii Nakryiko,
	Raja Khan

On Mon, Nov 24, 2025 at 09:14:59AM -0800, Alexei Starovoitov wrote:
> On Fri, Nov 21, 2025 at 4:56 PM Josh Poimboeuf <jpoimboe@kernel.org> wrote:
> >
> >
> > Maybe we can take advantage of the fact that BPF uses frame pointers
> > unconditionally, and avoid the complexity of "dynamic ORC" for now, by
> > just having BPF keep track of where the frame pointer is valid (after
> > the prologue, before the epilogue).
> 
> ...
> >                         EMIT1(0xC9);         /* leave */
> > +                       bpf_prog->aux->ksym.fp_end = prog - temp;
> > +
> >                         emit_return(&prog, image + addrs[i - 1] + (prog - temp));
> >                         break;
> >
> > @@ -3299,6 +3304,8 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
> >         }
> >         EMIT1(0x55);             /* push rbp */
> >         EMIT3(0x48, 0x89, 0xE5); /* mov rbp, rsp */
> > +       im->ksym.fp_start = prog - (u8 *)rw_image;
> > +
> 
> Overall makes sense to me, but do you have to skip the prologue/epilogue ?
> What happens if it's just bpf_ksym_find() ?
> Only irq can interrupt this push/mov sequence and it uses a different irq stack.

On x86-64, IRQs actually just use the task stack, but that doesn't
really matter: either way ORC needs to unwind through it and it needs to
know whether the unwind info is reliable.

If BPF gets preempted in the prologue, and ORC just does bpf_ksym_find()
and assumes frame pointer, the unwind will either fail outright or skip
stack frames.

Particularly for the latter case that would break live patching.

Today it already tries to fall back to frame pointers for BPF, and that
usually works.  But it marks the unwind as "unreliable" because it's
just a guess.

This patch would help mark the typical non-preempted BPF case as
"reliable" so it would work with live patching.

-- 
Josh


* Re: [External] Re: BPF fentry/fexit trampolines stall livepatch transition due to missing ORC unwind metadata
  2025-11-22  0:56   ` Josh Poimboeuf
  2025-11-24 17:14     ` Alexei Starovoitov
@ 2025-11-24 22:06     ` Andrey Grodzovsky
  2025-11-24 22:51       ` Josh Poimboeuf
  1 sibling, 1 reply; 11+ messages in thread
From: Andrey Grodzovsky @ 2025-11-24 22:06 UTC (permalink / raw)
  To: Josh Poimboeuf, Miroslav Benes
  Cc: bpf, live-patching, DL Linux Open Source Team, Petr Mladek,
	Song Liu, andrii, Raja Khan

On 11/21/25 19:56, Josh Poimboeuf wrote:
> On Thu, Nov 20, 2025 at 01:15:12PM +0100, Miroslav Benes wrote:
>>> Impact
>>>
>>> This affects production systems where:
>>> - Security/observability tools use BPF fentry/fexit hooks
>>> - Live kernel patching is required for security updates
>>> - Kernel threads may be blocked in hooked network/storage functions
>>>
>>> The livepatch transition can stall for 60+ seconds before failing, blocking
>>> critical security patches.
>>
>> Unfortunately yes.
>>
>>> Questions for the Community
>>>
>>> 1. Is this a known limitation (I assume yes) ?
>>
>> Yes.
>>
>>> 2. Runtime ORC generation? Could the BPF JIT generate ORC unwind entries for
>>> trampolines, similar to how ftrace trampolines are handled?
>>> 3. Trampoline registration? Could BPF trampolines register their address
>>> ranges with the ORC unwinder to avoid the "unreliable" marking?
>>> 4. Alternative unwinding? Could livepatch use an alternative unwinding method
>>> when BPF trampolines are detected (e.g., frame pointers with validation)?
>>> 5. Workarounds? I mention one bellow and I would be happy to hear if anyone
>>> has a better idea to propose ?
>>
>> There is a parallel discussion going on under SFrame unwinding enablement
>> for arm64. See this subthread
>> https://lore.kernel.org/all/CADBMgpwZ32+shSa0SwO8y4G-Zw14ae-FcoWreA_ptMf08Mu9dA@mail.gmail.com/T/#u
>>
>> I would really welcome if it is solved eventually because it seems we will
>> meet the described issue more and more often (Josh, I think this email
>> shows that it happens in practice with the existing monitoring services
>> based on BPF).
> 
> Maybe we can take advantage of the fact that BPF uses frame pointers
> unconditionally, and avoid the complexity of "dynamic ORC" for now, by
> just having BPF keep track of where the frame pointer is valid (after
> the prologue, before the epilogue).
> 
> Something like the below (completely untested).
> 
> Andrey, can you try this patch?

Hey Josh, thank you for looking. Can you please advise which stable
kernel version you made these changes on top of, so I can apply them
cleanly? Alternatively, just provide a git commit SHA in Linus's
tree that I can reset my branch to.

I will happily test this as soon as I can and report back.

Thanks,
Andrey

> 
> ---8<---
> 
> diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
> index 977ee75e047c..f610fde2d5c4 100644
> --- a/arch/x86/kernel/unwind_orc.c
> +++ b/arch/x86/kernel/unwind_orc.c
> @@ -2,6 +2,7 @@
>   #include <linux/objtool.h>
>   #include <linux/module.h>
>   #include <linux/sort.h>
> +#include <linux/bpf.h>
>   #include <asm/ptrace.h>
>   #include <asm/stacktrace.h>
>   #include <asm/unwind.h>
> @@ -172,6 +173,25 @@ static struct orc_entry *orc_ftrace_find(unsigned long ip)
>   }
>   #endif
>   
> +/* Fake frame pointer entry -- used as a fallback for generated code */
> +static struct orc_entry orc_fp_entry = {
> +	.type		= ORC_TYPE_CALL,
> +	.sp_reg		= ORC_REG_BP,
> +	.sp_offset	= 16,
> +	.bp_reg		= ORC_REG_PREV_SP,
> +	.bp_offset	= -16,
> +};
> +
> +static struct orc_entry *orc_bpf_find(unsigned long ip)
> +{
> +#ifdef CONFIG_BPF_JIT
> +	if (bpf_has_frame_pointer(ip))
> +		return &orc_fp_entry;
> +#endif
> +
> +	return NULL;
> +}
> +
>   /*
>    * If we crash with IP==0, the last successfully executed instruction
>    * was probably an indirect function call with a NULL function pointer,
> @@ -186,15 +206,6 @@ static struct orc_entry null_orc_entry = {
>   	.type = ORC_TYPE_CALL
>   };
>   
> -/* Fake frame pointer entry -- used as a fallback for generated code */
> -static struct orc_entry orc_fp_entry = {
> -	.type		= ORC_TYPE_CALL,
> -	.sp_reg		= ORC_REG_BP,
> -	.sp_offset	= 16,
> -	.bp_reg		= ORC_REG_PREV_SP,
> -	.bp_offset	= -16,
> -};
> -
>   static struct orc_entry *orc_find(unsigned long ip)
>   {
>   	static struct orc_entry *orc;
> @@ -238,6 +249,11 @@ static struct orc_entry *orc_find(unsigned long ip)
>   	if (orc)
>   		return orc;
>   
> +	/* BPF lookup: */
> +	orc = orc_bpf_find(ip);
> +	if (orc)
> +		return orc;
> +
>   	return orc_ftrace_find(ip);
>   }
>   
> @@ -495,9 +511,8 @@ bool unwind_next_frame(struct unwind_state *state)
>   	if (!orc) {
>   		/*
>   		 * As a fallback, try to assume this code uses a frame pointer.
> -		 * This is useful for generated code, like BPF, which ORC
> -		 * doesn't know about.  This is just a guess, so the rest of
> -		 * the unwind is no longer considered reliable.
> +		 * This is just a guess, so the rest of the unwind is no longer
> +		 * considered reliable.
>   		 */
>   		orc = &orc_fp_entry;
>   		state->error = true;
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index de5083cb1d37..510e3e62fd2f 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -1661,6 +1661,9 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
>   	emit_prologue(&prog, image, stack_depth,
>   		      bpf_prog_was_classic(bpf_prog), tail_call_reachable,
>   		      bpf_is_subprog(bpf_prog), bpf_prog->aux->exception_cb);
> +
> +	bpf_prog->aux->ksym.fp_start = prog - temp;
> +
>   	/* Exception callback will clobber callee regs for its own use, and
>   	 * restore the original callee regs from main prog's stack frame.
>   	 */
> @@ -2716,6 +2719,8 @@ st:			if (is_imm8(insn->off))
>   					pop_r12(&prog);
>   			}
>   			EMIT1(0xC9);         /* leave */
> +			bpf_prog->aux->ksym.fp_end = prog - temp;
> +
>   			emit_return(&prog, image + addrs[i - 1] + (prog - temp));
>   			break;
>   
> @@ -3299,6 +3304,8 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
>   	}
>   	EMIT1(0x55);		 /* push rbp */
>   	EMIT3(0x48, 0x89, 0xE5); /* mov rbp, rsp */
> +	im->ksym.fp_start = prog - (u8 *)rw_image;
> +
>   	if (!is_imm8(stack_size)) {
>   		/* sub rsp, stack_size */
>   		EMIT3_off32(0x48, 0x81, 0xEC, stack_size);
> @@ -3436,7 +3443,10 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
>   		emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
>   
>   	emit_ldx(&prog, BPF_DW, BPF_REG_6, BPF_REG_FP, -rbx_off);
> +
>   	EMIT1(0xC9); /* leave */
> +	im->ksym.fp_end = prog - (u8 *)rw_image;
> +
>   	if (flags & BPF_TRAMP_F_SKIP_FRAME) {
>   		/* skip our return address and return to parent */
>   		EMIT4(0x48, 0x83, 0xC4, 8); /* add rsp, 8 */
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index d808253f2e94..e3f56e8443da 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1257,6 +1257,8 @@ struct bpf_ksym {
>   	struct list_head	 lnode;
>   	struct latch_tree_node	 tnode;
>   	bool			 prog;
> +	u32			 fp_start;
> +	u32			 fp_end;
>   };
>   
>   enum bpf_tramp_prog_type {
> @@ -1483,6 +1485,7 @@ void bpf_image_ksym_add(struct bpf_ksym *ksym);
>   void bpf_image_ksym_del(struct bpf_ksym *ksym);
>   void bpf_ksym_add(struct bpf_ksym *ksym);
>   void bpf_ksym_del(struct bpf_ksym *ksym);
> +bool bpf_has_frame_pointer(unsigned long ip);
>   int bpf_jit_charge_modmem(u32 size);
>   void bpf_jit_uncharge_modmem(u32 size);
>   bool bpf_prog_has_trampoline(const struct bpf_prog *prog);
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index d595fe512498..7cd8382d1152 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -760,6 +760,22 @@ struct bpf_prog *bpf_prog_ksym_find(unsigned long addr)
>   	       NULL;
>   }
>   
> +bool bpf_has_frame_pointer(unsigned long ip)
> +{
> +	struct bpf_ksym *ksym;
> +	unsigned long offset;
> +
> +	guard(rcu)();
> +
> +	ksym = bpf_ksym_find(ip);
> +	if (!ksym || !ksym->fp_start || !ksym->fp_end)
> +		return false;
> +
> +	offset = ip - ksym->start;
> +
> +	return offset >= ksym->fp_start && offset < ksym->fp_end;
> +}
> +
>   const struct exception_table_entry *search_bpf_extables(unsigned long addr)
>   {
>   	const struct exception_table_entry *e = NULL;



* Re: [External] Re: BPF fentry/fexit trampolines stall livepatch transition due to missing ORC unwind metadata
  2025-11-24 22:06     ` [External] " Andrey Grodzovsky
@ 2025-11-24 22:51       ` Josh Poimboeuf
  2025-11-24 22:54         ` Andrey Grodzovsky
  0 siblings, 1 reply; 11+ messages in thread
From: Josh Poimboeuf @ 2025-11-24 22:51 UTC (permalink / raw)
  To: Andrey Grodzovsky
  Cc: Miroslav Benes, bpf, live-patching, DL Linux Open Source Team,
	Petr Mladek, Song Liu, andrii, Raja Khan

On Mon, Nov 24, 2025 at 05:06:04PM -0500, Andrey Grodzovsky wrote:
> > Andrey, can you try this patch?
> 
> Hey Josh, thank you for looking. Can you please advise which stable
> kernel version you made these changes on top of, so I can apply them
> cleanly? Alternatively, just provide a git commit SHA in Linus's
> tree that I can reset my branch to.
> 
> 
> I will happily test this as soon as I can and report back.

It's based on Linus's tree.

-- 
Josh


* Re: [External] Re: BPF fentry/fexit trampolines stall livepatch transition due to missing ORC unwind metadata
  2025-11-24 22:51       ` Josh Poimboeuf
@ 2025-11-24 22:54         ` Andrey Grodzovsky
  2025-11-25  0:06           ` Josh Poimboeuf
  0 siblings, 1 reply; 11+ messages in thread
From: Andrey Grodzovsky @ 2025-11-24 22:54 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Miroslav Benes, bpf, live-patching, DL Linux Open Source Team,
	Petr Mladek, Song Liu, andrii, Raja Khan

On 11/24/25 17:51, Josh Poimboeuf wrote:
> On Mon, Nov 24, 2025 at 05:06:04PM -0500, Andrey Grodzovsky wrote:
>>> Andrey, can you try this patch?
>>
>> Hey Josh, thank you for looking. Can you please advise which stable
>> kernel version you made these changes on top of, so I can apply them
>> cleanly? Alternatively, just provide a git commit SHA in Linus's
>> tree that I can reset my branch to.
>>
>>
>> I will happily test this as soon as I can and report back.
> 
> It's based on Linus's tree.
> 

Latest, more or less?

Andrey


* Re: [External] Re: BPF fentry/fexit trampolines stall livepatch transition due to missing ORC unwind metadata
  2025-11-24 22:54         ` Andrey Grodzovsky
@ 2025-11-25  0:06           ` Josh Poimboeuf
  2025-11-27 14:55             ` Andrey Grodzovsky
  0 siblings, 1 reply; 11+ messages in thread
From: Josh Poimboeuf @ 2025-11-25  0:06 UTC (permalink / raw)
  To: Andrey Grodzovsky
  Cc: Miroslav Benes, bpf, live-patching, DL Linux Open Source Team,
	Petr Mladek, Song Liu, andrii, Raja Khan

On Mon, Nov 24, 2025 at 05:54:15PM -0500, Andrey Grodzovsky wrote:
> On 11/24/25 17:51, Josh Poimboeuf wrote:
> > On Mon, Nov 24, 2025 at 05:06:04PM -0500, Andrey Grodzovsky wrote:
> > > > Andrey, can you try this patch?
> > > 
> > > Hey Josh, thank you for looking. Can you please advise which stable
> > > kernel version you made these changes on top of, so I can apply them
> > > cleanly? Alternatively, just provide a git commit SHA in Linus's
> > > tree that I can reset my branch to.
> > > 
> > > 
> > > I will happily test this as soon as I can and report back.
> > 
> > It's based on Linus's tree.
> > 
> 
> Latest, more or less?

Yes, it still applies to his latest master (v6.18-rc7).

-- 
Josh


* Re: [External] Re: BPF fentry/fexit trampolines stall livepatch transition due to missing ORC unwind metadata
  2025-11-25  0:06           ` Josh Poimboeuf
@ 2025-11-27 14:55             ` Andrey Grodzovsky
  2025-12-01 20:59               ` Josh Poimboeuf
  0 siblings, 1 reply; 11+ messages in thread
From: Andrey Grodzovsky @ 2025-11-27 14:55 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Miroslav Benes, bpf, live-patching, DL Linux Open Source Team,
	Petr Mladek, Song Liu, andrii, Raja Khan

On 11/24/25 19:06, Josh Poimboeuf wrote:
> On Mon, Nov 24, 2025 at 05:54:15PM -0500, Andrey Grodzovsky wrote:
>> On 11/24/25 17:51, Josh Poimboeuf wrote:
>>> On Mon, Nov 24, 2025 at 05:06:04PM -0500, Andrey Grodzovsky wrote:
>>>>> Andrey, can you try this patch?
>>>>
>>>> Hey Josh, thank you for looking. Can you please advise which stable
>>>> kernel version you made these changes on top of, so I can apply them
>>>> cleanly? Alternatively, just provide a git commit SHA in Linus's
>>>> tree that I can reset my branch to.
>>>>
>>>>
>>>> I will happily test this as soon as I can and report back.
>>>
>>> It's based on Linus's tree.
>>>
>>
>> Latest, more or less?
> 
> Yes, it still applies to his latest master (v6.18-rc7).
> 

Tested, looks good.

Kernel as above. dmesg output before applying the patch:

ubuntu-24-04@ubuntu-24-04:~/livepatch-bpf-test$ sudo insmod livepatches/livepatch_cmdline_test.ko
ubuntu-24-04@ubuntu-24-04:~/livepatch-bpf-test$ sudo dmesg -w
[  128.434944] livepatch_cmdline_test: loading out-of-tree module taints kernel.
[  128.434955] livepatch_cmdline_test: tainting kernel with TAINT_LIVEPATCH
[  128.434958] livepatch_cmdline_test: module verification failed: signature and/or required key missing - tainting kernel
[  128.435579] livepatch_cmdline_test: initializing
[  128.435640] livepatch: enabling patch 'livepatch_cmdline_test'
[  128.435643] livepatch: 'livepatch_cmdline_test': initializing patching transition
[  128.439051] livepatch: 'livepatch_cmdline_test': starting patching transition
[  128.439982] livepatch: klp_try_switch_task: tcp_recv_hang:1219 has an unreliable stack
[  128.440028] livepatch: klp_try_switch_task: swapper/0:0 is running
[  128.440041] livepatch: klp_try_switch_task: swapper/1:0 is running
[  128.440051] livepatch: klp_try_switch_task: swapper/3:0 is running
[  128.440060] livepatch: klp_try_switch_task: swapper/4:0 is running
[  128.440069] livepatch: klp_try_switch_task: swapper/5:0 is running
[  128.440078] livepatch: klp_try_switch_task: swapper/6:0 is running
[  128.440090] livepatch: klp_try_switch_task: swapper/7:0 is running
[  128.440099] livepatch: klp_try_switch_task: swapper/8:0 is running
[  128.440125] livepatch: klp_try_switch_task: swapper/9:0 is running
[  128.440138] livepatch: klp_try_switch_task: swapper/12:0 is running
[  128.440147] livepatch: klp_try_switch_task: swapper/13:0 is running
[  128.440161] livepatch_cmdline_test: patch enabled successfully
[  129.586202] livepatch: klp_try_switch_task: tcp_recv_hang:1219 has an unreliable stack
[  130.651011] livepatch: klp_try_switch_task: tcp_recv_hang:1219 has an unreliable stack
[  131.595098] livepatch: klp_try_switch_task: tcp_recv_hang:1219 has an unreliable stack
[  132.910147] livepatch: klp_try_switch_task: tcp_recv_hang:1219 has an unreliable stack
[  134.578055] livepatch: klp_try_switch_task: tcp_recv_hang:1219 has an unreliable stack
[  135.613832] livepatch: klp_try_switch_task: tcp_recv_hang:1219 has an unreliable stack
[  136.646585] livepatch: klp_try_switch_task: tcp_recv_hang:1219 has an unreliable stack
[  137.589959] livepatch: klp_try_switch_task: tcp_recv_hang:1219 has an unreliable stack
[  138.607729] livepatch: klp_try_switch_task: tcp_recv_hang:1219 has an unreliable stack
[  139.570125] livepatch: klp_try_switch_task: tcp_recv_hang:1219 has an unreliable stack
[  140.601715] livepatch: klp_try_switch_task: tcp_recv_hang:1219 has an unreliable stack
[  141.643745] livepatch: klp_try_switch_task: tcp_recv_hang:1219 has an unreliable stack
[  142.595307] livepatch: klp_try_switch_task: tcp_recv_hang:1219 has an unreliable stack
[  143.605116] livepatch: klp_try_switch_task: tcp_recv_hang:1219 has an unreliable stack
[  144.628517] livepatch: klp_try_switch_task: tcp_recv_hang:1219 has an unreliable stack
[  144.628544] livepatch: signaling remaining tasks
[  145.611813] livepatch: 'livepatch_cmdline_test': completing patching transition
[  145.612271] livepatch: 'livepatch_cmdline_test': patching complete

After applying the patch:

ubuntu-24-04@ubuntu-24-04:~/livepatch-bpf-test$ sudo insmod livepatches/livepatch_cmdline_test.ko
ubuntu-24-04@ubuntu-24-04:~/livepatch-bpf-test$ sudo dmesg -dw
[  270.168371 <    0.000000>] livepatch_cmdline_test: loading out-of-tree module taints kernel.
[  270.168386 <    0.000015>] livepatch_cmdline_test: tainting kernel with TAINT_LIVEPATCH
[  270.168389 <    0.000003>] livepatch_cmdline_test: module verification failed: signature and/or required key missing - tainting kernel
[  270.169202 <    0.000813>] livepatch_cmdline_test: initializing
[  270.169260 <    0.000058>] livepatch: enabling patch 'livepatch_cmdline_test'
[  270.169262 <    0.000002>] livepatch: 'livepatch_cmdline_test': initializing patching transition
[  270.171969 <    0.002707>] livepatch: 'livepatch_cmdline_test': starting patching transition
[  270.172892 <    0.000923>] livepatch: klp_try_switch_task: swapper/0:0 is running
[  270.172904 <    0.000012>] livepatch: klp_try_switch_task: swapper/2:0 is running
[  270.172912 <    0.000008>] livepatch: klp_try_switch_task: swapper/4:0 is running
[  270.172920 <    0.000008>] livepatch: klp_try_switch_task: swapper/5:0 is running
[  270.172927 <    0.000007>] livepatch: klp_try_switch_task: swapper/7:0 is running
[  270.172935 <    0.000008>] livepatch: klp_try_switch_task: swapper/8:0 is running
[  270.172942 <    0.000007>] livepatch: klp_try_switch_task: swapper/9:0 is running
[  270.172954 <    0.000012>] livepatch: klp_try_switch_task: swapper/10:0 is running
[  270.172959 <    0.000005>] livepatch: klp_try_switch_task: swapper/11:0 is running
[  270.172966 <    0.000007>] livepatch: klp_try_switch_task: swapper/13:0 is running
[  270.172971 <    0.000005>] livepatch_cmdline_test: patch enabled successfully
[  271.008394 <    0.835423>] livepatch: 'livepatch_cmdline_test': completing patching transition
[  271.009156 <    0.000762>] livepatch: 'livepatch_cmdline_test': patching complete


ubuntu-24-04@ubuntu-24-04:~/livepatch-bpf-test$ sudo cat /proc/kallsyms | grep cmdline_proc_show
ffffffffb4a55c20 t __pfx_cmdline_proc_show
ffffffffb4a55c30 t cmdline_proc_show
ffffffffc09300b0 t livepatch_cmdline_proc_show	[livepatch_cmdline_test]
ffffffffc09300a0 t __pfx_livepatch_cmdline_proc_show	[livepatch_cmdline_test]
ubuntu-24-04@ubuntu-24-04:~/livepatch-bpf-test$


Andrey



* Re: [External] Re: BPF fentry/fexit trampolines stall livepatch transition due to missing ORC unwind metadata
  2025-11-27 14:55             ` Andrey Grodzovsky
@ 2025-12-01 20:59               ` Josh Poimboeuf
  0 siblings, 0 replies; 11+ messages in thread
From: Josh Poimboeuf @ 2025-12-01 20:59 UTC (permalink / raw)
  To: Andrey Grodzovsky
  Cc: Miroslav Benes, bpf, live-patching, DL Linux Open Source Team,
	Petr Mladek, Song Liu, andrii, Raja Khan

On Thu, Nov 27, 2025 at 09:55:15AM -0500, Andrey Grodzovsky wrote:
> On 11/24/25 19:06, Josh Poimboeuf wrote:
> > On Mon, Nov 24, 2025 at 05:54:15PM -0500, Andrey Grodzovsky wrote:
> > > On 11/24/25 17:51, Josh Poimboeuf wrote:
> > > > On Mon, Nov 24, 2025 at 05:06:04PM -0500, Andrey Grodzovsky wrote:
> > > > > > Andrey, can you try this patch?
> > > > > 
> > > > > Hey Josh, thank you for looking. Can you please advise which stable
> > > > > kernel version you made these changes on top of, so I can apply them
> > > > > cleanly? Alternatively, just provide a git commit SHA in Linus's
> > > > > tree that I can reset my branch to.
> > > > > 
> > > > > 
> > > > > I will happily test this as soon as I can and report back.
> > > > 
> > > > It's based on Linus's tree.
> > > > 
> > > 
> > > Latest, more or less?
> > 
> > Yes, it still applies to his latest master (v6.18-rc7).
> > 
> 
> Tested, looks good.

Thanks, I'll post proper patches shortly.

-- 
Josh



Thread overview: 11+ messages
2025-11-19 15:41 BPF fentry/fexit trampolines stall livepatch transition due to missing ORC unwind metadata Andrey Grodzovsky
2025-11-20 12:15 ` Miroslav Benes
2025-11-22  0:56   ` Josh Poimboeuf
2025-11-24 17:14     ` Alexei Starovoitov
2025-11-24 19:51       ` Josh Poimboeuf
2025-11-24 22:06     ` [External] " Andrey Grodzovsky
2025-11-24 22:51       ` Josh Poimboeuf
2025-11-24 22:54         ` Andrey Grodzovsky
2025-11-25  0:06           ` Josh Poimboeuf
2025-11-27 14:55             ` Andrey Grodzovsky
2025-12-01 20:59               ` Josh Poimboeuf
