* [PATCH bpf-next v2 1/2] bpf: Add preempt disable for bpf_get_stack
@ 2026-02-06  9:06 Tao Chen
From: Tao Chen @ 2026-02-06  9:06 UTC (permalink / raw)
  To: song, jolsa, ast, daniel, andrii, martin.lau, eddyz87,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo
  Cc: bpf, linux-kernel, Tao Chen

The entry returned by get_perf_callchain() lives in a per-CPU buffer and
may be reused if the task is preempted, since a BPF program runs with only
migration disabled, so add preempt_disable() around the callchain access.
As Andrii suggested, BPF can already guarantee that the perf callchain
buffers themselves are not released while in use: for bpf_get_stackid(),
the BPF stack map keeps them alive by delaying put_callchain_buffers()
until the map is freed, and for bpf_get_stack()/bpf_get_task_stack(), the
BPF program itself keeps them alive, since program freeing is delayed
until after an RCU Tasks Trace plus regular RCU grace period. The
rcu_read_lock() in the may_fault path is therefore no longer needed.
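
To make the race concrete, here is a minimal kernel-style sketch of the
pattern this patch establishes (illustration only, not part of the patch:
copy_callchain() is a hypothetical helper, and the exact argument list of
get_perf_callchain() varies between kernel versions). The returned pointer
refers to a per-CPU callchain entry, so it is only stable while preemption
is disabled; a task preempting us on the same CPU could otherwise
overwrite the entry mid-copy.

/*
 * Hypothetical sketch, assuming get_perf_callchain() returns a pointer
 * into a per-CPU perf_callchain_entry that is only stable while
 * preemption is disabled. The caller provides buf with room for
 * max_depth entries.
 */
static long copy_callchain(struct pt_regs *regs, u64 *buf, u32 max_depth,
			   bool kernel, bool user)
{
	struct perf_callchain_entry *trace;
	u32 nr;

	preempt_disable();	/* pin this CPU's callchain entry */

	/* argument list abbreviated; it differs across kernel versions */
	trace = get_perf_callchain(regs, kernel, user, max_depth,
				   false, false);
	if (unlikely(!trace)) {
		preempt_enable();
		return -EFAULT;
	}

	nr = trace->nr;
	memcpy(buf, trace->ip, nr * sizeof(trace->ip[0]));

	/* trace must not be dereferenced past this point */
	preempt_enable();

	return nr;
}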

Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Tao Chen <chen.dylane@linux.dev>
---

Change list:
 - v1 -> v2
   - add preempt_disable for bpf_get_stack in patch1
   - add patch2
 - v1: https://lore.kernel.org/bpf/20260128165710.928294-1-chen.dylane@linux.dev

 kernel/bpf/stackmap.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index da3d328f5c1..1b100a03ef2 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -460,8 +460,8 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 
 	max_depth = stack_map_calculate_max_depth(size, elem_size, flags);
 
-	if (may_fault)
-		rcu_read_lock(); /* need RCU for perf's callchain below */
+	if (!trace_in)
+		preempt_disable();
 
 	if (trace_in) {
 		trace = trace_in;
@@ -474,8 +474,8 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 	}
 
 	if (unlikely(!trace) || trace->nr < skip) {
-		if (may_fault)
-			rcu_read_unlock();
+		if (!trace_in)
+			preempt_enable();
 		goto err_fault;
 	}
 
@@ -493,9 +493,8 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 		memcpy(buf, ips, copy_len);
 	}
 
-	/* trace/ips should not be dereferenced after this point */
-	if (may_fault)
-		rcu_read_unlock();
+	if (!trace_in)
+		preempt_enable();
 
 	if (user_build_id)
 		stack_map_get_build_id_offset(buf, trace_nr, user, may_fault);
-- 
2.48.1


