* [RFC PATCH bpf-next v3 0/2] Pass external callchain entry to get_perf_callchain
@ 2025-10-19 17:01 Tao Chen
  2025-10-19 17:01 ` [PATCH bpf-next v3 1/2] perf: Use extern perf_callchain_entry for get_perf_callchain Tao Chen
  2025-10-19 17:01 ` [PATCH bpf-next v3 2/2] bpf: Use per-cpu BPF callchain entry to save callchain Tao Chen
  0 siblings, 2 replies; 8+ messages in thread
From: Tao Chen @ 2025-10-19 17:01 UTC (permalink / raw)
  To: peterz, mingo, acme, namhyung, mark.rutland, alexander.shishkin,
	jolsa, irogers, adrian.hunter, kan.liang, song, ast, daniel,
	andrii, martin.lau, eddyz87, yonghong.song, john.fastabend,
	kpsingh, sdf, haoluo
  Cc: linux-perf-users, linux-kernel, bpf, Tao Chen

Background
==========
Alexei noted that we should use preempt_disable() to protect
get_perf_callchain() in the bpf stackmap code:
https://lore.kernel.org/bpf/CAADnVQ+s8B7-fvR1TNO-bniSyKv57cH_ihRszmZV7pQDyV=VDQ@mail.gmail.com

A previous patch was submitted attempting to fix this issue, and Andrii
suggested teaching get_perf_callchain() to accept a caller-supplied
buffer directly so that the extra copy can be avoided:
https://lore.kernel.org/bpf/20250926153952.1661146-1-chen.dylane@linux.dev

Proposed Solution
=================
Add an external perf_callchain_entry parameter to get_perf_callchain()
so that the BPF side can pass in its own buffer. The biggest advantage
is that it avoids an unnecessary copy.
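
For illustration, the intended call pattern from the BPF side looks
roughly like the sketch below (the wrapper and buffer names are made up
for this example, and the nesting handling added in patch 2 is omitted):

#include <linux/percpu.h>
#include <linux/perf_event.h>

/*
 * Fixed-size storage that can be cast to struct perf_callchain_entry,
 * whose ip[] is a flexible array (same trick as patch 2).
 */
struct bpf_perf_callchain_entry {
	u64 nr;
	u64 ip[PERF_MAX_STACK_DEPTH];
};

static DEFINE_PER_CPU(struct bpf_perf_callchain_entry, bpf_callchain_entry);

static struct perf_callchain_entry *
bpf_fetch_callchain(struct pt_regs *regs, bool kernel, bool user, u32 max_stack)
{
	struct bpf_perf_callchain_entry *entry = this_cpu_ptr(&bpf_callchain_entry);

	/*
	 * New second argument: a buffer owned by the caller (BPF), so
	 * perf's shared per-CPU entries are not borrowed at all.
	 */
	return get_perf_callchain(regs, (struct perf_callchain_entry *)entry,
				  kernel, user, max_stack,
				  false /* crosstask */, false /* add_mark */);
}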

Todo
====
If the above changes are reasonable, it seems that
get_callchain_entry_for_task() could also take an external
perf_callchain_entry.

I am not sure whether that modification is appropriate, though; the
get_callchain_entry() implementation in the perf subsystem is
considerably more involved than simply using an external buffer.
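
As a rough, hypothetical sketch (not part of this series), that Todo
would amount to something like the following; the 32-bit ip[] fixup of
the current implementation is omitted for brevity:

static struct perf_callchain_entry *
get_callchain_entry_for_task(struct perf_callchain_entry *external_entry,
			     struct task_struct *task, u32 max_depth)
{
#ifdef CONFIG_STACKTRACE
	/*
	 * Hypothetical: the caller hands in the buffer instead of taking
	 * one from perf's pool, so no get/put of perf entries is needed.
	 */
	struct perf_callchain_entry *entry = external_entry;

	if (!entry)
		return NULL;

	entry->nr = stack_trace_save_tsk(task, (unsigned long *)entry->ip,
					 max_depth, 0);
	return entry;
#else /* CONFIG_STACKTRACE */
	return NULL;
#endif
}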

Comments and suggestions are always welcome.

Change list:
 - v1 -> v2:
   From Jiri:
   - rebase code, fix conflicts
 - v1: https://lore.kernel.org/bpf/20251013174721.2681091-1-chen.dylane@linux.dev
 
 - v2 -> v3:
   From Andrii:
   - entries per CPU used in a stack-like fashion
 - v2: https://lore.kernel.org/bpf/20251014100128.2721104-1-chen.dylane@linux.dev

Tao Chen (2):
  perf: Use extern perf_callchain_entry for get_perf_callchain
  bpf: Use per-cpu BPF callchain entry to save callchain

 include/linux/perf_event.h |   4 +-
 kernel/bpf/stackmap.c      | 100 ++++++++++++++++++++++++++++---------
 kernel/events/callchain.c  |  13 +++--
 kernel/events/core.c       |   2 +-
 4 files changed, 88 insertions(+), 31 deletions(-)

-- 
2.48.1



* [PATCH bpf-next v3 1/2] perf: Use extern perf_callchain_entry for get_perf_callchain
  2025-10-19 17:01 [RFC PATCH bpf-next v3 0/2] Pass external callchain entry to get_perf_callchain Tao Chen
@ 2025-10-19 17:01 ` Tao Chen
  2025-10-20 11:40   ` Peter Zijlstra
  2025-10-19 17:01 ` [PATCH bpf-next v3 2/2] bpf: Use per-cpu BPF callchain entry to save callchain Tao Chen
  1 sibling, 1 reply; 8+ messages in thread
From: Tao Chen @ 2025-10-19 17:01 UTC (permalink / raw)
  To: peterz, mingo, acme, namhyung, mark.rutland, alexander.shishkin,
	jolsa, irogers, adrian.hunter, kan.liang, song, ast, daniel,
	andrii, martin.lau, eddyz87, yonghong.song, john.fastabend,
	kpsingh, sdf, haoluo
  Cc: linux-perf-users, linux-kernel, bpf, Tao Chen

From bpf stack map, we want to use our own buffers to avoid unnecessary
copy, so let us pass it directly. BPF will use this in the next patch.

Signed-off-by: Tao Chen <chen.dylane@linux.dev>
---
 include/linux/perf_event.h |  4 ++--
 kernel/bpf/stackmap.c      |  4 ++--
 kernel/events/callchain.c  | 13 +++++++++----
 kernel/events/core.c       |  2 +-
 4 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index fd1d91017b9..b144da7d803 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1719,8 +1719,8 @@ DECLARE_PER_CPU(struct perf_callchain_entry, perf_callchain_entry);
 extern void perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs);
 extern void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs);
 extern struct perf_callchain_entry *
-get_perf_callchain(struct pt_regs *regs, bool kernel, bool user,
-		   u32 max_stack, bool crosstask, bool add_mark);
+get_perf_callchain(struct pt_regs *regs, struct perf_callchain_entry *external_entry,
+		   bool kernel, bool user, u32 max_stack, bool crosstask, bool add_mark);
 extern int get_callchain_buffers(int max_stack);
 extern void put_callchain_buffers(void);
 extern struct perf_callchain_entry *get_callchain_entry(int *rctx);
diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index 4d53cdd1374..94e46b7f340 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -314,7 +314,7 @@ BPF_CALL_3(bpf_get_stackid, struct pt_regs *, regs, struct bpf_map *, map,
 	if (max_depth > sysctl_perf_event_max_stack)
 		max_depth = sysctl_perf_event_max_stack;
 
-	trace = get_perf_callchain(regs, kernel, user, max_depth,
+	trace = get_perf_callchain(regs, NULL, kernel, user, max_depth,
 				   false, false);
 
 	if (unlikely(!trace))
@@ -451,7 +451,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 	else if (kernel && task)
 		trace = get_callchain_entry_for_task(task, max_depth);
 	else
-		trace = get_perf_callchain(regs, kernel, user, max_depth,
+		trace = get_perf_callchain(regs, NULL, kernel, user, max_depth,
 					   crosstask, false);
 
 	if (unlikely(!trace) || trace->nr < skip) {
diff --git a/kernel/events/callchain.c b/kernel/events/callchain.c
index 808c0d7a31f..851e8f9d026 100644
--- a/kernel/events/callchain.c
+++ b/kernel/events/callchain.c
@@ -217,8 +217,8 @@ static void fixup_uretprobe_trampoline_entries(struct perf_callchain_entry *entr
 }
 
 struct perf_callchain_entry *
-get_perf_callchain(struct pt_regs *regs, bool kernel, bool user,
-		   u32 max_stack, bool crosstask, bool add_mark)
+get_perf_callchain(struct pt_regs *regs, struct perf_callchain_entry *external_entry,
+		   bool kernel, bool user, u32 max_stack, bool crosstask, bool add_mark)
 {
 	struct perf_callchain_entry *entry;
 	struct perf_callchain_entry_ctx ctx;
@@ -228,7 +228,11 @@ get_perf_callchain(struct pt_regs *regs, bool kernel, bool user,
 	if (crosstask && user && !kernel)
 		return NULL;
 
-	entry = get_callchain_entry(&rctx);
+	if (external_entry)
+		entry = external_entry;
+	else
+		entry = get_callchain_entry(&rctx);
+
 	if (!entry)
 		return NULL;
 
@@ -260,7 +264,8 @@ get_perf_callchain(struct pt_regs *regs, bool kernel, bool user,
 	}
 
 exit_put:
-	put_callchain_entry(rctx);
+	if (!external_entry)
+		put_callchain_entry(rctx);
 
 	return entry;
 }
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 7541f6f85fc..5d8e146003a 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -8217,7 +8217,7 @@ perf_callchain(struct perf_event *event, struct pt_regs *regs)
 	if (!kernel && !user)
 		return &__empty_callchain;
 
-	callchain = get_perf_callchain(regs, kernel, user,
+	callchain = get_perf_callchain(regs, NULL, kernel, user,
 				       max_stack, crosstask, true);
 	return callchain ?: &__empty_callchain;
 }
-- 
2.48.1



* [PATCH bpf-next v3 2/2] bpf: Use per-cpu BPF callchain entry to save callchain
  2025-10-19 17:01 [RFC PATCH bpf-next v3 0/2] Pass external callchain entry to get_perf_callchain Tao Chen
  2025-10-19 17:01 ` [PATCH bpf-next v3 1/2] perf: Use extern perf_callchain_entry for get_perf_callchain Tao Chen
@ 2025-10-19 17:01 ` Tao Chen
  2025-10-20  5:50   ` Tao Chen
  2025-10-20 11:03   ` Peter Zijlstra
  1 sibling, 2 replies; 8+ messages in thread
From: Tao Chen @ 2025-10-19 17:01 UTC (permalink / raw)
  To: peterz, mingo, acme, namhyung, mark.rutland, alexander.shishkin,
	jolsa, irogers, adrian.hunter, kan.liang, song, ast, daniel,
	andrii, martin.lau, eddyz87, yonghong.song, john.fastabend,
	kpsingh, sdf, haoluo
  Cc: linux-perf-users, linux-kernel, bpf, Tao Chen

As Alexei noted, get_perf_callchain() return values may be reused
if a task is preempted after the BPF program enters migrate disable
mode. Drawing on the per-cpu design of bpf_bprintf_buffers,
per-cpu BPF callchain entry is used here.

Signed-off-by: Tao Chen <chen.dylane@linux.dev>
---
 kernel/bpf/stackmap.c | 98 ++++++++++++++++++++++++++++++++-----------
 1 file changed, 74 insertions(+), 24 deletions(-)

diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index 94e46b7f340..3513077c57d 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -31,6 +31,52 @@ struct bpf_stack_map {
 	struct stack_map_bucket *buckets[] __counted_by(n_buckets);
 };
 
+struct bpf_perf_callchain_entry {
+	u64 nr;
+	u64 ip[PERF_MAX_STACK_DEPTH];
+};
+
+#define MAX_PERF_CALLCHAIN_PREEMPT 3
+static DEFINE_PER_CPU(struct bpf_perf_callchain_entry[MAX_PERF_CALLCHAIN_PREEMPT],
+		      bpf_perf_callchain_entries);
+static DEFINE_PER_CPU(int, bpf_perf_callchain_preempt_cnt);
+
+static int bpf_get_perf_callchain_or_entry(struct perf_callchain_entry **entry,
+					   struct pt_regs *regs, bool kernel,
+					   bool user, u32 max_stack, bool crosstack,
+					   bool add_mark, bool get_callchain)
+{
+	struct bpf_perf_callchain_entry *bpf_entry;
+	struct perf_callchain_entry *perf_entry;
+	int preempt_cnt;
+
+	preempt_cnt = this_cpu_inc_return(bpf_perf_callchain_preempt_cnt);
+	if (WARN_ON_ONCE(preempt_cnt > MAX_PERF_CALLCHAIN_PREEMPT)) {
+		this_cpu_dec(bpf_perf_callchain_preempt_cnt);
+		return -EBUSY;
+	}
+
+	bpf_entry = this_cpu_ptr(&bpf_perf_callchain_entries[preempt_cnt - 1]);
+	if (!get_callchain) {
+		*entry = (struct perf_callchain_entry *)bpf_entry;
+		return 0;
+	}
+
+	perf_entry = get_perf_callchain(regs, (struct perf_callchain_entry *)bpf_entry,
+					kernel, user, max_stack,
+					crosstack, add_mark);
+	*entry = perf_entry;
+
+	return 0;
+}
+
+static void bpf_put_perf_callchain(void)
+{
+	if (WARN_ON_ONCE(this_cpu_read(bpf_perf_callchain_preempt_cnt) == 0))
+		return;
+	this_cpu_dec(bpf_perf_callchain_preempt_cnt);
+}
+
 static inline bool stack_map_use_build_id(struct bpf_map *map)
 {
 	return (map->map_flags & BPF_F_STACK_BUILD_ID);
@@ -192,11 +238,11 @@ get_callchain_entry_for_task(struct task_struct *task, u32 max_depth)
 {
 #ifdef CONFIG_STACKTRACE
 	struct perf_callchain_entry *entry;
-	int rctx;
-
-	entry = get_callchain_entry(&rctx);
+	int ret;
 
-	if (!entry)
+	ret = bpf_get_perf_callchain_or_entry(&entry, NULL, false, false, 0, false, false,
+					      false);
+	if (ret)
 		return NULL;
 
 	entry->nr = stack_trace_save_tsk(task, (unsigned long *)entry->ip,
@@ -216,7 +262,7 @@ get_callchain_entry_for_task(struct task_struct *task, u32 max_depth)
 			to[i] = (u64)(from[i]);
 	}
 
-	put_callchain_entry(rctx);
+	bpf_put_perf_callchain();
 
 	return entry;
 #else /* CONFIG_STACKTRACE */
@@ -305,6 +351,7 @@ BPF_CALL_3(bpf_get_stackid, struct pt_regs *, regs, struct bpf_map *, map,
 	bool user = flags & BPF_F_USER_STACK;
 	struct perf_callchain_entry *trace;
 	bool kernel = !user;
+	int err;
 
 	if (unlikely(flags & ~(BPF_F_SKIP_FIELD_MASK | BPF_F_USER_STACK |
 			       BPF_F_FAST_STACK_CMP | BPF_F_REUSE_STACKID)))
@@ -314,14 +361,15 @@ BPF_CALL_3(bpf_get_stackid, struct pt_regs *, regs, struct bpf_map *, map,
 	if (max_depth > sysctl_perf_event_max_stack)
 		max_depth = sysctl_perf_event_max_stack;
 
-	trace = get_perf_callchain(regs, NULL, kernel, user, max_depth,
-				   false, false);
+	err = bpf_get_perf_callchain_or_entry(&trace, regs, kernel, user, max_depth,
+					      false, false, true);
+	if (err)
+		return err;
 
-	if (unlikely(!trace))
-		/* couldn't fetch the stack trace */
-		return -EFAULT;
+	err = __bpf_get_stackid(map, trace, flags);
+	bpf_put_perf_callchain();
 
-	return __bpf_get_stackid(map, trace, flags);
+	return err;
 }
 
 const struct bpf_func_proto bpf_get_stackid_proto = {
@@ -443,20 +491,23 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 	if (sysctl_perf_event_max_stack < max_depth)
 		max_depth = sysctl_perf_event_max_stack;
 
-	if (may_fault)
-		rcu_read_lock(); /* need RCU for perf's callchain below */
-
 	if (trace_in)
 		trace = trace_in;
-	else if (kernel && task)
+	else if (kernel && task) {
 		trace = get_callchain_entry_for_task(task, max_depth);
-	else
-		trace = get_perf_callchain(regs, NULL, kernel, user, max_depth,
-					   crosstask, false);
+	} else {
+		err = bpf_get_perf_callchain_or_entry(&trace, regs, kernel, user, max_depth,
+						      false, false, true);
+		if (err)
+			return err;
+	}
+
+	if (unlikely(!trace))
+		goto err_fault;
 
-	if (unlikely(!trace) || trace->nr < skip) {
-		if (may_fault)
-			rcu_read_unlock();
+	if (trace->nr < skip) {
+		if (!trace_in)
+			bpf_put_perf_callchain();
 		goto err_fault;
 	}
 
@@ -475,9 +526,8 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 		memcpy(buf, ips, copy_len);
 	}
 
-	/* trace/ips should not be dereferenced after this point */
-	if (may_fault)
-		rcu_read_unlock();
+	if (!trace_in)
+		bpf_put_perf_callchain();
 
 	if (user_build_id)
 		stack_map_get_build_id_offset(buf, trace_nr, user, may_fault);
-- 
2.48.1



* Re: [PATCH bpf-next v3 2/2] bpf: Use per-cpu BPF callchain entry to save callchain
  2025-10-19 17:01 ` [PATCH bpf-next v3 2/2] bpf: Use per-cpu BPF callchain entry to save callchain Tao Chen
@ 2025-10-20  5:50   ` Tao Chen
  2025-10-20 11:03   ` Peter Zijlstra
  1 sibling, 0 replies; 8+ messages in thread
From: Tao Chen @ 2025-10-20  5:50 UTC (permalink / raw)
  To: peterz, mingo, acme, namhyung, mark.rutland, alexander.shishkin,
	jolsa, irogers, adrian.hunter, kan.liang, song, ast, daniel,
	andrii, martin.lau, eddyz87, yonghong.song, john.fastabend,
	kpsingh, sdf, haoluo
  Cc: linux-perf-users, linux-kernel, bpf

On 2025/10/20 01:01, Tao Chen wrote:
> As Alexei noted, get_perf_callchain() return values may be reused
> if a task is preempted after the BPF program enters migrate disable
> mode. Drawing on the per-cpu design of bpf_bprintf_buffers,
> per-cpu BPF callchain entry is used here.
> 
> Signed-off-by: Tao Chen <chen.dylane@linux.dev>
> ---
>   kernel/bpf/stackmap.c | 98 ++++++++++++++++++++++++++++++++-----------
>   1 file changed, 74 insertions(+), 24 deletions(-)
> 
> diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
> index 94e46b7f340..3513077c57d 100644
> --- a/kernel/bpf/stackmap.c
> +++ b/kernel/bpf/stackmap.c
> @@ -31,6 +31,52 @@ struct bpf_stack_map {
>   	struct stack_map_bucket *buckets[] __counted_by(n_buckets);
>   };
>   
> +struct bpf_perf_callchain_entry {
> +	u64 nr;
> +	u64 ip[PERF_MAX_STACK_DEPTH];
> +};
> +
> +#define MAX_PERF_CALLCHAIN_PREEMPT 3
> +static DEFINE_PER_CPU(struct bpf_perf_callchain_entry[MAX_PERF_CALLCHAIN_PREEMPT],
> +		      bpf_perf_callchain_entries);
> +static DEFINE_PER_CPU(int, bpf_perf_callchain_preempt_cnt);
> +
> +static int bpf_get_perf_callchain_or_entry(struct perf_callchain_entry **entry,
> +					   struct pt_regs *regs, bool kernel,
> +					   bool user, u32 max_stack, bool crosstack,
> +					   bool add_mark, bool get_callchain)
> +{
> +	struct bpf_perf_callchain_entry *bpf_entry;
> +	struct perf_callchain_entry *perf_entry;
> +	int preempt_cnt;
> +
> +	preempt_cnt = this_cpu_inc_return(bpf_perf_callchain_preempt_cnt);
> +	if (WARN_ON_ONCE(preempt_cnt > MAX_PERF_CALLCHAIN_PREEMPT)) {
> +		this_cpu_dec(bpf_perf_callchain_preempt_cnt);
> +		return -EBUSY;
> +	}
> +
> +	bpf_entry = this_cpu_ptr(&bpf_perf_callchain_entries[preempt_cnt - 1]);
> +	if (!get_callchain) {
> +		*entry = (struct perf_callchain_entry *)bpf_entry;
> +		return 0;
> +	}
> +
> +	perf_entry = get_perf_callchain(regs, (struct perf_callchain_entry *)bpf_entry,
> +					kernel, user, max_stack,
> +					crosstack, add_mark);
> +	*entry = perf_entry;
> +
> +	return 0;
> +}
> +
> +static void bpf_put_perf_callchain(void)
> +{
> +	if (WARN_ON_ONCE(this_cpu_read(bpf_perf_callchain_preempt_cnt) == 0))
> +		return;
> +	this_cpu_dec(bpf_perf_callchain_preempt_cnt);
> +}
> +
>   static inline bool stack_map_use_build_id(struct bpf_map *map)
>   {
>   	return (map->map_flags & BPF_F_STACK_BUILD_ID);
> @@ -192,11 +238,11 @@ get_callchain_entry_for_task(struct task_struct *task, u32 max_depth)
>   {
>   #ifdef CONFIG_STACKTRACE
>   	struct perf_callchain_entry *entry;
> -	int rctx;
> -
> -	entry = get_callchain_entry(&rctx);
> +	int ret;
>   
> -	if (!entry)
> +	ret = bpf_get_perf_callchain_or_entry(&entry, NULL, false, false, 0, false, false,
> +					      false);
> +	if (ret)
>   		return NULL;
>   
>   	entry->nr = stack_trace_save_tsk(task, (unsigned long *)entry->ip,
> @@ -216,7 +262,7 @@ get_callchain_entry_for_task(struct task_struct *task, u32 max_depth)
>   			to[i] = (u64)(from[i]);
>   	}
>   
> -	put_callchain_entry(rctx);
> +	bpf_put_perf_callchain();

A double-put issue was reported here by AI review; I will fix it.

>   
>   	return entry;
>   #else /* CONFIG_STACKTRACE */
> @@ -305,6 +351,7 @@ BPF_CALL_3(bpf_get_stackid, struct pt_regs *, regs, struct bpf_map *, map,
>   	bool user = flags & BPF_F_USER_STACK;
>   	struct perf_callchain_entry *trace;
>   	bool kernel = !user;
> +	int err;
>   
>   	if (unlikely(flags & ~(BPF_F_SKIP_FIELD_MASK | BPF_F_USER_STACK |
>   			       BPF_F_FAST_STACK_CMP | BPF_F_REUSE_STACKID)))
> @@ -314,14 +361,15 @@ BPF_CALL_3(bpf_get_stackid, struct pt_regs *, regs, struct bpf_map *, map,
>   	if (max_depth > sysctl_perf_event_max_stack)
>   		max_depth = sysctl_perf_event_max_stack;
>   
> -	trace = get_perf_callchain(regs, NULL, kernel, user, max_depth,
> -				   false, false);
> +	err = bpf_get_perf_callchain_or_entry(&trace, regs, kernel, user, max_depth,
> +					      false, false, true);
> +	if (err)
> +		return err;
>   
> -	if (unlikely(!trace))
> -		/* couldn't fetch the stack trace */
> -		return -EFAULT;
> +	err = __bpf_get_stackid(map, trace, flags);
> +	bpf_put_perf_callchain();
>   
> -	return __bpf_get_stackid(map, trace, flags);
> +	return err;
>   }
>   
>   const struct bpf_func_proto bpf_get_stackid_proto = {
> @@ -443,20 +491,23 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
>   	if (sysctl_perf_event_max_stack < max_depth)
>   		max_depth = sysctl_perf_event_max_stack;
>   
> -	if (may_fault)
> -		rcu_read_lock(); /* need RCU for perf's callchain below */
> -
>   	if (trace_in)
>   		trace = trace_in;
> -	else if (kernel && task)
> +	else if (kernel && task) {
>   		trace = get_callchain_entry_for_task(task, max_depth);
> -	else
> -		trace = get_perf_callchain(regs, NULL, kernel, user, max_depth,
> -					   crosstask, false);
> +	} else {
> +		err = bpf_get_perf_callchain_or_entry(&trace, regs, kernel, user, max_depth,
> +						      false, false, true);
> +		if (err)
> +			return err;
> +	}
> +
> +	if (unlikely(!trace))
> +		goto err_fault;
>   
> -	if (unlikely(!trace) || trace->nr < skip) {
> -		if (may_fault)
> -			rcu_read_unlock();
> +	if (trace->nr < skip) {
> +		if (!trace_in)
> +			bpf_put_perf_callchain();
>   		goto err_fault;
>   	}
>   
> @@ -475,9 +526,8 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
>   		memcpy(buf, ips, copy_len);
>   	}
>   
> -	/* trace/ips should not be dereferenced after this point */
> -	if (may_fault)
> -		rcu_read_unlock();
> +	if (!trace_in)
> +		bpf_put_perf_callchain();
>   
>   	if (user_build_id)
>   		stack_map_get_build_id_offset(buf, trace_nr, user, may_fault);


-- 
Best Regards
Tao Chen


* Re: [PATCH bpf-next v3 2/2] bpf: Use per-cpu BPF callchain entry to save callchain
  2025-10-19 17:01 ` [PATCH bpf-next v3 2/2] bpf: Use per-cpu BPF callchain entry to save callchain Tao Chen
  2025-10-20  5:50   ` Tao Chen
@ 2025-10-20 11:03   ` Peter Zijlstra
  2025-10-22 15:59     ` Tao Chen
  1 sibling, 1 reply; 8+ messages in thread
From: Peter Zijlstra @ 2025-10-20 11:03 UTC (permalink / raw)
  To: Tao Chen
  Cc: mingo, acme, namhyung, mark.rutland, alexander.shishkin, jolsa,
	irogers, adrian.hunter, kan.liang, song, ast, daniel, andrii,
	martin.lau, eddyz87, yonghong.song, john.fastabend, kpsingh, sdf,
	haoluo, linux-perf-users, linux-kernel, bpf

On Mon, Oct 20, 2025 at 01:01:18AM +0800, Tao Chen wrote:
> As Alexei noted, get_perf_callchain() return values may be reused
> if a task is preempted after the BPF program enters migrate disable
> mode. Drawing on the per-cpu design of bpf_bprintf_buffers,
> per-cpu BPF callchain entry is used here.

And now you can only unwind 3 tasks, and then start failing. This is
acceptable, why?

> -	if (may_fault)
> -		rcu_read_lock(); /* need RCU for perf's callchain below */
> -

I know you propose to remove this code; but how was that correct? The
perf callchain code hard relies on non-preemptible context, RCU does not
imply such a thing.

>  	if (trace_in)
>  		trace = trace_in;
> -	else if (kernel && task)
>  		trace = get_callchain_entry_for_task(task, max_depth);
> -	else
> -		trace = get_perf_callchain(regs, NULL, kernel, user, max_depth,
> -					   crosstask, false);




* Re: [PATCH bpf-next v3 1/2] perf: Use extern perf_callchain_entry for get_perf_callchain
  2025-10-19 17:01 ` [PATCH bpf-next v3 1/2] perf: Use extern perf_callchain_entry for get_perf_callchain Tao Chen
@ 2025-10-20 11:40   ` Peter Zijlstra
  2025-10-22 15:14     ` Tao Chen
  0 siblings, 1 reply; 8+ messages in thread
From: Peter Zijlstra @ 2025-10-20 11:40 UTC (permalink / raw)
  To: Tao Chen
  Cc: mingo, acme, namhyung, mark.rutland, alexander.shishkin, jolsa,
	irogers, adrian.hunter, kan.liang, song, ast, daniel, andrii,
	martin.lau, eddyz87, yonghong.song, john.fastabend, kpsingh, sdf,
	haoluo, linux-perf-users, linux-kernel, bpf

On Mon, Oct 20, 2025 at 01:01:17AM +0800, Tao Chen wrote:
> From bpf stack map, we want to use our own buffers to avoid unnecessary
> copy, so let us pass it directly. BPF will use this in the next patch.
> 
> Signed-off-by: Tao Chen <chen.dylane@linux.dev>
> ---

> diff --git a/kernel/events/callchain.c b/kernel/events/callchain.c
> index 808c0d7a31f..851e8f9d026 100644
> --- a/kernel/events/callchain.c
> +++ b/kernel/events/callchain.c
> @@ -217,8 +217,8 @@ static void fixup_uretprobe_trampoline_entries(struct perf_callchain_entry *entr
>  }
>  
>  struct perf_callchain_entry *
> -get_perf_callchain(struct pt_regs *regs, bool kernel, bool user,
> -		   u32 max_stack, bool crosstask, bool add_mark)
> +get_perf_callchain(struct pt_regs *regs, struct perf_callchain_entry *external_entry,
> +		   bool kernel, bool user, u32 max_stack, bool crosstask, bool add_mark)
>  {
>  	struct perf_callchain_entry *entry;
>  	struct perf_callchain_entry_ctx ctx;
> @@ -228,7 +228,11 @@ get_perf_callchain(struct pt_regs *regs, bool kernel, bool user,
>  	if (crosstask && user && !kernel)
>  		return NULL;
>  
> -	entry = get_callchain_entry(&rctx);
> +	if (external_entry)
> +		entry = external_entry;
> +	else
> +		entry = get_callchain_entry(&rctx);
> +
>  	if (!entry)
>  		return NULL;
>  
> @@ -260,7 +264,8 @@ get_perf_callchain(struct pt_regs *regs, bool kernel, bool user,
>  	}
>  
>  exit_put:
> -	put_callchain_entry(rctx);
> +	if (!external_entry)
> +		put_callchain_entry(rctx);
>  
>  	return entry;
>  }

Urgh.. How about something like the below, and then you fix up
__bpf_get_stack() a little like this:


diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index 4d53cdd1374c..8b85b49cecf7 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -303,8 +303,8 @@ BPF_CALL_3(bpf_get_stackid, struct pt_regs *, regs, struct bpf_map *, map,
 	u32 max_depth = map->value_size / stack_map_data_size(map);
 	u32 skip = flags & BPF_F_SKIP_FIELD_MASK;
 	bool user = flags & BPF_F_USER_STACK;
+	struct perf_callchain_entry_ctx ctx;
 	struct perf_callchain_entry *trace;
-	bool kernel = !user;
 
 	if (unlikely(flags & ~(BPF_F_SKIP_FIELD_MASK | BPF_F_USER_STACK |
 			       BPF_F_FAST_STACK_CMP | BPF_F_REUSE_STACKID)))
@@ -314,8 +314,13 @@ BPF_CALL_3(bpf_get_stackid, struct pt_regs *, regs, struct bpf_map *, map,
 	if (max_depth > sysctl_perf_event_max_stack)
 		max_depth = sysctl_perf_event_max_stack;
 
-	trace = get_perf_callchain(regs, kernel, user, max_depth,
-				   false, false);
+	trace = your-stuff;
+
+	__init_perf_callchain_ctx(&ctx, trace, max_depth, false);
+	if (!user)
+		__get_perf_callchain_kernel(&ctx, regs);
+	else
+		__get_perf_callchain_user(&ctx, regs);
 
 	if (unlikely(!trace))
 		/* couldn't fetch the stack trace */



---
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index fd1d91017b99..14a382cad1dd 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -67,6 +67,7 @@ struct perf_callchain_entry_ctx {
 	u32				nr;
 	short				contexts;
 	bool				contexts_maxed;
+	bool				add_mark;
 };
 
 typedef unsigned long (*perf_copy_f)(void *dst, const void *src,
@@ -1718,9 +1719,17 @@ DECLARE_PER_CPU(struct perf_callchain_entry, perf_callchain_entry);
 
 extern void perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs);
 extern void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs);
+
+extern void __init_perf_callchain_ctx(struct perf_callchain_entry_ctx *ctx,
+				      struct perf_callchain_entry *entry,
+				      u32 max_stack, bool add_mark);
+
+extern void __get_perf_callchain_kernel(struct perf_callchain_entry_ctx *ctx, struct pt_regs *regs);
+extern void __get_perf_callchain_user(struct perf_callchain_entry_ctx *ctx, struct pt_regs *regs);
+
 extern struct perf_callchain_entry *
 get_perf_callchain(struct pt_regs *regs, bool kernel, bool user,
-		   u32 max_stack, bool crosstask, bool add_mark);
+		   u32 max_stack, bool crosstask);
 extern int get_callchain_buffers(int max_stack);
 extern void put_callchain_buffers(void);
 extern struct perf_callchain_entry *get_callchain_entry(int *rctx);
diff --git a/kernel/events/callchain.c b/kernel/events/callchain.c
index 808c0d7a31fa..edd76e3bb139 100644
--- a/kernel/events/callchain.c
+++ b/kernel/events/callchain.c
@@ -216,50 +216,70 @@ static void fixup_uretprobe_trampoline_entries(struct perf_callchain_entry *entr
 #endif
 }
 
+void __init_perf_callchain_ctx(struct perf_callchain_entry_ctx *ctx,
+			       struct perf_callchain_entry *entry,
+			       u32 max_stack, bool add_mark)
+
+{
+	ctx->entry		= entry;
+	ctx->max_stack		= max_stack;
+	ctx->nr			= entry->nr = 0;
+	ctx->contexts		= 0;
+	ctx->contexts_maxed	= false;
+	ctx->add_mark		= add_mark;
+}
+
+void __get_perf_callchain_kernel(struct perf_callchain_entry_ctx *ctx, struct pt_regs *regs)
+{
+	if (user_mode(regs))
+		return;
+
+	if (ctx->add_mark)
+		perf_callchain_store_context(ctx, PERF_CONTEXT_KERNEL);
+	perf_callchain_kernel(ctx, regs);
+}
+
+void __get_perf_callchain_user(struct perf_callchain_entry_ctx *ctx, struct pt_regs *regs)
+{
+	int start_entry_idx;
+
+	if (!user_mode(regs)) {
+		if (current->flags & (PF_KTHREAD | PF_USER_WORKER))
+			return;
+		regs = task_pt_regs(current);
+	}
+
+	if (ctx->add_mark)
+		perf_callchain_store_context(ctx, PERF_CONTEXT_USER);
+
+	start_entry_idx = ctx->entry->nr;
+	perf_callchain_user(ctx, regs);
+	fixup_uretprobe_trampoline_entries(ctx->entry, start_entry_idx);
+}
+
 struct perf_callchain_entry *
 get_perf_callchain(struct pt_regs *regs, bool kernel, bool user,
-		   u32 max_stack, bool crosstask, bool add_mark)
+		   u32 max_stack, bool crosstask)
 {
-	struct perf_callchain_entry *entry;
 	struct perf_callchain_entry_ctx ctx;
-	int rctx, start_entry_idx;
+	struct perf_callchain_entry *entry;
+	int rctx;
 
 	/* crosstask is not supported for user stacks */
 	if (crosstask && user && !kernel)
 		return NULL;
 
-	entry = get_callchain_entry(&rctx);
+	entry = get_callchain_entry(&rctx);
 	if (!entry)
 		return NULL;
 
-	ctx.entry		= entry;
-	ctx.max_stack		= max_stack;
-	ctx.nr			= entry->nr = 0;
-	ctx.contexts		= 0;
-	ctx.contexts_maxed	= false;
+	__init_perf_callchain_ctx(&ctx, entry, max_stack, true);
 
-	if (kernel && !user_mode(regs)) {
-		if (add_mark)
-			perf_callchain_store_context(&ctx, PERF_CONTEXT_KERNEL);
-		perf_callchain_kernel(&ctx, regs);
-	}
+	if (kernel)
+		__get_perf_callchain_kernel(&ctx, regs);
+	if (user && !crosstask)
+		__get_perf_callchain_user(&ctx, regs);
 
-	if (user && !crosstask) {
-		if (!user_mode(regs)) {
-			if (current->flags & (PF_KTHREAD | PF_USER_WORKER))
-				goto exit_put;
-			regs = task_pt_regs(current);
-		}
-
-		if (add_mark)
-			perf_callchain_store_context(&ctx, PERF_CONTEXT_USER);
-
-		start_entry_idx = entry->nr;
-		perf_callchain_user(&ctx, regs);
-		fixup_uretprobe_trampoline_entries(entry, start_entry_idx);
-	}
-
-exit_put:
 	put_callchain_entry(rctx);
 
 	return entry;
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 177e57c1a362..cbe073d761a8 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -8218,7 +8218,7 @@ perf_callchain(struct perf_event *event, struct pt_regs *regs)
 		return &__empty_callchain;
 
 	callchain = get_perf_callchain(regs, kernel, user,
-				       max_stack, crosstask, true);
+				       max_stack, crosstask);
 	return callchain ?: &__empty_callchain;
 }
 


* Re: [PATCH bpf-next v3 1/2] perf: Use extern perf_callchain_entry for get_perf_callchain
  2025-10-20 11:40   ` Peter Zijlstra
@ 2025-10-22 15:14     ` Tao Chen
  0 siblings, 0 replies; 8+ messages in thread
From: Tao Chen @ 2025-10-22 15:14 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, acme, namhyung, mark.rutland, alexander.shishkin, jolsa,
	irogers, adrian.hunter, kan.liang, song, ast, daniel, andrii,
	martin.lau, eddyz87, yonghong.song, john.fastabend, kpsingh, sdf,
	haoluo, linux-perf-users, linux-kernel, bpf

On 2025/10/20 19:40, Peter Zijlstra wrote:

Hi, Peter

> On Mon, Oct 20, 2025 at 01:01:17AM +0800, Tao Chen wrote:
>>  From bpf stack map, we want to use our own buffers to avoid unnecessary
>> copy, so let us pass it directly. BPF will use this in the next patch.
>>
>> Signed-off-by: Tao Chen <chen.dylane@linux.dev>
>> ---
> 
>> diff --git a/kernel/events/callchain.c b/kernel/events/callchain.c
>> index 808c0d7a31f..851e8f9d026 100644
>> --- a/kernel/events/callchain.c
>> +++ b/kernel/events/callchain.c
>> @@ -217,8 +217,8 @@ static void fixup_uretprobe_trampoline_entries(struct perf_callchain_entry *entr
>>   }
>>   
>>   struct perf_callchain_entry *
>> -get_perf_callchain(struct pt_regs *regs, bool kernel, bool user,
>> -		   u32 max_stack, bool crosstask, bool add_mark)
>> +get_perf_callchain(struct pt_regs *regs, struct perf_callchain_entry *external_entry,
>> +		   bool kernel, bool user, u32 max_stack, bool crosstask, bool add_mark)
>>   {
>>   	struct perf_callchain_entry *entry;
>>   	struct perf_callchain_entry_ctx ctx;
>> @@ -228,7 +228,11 @@ get_perf_callchain(struct pt_regs *regs, bool kernel, bool user,
>>   	if (crosstask && user && !kernel)
>>   		return NULL;
>>   
>> -	entry = get_callchain_entry(&rctx);
>> +	if (external_entry)
>> +		entry = external_entry;
>> +	else
>> +		entry = get_callchain_entry(&rctx);
>> +
>>   	if (!entry)
>>   		return NULL;
>>   
>> @@ -260,7 +264,8 @@ get_perf_callchain(struct pt_regs *regs, bool kernel, bool user,
>>   	}
>>   
>>   exit_put:
>> -	put_callchain_entry(rctx);
>> +	if (!external_entry)
>> +		put_callchain_entry(rctx);
>>   
>>   	return entry;
>>   }
> 
> Urgh.. How about something like the below, and then you fix up
> __bpf_get_stack() a little like this:
> 

Your solution seems more scalable; I will develop based on yours.
Thanks a lot.

> 
> diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
> index 4d53cdd1374c..8b85b49cecf7 100644
> --- a/kernel/bpf/stackmap.c
> +++ b/kernel/bpf/stackmap.c
> @@ -303,8 +303,8 @@ BPF_CALL_3(bpf_get_stackid, struct pt_regs *, regs, struct bpf_map *, map,
>   	u32 max_depth = map->value_size / stack_map_data_size(map);
>   	u32 skip = flags & BPF_F_SKIP_FIELD_MASK;
>   	bool user = flags & BPF_F_USER_STACK;
> +	struct perf_callchain_entry_ctx ctx;
>   	struct perf_callchain_entry *trace;
> -	bool kernel = !user;
>   
>   	if (unlikely(flags & ~(BPF_F_SKIP_FIELD_MASK | BPF_F_USER_STACK |
>   			       BPF_F_FAST_STACK_CMP | BPF_F_REUSE_STACKID)))
> @@ -314,8 +314,13 @@ BPF_CALL_3(bpf_get_stackid, struct pt_regs *, regs, struct bpf_map *, map,
>   	if (max_depth > sysctl_perf_event_max_stack)
>   		max_depth = sysctl_perf_event_max_stack;
>   
> -	trace = get_perf_callchain(regs, kernel, user, max_depth,
> -				   false, false);
> +	trace = your-stuff;
> +
> +	__init_perf_callchain_ctx(&ctx, trace, max_depth, false);
> +	if (!user)
> +		__get_perf_callchain_kernel(&ctx, regs);
> +	else
> +		__get_perf_callchain_user(&ctx, regs);
>   
>   	if (unlikely(!trace))
>   		/* couldn't fetch the stack trace */
> 
> 
> 
> ---
> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> index fd1d91017b99..14a382cad1dd 100644
> --- a/include/linux/perf_event.h
> +++ b/include/linux/perf_event.h
> @@ -67,6 +67,7 @@ struct perf_callchain_entry_ctx {
>   	u32				nr;
>   	short				contexts;
>   	bool				contexts_maxed;
> +	bool				add_mark;
>   };
>   
>   typedef unsigned long (*perf_copy_f)(void *dst, const void *src,
> @@ -1718,9 +1719,17 @@ DECLARE_PER_CPU(struct perf_callchain_entry, perf_callchain_entry);
>   
>   extern void perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs);
>   extern void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs);
> +
> +extern void __init_perf_callchain_ctx(struct perf_callchain_entry_ctx *ctx,
> +				      struct perf_callchain_entry *entry,
> +				      u32 max_stack, bool add_mark);
> +
> +extern void __get_perf_callchain_kernel(struct perf_callchain_entry_ctx *ctx, struct pt_regs *regs);
> +extern void __get_perf_callchain_user(struct perf_callchain_entry_ctx *ctx, struct pt_regs *regs);
> +
>   extern struct perf_callchain_entry *
>   get_perf_callchain(struct pt_regs *regs, bool kernel, bool user,
> -		   u32 max_stack, bool crosstask, bool add_mark);
> +		   u32 max_stack, bool crosstask);
>   extern int get_callchain_buffers(int max_stack);
>   extern void put_callchain_buffers(void);
>   extern struct perf_callchain_entry *get_callchain_entry(int *rctx);
> diff --git a/kernel/events/callchain.c b/kernel/events/callchain.c
> index 808c0d7a31fa..edd76e3bb139 100644
> --- a/kernel/events/callchain.c
> +++ b/kernel/events/callchain.c
> @@ -216,50 +216,70 @@ static void fixup_uretprobe_trampoline_entries(struct perf_callchain_entry *entr
>   #endif
>   }
>   
> +void __init_perf_callchain_ctx(struct perf_callchain_entry_ctx *ctx,
> +			       struct perf_callchain_entry *entry,
> +			       u32 max_stack, bool add_mark)
> +
> +{
> +	ctx->entry		= entry;
> +	ctx->max_stack		= max_stack;
> +	ctx->nr			= entry->nr = 0;
> +	ctx->contexts		= 0;
> +	ctx->contexts_maxed	= false;
> +	ctx->add_mark		= add_mark;
> +}
> +
> +void __get_perf_callchain_kernel(struct perf_callchain_entry_ctx *ctx, struct pt_regs *regs)
> +{
> +	if (user_mode(regs))
> +		return;
> +
> +	if (ctx->add_mark)
> +		perf_callchain_store_context(ctx, PERF_CONTEXT_KERNEL);
> +	perf_callchain_kernel(ctx, regs);
> +}
> +
> +void __get_perf_callchain_user(struct perf_callchain_entry_ctx *ctx, struct pt_regs *regs)
> +{
> +	int start_entry_idx;
> +
> +	if (!user_mode(regs)) {
> +		if (current->flags & (PF_KTHREAD | PF_USER_WORKER))
> +			return;
> +		regs = task_pt_regs(current);
> +	}
> +
> +	if (ctx->add_mark)
> +		perf_callchain_store_context(ctx, PERF_CONTEXT_USER);
> +
> +	start_entry_idx = ctx->entry->nr;
> +	perf_callchain_user(ctx, regs);
> +	fixup_uretprobe_trampoline_entries(ctx->entry, start_entry_idx);
> +}
> +
>   struct perf_callchain_entry *
>   get_perf_callchain(struct pt_regs *regs, bool kernel, bool user,
> -		   u32 max_stack, bool crosstask, bool add_mark)
> +		   u32 max_stack, bool crosstask)
>   {
> -	struct perf_callchain_entry *entry;
>   	struct perf_callchain_entry_ctx ctx;
> -	int rctx, start_entry_idx;
> +	struct perf_callchain_entry *entry;
> +	int rctx;
>   
>   	/* crosstask is not supported for user stacks */
>   	if (crosstask && user && !kernel)
>   		return NULL;
>   
> -	entry = get_callchain_entry(&rctx);
> +	entry = get_callchain_entry(&rctx);
>   	if (!entry)
>   		return NULL;
>   
> -	ctx.entry		= entry;
> -	ctx.max_stack		= max_stack;
> -	ctx.nr			= entry->nr = 0;
> -	ctx.contexts		= 0;
> -	ctx.contexts_maxed	= false;
> +	__init_perf_callchain_ctx(&ctx, entry, max_stack, true);
>   
> -	if (kernel && !user_mode(regs)) {
> -		if (add_mark)
> -			perf_callchain_store_context(&ctx, PERF_CONTEXT_KERNEL);
> -		perf_callchain_kernel(&ctx, regs);
> -	}
> +	if (kernel)
> +		__get_perf_callchain_kernel(&ctx, regs);
> +	if (user && !crosstask)
> +		__get_perf_callchain_user(&ctx, regs);
>   
> -	if (user && !crosstask) {
> -		if (!user_mode(regs)) {
> -			if (current->flags & (PF_KTHREAD | PF_USER_WORKER))
> -				goto exit_put;
> -			regs = task_pt_regs(current);
> -		}
> -
> -		if (add_mark)
> -			perf_callchain_store_context(&ctx, PERF_CONTEXT_USER);
> -
> -		start_entry_idx = entry->nr;
> -		perf_callchain_user(&ctx, regs);
> -		fixup_uretprobe_trampoline_entries(entry, start_entry_idx);
> -	}
> -
> -exit_put:
>   	put_callchain_entry(rctx);
>   
>   	return entry;
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 177e57c1a362..cbe073d761a8 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -8218,7 +8218,7 @@ perf_callchain(struct perf_event *event, struct pt_regs *regs)
>   		return &__empty_callchain;
>   
>   	callchain = get_perf_callchain(regs, kernel, user,
> -				       max_stack, crosstask, true);
> +				       max_stack, crosstask);
>   	return callchain ?: &__empty_callchain;
>   }
>   


-- 
Best Regards
Tao Chen


* Re: [PATCH bpf-next v3 2/2] bpf: Use per-cpu BPF callchain entry to save callchain
  2025-10-20 11:03   ` Peter Zijlstra
@ 2025-10-22 15:59     ` Tao Chen
  0 siblings, 0 replies; 8+ messages in thread
From: Tao Chen @ 2025-10-22 15:59 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, acme, namhyung, mark.rutland, alexander.shishkin, jolsa,
	irogers, adrian.hunter, kan.liang, song, ast, daniel, andrii,
	martin.lau, eddyz87, yonghong.song, john.fastabend, kpsingh, sdf,
	haoluo, linux-perf-users, linux-kernel, bpf

On 2025/10/20 19:03, Peter Zijlstra wrote:
> On Mon, Oct 20, 2025 at 01:01:18AM +0800, Tao Chen wrote:
>> As Alexei noted, get_perf_callchain() return values may be reused
>> if a task is preempted after the BPF program enters migrate disable
>> mode. Drawing on the per-cpu design of bpf_bprintf_buffers,
>> per-cpu BPF callchain entry is used here.
> 
> And now you can only unwind 3 tasks, and then start failing. This is
> acceptable, why?

Yes, it is acceptable. Using per-CPU BPF callchain entries, as
bpf_bprintf_buffers does, is a proposal from Andrii and Alexei.
In my understanding, isn't being preempted three times in a row on the
same CPU a low-probability event?
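
For reference, the per-CPU slots added in patch 2 are used in a
stack-like way; a caller does roughly the following (illustrative
fragment, with regs/kernel/user/max_depth as in __bpf_get_stack):

	struct perf_callchain_entry *trace;
	int err;

	/* this_cpu_inc_return() hands out entries[0..2]; a 4th nested
	 * user on the same CPU gets -EBUSY instead of overwriting a
	 * buffer that is still live.
	 */
	err = bpf_get_perf_callchain_or_entry(&trace, regs, kernel, user,
					      max_depth, false, false, true);
	if (err)
		return err;

	/* ... consume trace while still owning the slot ... */

	bpf_put_perf_callchain();	/* this_cpu_dec(), releases the slot */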

> 
>> -	if (may_fault)
>> -		rcu_read_lock(); /* need RCU for perf's callchain below */
>> -
> 
> I know you propose to remove this code; but how was that correct? The
> perf callchain code hard relies on non-preemptible context, RCU does not
> imply such a thing.
>

Alexei mentioned this RCU-lock issue before; it seems we need
preemption protection:
https://lore.kernel.org/bpf/CAADnVQ+s8B7-fvR1TNO-bniSyKv57cH_ihRszmZV7pQDyV=VDQ@mail.gmail.com
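
Concretely, that kind of protection would look roughly like the
fragment below (sketch only, with buf being the caller's destination
buffer and bounds/skip handling omitted; this series instead gives each
nesting level its own per-CPU buffer):

	preempt_disable();
	trace = get_perf_callchain(regs, NULL, kernel, user, max_depth,
				   false, false);
	if (trace) {
		/* trace points at perf's shared per-CPU entry, so its
		 * contents must be copied out before preemption is
		 * re-enabled.
		 */
		memcpy(buf, trace->ip, trace->nr * sizeof(u64));
	}
	preempt_enable();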

>>   	if (trace_in)
>>   		trace = trace_in;
>> -	else if (kernel && task)
>>   		trace = get_callchain_entry_for_task(task, max_depth);
>> -	else
>> -		trace = get_perf_callchain(regs, NULL, kernel, user, max_depth,
>> -					   crosstask, false);
> 
> 


-- 
Best Regards
Tao Chen

