From: Tao Chen <chen.dylane@linux.dev>
To: peterz@infradead.org, mingo@redhat.com, acme@kernel.org,
	namhyung@kernel.org, mark.rutland@arm.com,
	alexander.shishkin@linux.intel.com, jolsa@kernel.org,
	irogers@google.com, adrian.hunter@intel.com,
	kan.liang@linux.intel.com
Cc: linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org,
	bpf@vger.kernel.org, Tao Chen <chen.dylane@linux.dev>
Subject: [PATCH bpf-next v6 2/2] bpf: Hold the perf callchain entry until used completely
Date: Thu, 13 Nov 2025 00:31:48 +0800
Message-ID: <20251112163148.100949-3-chen.dylane@linux.dev>
In-Reply-To: <20251112163148.100949-1-chen.dylane@linux.dev>
References: <20251112163148.100949-1-chen.dylane@linux.dev>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As Alexei noted, the entry returned by get_perf_callchain() may be
reused if a task is preempted after the BPF program enters the
migrate-disabled state. perf_callchain_entries holds only a small
stack of entries, so hold an entry until BPF has used it completely:

1. get the perf callchain entry
2. BPF use...
3. put the perf callchain entry

And Peter suggested that get_recursion_context() be used with
preemption disabled, so disable preemption on the BPF side.
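To make the intended lifetime explicit, here is a minimal sketch of the
get/use/put pattern (condensed from the helpers added below; error
handling abbreviated, and the comments are illustrative rather than
taken from the patch itself):

	int rctx;
	struct perf_callchain_entry *entry;

	/*
	 * get_recursion_context(), called from get_callchain_entry(),
	 * assumes the CPU cannot change underneath it, so reserve the
	 * per-CPU entry with preemption disabled.
	 */
	preempt_disable();
	entry = get_callchain_entry(&rctx);
	preempt_enable();
	if (!entry)
		return -EFAULT;

	/*
	 * ... BPF uses the entry here. Because the recursion slot stays
	 * reserved, a task that preempts us cannot be handed this same
	 * entry (its get_callchain_entry() simply fails) ...
	 */

	put_callchain_entry(rctx);	/* release only when fully consumed */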
Signed-off-by: Tao Chen <chen.dylane@linux.dev>
---
 kernel/bpf/stackmap.c | 67 +++++++++++++++++++++++++++++++++++--------
 1 file changed, 55 insertions(+), 12 deletions(-)

diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index 2365541c81d..64ace4ed50e 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -210,13 +210,14 @@ static void stack_map_get_build_id_offset(struct bpf_stack_build_id *id_offs,
 }
 
 static struct perf_callchain_entry *
-get_callchain_entry_for_task(struct task_struct *task, u32 max_depth)
+get_callchain_entry_for_task(int *rctx, struct task_struct *task, u32 max_depth)
 {
 #ifdef CONFIG_STACKTRACE
 	struct perf_callchain_entry *entry;
-	int rctx;
 
-	entry = get_callchain_entry(&rctx);
+	preempt_disable();
+	entry = get_callchain_entry(rctx);
+	preempt_enable();
 	if (!entry)
 		return NULL;
 
@@ -238,8 +239,6 @@ get_callchain_entry_for_task(struct task_struct *task, u32 max_depth)
 			to[i] = (u64)(from[i]);
 	}
 
-	put_callchain_entry(rctx);
-
 	return entry;
 #else /* CONFIG_STACKTRACE */
 	return NULL;
@@ -320,6 +319,34 @@ static long __bpf_get_stackid(struct bpf_map *map,
 	return id;
 }
 
+static struct perf_callchain_entry *
+bpf_get_perf_callchain(int *rctx, struct pt_regs *regs, bool kernel, bool user,
+		       int max_stack, bool crosstask)
+{
+	struct perf_callchain_entry_ctx ctx;
+	struct perf_callchain_entry *entry;
+
+	preempt_disable();
+	entry = get_callchain_entry(rctx);
+	preempt_enable();
+
+	if (unlikely(!entry))
+		return NULL;
+
+	__init_perf_callchain_ctx(&ctx, entry, max_stack, false);
+	if (kernel)
+		__get_perf_callchain_kernel(&ctx, regs);
+	if (user && !crosstask)
+		__get_perf_callchain_user(&ctx, regs);
+
+	return entry;
+}
+
+static void bpf_put_perf_callchain(int rctx)
+{
+	put_callchain_entry(rctx);
+}
+
 BPF_CALL_3(bpf_get_stackid, struct pt_regs *, regs, struct bpf_map *, map,
 	   u64, flags)
 {
@@ -328,20 +355,24 @@ BPF_CALL_3(bpf_get_stackid, struct pt_regs *, regs, struct bpf_map *, map,
 	struct perf_callchain_entry *trace;
 	bool kernel = !user;
 	u32 max_depth;
+	int rctx, ret;
 
 	if (unlikely(flags & ~(BPF_F_SKIP_FIELD_MASK | BPF_F_USER_STACK |
 			       BPF_F_FAST_STACK_CMP | BPF_F_REUSE_STACKID)))
 		return -EINVAL;
 
 	max_depth = stack_map_calculate_max_depth(map->value_size, elem_size, flags);
-	trace = get_perf_callchain(regs, kernel, user, max_depth,
-				   false, false);
+	trace = bpf_get_perf_callchain(&rctx, regs, kernel, user, max_depth,
+				       false);
 
 	if (unlikely(!trace))
 		/* couldn't fetch the stack trace */
 		return -EFAULT;
 
-	return __bpf_get_stackid(map, trace, flags);
+	ret = __bpf_get_stackid(map, trace, flags);
+	bpf_put_perf_callchain(rctx);
+
+	return ret;
 }
 
 const struct bpf_func_proto bpf_get_stackid_proto = {
@@ -435,6 +466,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 	bool kernel = !user;
 	int err = -EINVAL;
 	u64 *ips;
+	int rctx;
 
 	if (unlikely(flags & ~(BPF_F_SKIP_FIELD_MASK | BPF_F_USER_STACK |
 			       BPF_F_USER_BUILD_ID)))
@@ -467,18 +499,26 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 		trace = trace_in;
 		trace->nr = min_t(u32, trace->nr, max_depth);
 	} else if (kernel && task) {
-		trace = get_callchain_entry_for_task(task, max_depth);
+		trace = get_callchain_entry_for_task(&rctx, task, max_depth);
 	} else {
-		trace = get_perf_callchain(regs, kernel, user, max_depth,
-					   crosstask, false);
+		trace = bpf_get_perf_callchain(&rctx, regs, kernel, user, max_depth,
+					       crosstask);
 	}
 
-	if (unlikely(!trace) || trace->nr < skip) {
+	if (unlikely(!trace)) {
 		if (may_fault)
 			rcu_read_unlock();
 		goto err_fault;
 	}
 
+	if (trace->nr < skip) {
+		if (may_fault)
+			rcu_read_unlock();
+		if (!trace_in)
+			bpf_put_perf_callchain(rctx);
+		goto err_fault;
+	}
+
 	trace_nr = trace->nr - skip;
 	copy_len = trace_nr * elem_size;
 
@@ -497,6 +537,9 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 	if (may_fault)
 		rcu_read_unlock();
 
+	if (!trace_in)
+		bpf_put_perf_callchain(rctx);
+
 	if (user_build_id)
 		stack_map_get_build_id_offset(buf, trace_nr, user, may_fault);
 
-- 
2.48.1