From mboxrd@z Thu Jan  1 00:00:00 1970
From: Tao Chen <chen.dylane@linux.dev>
To: peterz@infradead.org, mingo@redhat.com, acme@kernel.org,
	namhyung@kernel.org, mark.rutland@arm.com,
	alexander.shishkin@linux.intel.com, jolsa@kernel.org,
	irogers@google.com, adrian.hunter@intel.com,
	kan.liang@linux.intel.com, song@kernel.org, ast@kernel.org,
	daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
	eddyz87@gmail.com, yonghong.song@linux.dev,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@fomichev.me,
	haoluo@google.com
Cc: linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org,
	bpf@vger.kernel.org, Tao Chen <chen.dylane@linux.dev>
Subject: [PATCH bpf-next v4 1/2] perf: Refactor get_perf_callchain
Date: Wed, 29 Oct 2025 00:25:01 +0800
Message-ID: <20251028162502.3418817-2-chen.dylane@linux.dev>
In-Reply-To: <20251028162502.3418817-1-chen.dylane@linux.dev>
References: <20251028162502.3418817-1-chen.dylane@linux.dev>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

On the BPF stack map side, we want to use our own buffers to avoid
unnecessary copies and to ensure that the buffer will not be overwritten
by other preempting tasks.

Peter suggested providing more flexible stack-sampling APIs, which can
be used from BPF while still filling a perf callchain entry with their
help. The next patch converts the BPF side accordingly.
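As a rough usage sketch (illustrative only, not part of this patch): a
caller such as the BPF stack map could drive the new helpers with its
own entry buffer along the following lines, where entry, max_stack,
kernel, user and regs are assumed to be provided by the caller:

	struct perf_callchain_entry_ctx ctx;

	/* caller-owned buffer, no PERF_CONTEXT_* markers requested */
	__init_perf_callchain_ctx(&ctx, entry, max_stack, false);
	if (kernel)
		__get_perf_callchain_kernel(&ctx, regs);
	if (user)
		__get_perf_callchain_user(&ctx, regs);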
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Tao Chen <chen.dylane@linux.dev>
---
 include/linux/perf_event.h | 11 +++++-
 kernel/bpf/stackmap.c      |  4 +-
 kernel/events/callchain.c  | 75 ++++++++++++++++++++++++--------------
 kernel/events/core.c       |  2 +-
 4 files changed, 61 insertions(+), 31 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index fd1d91017b9..14a382cad1d 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -67,6 +67,7 @@ struct perf_callchain_entry_ctx {
 	u32				nr;
 	short				contexts;
 	bool				contexts_maxed;
+	bool				add_mark;
 };
 
 typedef unsigned long (*perf_copy_f)(void *dst, const void *src,
@@ -1718,9 +1719,17 @@ DECLARE_PER_CPU(struct perf_callchain_entry, perf_callchain_entry);
 
 extern void perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs);
 extern void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs);
+
+extern void __init_perf_callchain_ctx(struct perf_callchain_entry_ctx *ctx,
+				      struct perf_callchain_entry *entry,
+				      u32 max_stack, bool add_mark);
+
+extern void __get_perf_callchain_kernel(struct perf_callchain_entry_ctx *ctx, struct pt_regs *regs);
+extern void __get_perf_callchain_user(struct perf_callchain_entry_ctx *ctx, struct pt_regs *regs);
+
 extern struct perf_callchain_entry *
 get_perf_callchain(struct pt_regs *regs, bool kernel, bool user,
-		   u32 max_stack, bool crosstask, bool add_mark);
+		   u32 max_stack, bool crosstask);
 extern int get_callchain_buffers(int max_stack);
 extern void put_callchain_buffers(void);
 extern struct perf_callchain_entry *get_callchain_entry(int *rctx);
diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index 4d53cdd1374..e28b35c7e0b 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -315,7 +315,7 @@ BPF_CALL_3(bpf_get_stackid, struct pt_regs *, regs, struct bpf_map *, map,
 		max_depth = sysctl_perf_event_max_stack;
 
 	trace = get_perf_callchain(regs, kernel, user, max_depth,
-				   false, false);
+				   false);
 
 	if (unlikely(!trace))
 		/* couldn't fetch the stack trace */
@@ -452,7 +452,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
 		trace = get_callchain_entry_for_task(task, max_depth);
 	else
 		trace = get_perf_callchain(regs, kernel, user, max_depth,
-					   crosstask, false);
+					   crosstask);
 
 	if (unlikely(!trace) || trace->nr < skip) {
 		if (may_fault)
diff --git a/kernel/events/callchain.c b/kernel/events/callchain.c
index 808c0d7a31f..2c36e490625 100644
--- a/kernel/events/callchain.c
+++ b/kernel/events/callchain.c
@@ -216,13 +216,54 @@ static void fixup_uretprobe_trampoline_entries(struct perf_callchain_entry *entr
 #endif
 }
 
+void __init_perf_callchain_ctx(struct perf_callchain_entry_ctx *ctx,
+			       struct perf_callchain_entry *entry,
+			       u32 max_stack, bool add_mark)
+
+{
+	ctx->entry = entry;
+	ctx->max_stack = max_stack;
+	ctx->nr = entry->nr = 0;
+	ctx->contexts = 0;
+	ctx->contexts_maxed = false;
+	ctx->add_mark = add_mark;
+}
+
+void __get_perf_callchain_kernel(struct perf_callchain_entry_ctx *ctx, struct pt_regs *regs)
+{
+	if (user_mode(regs))
+		return;
+
+	if (ctx->add_mark)
+		perf_callchain_store_context(ctx, PERF_CONTEXT_KERNEL);
+	perf_callchain_kernel(ctx, regs);
+}
+
+void __get_perf_callchain_user(struct perf_callchain_entry_ctx *ctx, struct pt_regs *regs)
+{
+	int start_entry_idx;
+
+	if (!user_mode(regs)) {
+		if (current->flags & (PF_KTHREAD | PF_USER_WORKER))
+			return;
+		regs = task_pt_regs(current);
+	}
+
+	if (ctx->add_mark)
+		perf_callchain_store_context(ctx, PERF_CONTEXT_USER);
+
+	start_entry_idx = ctx->nr;
+	perf_callchain_user(ctx, regs);
+	fixup_uretprobe_trampoline_entries(ctx->entry, start_entry_idx);
+}
+
 struct perf_callchain_entry *
 get_perf_callchain(struct pt_regs *regs, bool kernel, bool user,
-		   u32 max_stack, bool crosstask, bool add_mark)
+		   u32 max_stack, bool crosstask)
 {
 	struct perf_callchain_entry *entry;
 	struct perf_callchain_entry_ctx ctx;
-	int rctx, start_entry_idx;
+	int rctx;
 
 	/* crosstask is not supported for user stacks */
 	if (crosstask && user && !kernel)
@@ -232,34 +273,14 @@ get_perf_callchain(struct pt_regs *regs, bool kernel, bool user,
 	if (!entry)
 		return NULL;
 
-	ctx.entry = entry;
-	ctx.max_stack = max_stack;
-	ctx.nr = entry->nr = 0;
-	ctx.contexts = 0;
-	ctx.contexts_maxed = false;
+	__init_perf_callchain_ctx(&ctx, entry, max_stack, true);
 
-	if (kernel && !user_mode(regs)) {
-		if (add_mark)
-			perf_callchain_store_context(&ctx, PERF_CONTEXT_KERNEL);
-		perf_callchain_kernel(&ctx, regs);
-	}
-
-	if (user && !crosstask) {
-		if (!user_mode(regs)) {
-			if (current->flags & (PF_KTHREAD | PF_USER_WORKER))
-				goto exit_put;
-			regs = task_pt_regs(current);
-		}
+	if (kernel)
+		__get_perf_callchain_kernel(&ctx, regs);
 
-		if (add_mark)
-			perf_callchain_store_context(&ctx, PERF_CONTEXT_USER);
-
-		start_entry_idx = entry->nr;
-		perf_callchain_user(&ctx, regs);
-		fixup_uretprobe_trampoline_entries(entry, start_entry_idx);
-	}
+	if (user && !crosstask)
+		__get_perf_callchain_user(&ctx, regs);
 
-exit_put:
 	put_callchain_entry(rctx);
 
 	return entry;
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 7541f6f85fc..eb0f110593d 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -8218,7 +8218,7 @@ perf_callchain(struct perf_event *event, struct pt_regs *regs)
 		return &__empty_callchain;
 
 	callchain = get_perf_callchain(regs, kernel, user,
-				       max_stack, crosstask, true);
+				       max_stack, crosstask);
 
 	return callchain ?: &__empty_callchain;
 }
-- 
2.48.1