public inbox for linux-kernel@vger.kernel.org
From: Tao Chen <chen.dylane@linux.dev>
To: bot+bpf-ci@kernel.org, peterz@infradead.org, mingo@redhat.com,
	acme@kernel.org, namhyung@kernel.org, mark.rutland@arm.com,
	alexander.shishkin@linux.intel.com, jolsa@kernel.org,
	irogers@google.com, adrian.hunter@intel.com,
	kan.liang@linux.intel.com
Cc: linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org,
	bpf@vger.kernel.org, ast@kernel.org, andrii@kernel.org,
	daniel@iogearbox.net, martin.lau@kernel.org, eddyz87@gmail.com,
	yonghong.song@linux.dev, clm@meta.com, ihor.solodrai@linux.dev
Subject: Re: [PATCH bpf-next v5 1/3] perf: Refactor get_perf_callchain
Date: Mon, 10 Nov 2025 17:31:19 +0800	[thread overview]
Message-ID: <9ebb8b72-4b73-4cf5-9054-9134daf16d0d@linux.dev> (raw)
In-Reply-To: <886b17f3dd9f9836dd87b9249a01e737088fe52174b8d513ad6350c6bb19ff87@mail.kernel.org>

On 2025/11/10 00:58, bot+bpf-ci@kernel.org wrote:
>> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
>> index fd1d91017..edd3058e4 100644
>> --- a/include/linux/perf_event.h
>> +++ b/include/linux/perf_event.h
>> @@ -67,6 +67,7 @@ struct perf_callchain_entry_ctx {
>>   	u32				nr;
>>   	short				contexts;
>>   	bool				contexts_maxed;
>> +	bool				add_mark;
>>   };
> 
> [ ... ]
> 
>> diff --git a/kernel/events/callchain.c b/kernel/events/callchain.c
>> index 808c0d7a3..fb1f26be2 100644
>> --- a/kernel/events/callchain.c
>> +++ b/kernel/events/callchain.c
> 
> [ ... ]
> 
>> +void __get_perf_callchain_user(struct perf_callchain_entry_ctx *ctx, struct pt_regs *regs)
>> +{
>> +	int start_entry_idx;
>> +
>> +	if (!user_mode(regs)) {
>> +		if (current->flags & (PF_KTHREAD | PF_USER_WORKER))
>> +			return;
>> +		regs = task_pt_regs(current);
>> +	}
>> +
>> +	if (ctx->add_mark)
>> +		perf_callchain_store_context(ctx, PERF_CONTEXT_USER);
>> +
>> +	start_entry_idx = ctx->nr;
>                            ^^^^^^^^
>> +	perf_callchain_user(ctx, regs);
>> +	fixup_uretprobe_trampoline_entries(ctx->entry, start_entry_idx);
>> +}
> 
> Can this break uprobe trampoline fixups? The original code used
> entry->nr for start_entry_idx, but the refactored version uses ctx->nr.
> 
> These two counters track different things:
> - entry->nr counts ALL entries (incremented by both
>    perf_callchain_store_context() and perf_callchain_store())
> - ctx->nr only counts regular callchain entries (incremented only by
>    perf_callchain_store(), not by perf_callchain_store_context())
> 
> When add_mark is true, perf_callchain_store_context() increments
> entry->nr but not ctx->nr, causing them to become out of sync.
> 
> For example, if kernel callchain collection runs first with add_mark
> set:
> 
>    Initial: ctx->nr = entry->nr = 0
> 
>    In __get_perf_callchain_kernel():
>      perf_callchain_store_context() stores PERF_CONTEXT_KERNEL marker
>        -> entry->nr becomes 1, ctx->nr still 0
>      perf_callchain_kernel() stores N kernel frames
>        -> entry->nr becomes 1+N, ctx->nr becomes N
> 
>    In __get_perf_callchain_user():
>      perf_callchain_store_context() stores PERF_CONTEXT_USER marker
>        -> entry->nr becomes 2+N, ctx->nr still N
>      start_entry_idx = ctx->nr assigns N (should be 2+N)
>      fixup_uretprobe_trampoline_entries() scans from wrong index
> 
> This causes fixup_uretprobe_trampoline_entries() to scan kernel
> callchain entries instead of user callchain entries, missing the
> uretprobe trampolines that need to be fixed up.
> 
> Should this use ctx->entry->nr instead of ctx->nr?
> 

Using ctx->entry->nr looks better; will change it.
> 
> ---
> AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
> See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
> 
> CI run summary: https://github.com/kernel-patches/bpf/actions/runs/19211481431


-- 
Best Regards
Tao Chen

Thread overview: 8+ messages
2025-11-09 16:35 [PATCH bpf-next v5 0/3] Pass external callchain entry to get_perf_callchain Tao Chen
2025-11-09 16:35 ` [PATCH bpf-next v5 1/3] perf: Refactor get_perf_callchain Tao Chen
2025-11-09 16:58   ` bot+bpf-ci
2025-11-10  9:31     ` Tao Chen [this message]
2025-11-09 16:35 ` [PATCH bpf-next v5 2/3] perf: Add atomic operation in get_recursion_context Tao Chen
2025-11-10  8:52   ` Peter Zijlstra
2025-11-10  9:26     ` Tao Chen
2025-11-09 16:35 ` [PATCH bpf-next v5 3/3] bpf: Hold the perf callchain entry until used completely Tao Chen
