Linux Perf Users
From: sashiko-bot@kernel.org
To: "Minwoo Ahn" <mwahn402@gmail.com>
Cc: linux-perf-users@vger.kernel.org
Subject: Re: [PATCH v4] perf/core: Fix sampling period inconsistency across CPU migration
Date: Wed, 29 Apr 2026 10:51:09 +0000	[thread overview]
Message-ID: <20260429105110.6ECD6C19425@smtp.kernel.org> (raw)
In-Reply-To: <20260429095134.10049-1-mwahn402@gmail.com>

Thank you for your contribution! Sashiko AI review found 3 potential issues to consider:
- [High] Flawed event matching heuristic falsely groups distinct sampling sessions, causing them to corrupt each other's sampling period.
- [Medium] TOCTOU race in `perf_get_task_ctxp` prevents context sharing for concurrently created per-CPU events.
- [Medium] Plain lockless read of `iter->owner` risks data races with concurrent task exits.
--

commit fcbb40982fea3f4e611342ab82ef8efe806efcc9
Author: Minwoo Ahn <mwahn402@gmail.com>

perf/core: Fix sampling period inconsistency across CPU migration

This commit introduces struct perf_task_context to preserve period_left
for per-task software sampling events, sharing one context across CPUs so
that the sampling period is not reset when the task migrates.

> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 6d1f8bad7e1c..eec0e822ef6e 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c

[ ... ]

> @@ -13259,6 +13291,45 @@ static void account_event(struct perf_event *event)
>  	account_pmu_sb_event(event);
>  }
>  
> +static struct perf_task_context *
> +perf_get_task_ctxp(struct perf_event *event, struct task_struct *task,
> +		   struct perf_event *parent_event)
> +{
> +	struct perf_task_context *ctxp = NULL;
> +	struct perf_event_context *ctx;
> +	struct perf_event *iter;
> +	unsigned long flags;
> +
> +	ctx = perf_lock_task_context(task, &flags);
> +	if (ctx) {
> +		list_for_each_entry(iter, &ctx->event_list, event_entry) {
> +			if (iter->perf_task_ctxp &&
> +			    (iter->owner == current ||

The event owner field is cleared asynchronously when the owner task exits
or releases the event descriptor, such as via smp_store_release() in
perf_event_release_kernel().

Could this plain read of iter->owner race with a concurrent task exit?
Should this be accessed using READ_ONCE() to avoid data races?

> +			     (parent_event && !iter->owner &&
> +			      iter->parent &&
> +			      iter->parent->perf_task_ctxp ==
> +			      parent_event->perf_task_ctxp)) &&
> +			    perf_event_equal_task_ctx(&iter->attr,
> +						     &event->attr)) {

The perf_event_equal_task_ctx() macro only compares the config and
sample_period attributes. If a user opens multiple independent software events
for the same task with identical configs and periods (for example,
perf record -e task-clock,task-clock), or if the events differ in unchecked
attributes like exclude_user or exclude_kernel, will they incorrectly share
the same perf_task_context?

If these independent events share the same context, when the task is scheduled
out, both events would sequentially write their hw.period_left to the shared
ctxp->period_left, clobbering the first write. Upon schedule in, both events
would restore from this clobbered value.

Could this entangle and corrupt their respective sampling periodicities?

> +				ctxp = iter->perf_task_ctxp;
> +				refcount_inc(&ctxp->refcount);
> +				break;
> +			}
> +		}
> +		raw_spin_unlock_irqrestore(&ctx->lock, flags);
> +		put_ctx(ctx);
> +	}
> +
> +	if (!ctxp) {
> +		ctxp = kzalloc_obj(struct perf_task_context);

If a userspace profiling application calls sys_perf_event_open() from two
threads concurrently to create per-CPU events for the same task, both
threads could perform the locked search above before either event reaches
perf_install_in_context().

Since neither event is in ctx->event_list yet, will both threads fail to find
a match, independently allocate new perf_task_context structures here, and
fail to share the context?

Does this race condition leave the migration period inconsistency unfixed for
concurrently created events?

> +		if (!ctxp)
> +			return NULL;
> +		refcount_set(&ctxp->refcount, 1);
> +	}
> +
> +	return ctxp;
> +}

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260429095134.10049-1-mwahn402@gmail.com?part=1

Thread overview: 4+ messages
2026-04-29  9:51 [PATCH v4] perf/core: Fix sampling period inconsistency across CPU migration Minwoo Ahn
2026-04-29 10:51 ` sashiko-bot [this message]
2026-05-04  8:08 ` Peter Zijlstra
2026-05-04 13:52   ` Minwoo Ahn
