public inbox for linux-perf-users@vger.kernel.org
 help / color / mirror / Atom feed
* [PATCH] uprobes: clear extra_consumers before pooling return instances
@ 2026-04-08 10:02 Keenan Dong
  2026-04-08 10:38 ` sashiko-bot
  0 siblings, 1 reply; 2+ messages in thread
From: Keenan Dong @ 2026-04-08 10:02 UTC (permalink / raw)
  To: peterz, mingo, acme, namhyung, mhiramat, oleg
  Cc: mark.rutland, alexander.shishkin, jolsa, irogers, adrian.hunter,
	james.clark, andrii, linux-perf-users, linux-kernel,
	linux-trace-kernel, Keenan Dong

ri_pool_push() puts a return_instance back into the per-task pool for
later reuse. It resets cons_cnt, but it leaves the extra_consumers
pointer behind.

A reused return_instance can later grow a fresh extra_consumers
array and then reach the cleanup path with a stale pointer from its
previous lifetime, leading to a double free of the recycled object.

Free and clear extra_consumers before putting the instance back into
the pool so every reused entry starts from a clean state.

Fixes: 8622e45b5da1 ("uprobes: Reuse return_instances between multiple uretprobes within task")
Reported-by: Keenan Dong <keenanat2000@gmail.com>
Signed-off-by: Keenan Dong <keenanat2000@gmail.com>
---
 kernel/events/uprobes.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 923b24b321cc..24b9884a2667 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1945,6 +1945,8 @@ unsigned long uprobe_get_trap_addr(struct pt_regs *regs)
 
 static void ri_pool_push(struct uprobe_task *utask, struct return_instance *ri)
 {
+	kfree(ri->extra_consumers);
+	ri->extra_consumers = NULL;
 	ri->cons_cnt = 0;
 	ri->next = utask->ri_pool;
 	utask->ri_pool = ri;
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 2+ messages in thread

* Re: [PATCH] uprobes: clear extra_consumers before pooling return instances
  2026-04-08 10:02 [PATCH] uprobes: clear extra_consumers before pooling return instances Keenan Dong
@ 2026-04-08 10:38 ` sashiko-bot
  0 siblings, 0 replies; 2+ messages in thread
From: sashiko-bot @ 2026-04-08 10:38 UTC (permalink / raw)
  To: Keenan Dong; +Cc: linux-perf-users

Sashiko AI review found 1 potential issue(s):
- [Critical] Missing full memory barrier in `ri_timer` allows Store-Load reordering, causing a race condition with `free_ret_instance` that can lead to Use-After-Free or uprobe corruption.
--

commit 892d89a5acfb2bb4da3888666af1d58594ed9e7d
Author: Keenan Dong <keenanat2000@gmail.com>

uprobes: clear extra_consumers before pooling return instances

This commit clears the extra_consumers array and frees its memory before
pushing a return_instance back to the per-task pool. This prevents a
potential double free if a recycled object retains a stale pointer from
its previous lifetime.

> diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> index 4084e926e2844..4695cb812fe26 100644
> --- a/kernel/events/uprobes.c
> +++ b/kernel/events/uprobes.c
[ ... ]
> @@ -1945,6 +1945,8 @@ unsigned long uprobe_get_trap_addr(struct pt_regs *regs)
>  
>  static void ri_pool_push(struct uprobe_task *utask, struct return_instance *ri)
>  {
> +	kfree(ri->extra_consumers);
> +	ri->extra_consumers = NULL;
>  	ri->cons_cnt = 0;
>  	ri->next = utask->ri_pool;
>  	utask->ri_pool = ri;

While looking at how return instances are recycled into the pool here, I
noticed a potential race condition in the pre-existing code involving
ri_timer() and free_ret_instance().

In uprobes.c, ri_seqcount is used in a two-way synchronization pattern
between free_ret_instance() and ri_timer().

In free_ret_instance(), the task unlinks ri, executes hprobe_consume()
(which contains a full memory barrier via xchg()), and then reads
ri_seqcount via raw_seqcount_try_begin().

In ri_timer(), the timer asserts its presence by incrementing the seqcount
via raw_write_seqcount_begin(), and then iterates over
utask->return_instances.

For this mutual exclusion to work, both sides require a full memory barrier.
However, raw_write_seqcount_begin() only provides a write barrier
(smp_wmb()), which does not prevent Store-Load reordering.

Could this allow the timer CPU to execute the load of
utask->return_instances before the store to ri_seqcount.sequence becomes
globally visible?
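
In litmus form (a sketch of the pattern only, with names abbreviated;
not the actual code), this looks like the classic store-buffering shape:

	task: free_ret_instance()       timer: ri_timer()
	-------------------------       ------------------------------
	unlink ri            (store)    sequence++            (store)
	xchg()          /* full mb */   smp_wmb() only  /* no mb here */
	read sequence         (load)    read return_instances  (load)

If each load is allowed to bypass the other side's store, the task can
observe an even sequence while the timer simultaneously observes the
old list head, so neither side notices the other.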

If this reordering occurs, the following sequence seems possible:

1. ri_timer() reads a pointer to ri and stalls.
2. The task unlinks ri, executes its full barrier, reads the even seqcount,
   and pushes ri to ri_pool.
3. The task then quickly reuses ri for a new uretprobe, setting
   hprobe->state = HPROBE_LEASED.
4. ri_timer() resumes and executes try_cmpxchg() on hprobe->state.

Because the state is HPROBE_LEASED again, the cmpxchg succeeds. It
transitions the new uprobe's state to STABLE, dropping the SRCU lock.
When the task later consumes the new uprobe, it will call put_uprobe()
without a matching get_uprobe().

Can this sequence lead to a premature refcount drop and use-after-free
of the new uprobe?
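
If the analysis above is right, the textbook remedy for this
store-buffering shape is a full barrier on the timer side as well,
pairing with the task's xchg(). As a sketch only (placement assumed
from the pattern described above, not verified against the current
tree):

	raw_write_seqcount_begin(&utask->ri_seqcount);
	smp_mb();	/* pair with hprobe_consume()'s xchg():
			 * order the sequence store before the
			 * return_instances load */
	/* ... iterate utask->return_instances as before ... */

Whether a cheaper primitive fits here is a question for the
maintainers.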

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260408100247.2065245-1-keenanat2000@gmail.com?part=1

^ permalink raw reply	[flat|nested] 2+ messages in thread

end of thread, other threads:[~2026-04-08 10:38 UTC | newest]

Thread overview: 2+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-08 10:02 [PATCH] uprobes: clear extra_consumers before pooling return instances Keenan Dong
2026-04-08 10:38 ` sashiko-bot

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox