From: K Prateek Nayak <kprateek.nayak@amd.com>
To: John Stultz <jstultz@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Vineeth Pillai <vineethrp@google.com>,
	"Sonam Sanju" <sonam.sanju@intel.com>,
	Sean Christopherson <seanjc@google.com>,
	"Kunwu Chan" <kunwu.chan@linux.dev>, Tejun Heo <tj@kernel.org>,
	Joel Fernandes <joelagnelf@nvidia.com>,
	Qais Yousef <qyousef@layalina.io>, Ingo Molnar <mingo@redhat.com>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Valentin Schneider <vschneid@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Will Deacon <will@kernel.org>, Waiman Long <longman@redhat.com>,
	Boqun Feng <boqun.feng@gmail.com>,
	"Paul E. McKenney" <paulmck@kernel.org>,
	Metin Kaya <Metin.Kaya@arm.com>,
	Xuewen Yan <xuewen.yan94@gmail.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	"Daniel Lezcano" <daniel.lezcano@linaro.org>,
	Suleiman Souhlal <suleiman@google.com>,
	kuyo chang <kuyo.chang@mediatek.com>, hupu <hupu.gm@gmail.com>,
	<kernel-team@android.com>
Subject: Re: [PATCH v2 1/2] sched: proxy-exec: Close race causing workqueue work being delayed
Date: Tue, 5 May 2026 10:07:34 +0530	[thread overview]
Message-ID: <7482ee30-5a50-41bb-9545-67cca5bd4cf2@amd.com> (raw)
In-Reply-To: <CANDhNCrXhumJUzJLdHOABaRVg4hxpdnphrtOG37syYzoTuCCKg@mail.gmail.com>

[-- Attachment #1: Type: text/plain, Size: 6419 bytes --]

Hello John,

On 5/5/2026 9:02 AM, John Stultz wrote:
> On Sun, May 3, 2026 at 10:37 PM K Prateek Nayak <kprateek.nayak@amd.com> wrote:
>> On 5/4/2026 12:12 AM, K Prateek Nayak wrote:
>>> So when looking at all of this, I realized we probably don't need
>>> PROXY_WAKING anymore if we have the "is_blocked" state in task_struct.
>>> The owner can simply clear the blocked_on and move along and the
>>> waiter's "is_blocked" state will handle the sched bits.
>>>
>>> (p->is_blocked && !p->blocked_on) can then be interpreted as
>>> PROXY_WAKING and that task should explore return migration in
>>> find_proxy_task().
>>>
>>> Would something like below be more amenable from a backport standpoint
>>> instead of marking the config broken?
>>>
>> @@ -6535,8 +6548,10 @@ static bool try_to_block_task(struct rq *rq, struct task_struct *p,
>>          * blocked on a mutex, and we want to keep it on the runqueue
>>          * to be selectable for proxy-execution.
>>          */
>> -       if (!should_block)
>> +       if (!should_block) {
>> +               sched_set_task_is_blocked(p);
>>                 return false;
>> +       }
>>
> 
> So digging a bit more into this, it seems is_blocked in your patch is
> semantically different from what Peter was proposing.
> 
> Peter seemed to be suggesting is_blocked would be more generic than
> just for proxy-exec, getting set in try_to_block_task() regardless of
> whether we actually blocked the task or not, and then clearing it in
> ttwu_do_wakeup() when we go RUNNABLE.  Pretty much independent of
> blocked_on.

Something very similar to Peter's suggestion is attached towards the
end, in case that is more favorable, but it doesn't always clear
"is_blocked" at ttwu_do_wakeup() currently - that would require the
return-migration bits in ttwu_runnable() before the clearing can be
moved to ttwu_do_wakeup() safely.
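
Purely as an illustration (untested sketch, based on the current
two-line ttwu_do_wakeup()): once ttwu_runnable() grows the
return-migration handling, the end state could be as simple as
ttwu_do_wakeup() unconditionally dropping the marker when the task
goes RUNNING:

static void ttwu_do_wakeup(struct task_struct *p)
{
	WRITE_ONCE(p->__state, TASK_RUNNING);
	/* Sketch: assumes return migration is handled in ttwu_runnable() */
	sched_clear_task_is_blocked(p);
	trace_sched_wakeup(p);
}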

> 
> Whereas your patch still has is_blocked very much tied to
> blocked_on (since with yours we only set is_blocked if we avoid
> blocking the task in try_to_block_task(), and clear it only from
> find_proxy_task()).

Ack! That is the main difference - we can clear it during ttwu too
once we have proxy_needs_return(), but with the set of changes we
have committed so far, it is done selectively for blocked tasks in
find_proxy_task().
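
To spell out the resulting state encoding as I read it from the
attached diff:

  blocked_on set,  is_blocked set    -> mutex-blocked; stays on the rq
                                        as a selectable proxy donor
  blocked_on NULL, is_blocked set    -> the old PROXY_WAKING state;
                                        needs return migration in
                                        find_proxy_task()
  blocked_on NULL, is_blocked clear  -> fully woken / runnable

(blocked_on set with is_blocked clear is just the transient state of a
running task that hit mutex contention but hasn't been through
try_to_block_task() yet.)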

> In a way I can map your approach utilizing is_blocked as conceptually
> sort of separating the latch bit from my last approach (if we also
> re-worked PROXY_WAKING to be the value 1 (!blocked_on + latch) instead
> of -1).  So your approach seems workable (I've got it about halfway
> integrated with my full series - hitting a little bit of trouble with
> the sleeping owner enqueuing at the moment),

So, this new state is synchronized by the task's rq_lock() when
p->on_rq (even for the ttwu bits), but from what I can tell, the
sleeping-owner handling really depended on the blocked_lock based
synchronization, so perhaps that is the difference?

Would grabbing blocked_lock when setting and clearing "is_blocked"
help, in case you've not already explored that?
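
Something like the below is what I was thinking of (untested sketch,
purely to illustrate the locking, mirroring how blocked_on itself is
serialized under blocked_lock):

static inline void sched_set_task_is_blocked(struct task_struct *p)
{
	if (!sched_proxy_exec())
		return;

	/* Serialize "is_blocked" under blocked_lock, like blocked_on. */
	guard(raw_spinlock_irqsave)(&p->blocked_lock);
	p->is_blocked = 1;
}

static inline void sched_clear_task_is_blocked(struct task_struct *p)
{
	if (!sched_proxy_exec())
		return;

	guard(raw_spinlock_irqsave)(&p->blocked_lock);
	p->is_blocked = 0;
}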

> but I'm not sure this is what Peter is looking for.

Well, this was just an option in case we don't want to backport
super-invasive changes.

That said, we can easily do the following on top to fit what Peter
originally suggested (although it'll probably require a bit of effort
to integrate with the sleeping owner bits):

  (Lightly tested as usual :-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 30672390e6f99..e88f5b7a02b3e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3675,6 +3675,8 @@ ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags,
 	}
 }
 
+static inline void sched_clear_task_is_blocked(struct task_struct *p);
+
 /*
  * Consider @p being inside a wait loop:
  *
@@ -3709,8 +3711,19 @@ static int ttwu_runnable(struct task_struct *p, int wake_flags)
 	rq = __task_rq_lock(p, &rf);
 	if (task_on_rq_queued(p)) {
 		update_rq_clock(rq);
-		if (p->se.sched_delayed)
+		if (p->se.sched_delayed) {
+			/*
+			 * Task was fully blocked (not retained as proxy) and
+			 * is runnable again. Clear "is_blocked" indicator.
+			 * For all other cases, the task has either not set
+			 * "is_blocked" since ttwu_runnable() won against
+			 * schedule(), or the task was retained as proxy and
+			 * expects find_proxy_task() to handle the clearing of
+			 * "is_blocked" state.
+			 */
+			sched_clear_task_is_blocked(p);
 			enqueue_task(rq, p, ENQUEUE_NOCLOCK | ENQUEUE_DELAYED);
+		}
 		if (!task_on_cpu(rq, p)) {
 			/*
 			 * When on_rq && !on_cpu the task is preempted, see if
@@ -4190,6 +4203,13 @@ int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 		 */
 		WRITE_ONCE(p->__state, TASK_WAKING);
 
+		/*
+		 * If ttwu_runnable() did not win, task is fully blocked (!p->on_rq) and
+		 * requires a full wakeup. Clear task_is_blocked() before attempting
+		 * ttwu_queue_wakelist().
+		 */
+		sched_clear_task_is_blocked(p);
+
 		/*
 		 * If the owning (remote) CPU is still in the middle of schedule() with
 		 * this task as prev, considering queueing p on the remote CPUs wake_list
@@ -6541,6 +6561,12 @@ static bool try_to_block_task(struct rq *rq, struct task_struct *p,
 		return false;
 	}
 
+	/*
+	 * Task is considered fully blocked at this point and requires
+	 * a wakeup to be runnable again, including delayed tasks.
+	 */
+	sched_set_task_is_blocked(p);
+
 	/*
 	 * We check should_block after signal_pending because we
 	 * will want to wake the task in that case. But if
@@ -6548,10 +6574,8 @@ static bool try_to_block_task(struct rq *rq, struct task_struct *p,
 	 * blocked on a mutex, and we want to keep it on the runqueue
 	 * to be selectable for proxy-execution.
 	 */
-	if (!should_block) {
-		sched_set_task_is_blocked(p);
+	if (!should_block)
 		return false;
-	}
 
 	p->sched_contributes_to_load =
 		(task_state & TASK_UNINTERRUPTIBLE) &&
@@ -6942,6 +6966,7 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
 }
 #else /* SCHED_PROXY_EXEC */
 static inline void sched_set_task_is_blocked(struct task_struct *p) {}
+static inline void sched_clear_task_is_blocked(struct task_struct *p) {}
 
 static inline bool task_should_block(struct task_struct *p)
 {
---

Attached is the full diff as proxy.diff on top of tip:sched/core for
convenience. I'll let Peter comment further on whether he likes this
approach or not :-)

-- 
Thanks and Regards,
Prateek

[-- Attachment #2: proxy.diff --]
[-- Type: text/plain, Size: 12452 bytes --]

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 8ec3b6d7d718b..7be5e1faf56a1 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -846,7 +846,11 @@ struct task_struct {
 	struct alloc_tag		*alloc_tag;
 #endif
 
-	int				on_cpu;
+	u8				on_cpu;
+	u8				on_rq;
+	u8				is_blocked;
+	u8				__pad;
+
 	struct __call_single_node	wake_entry;
 	unsigned int			wakee_flips;
 	unsigned long			wakee_flip_decay_ts;
@@ -861,7 +865,6 @@ struct task_struct {
 	 */
 	int				recent_used_cpu;
 	int				wake_cpu;
-	int				on_rq;
 
 	int				prio;
 	int				static_prio;
@@ -2181,19 +2184,10 @@ extern int __cond_resched_rwlock_write(rwlock_t *lock) __must_hold(lock);
 
 #ifndef CONFIG_PREEMPT_RT
 
-/*
- * With proxy exec, if a task has been proxy-migrated, it may be a donor
- * on a cpu that it can't actually run on. Thus we need a special state
- * to denote that the task is being woken, but that it needs to be
- * evaluated for return-migration before it is run. So if the task is
- * blocked_on PROXY_WAKING, return migrate it before running it.
- */
-#define PROXY_WAKING ((struct mutex *)(-1L))
-
 static inline struct mutex *__get_task_blocked_on(struct task_struct *p)
 {
 	lockdep_assert_held_once(&p->blocked_lock);
-	return p->blocked_on == PROXY_WAKING ? NULL : p->blocked_on;
+	return p->blocked_on;
 }
 
 static inline void __set_task_blocked_on(struct task_struct *p, struct mutex *m)
@@ -2221,7 +2215,7 @@ static inline void __clear_task_blocked_on(struct task_struct *p, struct mutex *
 	 * blocked_on relationships, but make sure we are not
 	 * clearing the relationship with a different lock.
 	 */
-	WARN_ON_ONCE(m && p->blocked_on && p->blocked_on != m && p->blocked_on != PROXY_WAKING);
+	WARN_ON_ONCE(m && p->blocked_on && p->blocked_on != m);
 	p->blocked_on = NULL;
 }
 
@@ -2231,34 +2225,6 @@ static inline void clear_task_blocked_on(struct task_struct *p, struct mutex *m)
 	__clear_task_blocked_on(p, m);
 }
 
-static inline void __set_task_blocked_on_waking(struct task_struct *p, struct mutex *m)
-{
-	/* Currently we serialize blocked_on under the task::blocked_lock */
-	lockdep_assert_held_once(&p->blocked_lock);
-
-	if (!sched_proxy_exec()) {
-		__clear_task_blocked_on(p, m);
-		return;
-	}
-
-	/* Don't set PROXY_WAKING if blocked_on was already cleared */
-	if (!p->blocked_on)
-		return;
-	/*
-	 * There may be cases where we set PROXY_WAKING on tasks that were
-	 * already set to waking, but make sure we are not changing
-	 * the relationship with a different lock.
-	 */
-	WARN_ON_ONCE(m && p->blocked_on != m && p->blocked_on != PROXY_WAKING);
-	p->blocked_on = PROXY_WAKING;
-}
-
-static inline void set_task_blocked_on_waking(struct task_struct *p, struct mutex *m)
-{
-	guard(raw_spinlock_irqsave)(&p->blocked_lock);
-	__set_task_blocked_on_waking(p, m);
-}
-
 #else
 static inline void __clear_task_blocked_on(struct task_struct *p, struct rt_mutex *m)
 {
@@ -2267,14 +2233,6 @@ static inline void __clear_task_blocked_on(struct task_struct *p, struct rt_mute
 static inline void clear_task_blocked_on(struct task_struct *p, struct rt_mutex *m)
 {
 }
-
-static inline void __set_task_blocked_on_waking(struct task_struct *p, struct rt_mutex *m)
-{
-}
-
-static inline void set_task_blocked_on_waking(struct task_struct *p, struct rt_mutex *m)
-{
-}
 #endif /* !CONFIG_PREEMPT_RT */
 
 static __always_inline bool need_resched(void)
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 7d359647156df..4aa79bcab08c7 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -983,7 +983,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
 		next = waiter->task;
 
 		debug_mutex_wake_waiter(lock, waiter);
-		set_task_blocked_on_waking(next, lock);
+		clear_task_blocked_on(next, lock);
 		wake_q_add(&wake_q, next);
 	}
 
diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index 5cd9dfa4b31e6..522fe045eb1b2 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -285,11 +285,11 @@ __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
 		debug_mutex_wake_waiter(lock, waiter);
 #endif
 		/*
-		 * When waking up the task to die, be sure to set the
-		 * blocked_on to PROXY_WAKING. Otherwise we can see
-		 * circular blocked_on relationships that can't resolve.
+		 * When waking up the task to die, be sure to clear the
+		 * blocked_on. Otherwise we can see circular blocked_on
+		 * relationships that can't resolve.
 		 */
-		set_task_blocked_on_waking(waiter->task, lock);
+		clear_task_blocked_on(waiter->task, lock);
 		wake_q_add(wake_q, waiter->task);
 	}
 
@@ -340,14 +340,14 @@ static bool __ww_mutex_wound(struct MUTEX *lock,
 		if (owner != current) {
 			/*
-			 * When waking up the task to wound, be sure to clear
-			 * blocked_on to PROXY_WAKING. Otherwise we can see
-			 * circular blocked_on relationships that can't resolve.
+			 * When waking up the task to wound, be sure to clear
+			 * blocked_on. Otherwise we can see circular
+			 * blocked_on relationships that can't resolve.
 			 *
 			 * NOTE: We pass NULL here instead of lock, because we
 			 * are waking the mutex owner, who may be currently
 			 * blocked on a different mutex.
 			 */
-			set_task_blocked_on_waking(owner, NULL);
+			clear_task_blocked_on(owner, NULL);
 			wake_q_add(wake_q, owner);
 		}
 		return true;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 49cd5d2171613..e88f5b7a02b3e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3675,6 +3675,8 @@ ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags,
 	}
 }
 
+static inline void sched_clear_task_is_blocked(struct task_struct *p);
+
 /*
  * Consider @p being inside a wait loop:
  *
@@ -3709,8 +3711,19 @@ static int ttwu_runnable(struct task_struct *p, int wake_flags)
 	rq = __task_rq_lock(p, &rf);
 	if (task_on_rq_queued(p)) {
 		update_rq_clock(rq);
-		if (p->se.sched_delayed)
+		if (p->se.sched_delayed) {
+			/*
+			 * Task was fully blocked (not retained as proxy) and
+			 * is runnable again. Clear "is_blocked" indicator.
+			 * For all other cases, the task has either not set
+			 * "is_blocked" since ttwu_runnable() won against
+			 * schedule(), or the task was retained as proxy and
+			 * expects find_proxy_task() to handle the clearing of
+			 * "is_blocked" state.
+			 */
+			sched_clear_task_is_blocked(p);
 			enqueue_task(rq, p, ENQUEUE_NOCLOCK | ENQUEUE_DELAYED);
+		}
 		if (!task_on_cpu(rq, p)) {
 			/*
 			 * When on_rq && !on_cpu the task is preempted, see if
@@ -4190,6 +4203,13 @@ int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 		 */
 		WRITE_ONCE(p->__state, TASK_WAKING);
 
+		/*
+		 * If ttwu_runnable() did not win, task is fully blocked (!p->on_rq) and
+		 * requires a full wakeup. Clear task_is_blocked() before attempting
+		 * ttwu_queue_wakelist().
+		 */
+		sched_clear_task_is_blocked(p);
+
 		/*
 		 * If the owning (remote) CPU is still in the middle of schedule() with
 		 * this task as prev, considering queueing p on the remote CPUs wake_list
@@ -6495,6 +6515,8 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 
 #endif /* !CONFIG_SCHED_CORE */
 
+static inline void sched_set_task_is_blocked(struct task_struct *p);
+
 /*
  * Constants for the sched_mode argument of __schedule().
  *
@@ -6523,11 +6545,28 @@ static bool try_to_block_task(struct rq *rq, struct task_struct *p,
 	if (signal_pending_state(task_state, p)) {
 		WRITE_ONCE(p->__state, TASK_RUNNING);
 		*task_state_p = TASK_RUNNING;
-		set_task_blocked_on_waking(p, NULL);
+
+		/*
+		 * Clear blocked_on relation if we were planning to
+		 * retain the task as proxy donor since it is runnable
+	 * again as a result of a pending signal.
+		 *
+		 * Since only the running task can set the blocked_on
+		 * relation for itself, do not unnecessarily grab the
+		 * blocked_lock if blocked_on is not set.
+		 */
+		if (!should_block)
+			clear_task_blocked_on(p, NULL);
 
 		return false;
 	}
 
+	/*
+	 * Task is considered fully blocked at this point and requires
+	 * a wakeup to be runnable again, including delayed tasks.
+	 */
+	sched_set_task_is_blocked(p);
+
 	/*
 	 * We check should_block after signal_pending because we
 	 * will want to wake the task in that case. But if
@@ -6562,6 +6601,27 @@ static bool try_to_block_task(struct rq *rq, struct task_struct *p,
 }
 
 #ifdef CONFIG_SCHED_PROXY_EXEC
+static inline void sched_set_task_is_blocked(struct task_struct *p)
+{
+	if (!sched_proxy_exec())
+		return;
+
+	p->is_blocked = 1;
+}
+
+static inline void sched_clear_task_is_blocked(struct task_struct *p)
+{
+	p->is_blocked = 0;
+}
+
+static inline bool task_should_block(struct task_struct *p)
+{
+	if (!sched_proxy_exec())
+		return true;
+
+	return !p->blocked_on;
+}
+
 static inline void proxy_set_task_cpu(struct task_struct *p, int cpu)
 {
 	unsigned int wake_cpu;
@@ -6602,6 +6662,7 @@ static bool proxy_deactivate(struct rq *rq, struct task_struct *donor)
 	 * need to be changed from next *before* we deactivate.
 	 */
 	proxy_resched_idle(rq);
+	sched_clear_task_is_blocked(donor);
 	return try_to_block_task(rq, donor, &state, true);
 }
 
@@ -6732,7 +6793,7 @@ static void proxy_force_return(struct rq *rq, struct rq_flags *rf,
 		cpu = select_task_rq(p, p->wake_cpu, &wake_flag);
 		set_task_cpu(p, cpu);
 		target_rq = cpu_rq(cpu);
-		clear_task_blocked_on(p, NULL);
+		sched_clear_task_is_blocked(p);
 	}
 
 	if (target_rq)
@@ -6765,15 +6826,16 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
 	bool curr_in_chain = false;
 	int this_cpu = cpu_of(rq);
 	struct task_struct *p;
-	struct mutex *mutex;
 	int owner_cpu;
 
 	/* Follow blocked_on chain. */
-	for (p = donor; (mutex = p->blocked_on); p = owner) {
-		/* if its PROXY_WAKING, do return migration or run if current */
-		if (mutex == PROXY_WAKING) {
+	for (p = donor; task_is_blocked(p); p = owner) {
+		struct mutex *mutex = p->blocked_on;
+
+		/* If task is no longer blocked, do return migration or run if current */
+		if (!mutex) {
 			if (task_current(rq, p)) {
-				clear_task_blocked_on(p, PROXY_WAKING);
+				sched_clear_task_is_blocked(p);
 				return p;
 			}
 			goto force_return;
@@ -6807,8 +6869,9 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
 			 * and return p (if it is current and safe to
 			 * just run on this rq), or return-migrate the task.
 			 */
+			__clear_task_blocked_on(p, mutex);
 			if (task_current(rq, p)) {
-				__clear_task_blocked_on(p, NULL);
+				sched_clear_task_is_blocked(p);
 				return p;
 			}
 			goto force_return;
@@ -6902,6 +6965,14 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
 	return NULL;
 }
 #else /* SCHED_PROXY_EXEC */
+static inline void sched_set_task_is_blocked(struct task_struct *p) {}
+static inline void sched_clear_task_is_blocked(struct task_struct *p) {}
+
+static inline bool task_should_block(struct task_struct *p)
+{
+	return true;
+}
+
 static struct task_struct *
 find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
 {
@@ -7026,13 +7097,13 @@ static void __sched notrace __schedule(int sched_mode)
 		}
 	} else if (!preempt && prev_state) {
 		/*
-		 * We pass task_is_blocked() as the should_block arg
+		 * We pass task_should_block() as the should_block arg
 		 * in order to keep mutex-blocked tasks on the runqueue
 		 * for slection with proxy-exec (without proxy-exec
-		 * task_is_blocked() will always be false).
+		 * task_should_block() will always be true).
 		 */
 		try_to_block_task(rq, prev, &prev_state,
-				  !task_is_blocked(prev));
+				  task_should_block(prev));
 		switch_count = &prev->nvcsw;
 	}
 
@@ -7044,7 +7115,7 @@ static void __sched notrace __schedule(int sched_mode)
 		struct task_struct *prev_donor = rq->donor;
 
 		rq_set_donor(rq, next);
-		if (unlikely(next->blocked_on)) {
+		if (unlikely(task_is_blocked(next))) {
 			next = find_proxy_task(rq, next, &rf);
 			if (!next) {
 				zap_balance_callbacks(rq);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index c95584191d58f..5c1085f260ad4 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2390,7 +2390,7 @@ static inline bool task_is_blocked(struct task_struct *p)
 	if (!sched_proxy_exec())
 		return false;
 
-	return !!p->blocked_on;
+	return !!p->is_blocked;
 }
 
 static inline int task_on_cpu(struct rq *rq, struct task_struct *p)
