public inbox for linux-kernel@vger.kernel.org
From: K Prateek Nayak <kprateek.nayak@amd.com>
To: John Stultz <jstultz@google.com>, Peter Zijlstra <peterz@infradead.org>
Cc: LKML <linux-kernel@vger.kernel.org>,
	Vineeth Pillai <vineethrp@google.com>,
	Sonam Sanju <sonam.sanju@intel.com>,
	"Sean Christopherson" <seanjc@google.com>,
	Kunwu Chan <kunwu.chan@linux.dev>, "Tejun Heo" <tj@kernel.org>,
	Joel Fernandes <joelagnelf@nvidia.com>,
	Qais Yousef <qyousef@layalina.io>, Ingo Molnar <mingo@redhat.com>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Valentin Schneider <vschneid@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Will Deacon <will@kernel.org>, Waiman Long <longman@redhat.com>,
	Boqun Feng <boqun.feng@gmail.com>,
	"Paul E. McKenney" <paulmck@kernel.org>,
	Metin Kaya <Metin.Kaya@arm.com>,
	Xuewen Yan <xuewen.yan94@gmail.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Daniel Lezcano <daniel.lezcano@linaro.org>,
	"Suleiman Souhlal" <suleiman@google.com>,
	kuyo chang <kuyo.chang@mediatek.com>, hupu <hupu.gm@gmail.com>,
	<kernel-team@android.com>
Subject: Re: [PATCH v2 1/2] sched: proxy-exec: Close race causing workqueue work being delayed
Date: Mon, 4 May 2026 11:07:05 +0530
Message-ID: <46ce422c-e796-4280-8165-b7c163928c68@amd.com>
In-Reply-To: <e53e952b-fc02-4aac-8e1e-e6ae2b5b38b6@amd.com>

On 5/4/2026 12:12 AM, K Prateek Nayak wrote:
> So when looking at all of this, I realized we probably don't need
> PROXY_WAKING anymore if we have the "is_blocked" state in task_struct.
> The owner can simply clear the blocked_on and move along and the
> waiter's "is_blocked" state will handle the sched bits.
> 
> (p->is_blocked && !p->blocked_on) can then be interpreted as
> PROXY_WAKING and that task should explore return migration in
> find_proxy_task().
> 
> Would something like below be more amenable from a backport standpoint
> instead of marking the config broken?
> 
>   (Lightly tested; Based on tip:sched/core)

... and I missed this hunk for try_to_block_task():

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 49cd5d2171613..ee89d751b9594 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7026,13 +7074,13 @@ static void __sched notrace __schedule(int sched_mode)
 		}
 	} else if (!preempt && prev_state) {
 		/*
-		 * We pass task_is_blocked() as the should_block arg
+		 * We pass task_should_block() as the should_block arg
 		 * in order to keep mutex-blocked tasks on the runqueue
 		 * for slection with proxy-exec (without proxy-exec
-		 * task_is_blocked() will always be false).
+		 * task_should_block() will always be true).
 		 */
 		try_to_block_task(rq, prev, &prev_state,
-				  !task_is_blocked(prev));
+				  task_should_block(prev));
 		switch_count = &prev->nvcsw;
 	}
 
---

Sorry about that, and sorry for the noise! The final diffstat looks like:

  include/linux/sched.h     | 56 ++++---------------------
  kernel/locking/mutex.c    |  2 +-
  kernel/locking/ww_mutex.h | 14 +++----
  kernel/sched/core.c       | 72 +++++++++++++++++++++++++++------
  kernel/sched/sched.h      |  2 +-
  5 files changed, 75 insertions(+), 71 deletions(-)

It mostly relocates the PROXY_WAKING bits from linux/sched.h to internal
"is_blocked" helpers in core.c. Pasting the full diff again, with some
cleanups, for convenience:

  (Lightly tested with test-ww_mutex and sched-messaging)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 8ec3b6d7d718b..7be5e1faf56a1 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -846,7 +846,11 @@ struct task_struct {
 	struct alloc_tag		*alloc_tag;
 #endif
 
-	int				on_cpu;
+	u8				on_cpu;
+	u8				on_rq;
+	u8				is_blocked;
+	u8				__pad;
+
 	struct __call_single_node	wake_entry;
 	unsigned int			wakee_flips;
 	unsigned long			wakee_flip_decay_ts;
@@ -861,7 +865,6 @@ struct task_struct {
 	 */
 	int				recent_used_cpu;
 	int				wake_cpu;
-	int				on_rq;
 
 	int				prio;
 	int				static_prio;
@@ -2181,19 +2184,10 @@ extern int __cond_resched_rwlock_write(rwlock_t *lock) __must_hold(lock);
 
 #ifndef CONFIG_PREEMPT_RT
 
-/*
- * With proxy exec, if a task has been proxy-migrated, it may be a donor
- * on a cpu that it can't actually run on. Thus we need a special state
- * to denote that the task is being woken, but that it needs to be
- * evaluated for return-migration before it is run. So if the task is
- * blocked_on PROXY_WAKING, return migrate it before running it.
- */
-#define PROXY_WAKING ((struct mutex *)(-1L))
-
 static inline struct mutex *__get_task_blocked_on(struct task_struct *p)
 {
 	lockdep_assert_held_once(&p->blocked_lock);
-	return p->blocked_on == PROXY_WAKING ? NULL : p->blocked_on;
+	return p->blocked_on;
 }
 
 static inline void __set_task_blocked_on(struct task_struct *p, struct mutex *m)
@@ -2221,7 +2215,7 @@ static inline void __clear_task_blocked_on(struct task_struct *p, struct mutex *
 	 * blocked_on relationships, but make sure we are not
 	 * clearing the relationship with a different lock.
 	 */
-	WARN_ON_ONCE(m && p->blocked_on && p->blocked_on != m && p->blocked_on != PROXY_WAKING);
+	WARN_ON_ONCE(m && p->blocked_on && p->blocked_on != m);
 	p->blocked_on = NULL;
 }
 
@@ -2231,34 +2225,6 @@ static inline void clear_task_blocked_on(struct task_struct *p, struct mutex *m)
 	__clear_task_blocked_on(p, m);
 }
 
-static inline void __set_task_blocked_on_waking(struct task_struct *p, struct mutex *m)
-{
-	/* Currently we serialize blocked_on under the task::blocked_lock */
-	lockdep_assert_held_once(&p->blocked_lock);
-
-	if (!sched_proxy_exec()) {
-		__clear_task_blocked_on(p, m);
-		return;
-	}
-
-	/* Don't set PROXY_WAKING if blocked_on was already cleared */
-	if (!p->blocked_on)
-		return;
-	/*
-	 * There may be cases where we set PROXY_WAKING on tasks that were
-	 * already set to waking, but make sure we are not changing
-	 * the relationship with a different lock.
-	 */
-	WARN_ON_ONCE(m && p->blocked_on != m && p->blocked_on != PROXY_WAKING);
-	p->blocked_on = PROXY_WAKING;
-}
-
-static inline void set_task_blocked_on_waking(struct task_struct *p, struct mutex *m)
-{
-	guard(raw_spinlock_irqsave)(&p->blocked_lock);
-	__set_task_blocked_on_waking(p, m);
-}
-
 #else
 static inline void __clear_task_blocked_on(struct task_struct *p, struct rt_mutex *m)
 {
@@ -2267,14 +2233,6 @@ static inline void __clear_task_blocked_on(struct task_struct *p, struct rt_mute
 static inline void clear_task_blocked_on(struct task_struct *p, struct rt_mutex *m)
 {
 }
-
-static inline void __set_task_blocked_on_waking(struct task_struct *p, struct rt_mutex *m)
-{
-}
-
-static inline void set_task_blocked_on_waking(struct task_struct *p, struct rt_mutex *m)
-{
-}
 #endif /* !CONFIG_PREEMPT_RT */
 
 static __always_inline bool need_resched(void)
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 7d359647156df..4aa79bcab08c7 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -983,7 +983,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
 		next = waiter->task;
 
 		debug_mutex_wake_waiter(lock, waiter);
-		set_task_blocked_on_waking(next, lock);
+		clear_task_blocked_on(next, lock);
 		wake_q_add(&wake_q, next);
 	}
 
diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index 5cd9dfa4b31e6..522fe045eb1b2 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -285,11 +285,11 @@ __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
 		debug_mutex_wake_waiter(lock, waiter);
 #endif
 		/*
-		 * When waking up the task to die, be sure to set the
-		 * blocked_on to PROXY_WAKING. Otherwise we can see
-		 * circular blocked_on relationships that can't resolve.
+		 * When waking up the task to die, be sure to clear the
+		 * blocked_on. Otherwise we can see circular blocked_on
+		 * relationships that can't resolve.
 		 */
-		set_task_blocked_on_waking(waiter->task, lock);
+		clear_task_blocked_on(waiter->task, lock);
 		wake_q_add(wake_q, waiter->task);
 	}
 
@@ -340,14 +340,14 @@ static bool __ww_mutex_wound(struct MUTEX *lock,
 		if (owner != current) {
 			/*
 			 * When waking up the task to wound, be sure to set the
-			 * blocked_on to PROXY_WAKING. Otherwise we can see
-			 * circular blocked_on relationships that can't resolve.
+			 * clear blocked_on. Otherwise we can see circular
+			 * blocked_on relationships that can't resolve.
 			 *
 			 * NOTE: We pass NULL here instead of lock, because we
 			 * are waking the mutex owner, who may be currently
 			 * blocked on a different mutex.
 			 */
-			set_task_blocked_on_waking(owner, NULL);
+			clear_task_blocked_on(owner, NULL);
 			wake_q_add(wake_q, owner);
 		}
 		return true;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 49cd5d2171613..30672390e6f99 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6495,6 +6495,8 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 
 #endif /* !CONFIG_SCHED_CORE */
 
+static inline void sched_set_task_is_blocked(struct task_struct *p);
+
 /*
  * Constants for the sched_mode argument of __schedule().
  *
@@ -6523,7 +6525,18 @@ static bool try_to_block_task(struct rq *rq, struct task_struct *p,
 	if (signal_pending_state(task_state, p)) {
 		WRITE_ONCE(p->__state, TASK_RUNNING);
 		*task_state_p = TASK_RUNNING;
-		set_task_blocked_on_waking(p, NULL);
+
+		/*
+		 * Clear blocked_on relation if we were planning to
+		 * retain the task as proxy donor since it is runnable
+		 * again as a result of pending signal.
+		 *
+		 * Since only the running task can set the blocked_on
+		 * relation for itself, do not unnecessarily grab the
+		 * blocked_lock if blocked_on is not set.
+		 */
+		if (!should_block)
+			clear_task_blocked_on(p, NULL);
 
 		return false;
 	}
@@ -6535,8 +6548,10 @@ static bool try_to_block_task(struct rq *rq, struct task_struct *p,
 	 * blocked on a mutex, and we want to keep it on the runqueue
 	 * to be selectable for proxy-execution.
 	 */
-	if (!should_block)
+	if (!should_block) {
+		sched_set_task_is_blocked(p);
 		return false;
+	}
 
 	p->sched_contributes_to_load =
 		(task_state & TASK_UNINTERRUPTIBLE) &&
@@ -6562,6 +6577,27 @@ static bool try_to_block_task(struct rq *rq, struct task_struct *p,
 }
 
 #ifdef CONFIG_SCHED_PROXY_EXEC
+static inline void sched_set_task_is_blocked(struct task_struct *p)
+{
+	if (!sched_proxy_exec())
+		return;
+
+	p->is_blocked = 1;
+}
+
+static inline void sched_clear_task_is_blocked(struct task_struct *p)
+{
+	p->is_blocked = 0;
+}
+
+static inline bool task_should_block(struct task_struct *p)
+{
+	if (!sched_proxy_exec())
+		return true;
+
+	return !p->blocked_on;
+}
+
 static inline void proxy_set_task_cpu(struct task_struct *p, int cpu)
 {
 	unsigned int wake_cpu;
@@ -6602,6 +6638,7 @@ static bool proxy_deactivate(struct rq *rq, struct task_struct *donor)
 	 * need to be changed from next *before* we deactivate.
 	 */
 	proxy_resched_idle(rq);
+	sched_clear_task_is_blocked(donor);
 	return try_to_block_task(rq, donor, &state, true);
 }
 
@@ -6732,7 +6769,7 @@ static void proxy_force_return(struct rq *rq, struct rq_flags *rf,
 		cpu = select_task_rq(p, p->wake_cpu, &wake_flag);
 		set_task_cpu(p, cpu);
 		target_rq = cpu_rq(cpu);
-		clear_task_blocked_on(p, NULL);
+		sched_clear_task_is_blocked(p);
 	}
 
 	if (target_rq)
@@ -6765,15 +6802,16 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
 	bool curr_in_chain = false;
 	int this_cpu = cpu_of(rq);
 	struct task_struct *p;
-	struct mutex *mutex;
 	int owner_cpu;
 
 	/* Follow blocked_on chain. */
-	for (p = donor; (mutex = p->blocked_on); p = owner) {
-		/* if its PROXY_WAKING, do return migration or run if current */
-		if (mutex == PROXY_WAKING) {
+	for (p = donor; task_is_blocked(p); p = owner) {
+		struct mutex *mutex = p->blocked_on;
+
+		/* If task is no longer blocked, do return migration or run if current */
+		if (!mutex) {
 			if (task_current(rq, p)) {
-				clear_task_blocked_on(p, PROXY_WAKING);
+				sched_clear_task_is_blocked(p);
 				return p;
 			}
 			goto force_return;
@@ -6807,8 +6845,9 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
 			 * and return p (if it is current and safe to
 			 * just run on this rq), or return-migrate the task.
 			 */
+			__clear_task_blocked_on(p, mutex);
 			if (task_current(rq, p)) {
-				__clear_task_blocked_on(p, NULL);
+				sched_clear_task_is_blocked(p);
 				return p;
 			}
 			goto force_return;
@@ -6902,6 +6941,13 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
 	return NULL;
 }
 #else /* SCHED_PROXY_EXEC */
+static inline void sched_set_task_is_blocked(struct task_struct *p) {}
+
+static inline bool task_should_block(struct task_struct *p)
+{
+	return true;
+}
+
 static struct task_struct *
 find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
 {
@@ -7026,13 +7072,13 @@ static void __sched notrace __schedule(int sched_mode)
 		}
 	} else if (!preempt && prev_state) {
 		/*
-		 * We pass task_is_blocked() as the should_block arg
+		 * We pass task_should_block() as the should_block arg
 		 * in order to keep mutex-blocked tasks on the runqueue
 		 * for slection with proxy-exec (without proxy-exec
-		 * task_is_blocked() will always be false).
+		 * task_should_block() will always be true).
 		 */
 		try_to_block_task(rq, prev, &prev_state,
-				  !task_is_blocked(prev));
+				  task_should_block(prev));
 		switch_count = &prev->nvcsw;
 	}
 
@@ -7044,7 +7090,7 @@ static void __sched notrace __schedule(int sched_mode)
 		struct task_struct *prev_donor = rq->donor;
 
 		rq_set_donor(rq, next);
-		if (unlikely(next->blocked_on)) {
+		if (unlikely(task_is_blocked(next))) {
 			next = find_proxy_task(rq, next, &rf);
 			if (!next) {
 				zap_balance_callbacks(rq);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index c95584191d58f..5c1085f260ad4 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2390,7 +2390,7 @@ static inline bool task_is_blocked(struct task_struct *p)
 	if (!sched_proxy_exec())
 		return false;
 
-	return !!p->blocked_on;
+	return !!p->is_blocked;
 }
 
 static inline int task_on_cpu(struct rq *rq, struct task_struct *p)
-- 
Thanks and Regards,
Prateek


Thread overview: 17+ messages
2026-04-30 21:50 [PATCH v2 0/2] Proxy Execution fixes for v7.1-rc John Stultz
2026-04-30 21:50 ` [PATCH v2 1/2] sched: proxy-exec: Close race causing workqueue work being delayed John Stultz
2026-04-30 23:53   ` John Stultz
2026-05-01  6:39   ` K Prateek Nayak
2026-05-01  7:11     ` John Stultz
2026-05-01 13:21   ` Peter Zijlstra
2026-05-01 15:55     ` K Prateek Nayak
2026-05-01 18:59       ` Peter Zijlstra
2026-05-01 22:26         ` John Stultz
2026-05-03 18:42           ` K Prateek Nayak
2026-05-04  5:37             ` K Prateek Nayak [this message]
2026-05-05  3:32               ` John Stultz
2026-05-05  4:37                 ` K Prateek Nayak
2026-05-04 21:33             ` John Stultz
2026-04-30 21:50 ` [PATCH v2 2/2] locking: mutex: Fix proxy-exec potentially deactivating tasks marked TASK_RUNNING John Stultz
2026-05-01  6:57   ` K Prateek Nayak
2026-05-04 22:30   ` kernel test robot
