From: K Prateek Nayak <kprateek.nayak@amd.com>
To: John Stultz <jstultz@google.com>, LKML <linux-kernel@vger.kernel.org>
Cc: Joel Fernandes <joelagnelf@nvidia.com>,
Qais Yousef <qyousef@layalina.io>, Ingo Molnar <mingo@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
"Juri Lelli" <juri.lelli@redhat.com>,
Vincent Guittot <vincent.guittot@linaro.org>,
Dietmar Eggemann <dietmar.eggemann@arm.com>,
Valentin Schneider <vschneid@redhat.com>,
Steven Rostedt <rostedt@goodmis.org>,
Ben Segall <bsegall@google.com>,
Zimuzo Ezeozue <zezeozue@google.com>,
Mel Gorman <mgorman@suse.de>, Will Deacon <will@kernel.org>,
Waiman Long <longman@redhat.com>,
Boqun Feng <boqun.feng@gmail.com>,
"Paul E. McKenney" <paulmck@kernel.org>,
Metin Kaya <Metin.Kaya@arm.com>,
Xuewen Yan <xuewen.yan94@gmail.com>,
Thomas Gleixner <tglx@linutronix.de>,
"Daniel Lezcano" <daniel.lezcano@linaro.org>,
Suleiman Souhlal <suleiman@google.com>,
kuyo chang <kuyo.chang@mediatek.com>, hupu <hupu.gm@gmail.com>,
<kernel-team@android.com>
Subject: Re: [PATCH v24 06/11] sched: Handle blocked-waiter migration (and return migration)
Date: Tue, 30 Dec 2025 11:03:54 +0530 [thread overview]
Message-ID: <df4e667a-e322-4124-a9c1-57d6f58e6f0a@amd.com> (raw)
In-Reply-To: <20251124223111.3616950-7-jstultz@google.com>
Hello John,
On 11/25/2025 4:00 AM, John Stultz wrote:
> -static struct task_struct *proxy_deactivate(struct rq *rq, struct task_struct *donor)
> +/*
> + * If the blocked-on relationship crosses CPUs, migrate @p to the
> + * owner's CPU.
> + *
> + * This is because we must respect the CPU affinity of execution
> + * contexts (owner) but we can ignore affinity for scheduling
> + * contexts (@p). So we have to move scheduling contexts towards
> + * potential execution contexts.
> + *
> + * Note: The owner can disappear, but simply migrate to @target_cpu
> + * and leave that CPU to sort things out.
> + */
> +static void proxy_migrate_task(struct rq *rq, struct rq_flags *rf,
> + struct task_struct *p, int target_cpu)
> {
> - if (!__proxy_deactivate(rq, donor)) {
> - /*
> - * XXX: For now, if deactivation failed, set donor
> - * as unblocked, as we aren't doing proxy-migrations
> - * yet (more logic will be needed then).
> - */
> - clear_task_blocked_on(donor, NULL);
> + struct rq *target_rq = cpu_rq(target_cpu);
> +
> + lockdep_assert_rq_held(rq);
> +
> + /*
> + * Since we're going to drop @rq, we have to put(@rq->donor) first,
> + * otherwise we have a reference that no longer belongs to us.
> + *
> + * Additionally, as we put_prev_task(prev) earlier, its possible that
> + * prev will migrate away as soon as we drop the rq lock, however we
> + * still have it marked as rq->curr, as we've not yet switched tasks.
> + *
> + * So call proxy_resched_idle() to let go of the references before
> + * we release the lock.
> + */
> + proxy_resched_idle(rq);
> +
> + WARN_ON(p == rq->curr);
> +
> + deactivate_task(rq, p, DEQUEUE_NOCLOCK);
> + proxy_set_task_cpu(p, target_cpu);
> +
> + /*
> + * We have to zap callbacks before unlocking the rq
> + * as another CPU may jump in and call sched_balance_rq
> + * which can trip the warning in rq_pin_lock() if we
> + * leave callbacks set.
> + */
> + zap_balance_callbacks(rq);
> + rq_unpin_lock(rq, rf);
> + raw_spin_rq_unlock(rq);
> +
> + raw_spin_rq_lock(target_rq);
> + activate_task(target_rq, p, 0);
> + wakeup_preempt(target_rq, p, 0);
> + raw_spin_rq_unlock(target_rq);
I'll leave my suggestion for this on Patch 11 but for now ...
> +
> + raw_spin_rq_lock(rq);
> + rq_repin_lock(rq, rf);
> + update_rq_clock(rq);
I'm tempted to suggest extracting this pattern into a guard. We seem
to always do the zap + unpin + unlock somewhere in the middle and the
lock + repin + update clock at the end of a function, so how about:
(Build and boot tested with a test-ww_mutex run; open-coded pattern
inspired by the one for perf_ctx_lock in kernel/events/core.c)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 4c5493b0ad21..52dfa63327fc 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6600,6 +6600,44 @@ static bool try_to_block_task(struct rq *rq, struct task_struct *p,
}
#ifdef CONFIG_SCHED_PROXY_EXEC
+typedef struct {
+ struct rq *rq;
+ struct rq_flags *rf;
+} class_proxy_drop_reacquire_rq_lock_t;
+
+static void __proxy_drop_rq_lock(struct rq *rq, struct rq_flags *rf)
+{
+ /*
+ * We have to zap callbacks before unlocking the rq
+ * as another CPU may jump in and call sched_balance_rq
+ * which can trip the warning in rq_pin_lock() if we
+ * leave callbacks set.
+ */
+ zap_balance_callbacks(rq);
+ rq_unpin_lock(rq, rf);
+ raw_spin_rq_unlock(rq);
+}
+
+static void __proxy_reacquire_rq_lock(struct rq *rq, struct rq_flags *rf)
+{
+ raw_spin_rq_lock(rq);
+ rq_repin_lock(rq, rf);
+ update_rq_clock(rq);
+}
+
+static inline void
+class_proxy_drop_reacquire_rq_lock_destructor(class_proxy_drop_reacquire_rq_lock_t *_T)
+{
+ __proxy_reacquire_rq_lock(_T->rq, _T->rf);
+}
+
+static inline class_proxy_drop_reacquire_rq_lock_t
+class_proxy_drop_reacquire_rq_lock_constructor(struct rq *rq, struct rq_flags *rf)
+{
+ __proxy_drop_rq_lock(rq, rf);
+ return (class_proxy_drop_reacquire_rq_lock_t){ rq, rf };
+}
+
static inline struct task_struct *proxy_resched_idle(struct rq *rq)
{
put_prev_set_next_task(rq, rq->donor, rq->idle);
@@ -6666,23 +6704,15 @@ static void proxy_migrate_task(struct rq *rq, struct rq_flags *rf,
proxy_set_task_cpu(p, target_cpu);
/*
- * We have to zap callbacks before unlocking the rq
- * as another CPU may jump in and call sched_balance_rq
- * which can trip the warning in rq_pin_lock() if we
- * leave callbacks set.
+ * Zap balance callbacks and drop the rq_lock
+ * until the end of the scope.
*/
- zap_balance_callbacks(rq);
- rq_unpin_lock(rq, rf);
- raw_spin_rq_unlock(rq);
+ guard(proxy_drop_reacquire_rq_lock)(rq, rf);
raw_spin_rq_lock(target_rq);
activate_task(target_rq, p, 0);
wakeup_preempt(target_rq, p, 0);
raw_spin_rq_unlock(target_rq);
-
- raw_spin_rq_lock(rq);
- rq_repin_lock(rq, rf);
- update_rq_clock(rq);
}
static void proxy_force_return(struct rq *rq, struct rq_flags *rf,
---
If the guard obfuscates the flow too much, I'm happy with the current
approach as well, but having the guard + a comment on top of its usage
is a lot more soothing on the eye IMHO :-)
It also simplifies proxy_force_return() if my next set of comments aren't
too horrible. I'll leave it to you and Peter to decide which is best.
> +}
> +
> +static void proxy_force_return(struct rq *rq, struct rq_flags *rf,
> + struct task_struct *p)
> +{
> + struct rq *this_rq, *target_rq;
> + struct rq_flags this_rf;
> + int cpu, wake_flag = 0;
Should we pretend this is a wakeup with WF_TTWU? There are some
affine wakeup considerations that select_task_rq_fair() adds with
WF_TTWU. Without it, we'll end up selecting the wake_cpu's LLC to
search for the wakeup target.
> +
> + lockdep_assert_rq_held(rq);
> + WARN_ON(p == rq->curr);
> +
> + get_task_struct(p);
Since we run with IRQs disabled (the equivalent of an RCU read-side
critical section), I don't think we need to grab an extra reference here:
the task_struct cannot disappear from under us, and we take the necessary
locks to confirm the task's rq associations, so we shouldn't race with
anything either.
> +
> + /*
> + * We have to zap callbacks before unlocking the rq
> + * as another CPU may jump in and call sched_balance_rq
> + * which can trip the warning in rq_pin_lock() if we
> + * leave callbacks set.
> + */
> + zap_balance_callbacks(rq);
> + rq_unpin_lock(rq, rf);
> + raw_spin_rq_unlock(rq);
> +
> + /*
> + * We drop the rq lock, and re-grab task_rq_lock to get
> + * the pi_lock (needed for select_task_rq) as well.
> + */
> + this_rq = task_rq_lock(p, &this_rf);
So I'm failing to see why we need to drop the rq_lock and re-grab it via
task_rq_lock() when we are eventually going to do a deactivate_task().
Once we deactivate, the wakeup path will stall at ttwu_runnable() since it
cannot grab the __task_rq_lock() with task_on_rq_migrating(), so we
should be able to safely move the task around.
Maybe my naive eyes are missing something (and perhaps there are some
details that arrive with the sleeping-owner stuff), but the following
boots fine and survives a sched-messaging and test-ww_mutex run
without any splat on my machine at this point:
(Includes bits from the comments above to give those a spin.)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 52dfa63327fc..2a4e73c807e3 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6718,86 +6718,31 @@ static void proxy_migrate_task(struct rq *rq, struct rq_flags *rf,
static void proxy_force_return(struct rq *rq, struct rq_flags *rf,
struct task_struct *p)
{
- struct rq *this_rq, *target_rq;
- struct rq_flags this_rf;
- int cpu, wake_flag = 0;
+ int target_cpu, wake_flag = WF_TTWU;
+ struct rq *target_rq;
lockdep_assert_rq_held(rq);
WARN_ON(p == rq->curr);
- get_task_struct(p);
-
- /*
- * We have to zap callbacks before unlocking the rq
- * as another CPU may jump in and call sched_balance_rq
- * which can trip the warning in rq_pin_lock() if we
- * leave callbacks set.
- */
- zap_balance_callbacks(rq);
- rq_unpin_lock(rq, rf);
- raw_spin_rq_unlock(rq);
-
- /*
- * We drop the rq lock, and re-grab task_rq_lock to get
- * the pi_lock (needed for select_task_rq) as well.
- */
- this_rq = task_rq_lock(p, &this_rf);
+ proxy_resched_idle(rq);
+ deactivate_task(rq, p, DEQUEUE_NOCLOCK);
- /*
- * Since we let go of the rq lock, the task may have been
- * woken or migrated to another rq before we got the
- * task_rq_lock. So re-check we're on the same RQ. If
- * not, the task has already been migrated and that CPU
- * will handle any futher migrations.
- */
- if (this_rq != rq)
- goto err_out;
+ target_cpu = select_task_rq(p, p->wake_cpu, &wake_flag);
+ target_rq = cpu_rq(target_cpu);
+ set_task_cpu(p, target_cpu);
- /* Similarly, if we've been dequeued, someone else will wake us */
- if (!task_on_rq_queued(p))
- goto err_out;
+ clear_task_blocked_on(p, NULL);
/*
- * Since we should only be calling here from __schedule()
- * -> find_proxy_task(), no one else should have
- * assigned current out from under us. But check and warn
- * if we see this, then bail.
+ * Zap balance callbacks and drop the rq_lock
+ * until the end of the scope.
*/
- if (task_current(this_rq, p) || task_on_cpu(this_rq, p)) {
- WARN_ONCE(1, "%s rq: %i current/on_cpu task %s %d on_cpu: %i\n",
- __func__, cpu_of(this_rq),
- p->comm, p->pid, p->on_cpu);
- goto err_out;
- }
-
- update_rq_clock(this_rq);
- proxy_resched_idle(this_rq);
- deactivate_task(this_rq, p, DEQUEUE_NOCLOCK);
- cpu = select_task_rq(p, p->wake_cpu, &wake_flag);
- set_task_cpu(p, cpu);
- target_rq = cpu_rq(cpu);
- clear_task_blocked_on(p, NULL);
- task_rq_unlock(this_rq, p, &this_rf);
+ guard(proxy_drop_reacquire_rq_lock)(rq, rf);
- /* Drop this_rq and grab target_rq for activation */
raw_spin_rq_lock(target_rq);
activate_task(target_rq, p, 0);
wakeup_preempt(target_rq, p, 0);
- put_task_struct(p);
raw_spin_rq_unlock(target_rq);
-
- /* Finally, re-grab the origianl rq lock and return to pick-again */
- raw_spin_rq_lock(rq);
- rq_repin_lock(rq, rf);
- update_rq_clock(rq);
- return;
-
-err_out:
- task_rq_unlock(this_rq, p, &this_rf);
- put_task_struct(p);
- raw_spin_rq_lock(rq);
- rq_repin_lock(rq, rf);
- update_rq_clock(rq);
}
/*
---
Thoughts?
Also, we may want to just move the attach_one_task() helper to sched.h
and use it in both proxy_migrate_task() and proxy_force_return().
> +
> + /*
> + * Since we let go of the rq lock, the task may have been
> + * woken or migrated to another rq before we got the
> + * task_rq_lock. So re-check we're on the same RQ. If
> + * not, the task has already been migrated and that CPU
> + * will handle any futher migrations.
> + */
> + if (this_rq != rq)
> + goto err_out;
> +
> + /* Similarly, if we've been dequeued, someone else will wake us */
> + if (!task_on_rq_queued(p))
> + goto err_out;
> +
> + /*
> + * Since we should only be calling here from __schedule()
> + * -> find_proxy_task(), no one else should have
> + * assigned current out from under us. But check and warn
> + * if we see this, then bail.
> + */
> + if (task_current(this_rq, p) || task_on_cpu(this_rq, p)) {
> + WARN_ONCE(1, "%s rq: %i current/on_cpu task %s %d on_cpu: %i\n",
> + __func__, cpu_of(this_rq),
> + p->comm, p->pid, p->on_cpu);
> + goto err_out;
> }
> - return NULL;
> +
> + update_rq_clock(this_rq);
> + proxy_resched_idle(this_rq);
> + deactivate_task(this_rq, p, DEQUEUE_NOCLOCK);
> + cpu = select_task_rq(p, p->wake_cpu, &wake_flag);
> + set_task_cpu(p, cpu);
> + target_rq = cpu_rq(cpu);
> + clear_task_blocked_on(p, NULL);
> + task_rq_unlock(this_rq, p, &this_rf);
> +
> + /* Drop this_rq and grab target_rq for activation */
> + raw_spin_rq_lock(target_rq);
> + activate_task(target_rq, p, 0);
> + wakeup_preempt(target_rq, p, 0);
> + put_task_struct(p);
> + raw_spin_rq_unlock(target_rq);
> +
> + /* Finally, re-grab the origianl rq lock and return to pick-again */
> + raw_spin_rq_lock(rq);
> + rq_repin_lock(rq, rf);
> + update_rq_clock(rq);
> + return;
> +
> +err_out:
> + task_rq_unlock(this_rq, p, &this_rf);
> + put_task_struct(p);
> + raw_spin_rq_lock(rq);
> + rq_repin_lock(rq, rf);
> + update_rq_clock(rq);
> }
>
> /*
> @@ -6650,11 +6789,13 @@ static struct task_struct *proxy_deactivate(struct rq *rq, struct task_struct *d
> static struct task_struct *
> find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
> {
> - enum { FOUND, DEACTIVATE_DONOR } action = FOUND;
> + enum { FOUND, DEACTIVATE_DONOR, MIGRATE, NEEDS_RETURN } action = FOUND;
> struct task_struct *owner = NULL;
> + bool curr_in_chain = false;
> int this_cpu = cpu_of(rq);
> struct task_struct *p;
> struct mutex *mutex;
> + int owner_cpu;
>
> /* Follow blocked_on chain. */
> for (p = donor; task_is_blocked(p); p = owner) {
> @@ -6663,9 +6804,15 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
> if (!mutex)
> return NULL;
>
> - /* if its PROXY_WAKING, resched_idle so ttwu can complete */
> - if (mutex == PROXY_WAKING)
> - return proxy_resched_idle(rq);
> + /* if its PROXY_WAKING, do return migration or run if current */
> + if (mutex == PROXY_WAKING) {
> + if (task_current(rq, p)) {
> + clear_task_blocked_on(p, PROXY_WAKING);
> + return p;
> + }
> + action = NEEDS_RETURN;
> + break;
> + }
>
> /*
> * By taking mutex->wait_lock we hold off concurrent mutex_unlock()
> @@ -6685,26 +6832,41 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
> return NULL;
> }
>
> + if (task_current(rq, p))
> + curr_in_chain = true;
> +
> owner = __mutex_owner(mutex);
> if (!owner) {
> /*
> - * If there is no owner, clear blocked_on
> - * and return p so it can run and try to
> - * acquire the lock
> + * If there is no owner, either clear blocked_on
> + * and return p (if it is current and safe to
> + * just run on this rq), or return-migrate the task.
> */
> - __clear_task_blocked_on(p, mutex);
> - return p;
> + if (task_current(rq, p)) {
> + __clear_task_blocked_on(p, NULL);
If my suggestion for proxy_force_return() isn't downright horrible, we
can unconditionally clear the p->blocked_on relation on both of the
NEEDS_RETURN paths, since we won't drop the rq_lock before the deactivate
(the jury is still out on that).
> + return p;
> + }
> + action = NEEDS_RETURN;
> + break;
> }
>
--
Thanks and Regards,
Prateek