* [RFC][PATCH v20 0/6] Donor Migration for Proxy Execution (v20)
@ 2025-07-22 7:05 John Stultz
2025-07-22 7:05 ` [RFC][PATCH v20 1/6] locking: Add task::blocked_lock to serialize blocked_on state John Stultz
From: John Stultz @ 2025-07-22 7:05 UTC (permalink / raw)
To: LKML
Cc: John Stultz, Joel Fernandes, Qais Yousef, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Valentin Schneider, Steven Rostedt, Ben Segall, Zimuzo Ezeozue,
Mel Gorman, Will Deacon, Waiman Long, Boqun Feng,
Paul E. McKenney, Metin Kaya, Xuewen Yan, K Prateek Nayak,
Thomas Gleixner, Daniel Lezcano, Suleiman Souhlal, kuyo chang,
hupu, kernel-team
Hey All,
As Peter just queued the Single-RQ portion of the Proxy
Execution series, I wanted to start getting some initial review
feedback for the next chunk of the series: Donor Migration.
v20 is not very different from v19 of the whole series that
I’ve shared previously; I’ve only rebased it upon Peter’s
sched/core branch, dropping the already-queued changes, resolving
trivial conflicts, and making some small tweaks to drop
CONFIG_SMP conditionals that have been removed in the -tip tree,
along with a few minor cleanups.
I’m trying to submit this larger work in smallish digestible
pieces, so in this portion of the series, I’m only submitting
for review and consideration the logic that allows us to do
donor (blocked waiter) migration, allowing us to proxy-execute
lock owners that might be on other cpu runqueues. This requires
some additional changes to locking and extra state tracking to
ensure we don’t accidentally run a migrated donor on a cpu it
isn’t affined to, as well as some extra handling to deal with
balance callback state that needs to be reset when we decide to
pick a different task after doing donor migration.
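(As an aside for reviewers: below is a rough, standalone C model of
the donor-migration decision described above. The types and helpers
are hypothetical stand-ins, not kernel code; the real logic lives in
find_proxy_task() and proxy_migrate_task() in the patches below.)

/*
 * Rough standalone model of donor migration: walk the blocked_on
 * chain from the selected donor; if the runnable owner sits on a
 * different cpu, move the donor's scheduling context to that cpu
 * and pick again locally.
 */
#include <stdio.h>

struct task {
	const char *name;
	int cpu;                 /* cpu whose runqueue the task sits on */
	struct task *blocked_on; /* mutex owner this task waits behind  */
};

static struct task *pick_execution_ctx(struct task *donor, int this_cpu)
{
	struct task *p = donor;

	while (p->blocked_on)
		p = p->blocked_on;

	if (p->cpu != this_cpu) {
		printf("migrate donor %s: cpu%d -> cpu%d (owner %s)\n",
		       donor->name, this_cpu, p->cpu, p->name);
		donor->cpu = p->cpu;	/* donor migration */
		return NULL;		/* pick again on this cpu */
	}
	return p;			/* run the owner on the donor's behalf */
}

int main(void)
{
	struct task owner  = { "owner",  1, NULL };
	struct task waiter = { "waiter", 0, &owner };

	pick_execution_ctx(&waiter, 0);
	return 0;
}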
I’d love to get some initial feedback on any place where these
patches are confusing, or could use additional clarifications.
Also you can find the full proxy-exec series here:
https://github.com/johnstultz-work/linux-dev/commits/proxy-exec-v20-peterz-sched-core/
https://github.com/johnstultz-work/linux-dev.git proxy-exec-v20-peterz-sched-core
Issues still to address with the full series:
* There’s a new quirk from recent changes for dl_server that
is causing the ksched_football test in the full series to hang
at boot. I’ve bisected and reverted the change for now, but I
need to better understand what’s going wrong.
* I spent some more time thinking about Peter’s suggestion to
avoid using the blocked_on_state == BO_WAKING check to protect
against running proxy-migrated tasks on cpus out of their
affinity mask. His suggestion to just dequeue the task prior
to the wakeup in the unlock-wakeup path is more elegant, but
this would be insufficient to protect from other wakeup paths
that don’t dequeue. I’m still considering whether there is a
clean way around this, but I’ve not yet found one.
* Need to sort out what is needed for sched_ext to be ok with
proxy-execution enabled.
* K Prateek Nayak did some testing a bit over a year ago
with an earlier version of the series and saw ~3-5%
regressions in some cases. Need to re-evaluate this with the
proxy-migration avoidance optimization Suleiman suggested now
implemented.
* The chain migration functionality needs further iterations and
better validation to ensure it truly maintains the RT/DL load
balancing invariants (despite this being broken in vanilla
upstream with RT_PUSH_IPI currently).
I’d really appreciate any feedback or review thoughts on the
full series as well. I’m trying to keep the chunks small,
reviewable and iteratively testable, but if you have any
suggestions on how to improve the series, I’m all ears.
Credit/Disclaimer:
--------------------
As always, this Proxy Execution series has a long history with
lots of developers that deserve credit:
First described in a paper[1] by Watkins, Straub, and Niehaus,
then followed by patches from Peter Zijlstra, and extended with
lots of work by Juri Lelli, Valentin Schneider, and Connor
O'Brien. (And thank you to Steven Rostedt for providing
additional details here!)
So again, many thanks to those above, as all the credit for this
series really is due to them - while the mistakes are likely
mine.
Thanks so much!
-john
[1] https://static.lwn.net/images/conf/rtlws11/papers/proc/p38.pdf
Cc: Joel Fernandes <joelagnelf@nvidia.com>
Cc: Qais Yousef <qyousef@layalina.io>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Zimuzo Ezeozue <zezeozue@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Metin Kaya <Metin.Kaya@arm.com>
Cc: Xuewen Yan <xuewen.yan94@gmail.com>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: kuyo chang <kuyo.chang@mediatek.com>
Cc: hupu <hupu.gm@gmail.com>
Cc: kernel-team@android.com
John Stultz (5):
locking: Add task::blocked_lock to serialize blocked_on state
kernel/locking: Add blocked_on_state to provide necessary tri-state
for return migration
sched: Add logic to zap balance callbacks if we pick again
sched: Handle blocked-waiter migration (and return migration)
sched: Migrate whole chain in proxy_migrate_task()
Peter Zijlstra (1):
sched: Add blocked_donor link to task for smarter mutex handoffs
include/linux/sched.h | 107 ++++++++-----
init/init_task.c | 4 +
kernel/fork.c | 4 +
kernel/locking/mutex.c | 80 +++++++--
kernel/locking/ww_mutex.h | 17 +-
kernel/sched/core.c | 329 +++++++++++++++++++++++++++++++++++---
kernel/sched/fair.c | 3 +-
kernel/sched/sched.h | 2 +-
8 files changed, 459 insertions(+), 87 deletions(-)
--
2.50.0.727.gbf7dc18ff4-goog
* [RFC][PATCH v20 1/6] locking: Add task::blocked_lock to serialize blocked_on state
2025-07-22 7:05 [RFC][PATCH v20 0/6] Donor Migration for Proxy Execution (v20) John Stultz
@ 2025-07-22 7:05 ` John Stultz
2025-07-22 7:05 ` [RFC][PATCH v20 2/6] kernel/locking: Add blocked_on_state to provide necessary tri-state for return migration John Stultz
From: John Stultz @ 2025-07-22 7:05 UTC (permalink / raw)
To: LKML
Cc: John Stultz, Joel Fernandes, Qais Yousef, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Valentin Schneider, Steven Rostedt, Ben Segall, Zimuzo Ezeozue,
Mel Gorman, Will Deacon, Waiman Long, Boqun Feng,
Paul E. McKenney, Metin Kaya, Xuewen Yan, K Prateek Nayak,
Thomas Gleixner, Daniel Lezcano, Suleiman Souhlal, kuyo chang,
hupu, kernel-team
So far, we have been able to utilize the mutex::wait_lock
for serializing the blocked_on state, but when we move to
proxying across runqueues, we will need to add more state
and a way to serialize changes to this state in contexts
where we don't hold the mutex::wait_lock.
So introduce the task::blocked_lock, which nests under the
mutex::wait_lock in the locking order, and rework the locking
to use it.
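(For illustration only, here is a minimal standalone sketch of the
intended nesting order, with userspace pthread locks standing in for
the raw spinlocks; the names are hypothetical, and the real changes
are in the diff below.)

#include <pthread.h>
#include <stddef.h>

struct mutexish { pthread_mutex_t wait_lock; };
struct taskish  { pthread_mutex_t blocked_lock; struct mutexish *blocked_on; };

/* Writers of ->blocked_on take the mutex's wait_lock first, then the
 * task's blocked_lock, matching the nesting order described above. */
static void set_blocked_on(struct taskish *t, struct mutexish *m)
{
	pthread_mutex_lock(&m->wait_lock);	/* outer: mutex::wait_lock   */
	pthread_mutex_lock(&t->blocked_lock);	/* inner: task::blocked_lock */
	t->blocked_on = m;
	pthread_mutex_unlock(&t->blocked_lock);
	pthread_mutex_unlock(&m->wait_lock);
}

int main(void)
{
	struct mutexish m;
	struct taskish t = { .blocked_on = NULL };

	pthread_mutex_init(&m.wait_lock, NULL);
	pthread_mutex_init(&t.blocked_lock, NULL);
	set_blocked_on(&t, &m);
	return 0;
}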
Signed-off-by: John Stultz <jstultz@google.com>
---
v15:
* Split back out into later in the series
v16:
* Fixups to mark tasks unblocked before sleeping in
mutex_optimistic_spin()
* Rework to use guard() as suggested by Peter
v19:
* Rework logic for PREEMPT_RT issues reported by
K Prateek Nayak
Cc: Joel Fernandes <joelagnelf@nvidia.com>
Cc: Qais Yousef <qyousef@layalina.io>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Zimuzo Ezeozue <zezeozue@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Metin Kaya <Metin.Kaya@arm.com>
Cc: Xuewen Yan <xuewen.yan94@gmail.com>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: kuyo chang <kuyo.chang@mediatek.com>
Cc: hupu <hupu.gm@gmail.com>
Cc: kernel-team@android.com
---
include/linux/sched.h | 25 ++++++++++++++++++-------
init/init_task.c | 1 +
kernel/fork.c | 1 +
kernel/locking/mutex.c | 34 ++++++++++++++++++++++------------
kernel/locking/ww_mutex.h | 6 ++++--
kernel/sched/core.c | 4 +++-
6 files changed, 49 insertions(+), 22 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 5b4e1cd52e27a..a6654948d264f 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1232,6 +1232,7 @@ struct task_struct {
#endif
struct mutex *blocked_on; /* lock we're blocked on */
+ raw_spinlock_t blocked_lock;
#ifdef CONFIG_DETECT_HUNG_TASK_BLOCKER
/*
@@ -2145,8 +2146,8 @@ static inline void __set_task_blocked_on(struct task_struct *p, struct mutex *m)
WARN_ON_ONCE(!m);
/* The task should only be setting itself as blocked */
WARN_ON_ONCE(p != current);
- /* Currently we serialize blocked_on under the mutex::wait_lock */
- lockdep_assert_held_once(&m->wait_lock);
+ /* Currently we serialize blocked_on under the task::blocked_lock */
+ lockdep_assert_held_once(&p->blocked_lock);
/*
* Check ensure we don't overwrite existing mutex value
* with a different mutex. Note, setting it to the same
@@ -2158,15 +2159,14 @@ static inline void __set_task_blocked_on(struct task_struct *p, struct mutex *m)
static inline void set_task_blocked_on(struct task_struct *p, struct mutex *m)
{
- guard(raw_spinlock_irqsave)(&m->wait_lock);
+ guard(raw_spinlock_irqsave)(&p->blocked_lock);
__set_task_blocked_on(p, m);
}
static inline void __clear_task_blocked_on(struct task_struct *p, struct mutex *m)
{
- WARN_ON_ONCE(!m);
- /* Currently we serialize blocked_on under the mutex::wait_lock */
- lockdep_assert_held_once(&m->wait_lock);
+ /* Currently we serialize blocked_on under the task::blocked_lock */
+ lockdep_assert_held_once(&p->blocked_lock);
/*
* There may be cases where we re-clear already cleared
* blocked_on relationships, but make sure we are not
@@ -2178,8 +2178,15 @@ static inline void __clear_task_blocked_on(struct task_struct *p, struct mutex *
static inline void clear_task_blocked_on(struct task_struct *p, struct mutex *m)
{
- guard(raw_spinlock_irqsave)(&m->wait_lock);
+ guard(raw_spinlock_irqsave)(&p->blocked_lock);
+ __clear_task_blocked_on(p, m);
+}
+
+static inline void clear_task_blocked_on_nested(struct task_struct *p, struct mutex *m)
+{
+ raw_spin_lock_nested(&p->blocked_lock, SINGLE_DEPTH_NESTING);
__clear_task_blocked_on(p, m);
+ raw_spin_unlock(&p->blocked_lock);
}
#else
static inline void __clear_task_blocked_on(struct task_struct *p, struct rt_mutex *m)
@@ -2189,6 +2196,10 @@ static inline void __clear_task_blocked_on(struct task_struct *p, struct rt_mute
static inline void clear_task_blocked_on(struct task_struct *p, struct rt_mutex *m)
{
}
+
+static inline void clear_task_blocked_on_nested(struct task_struct *p, struct rt_mutex *m)
+{
+}
#endif /* !CONFIG_PREEMPT_RT */
static __always_inline bool need_resched(void)
diff --git a/init/init_task.c b/init/init_task.c
index e557f622bd906..7e29d86153d9f 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -140,6 +140,7 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
.journal_info = NULL,
INIT_CPU_TIMERS(init_task)
.pi_lock = __RAW_SPIN_LOCK_UNLOCKED(init_task.pi_lock),
+ .blocked_lock = __RAW_SPIN_LOCK_UNLOCKED(init_task.blocked_lock),
.timer_slack_ns = 50000, /* 50 usec default slack */
.thread_pid = &init_struct_pid,
.thread_node = LIST_HEAD_INIT(init_signals.thread_head),
diff --git a/kernel/fork.c b/kernel/fork.c
index 5f87f05aff4a0..6a294e6ee105d 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2025,6 +2025,7 @@ __latent_entropy struct task_struct *copy_process(
ftrace_graph_init_task(p);
rt_mutex_init_task(p);
+ raw_spin_lock_init(&p->blocked_lock);
lockdep_assert_irqs_enabled();
#ifdef CONFIG_PROVE_LOCKING
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 80d778fedd605..2ab6d291696e8 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -614,6 +614,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
}
raw_spin_lock_irqsave(&lock->wait_lock, flags);
+ raw_spin_lock(¤t->blocked_lock);
/*
* After waiting to acquire the wait_lock, try again.
*/
@@ -657,7 +658,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
* the handoff.
*/
if (__mutex_trylock(lock))
- goto acquired;
+ break;
/*
* Check for signals and kill conditions while holding
@@ -675,18 +676,21 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
goto err;
}
+ raw_spin_unlock(¤t->blocked_lock);
raw_spin_unlock_irqrestore_wake(&lock->wait_lock, flags, &wake_q);
schedule_preempt_disabled();
first = __mutex_waiter_is_first(lock, &waiter);
+ raw_spin_lock_irqsave(&lock->wait_lock, flags);
+ raw_spin_lock(¤t->blocked_lock);
/*
* As we likely have been woken up by task
* that has cleared our blocked_on state, re-set
* it to the lock we are trying to acquire.
*/
- set_task_blocked_on(current, lock);
+ __set_task_blocked_on(current, lock);
set_current_state(state);
/*
* Here we order against unlock; we must either see it change
@@ -697,23 +701,27 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
break;
if (first) {
- trace_contention_begin(lock, LCB_F_MUTEX | LCB_F_SPIN);
+ bool opt_acquired;
+
/*
* mutex_optimistic_spin() can call schedule(), so
- * clear blocked on so we don't become unselectable
+ * we need to release these locks before calling it,
+ * and clear blocked on so we don't become unselectable
* to run.
*/
- clear_task_blocked_on(current, lock);
- if (mutex_optimistic_spin(lock, ww_ctx, &waiter))
+ __clear_task_blocked_on(current, lock);
+ raw_spin_unlock(¤t->blocked_lock);
+ raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
+ trace_contention_begin(lock, LCB_F_MUTEX | LCB_F_SPIN);
+ opt_acquired = mutex_optimistic_spin(lock, ww_ctx, &waiter);
+ raw_spin_lock_irqsave(&lock->wait_lock, flags);
+ raw_spin_lock(¤t->blocked_lock);
+ __set_task_blocked_on(current, lock);
+ if (opt_acquired)
break;
- set_task_blocked_on(current, lock);
trace_contention_begin(lock, LCB_F_MUTEX);
}
-
- raw_spin_lock_irqsave(&lock->wait_lock, flags);
}
- raw_spin_lock_irqsave(&lock->wait_lock, flags);
-acquired:
__clear_task_blocked_on(current, lock);
__set_current_state(TASK_RUNNING);
@@ -739,6 +747,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
if (ww_ctx)
ww_mutex_lock_acquired(ww, ww_ctx);
+ raw_spin_unlock(¤t->blocked_lock);
raw_spin_unlock_irqrestore_wake(&lock->wait_lock, flags, &wake_q);
preempt_enable();
return 0;
@@ -750,6 +759,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
err_early_kill:
WARN_ON(__get_task_blocked_on(current));
trace_contention_end(lock, ret);
+ raw_spin_unlock(¤t->blocked_lock);
raw_spin_unlock_irqrestore_wake(&lock->wait_lock, flags, &wake_q);
debug_mutex_free_waiter(&waiter);
mutex_release(&lock->dep_map, ip);
@@ -959,7 +969,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
next = waiter->task;
debug_mutex_wake_waiter(lock, waiter);
- __clear_task_blocked_on(next, lock);
+ clear_task_blocked_on(next, lock);
wake_q_add(&wake_q, next);
}
diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index 086fd5487ca77..bf13039fb2a04 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -289,7 +289,8 @@ __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
* blocked_on pointer. Otherwise we can see circular
* blocked_on relationships that can't resolve.
*/
- __clear_task_blocked_on(waiter->task, lock);
+ /* nested as we should hold current->blocked_lock already */
+ clear_task_blocked_on_nested(waiter->task, lock);
wake_q_add(wake_q, waiter->task);
}
@@ -343,7 +344,8 @@ static bool __ww_mutex_wound(struct MUTEX *lock,
* blocked_on pointer. Otherwise we can see circular
* blocked_on relationships that can't resolve.
*/
- __clear_task_blocked_on(owner, lock);
+ /* nested as we should hold current->blocked_lock already */
+ clear_task_blocked_on_nested(owner, lock);
wake_q_add(wake_q, owner);
}
return true;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index f7f576ad9b223..52c0f16aab101 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6633,6 +6633,7 @@ static struct task_struct *proxy_deactivate(struct rq *rq, struct task_struct *d
* p->pi_lock
* rq->lock
* mutex->wait_lock
+ * p->blocked_lock
*
* Returns the task that is going to be used as execution context (the one
* that is actually going to be run on cpu_of(rq)).
@@ -6656,8 +6657,9 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
* and ensure @owner sticks around.
*/
guard(raw_spinlock)(&mutex->wait_lock);
+ guard(raw_spinlock)(&p->blocked_lock);
- /* Check again that p is blocked with wait_lock held */
+ /* Check again that p is blocked with blocked_lock held */
if (mutex != __get_task_blocked_on(p)) {
/*
* Something changed in the blocked_on chain and
--
2.50.0.727.gbf7dc18ff4-goog
* [RFC][PATCH v20 2/6] kernel/locking: Add blocked_on_state to provide necessary tri-state for return migration
2025-07-22 7:05 [RFC][PATCH v20 0/6] Donor Migration for Proxy Execution (v20) John Stultz
2025-07-22 7:05 ` [RFC][PATCH v20 1/6] locking: Add task::blocked_lock to serialize blocked_on state John Stultz
@ 2025-07-22 7:05 ` John Stultz
2025-07-22 7:05 ` [RFC][PATCH v20 3/6] sched: Add logic to zap balance callbacks if we pick again John Stultz
From: John Stultz @ 2025-07-22 7:05 UTC (permalink / raw)
To: LKML
Cc: John Stultz, Joel Fernandes, Qais Yousef, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Valentin Schneider, Steven Rostedt, Ben Segall, Zimuzo Ezeozue,
Mel Gorman, Will Deacon, Waiman Long, Boqun Feng,
Paul E. McKenney, Metin Kaya, Xuewen Yan, K Prateek Nayak,
Thomas Gleixner, Daniel Lezcano, Suleiman Souhlal, kuyo chang,
hupu, kernel-team
As we add functionality to proxy execution, we may migrate a
donor task to a runqueue where it can't run due to cpu affinity.
Thus, we must be careful to ensure we return-migrate the task
back to a cpu in its cpumask when it becomes unblocked. This
means we need more than just a binary concept of the task being
blocked on a mutex or not.
So add a blocked_on_state value to the task that allows it to
move through BO_RUNNABLE -> BO_BLOCKED -> BO_WAKING and back
to BO_RUNNABLE. This provides a guard state in BO_WAKING so we
know the task is no longer blocked, but we don't want to run it
until any needed return migration back to a usable cpu has been
done.
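(For illustration only, a tiny standalone C model of the tri-state
transitions; the helper names here are hypothetical, while the real
ones are added to include/linux/sched.h below.)

#include <assert.h>

enum bo_state { BO_RUNNABLE, BO_BLOCKED, BO_WAKING };

/* unlock/ww-die path: only a blocked task moves to the waking guard state */
static enum bo_state bo_set_waking(enum bo_state s)
{
	return (s == BO_BLOCKED) ? BO_WAKING : s;
}

/* wakeup path: only a waking task, after any return migration, runs again */
static enum bo_state bo_set_runnable(enum bo_state s)
{
	return (s == BO_WAKING) ? BO_RUNNABLE : s;
}

int main(void)
{
	enum bo_state s = BO_BLOCKED;

	s = bo_set_waking(s);		/* mutex unlock marks the waiter as waking */
	assert(s == BO_WAKING);
	s = bo_set_runnable(s);		/* ttwu sets it runnable once it may run here */
	assert(s == BO_RUNNABLE);
	return 0;
}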
Signed-off-by: John Stultz <jstultz@google.com>
---
v15:
* Split blocked_on_state into its own patch later in the
series, as the tri-state isn't necessary until we deal
with proxy/return migrations
v16:
* Handle case where task in the chain is being set as
BO_WAKING by another cpu (usually via ww_mutex die code).
Make sure we release the rq lock so the wakeup can
complete.
* Rework to use guard() in find_proxy_task() as suggested
by Peter
v18:
* Add initialization of blocked_on_state for init_task
v19:
* PREEMPT_RT build fixups and rework suggested by
K Prateek Nayak
v20:
* Simplify one of the blocked_on_state changes to avoid extra
PREEMPT_RT conditionals
Cc: Joel Fernandes <joelagnelf@nvidia.com>
Cc: Qais Yousef <qyousef@layalina.io>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Zimuzo Ezeozue <zezeozue@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Metin Kaya <Metin.Kaya@arm.com>
Cc: Xuewen Yan <xuewen.yan94@gmail.com>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: kuyo chang <kuyo.chang@mediatek.com>
Cc: hupu <hupu.gm@gmail.com>
Cc: kernel-team@android.com
---
include/linux/sched.h | 100 ++++++++++++++++++++++----------------
init/init_task.c | 1 +
kernel/fork.c | 1 +
kernel/locking/mutex.c | 15 +++---
kernel/locking/ww_mutex.h | 17 +++----
kernel/sched/core.c | 26 +++++++++-
kernel/sched/sched.h | 2 +-
7 files changed, 100 insertions(+), 62 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index a6654948d264f..ced001f889519 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -812,6 +812,12 @@ struct kmap_ctrl {
#endif
};
+enum blocked_on_state {
+ BO_RUNNABLE,
+ BO_BLOCKED,
+ BO_WAKING,
+};
+
struct task_struct {
#ifdef CONFIG_THREAD_INFO_IN_TASK
/*
@@ -1231,6 +1237,7 @@ struct task_struct {
struct rt_mutex_waiter *pi_blocked_on;
#endif
+ enum blocked_on_state blocked_on_state;
struct mutex *blocked_on; /* lock we're blocked on */
raw_spinlock_t blocked_lock;
@@ -2131,76 +2138,83 @@ extern int __cond_resched_rwlock_write(rwlock_t *lock);
__cond_resched_rwlock_write(lock); \
})
-#ifndef CONFIG_PREEMPT_RT
-static inline struct mutex *__get_task_blocked_on(struct task_struct *p)
+static inline void __force_blocked_on_runnable(struct task_struct *p)
{
- struct mutex *m = p->blocked_on;
+ lockdep_assert_held(&p->blocked_lock);
+ p->blocked_on_state = BO_RUNNABLE;
+}
- if (m)
- lockdep_assert_held_once(&m->wait_lock);
- return m;
+static inline void force_blocked_on_runnable(struct task_struct *p)
+{
+ guard(raw_spinlock_irqsave)(&p->blocked_lock);
+ __force_blocked_on_runnable(p);
}
-static inline void __set_task_blocked_on(struct task_struct *p, struct mutex *m)
+static inline void __set_blocked_on_runnable(struct task_struct *p)
{
- WARN_ON_ONCE(!m);
- /* The task should only be setting itself as blocked */
- WARN_ON_ONCE(p != current);
- /* Currently we serialize blocked_on under the task::blocked_lock */
- lockdep_assert_held_once(&p->blocked_lock);
- /*
- * Check ensure we don't overwrite existing mutex value
- * with a different mutex. Note, setting it to the same
- * lock repeatedly is ok.
- */
- WARN_ON_ONCE(p->blocked_on && p->blocked_on != m);
- p->blocked_on = m;
+ lockdep_assert_held(&p->blocked_lock);
+
+ if (p->blocked_on_state == BO_WAKING)
+ p->blocked_on_state = BO_RUNNABLE;
}
-static inline void set_task_blocked_on(struct task_struct *p, struct mutex *m)
+static inline void set_blocked_on_runnable(struct task_struct *p)
{
+ if (!sched_proxy_exec())
+ return;
+
guard(raw_spinlock_irqsave)(&p->blocked_lock);
- __set_task_blocked_on(p, m);
+ __set_blocked_on_runnable(p);
}
-static inline void __clear_task_blocked_on(struct task_struct *p, struct mutex *m)
+static inline void __set_blocked_on_waking(struct task_struct *p)
{
- /* Currently we serialize blocked_on under the task::blocked_lock */
- lockdep_assert_held_once(&p->blocked_lock);
- /*
- * There may be cases where we re-clear already cleared
- * blocked_on relationships, but make sure we are not
- * clearing the relationship with a different lock.
- */
- WARN_ON_ONCE(m && p->blocked_on && p->blocked_on != m);
- p->blocked_on = NULL;
+ lockdep_assert_held(&p->blocked_lock);
+
+ if (p->blocked_on_state == BO_BLOCKED)
+ p->blocked_on_state = BO_WAKING;
}
-static inline void clear_task_blocked_on(struct task_struct *p, struct mutex *m)
+static inline struct mutex *__get_task_blocked_on(struct task_struct *p)
{
- guard(raw_spinlock_irqsave)(&p->blocked_lock);
- __clear_task_blocked_on(p, m);
+ lockdep_assert_held_once(&p->blocked_lock);
+ return p->blocked_on;
}
-static inline void clear_task_blocked_on_nested(struct task_struct *p, struct mutex *m)
+static inline void set_blocked_on_waking_nested(struct task_struct *p)
{
raw_spin_lock_nested(&p->blocked_lock, SINGLE_DEPTH_NESTING);
- __clear_task_blocked_on(p, m);
+ __set_blocked_on_waking(p);
raw_spin_unlock(&p->blocked_lock);
}
-#else
-static inline void __clear_task_blocked_on(struct task_struct *p, struct rt_mutex *m)
-{
-}
-static inline void clear_task_blocked_on(struct task_struct *p, struct rt_mutex *m)
+static inline void __set_task_blocked_on(struct task_struct *p, struct mutex *m)
{
+ WARN_ON_ONCE(!m);
+ /* The task should only be setting itself as blocked */
+ WARN_ON_ONCE(p != current);
+ /* Currently we serialize blocked_on under the task::blocked_lock */
+ lockdep_assert_held_once(&p->blocked_lock);
+ /*
+ * Check ensure we don't overwrite existing mutex value
+ * with a different mutex.
+ */
+ WARN_ON_ONCE(p->blocked_on);
+ p->blocked_on = m;
+ p->blocked_on_state = BO_BLOCKED;
}
-static inline void clear_task_blocked_on_nested(struct task_struct *p, struct rt_mutex *m)
+static inline void __clear_task_blocked_on(struct task_struct *p, struct mutex *m)
{
+ /* The task should only be clearing itself */
+ WARN_ON_ONCE(p != current);
+ /* Currently we serialize blocked_on under the task::blocked_lock */
+ lockdep_assert_held_once(&p->blocked_lock);
+ /* Make sure we are clearing the relationship with the right lock */
+ WARN_ON_ONCE(p->blocked_on != m);
+ p->blocked_on = NULL;
+ p->blocked_on_state = BO_RUNNABLE;
}
-#endif /* !CONFIG_PREEMPT_RT */
static __always_inline bool need_resched(void)
{
diff --git a/init/init_task.c b/init/init_task.c
index 7e29d86153d9f..6d72ec23410a6 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -174,6 +174,7 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
.mems_allowed_seq = SEQCNT_SPINLOCK_ZERO(init_task.mems_allowed_seq,
&init_task.alloc_lock),
#endif
+ .blocked_on_state = BO_RUNNABLE,
#ifdef CONFIG_RT_MUTEXES
.pi_waiters = RB_ROOT_CACHED,
.pi_top_task = NULL,
diff --git a/kernel/fork.c b/kernel/fork.c
index 6a294e6ee105d..5eacb25a0c5ab 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2124,6 +2124,7 @@ __latent_entropy struct task_struct *copy_process(
lockdep_init_task(p);
#endif
+ p->blocked_on_state = BO_RUNNABLE;
p->blocked_on = NULL; /* not blocked yet */
#ifdef CONFIG_BCACHE
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 2ab6d291696e8..b5145ddaec242 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -686,11 +686,9 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
raw_spin_lock_irqsave(&lock->wait_lock, flags);
raw_spin_lock(¤t->blocked_lock);
/*
- * As we likely have been woken up by task
- * that has cleared our blocked_on state, re-set
- * it to the lock we are trying to acquire.
+ * Re-set blocked_on_state as unlock path set it to WAKING/RUNNABLE
*/
- __set_task_blocked_on(current, lock);
+ current->blocked_on_state = BO_BLOCKED;
set_current_state(state);
/*
* Here we order against unlock; we must either see it change
@@ -709,14 +707,14 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
* and clear blocked on so we don't become unselectable
* to run.
*/
- __clear_task_blocked_on(current, lock);
+ current->blocked_on_state = BO_RUNNABLE;
raw_spin_unlock(¤t->blocked_lock);
raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
trace_contention_begin(lock, LCB_F_MUTEX | LCB_F_SPIN);
opt_acquired = mutex_optimistic_spin(lock, ww_ctx, &waiter);
raw_spin_lock_irqsave(&lock->wait_lock, flags);
raw_spin_lock(¤t->blocked_lock);
- __set_task_blocked_on(current, lock);
+ current->blocked_on_state = BO_BLOCKED;
if (opt_acquired)
break;
trace_contention_begin(lock, LCB_F_MUTEX);
@@ -968,8 +966,11 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
next = waiter->task;
+ raw_spin_lock(&next->blocked_lock);
debug_mutex_wake_waiter(lock, waiter);
- clear_task_blocked_on(next, lock);
+ WARN_ON_ONCE(__get_task_blocked_on(next) != lock);
+ __set_blocked_on_waking(next);
+ raw_spin_unlock(&next->blocked_lock);
wake_q_add(&wake_q, next);
}
diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index bf13039fb2a04..44eceffd79b35 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -285,12 +285,12 @@ __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
debug_mutex_wake_waiter(lock, waiter);
#endif
/*
- * When waking up the task to die, be sure to clear the
- * blocked_on pointer. Otherwise we can see circular
- * blocked_on relationships that can't resolve.
+ * When waking up the task to die, be sure to set the
+ * blocked_on_state to BO_WAKING. Otherwise we can see
+ * circular blocked_on relationships that can't resolve.
*/
/* nested as we should hold current->blocked_lock already */
- clear_task_blocked_on_nested(waiter->task, lock);
+ set_blocked_on_waking_nested(waiter->task);
wake_q_add(wake_q, waiter->task);
}
@@ -340,12 +340,11 @@ static bool __ww_mutex_wound(struct MUTEX *lock,
*/
if (owner != current) {
/*
- * When waking up the task to wound, be sure to clear the
- * blocked_on pointer. Otherwise we can see circular
- * blocked_on relationships that can't resolve.
+ * When waking up the task to wound, be sure to set the
+ * blocked_on_state to BO_WAKING. Otherwise we can see
+ * circular blocked_on relationships that can't resolve.
*/
- /* nested as we should hold current->blocked_lock already */
- clear_task_blocked_on_nested(owner, lock);
+ set_blocked_on_waking_nested(owner);
wake_q_add(wake_q, owner);
}
return true;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 52c0f16aab101..7ae5f2d257eb5 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4322,6 +4322,7 @@ int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
ttwu_queue(p, cpu, wake_flags);
}
out:
+ set_blocked_on_runnable(p);
if (success)
ttwu_stat(p, task_cpu(p), wake_flags);
@@ -6617,7 +6618,7 @@ static struct task_struct *proxy_deactivate(struct rq *rq, struct task_struct *d
* as unblocked, as we aren't doing proxy-migrations
* yet (more logic will be needed then).
*/
- donor->blocked_on = NULL;
+ donor->blocked_on_state = BO_RUNNABLE;
}
return NULL;
}
@@ -6670,9 +6671,30 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
return NULL;
}
+ /*
+ * If a ww_mutex hits the die/wound case, it marks the task as
+ * BO_WAKING and calls try_to_wake_up(), so that the mutex
+ * cycle can be broken and we avoid a deadlock.
+ *
+ * However, if at that moment, we are here on the cpu which the
+ * die/wounded task is enqueued, we might loop on the cycle as
+ * BO_WAKING still causes task_is_blocked() to return true
+ * (since we want return migration to occur before we run the
+ * task).
+ *
+ * Unfortunately since we hold the rq lock, it will block
+ * try_to_wake_up from completing and doing the return
+ * migration.
+ *
+ * So when we hit a !BO_BLOCKED task briefly schedule idle
+ * so we release the rq and let the wakeup complete.
+ */
+ if (p->blocked_on_state != BO_BLOCKED)
+ return proxy_resched_idle(rq);
+
owner = __mutex_owner(mutex);
if (!owner) {
- __clear_task_blocked_on(p, mutex);
+ __force_blocked_on_runnable(p);
return p;
}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index d3f33d10c58c9..d27e8a260e89d 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2267,7 +2267,7 @@ static inline bool task_is_blocked(struct task_struct *p)
if (!sched_proxy_exec())
return false;
- return !!p->blocked_on;
+ return !!p->blocked_on && p->blocked_on_state != BO_RUNNABLE;
}
static inline int task_on_cpu(struct rq *rq, struct task_struct *p)
--
2.50.0.727.gbf7dc18ff4-goog
* [RFC][PATCH v20 3/6] sched: Add logic to zap balance callbacks if we pick again
2025-07-22 7:05 [RFC][PATCH v20 0/6] Donor Migration for Proxy Execution (v20) John Stultz
2025-07-22 7:05 ` [RFC][PATCH v20 1/6] locking: Add task::blocked_lock to serialize blocked_on state John Stultz
2025-07-22 7:05 ` [RFC][PATCH v20 2/6] kernel/locking: Add blocked_on_state to provide necessary tri-state for return migration John Stultz
@ 2025-07-22 7:05 ` John Stultz
2025-07-22 7:05 ` [RFC][PATCH v20 4/6] sched: Handle blocked-waiter migration (and return migration) John Stultz
From: John Stultz @ 2025-07-22 7:05 UTC (permalink / raw)
To: LKML
Cc: John Stultz, Joel Fernandes, Qais Yousef, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Valentin Schneider, Steven Rostedt, Ben Segall, Zimuzo Ezeozue,
Mel Gorman, Will Deacon, Waiman Long, Boqun Feng,
Paul E. McKenney, Metin Kaya, Xuewen Yan, K Prateek Nayak,
Thomas Gleixner, Daniel Lezcano, Suleiman Souhlal, kuyo chang,
hupu, kernel-team
With proxy-exec, a task is selected to run via pick_next_task(),
and then, if it is a mutex-blocked task, we call find_proxy_task()
to find a runnable owner. If the runnable owner is on another
cpu, we will need to migrate the selected donor task away, after
which we go back to pick_again and call pick_next_task() to
choose something else.
However, in the first call to pick_next_task(), we may have
had a balance_callback set up by the scheduling class. After we
pick again, it's possible pick_next_task_fair() will be called,
which calls sched_balance_newidle() and sched_balance_rq().
This will throw a warning:
[ 8.796467] rq->balance_callback && rq->balance_callback != &balance_push_callback
[ 8.796467] WARNING: CPU: 32 PID: 458 at kernel/sched/sched.h:1750 sched_balance_rq+0xe92/0x1250
...
[ 8.796467] Call Trace:
[ 8.796467] <TASK>
[ 8.796467] ? __warn.cold+0xb2/0x14e
[ 8.796467] ? sched_balance_rq+0xe92/0x1250
[ 8.796467] ? report_bug+0x107/0x1a0
[ 8.796467] ? handle_bug+0x54/0x90
[ 8.796467] ? exc_invalid_op+0x17/0x70
[ 8.796467] ? asm_exc_invalid_op+0x1a/0x20
[ 8.796467] ? sched_balance_rq+0xe92/0x1250
[ 8.796467] sched_balance_newidle+0x295/0x820
[ 8.796467] pick_next_task_fair+0x51/0x3f0
[ 8.796467] __schedule+0x23a/0x14b0
[ 8.796467] ? lock_release+0x16d/0x2e0
[ 8.796467] schedule+0x3d/0x150
[ 8.796467] worker_thread+0xb5/0x350
[ 8.796467] ? __pfx_worker_thread+0x10/0x10
[ 8.796467] kthread+0xee/0x120
[ 8.796467] ? __pfx_kthread+0x10/0x10
[ 8.796467] ret_from_fork+0x31/0x50
[ 8.796467] ? __pfx_kthread+0x10/0x10
[ 8.796467] ret_from_fork_asm+0x1a/0x30
[ 8.796467] </TASK>
This is because if an RT task was originally picked, it will
have set up rq->balance_callback with push_rt_tasks() via
set_next_task_rt().
Once the task is migrated away and we pick again, we haven't
processed any balance callbacks, so rq->balance_callback is not
in the same state as it was the first time pick_next_task was
called.
To handle this, add a zap_balance_callbacks() helper function
which cleans up the balance callbacks without running them. This
should be ok, as we are effectively undoing the state set in
the first call to pick_next_task(), and when we pick again,
the new callback can be configured for the donor task actually
selected.
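(For readers skimming the prose, a small standalone model of the
"zap but keep the push sentinel" handling; the types are hypothetical
stand-ins, and the kernel helper is zap_balance_callbacks() in the
diff below.)

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct cb { struct cb *next; };
static struct cb sentinel;	/* stands in for balance_push_callback */

/* Detach every queued callback without invoking it, preserving only
 * the special push sentinel if it was queued. */
static struct cb *zap_callbacks(struct cb *head)
{
	bool found = false;

	while (head) {
		struct cb *next = head->next;

		if (head == &sentinel)
			found = true;
		head->next = NULL;
		head = next;
	}
	return found ? &sentinel : NULL;
}

int main(void)
{
	struct cb a = { .next = &sentinel };

	printf("remaining: %p\n", (void *)zap_callbacks(&a));
	return 0;
}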
Signed-off-by: John Stultz <jstultz@google.com>
---
v20:
* Tweaked to avoid build issues with different configs
Cc: Joel Fernandes <joelagnelf@nvidia.com>
Cc: Qais Yousef <qyousef@layalina.io>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Zimuzo Ezeozue <zezeozue@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Metin Kaya <Metin.Kaya@arm.com>
Cc: Xuewen Yan <xuewen.yan94@gmail.com>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: kuyo chang <kuyo.chang@mediatek.com>
Cc: hupu <hupu.gm@gmail.com>
Cc: kernel-team@android.com
---
kernel/sched/core.c | 39 ++++++++++++++++++++++++++++++++++++++-
1 file changed, 38 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7ae5f2d257eb5..30e676c2d582b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4990,6 +4990,40 @@ static inline void finish_task(struct task_struct *prev)
smp_store_release(&prev->on_cpu, 0);
}
+#ifdef CONFIG_SCHED_PROXY_EXEC
+/*
+ * Only called from __schedule context
+ *
+ * There are some cases where we are going to re-do the action
+ * that added the balance callbacks. We may not be in a state
+ * where we can run them, so just zap them so they can be
+ * properly re-added on the next time around. This is similar
+ * handling to running the callbacks, except we just don't call
+ * them.
+ */
+static void zap_balance_callbacks(struct rq *rq)
+{
+ struct balance_callback *next, *head;
+ bool found = false;
+
+ lockdep_assert_rq_held(rq);
+
+ head = rq->balance_callback;
+ while (head) {
+ if (head == &balance_push_callback)
+ found = true;
+ next = head->next;
+ head->next = NULL;
+ head = next;
+ }
+ rq->balance_callback = found ? &balance_push_callback : NULL;
+}
+#else
+static inline void zap_balance_callbacks(struct rq *rq)
+{
+}
+#endif
+
static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
{
void (*func)(struct rq *rq);
@@ -6920,8 +6954,11 @@ static void __sched notrace __schedule(int sched_mode)
rq_set_donor(rq, next);
if (unlikely(task_is_blocked(next))) {
next = find_proxy_task(rq, next, &rf);
- if (!next)
+ if (!next) {
+ /* zap the balance_callbacks before picking again */
+ zap_balance_callbacks(rq);
goto pick_again;
+ }
if (next == rq->idle)
goto keep_resched;
}
--
2.50.0.727.gbf7dc18ff4-goog
* [RFC][PATCH v20 4/6] sched: Handle blocked-waiter migration (and return migration)
2025-07-22 7:05 [RFC][PATCH v20 0/6] Donor Migration for Proxy Execution (v20) John Stultz
` (2 preceding siblings ...)
2025-07-22 7:05 ` [RFC][PATCH v20 3/6] sched: Add logic to zap balance callbacks if we pick again John Stultz
@ 2025-07-22 7:05 ` John Stultz
2025-07-22 7:05 ` [RFC][PATCH v20 5/6] sched: Add blocked_donor link to task for smarter mutex handoffs John Stultz
From: John Stultz @ 2025-07-22 7:05 UTC (permalink / raw)
To: LKML
Cc: John Stultz, Joel Fernandes, Qais Yousef, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Valentin Schneider, Steven Rostedt, Ben Segall, Zimuzo Ezeozue,
Mel Gorman, Will Deacon, Waiman Long, Boqun Feng,
Paul E. McKenney, Metin Kaya, Xuewen Yan, K Prateek Nayak,
Thomas Gleixner, Daniel Lezcano, Suleiman Souhlal, kuyo chang,
hupu, kernel-team
Add logic to handle migrating a blocked waiter to a remote
cpu where the lock owner is runnable.
Additionally, as the blocked task may not be able to run
on the remote cpu, add logic to handle return migration once
the waiting task is given the mutex.
Because tasks may get migrated to where they cannot run, also
modify the scheduling classes to avoid sched class migrations on
mutex blocked tasks, leaving find_proxy_task() and related logic
to do the migrations and return migrations.
This was split out from the larger proxy patch, and
significantly reworked.
Credits for the original patch go to:
Peter Zijlstra (Intel) <peterz@infradead.org>
Juri Lelli <juri.lelli@redhat.com>
Valentin Schneider <valentin.schneider@arm.com>
Connor O'Brien <connoro@google.com>
NOTE: With this patch I've hit a few cases where we seem to miss
a BO_WAKING->BO_RUNNABLE transition (and return migration) that
I'd expect to happen in ttwu(). So I have logic in
find_proxy_task() to detect this and handle the return migration
later. However, I'm not quite happy with that, as it shouldn't
be necessary, and I'm still trying to understand where I'm
losing the wakeup & return migration.
Signed-off-by: John Stultz <jstultz@google.com>
---
v6:
* Integrated sched_proxy_exec() check in proxy_return_migration()
* Minor cleanups to diff
* Unpin the rq before calling __balance_callbacks()
* Tweak proxy migrate to migrate deeper task in chain, to avoid
tasks pingponging between rqs
v7:
* Fixup for unused function arguments
* Switch from that_rq -> target_rq, other minor tweaks, and typo
fixes suggested by Metin Kaya
* Switch back to doing return migration in the ttwu path, which
avoids nasty lock juggling and performance issues
* Fixes for UP builds
v8:
* More simplifications from Metin Kaya
* Fixes for null owner case, including doing return migration
* Cleanup proxy_needs_return logic
v9:
* Narrow logic in ttwu that sets BO_RUNNABLE, to avoid missed
return migrations
* Switch to using zap_balance_callbacks rather than running
them when we are dropping rq locks for proxy_migration.
* Drop task_is_blocked check in sched_submit_work as suggested
by Metin (may re-add later if this causes trouble)
* Do return migration when we're not on wake_cpu. This avoids
bad task placement caused by proxy migrations raised by
Xuewen Yan
* Fix to call set_next_task(rq->curr) prior to dropping rq lock
to avoid rq->curr getting migrated before we have actually
switched from it
* Cleanup to re-use proxy_resched_idle() instead of open coding
it in proxy_migrate_task()
* Fix return migration not to use DEQUEUE_SLEEP, so that we
properly see the task as task_on_rq_migrating() after it is
dequeued but before set_task_cpu() has been called on it
* Fix to broaden find_proxy_task() checks to avoid race where
a task is dequeued off the rq due to return migration, but
set_task_cpu() and the enqueue on another rq happened after
we checked task_cpu(owner). This ensures we don't proxy
using a task that is not actually on our runqueue.
* Cleanup to avoid the locked BO_WAKING->BO_RUNNABLE transition
in try_to_wake_up() if proxy execution isn't enabled.
* Cleanup to improve comment in proxy_migrate_task() explaining
the set_next_task(rq->curr) logic
* Cleanup deadline.c change to stylistically match rt.c change
* Numerous cleanups suggested by Metin
v10:
* Drop WARN_ON(task_is_blocked(p)) in ttwu current case
v11:
* Include proxy_set_task_cpu from later in the series to this
change so we can use it, rather than reworking logic later
in the series.
* Fix problem with return migration, where affinity was changed
and wake_cpu was left outside the affinity mask.
* Avoid reading the owner's cpu twice (as it might change in between)
to avoid occasional migration-to-same-cpu edge cases
* Add extra WARN_ON checks for wake_cpu and return migration
edge cases.
* Typo fix from Metin
v13:
* As we set ret, return it, not just NULL (pulling this change
in from later patch)
* Avoid deadlock between try_to_wake_up() and find_proxy_task() when
blocked_on cycle with ww_mutex is trying a mid-chain wakeup.
* Tweaks to use new __set_blocked_on_runnable() helper
* Potential fix for incorrectly updated task->dl_server issues
* Minor comment improvements
* Add logic to handle missed wakeups, in that case doing return
migration from the find_proxy_task() path
* Minor cleanups
v14:
* Improve edge cases where we wouldn't set the task as BO_RUNNABLE
v15:
* Added comment to better describe proxy_needs_return() as suggested
by Qais
* Build fixes for !CONFIG_SMP reported by
Maciej Żenczykowski <maze@google.com>
* Adds fix for re-evaluating proxy_needs_return when
sched_proxy_exec() is disabled, reported and diagnosed by:
kuyo chang <kuyo.chang@mediatek.com>
v16:
* Larger rework of needs_return logic in find_proxy_task, in
order to avoid problems with cpuhotplug
* Rework to use guard() as suggested by Peter
v18:
* Integrate optimization suggested by Suleiman to do the checks
for sleeping owners before checking if the task_cpu is this_cpu,
so that we can avoid needlessly proxy-migrating tasks to only
then dequeue them. Also check if migrating last.
* Improve comments around guard locking
* Include tweak to ttwu_runnable() as suggested by
hupu <hupu.gm@gmail.com>
* Rework the logic releasing the rq->donor reference before letting
go of the rqlock. Just use rq->idle.
* Go back to doing return migration on BO_WAKING owners, as I was
hitting some softlockups caused by running tasks not making
it out of BO_WAKING.
v19:
* Fixed proxy_force_return() logic for !SMP cases
Cc: Joel Fernandes <joelagnelf@nvidia.com>
Cc: Qais Yousef <qyousef@layalina.io>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Zimuzo Ezeozue <zezeozue@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Metin Kaya <Metin.Kaya@arm.com>
Cc: Xuewen Yan <xuewen.yan94@gmail.com>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: kuyo chang <kuyo.chang@mediatek.com>
Cc: hupu <hupu.gm@gmail.com>
Cc: kernel-team@android.com
fix for proxy migration logic on !SMP
---
kernel/sched/core.c | 251 ++++++++++++++++++++++++++++++++++++++++----
kernel/sched/fair.c | 3 +-
2 files changed, 230 insertions(+), 24 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 30e676c2d582b..1c249d1d62f5a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3151,6 +3151,14 @@ static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
__do_set_cpus_allowed(p, ctx);
+ /*
+ * It might be that the p->wake_cpu is no longer
+ * allowed, so set it to the dest_cpu so return
+ * migration doesn't send it to an invalid cpu
+ */
+ if (!is_cpu_allowed(p, p->wake_cpu))
+ p->wake_cpu = dest_cpu;
+
return affine_move_task(rq, p, rf, dest_cpu, ctx->flags);
out:
@@ -3711,6 +3719,67 @@ static inline void ttwu_do_wakeup(struct task_struct *p)
trace_sched_wakeup(p);
}
+#ifdef CONFIG_SCHED_PROXY_EXEC
+static inline void proxy_set_task_cpu(struct task_struct *p, int cpu)
+{
+ unsigned int wake_cpu;
+
+ /*
+ * Since we are enqueuing a blocked task on a cpu it may
+ * not be able to run on, preserve wake_cpu when we
+ * __set_task_cpu so we can return the task to where it
+ * was previously runnable.
+ */
+ wake_cpu = p->wake_cpu;
+ __set_task_cpu(p, cpu);
+ p->wake_cpu = wake_cpu;
+}
+
+static bool proxy_task_runnable_but_waking(struct task_struct *p)
+{
+ if (!sched_proxy_exec())
+ return false;
+ return (READ_ONCE(p->__state) == TASK_RUNNING &&
+ READ_ONCE(p->blocked_on_state) == BO_WAKING);
+}
+#else /* !CONFIG_SCHED_PROXY_EXEC */
+static bool proxy_task_runnable_but_waking(struct task_struct *p)
+{
+ return false;
+}
+#endif /* CONFIG_SCHED_PROXY_EXEC */
+
+/*
+ * Checks to see if task p has been proxy-migrated to another rq
+ * and needs to be returned. If so, we deactivate the task here
+ * so that it can be properly woken up on the p->wake_cpu
+ * (or whichever cpu select_task_rq() picks at the bottom of
+ * try_to_wake_up())
+ */
+static inline bool proxy_needs_return(struct rq *rq, struct task_struct *p)
+{
+ bool ret = false;
+
+ if (!sched_proxy_exec())
+ return false;
+
+ raw_spin_lock(&p->blocked_lock);
+ if (__get_task_blocked_on(p) && p->blocked_on_state == BO_WAKING) {
+ if (!task_current(rq, p) && (p->wake_cpu != cpu_of(rq))) {
+ if (task_current_donor(rq, p)) {
+ put_prev_task(rq, p);
+ rq_set_donor(rq, rq->idle);
+ }
+ deactivate_task(rq, p, DEQUEUE_NOCLOCK);
+ ret = true;
+ }
+ __set_blocked_on_runnable(p);
+ resched_curr(rq);
+ }
+ raw_spin_unlock(&p->blocked_lock);
+ return ret;
+}
+
static void
ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags,
struct rq_flags *rf)
@@ -3796,6 +3865,8 @@ static int ttwu_runnable(struct task_struct *p, int wake_flags)
update_rq_clock(rq);
if (p->se.sched_delayed)
enqueue_task(rq, p, ENQUEUE_NOCLOCK | ENQUEUE_DELAYED);
+ if (proxy_needs_return(rq, p))
+ goto out;
if (!task_on_cpu(rq, p)) {
/*
* When on_rq && !on_cpu the task is preempted, see if
@@ -3806,6 +3877,7 @@ static int ttwu_runnable(struct task_struct *p, int wake_flags)
ttwu_do_wakeup(p);
ret = 1;
}
+out:
__task_rq_unlock(rq, &rf);
return ret;
@@ -4193,6 +4265,8 @@ int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
* it disabling IRQs (this allows not taking ->pi_lock).
*/
WARN_ON_ONCE(p->se.sched_delayed);
+ /* If current is waking up, we know we can run here, so set BO_RUNNABLE */
+ set_blocked_on_runnable(p);
if (!ttwu_state_match(p, state, &success))
goto out;
@@ -4209,8 +4283,15 @@ int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
*/
scoped_guard (raw_spinlock_irqsave, &p->pi_lock) {
smp_mb__after_spinlock();
- if (!ttwu_state_match(p, state, &success))
- break;
+ if (!ttwu_state_match(p, state, &success)) {
+ /*
+ * If we're already TASK_RUNNING, and BO_WAKING
+ * continue on to ttwu_runnable check to force
+ * proxy_needs_return evaluation
+ */
+ if (!proxy_task_runnable_but_waking(p))
+ break;
+ }
trace_sched_waking(p);
@@ -4272,6 +4353,7 @@ int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
* enqueue, such as ttwu_queue_wakelist().
*/
WRITE_ONCE(p->__state, TASK_WAKING);
+ set_blocked_on_runnable(p);
/*
* If the owning (remote) CPU is still in the middle of schedule() with
@@ -4322,7 +4404,6 @@ int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
ttwu_queue(p, cpu, wake_flags);
}
out:
- set_blocked_on_runnable(p);
if (success)
ttwu_stat(p, task_cpu(p), wake_flags);
@@ -6624,7 +6705,7 @@ static inline struct task_struct *proxy_resched_idle(struct rq *rq)
return rq->idle;
}
-static bool __proxy_deactivate(struct rq *rq, struct task_struct *donor)
+static bool proxy_deactivate(struct rq *rq, struct task_struct *donor)
{
unsigned long state = READ_ONCE(donor->__state);
@@ -6644,17 +6725,98 @@ static bool __proxy_deactivate(struct rq *rq, struct task_struct *donor)
return try_to_block_task(rq, donor, &state, true);
}
-static struct task_struct *proxy_deactivate(struct rq *rq, struct task_struct *donor)
+/*
+ * If the blocked-on relationship crosses CPUs, migrate @p to the
+ * owner's CPU.
+ *
+ * This is because we must respect the CPU affinity of execution
+ * contexts (owner) but we can ignore affinity for scheduling
+ * contexts (@p). So we have to move scheduling contexts towards
+ * potential execution contexts.
+ *
+ * Note: The owner can disappear, but simply migrate to @target_cpu
+ * and leave that CPU to sort things out.
+ */
+static void proxy_migrate_task(struct rq *rq, struct rq_flags *rf,
+ struct task_struct *p, int target_cpu)
{
- if (!__proxy_deactivate(rq, donor)) {
- /*
- * XXX: For now, if deactivation failed, set donor
- * as unblocked, as we aren't doing proxy-migrations
- * yet (more logic will be needed then).
- */
- donor->blocked_on_state = BO_RUNNABLE;
- }
- return NULL;
+ struct rq *target_rq = cpu_rq(target_cpu);
+
+ lockdep_assert_rq_held(rq);
+
+ /*
+ * Since we're going to drop @rq, we have to put(@rq->donor) first,
+ * otherwise we have a reference that no longer belongs to us.
+ *
+ * Additionally, as we put_prev_task(prev) earlier, it's possible that
+ * prev will migrate away as soon as we drop the rq lock, however we
+ * still have it marked as rq->curr, as we've not yet switched tasks.
+ *
+ * After the migration, we are going to pick_again in the __schedule
+ * logic, so backtrack a bit before we release the lock:
+ * Put rq->donor, and set rq->curr as rq->donor and set_next_task,
+ * so that we're close to the situation we had entering __schedule
+ * the first time.
+ *
+ * Then when we re-acquire the lock, we will re-put rq->curr then
+ * rq_set_donor(rq->idle) and set_next_task(rq->idle), before
+ * picking again.
+ */
+ /* XXX - Added to address problems with changed dl_server semantics - double check */
+ __put_prev_set_next_dl_server(rq, rq->donor, rq->curr);
+ put_prev_task(rq, rq->donor);
+ rq_set_donor(rq, rq->idle);
+ set_next_task(rq, rq->idle);
+
+ WARN_ON(p == rq->curr);
+
+ deactivate_task(rq, p, 0);
+ proxy_set_task_cpu(p, target_cpu);
+
+ zap_balance_callbacks(rq);
+ rq_unpin_lock(rq, rf);
+ raw_spin_rq_unlock(rq);
+ raw_spin_rq_lock(target_rq);
+
+ activate_task(target_rq, p, 0);
+ wakeup_preempt(target_rq, p, 0);
+
+ raw_spin_rq_unlock(target_rq);
+ raw_spin_rq_lock(rq);
+ rq_repin_lock(rq, rf);
+}
+
+static void proxy_force_return(struct rq *rq, struct rq_flags *rf,
+ struct task_struct *p)
+{
+ lockdep_assert_rq_held(rq);
+
+ put_prev_task(rq, rq->donor);
+ rq_set_donor(rq, rq->idle);
+ set_next_task(rq, rq->idle);
+
+ WARN_ON(p == rq->curr);
+
+ p->blocked_on_state = BO_WAKING;
+ get_task_struct(p);
+ block_task(rq, p, 0);
+
+ zap_balance_callbacks(rq);
+ rq_unpin_lock(rq, rf);
+ raw_spin_rq_unlock(rq);
+
+ wake_up_process(p);
+ put_task_struct(p);
+
+ raw_spin_rq_lock(rq);
+ rq_repin_lock(rq, rf);
+}
+
+static inline bool proxy_can_run_here(struct rq *rq, struct task_struct *p)
+{
+ if (p == rq->curr || p->wake_cpu == cpu_of(rq))
+ return true;
+ return false;
}
/*
@@ -6677,9 +6839,11 @@ static struct task_struct *
find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
{
struct task_struct *owner = NULL;
+ bool curr_in_chain = false;
int this_cpu = cpu_of(rq);
struct task_struct *p;
struct mutex *mutex;
+ int owner_cpu;
/* Follow blocked_on chain. */
for (p = donor; task_is_blocked(p); p = owner) {
@@ -6705,6 +6869,10 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
return NULL;
}
+ /* Double check blocked_on_state now we're holding the lock */
+ if (p->blocked_on_state == BO_RUNNABLE)
+ return p;
+
/*
* If a ww_mutex hits the die/wound case, it marks the task as
* BO_WAKING and calls try_to_wake_up(), so that the mutex
@@ -6720,26 +6888,50 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
* try_to_wake_up from completing and doing the return
* migration.
*
- * So when we hit a !BO_BLOCKED task briefly schedule idle
- * so we release the rq and let the wakeup complete.
+ * So when we hit a BO_WAKING task try to wake it up ourselves.
*/
- if (p->blocked_on_state != BO_BLOCKED)
- return proxy_resched_idle(rq);
+ if (p->blocked_on_state == BO_WAKING) {
+ if (task_current(rq, p)) {
+ /* If its current just set it runnable */
+ __force_blocked_on_runnable(p);
+ return p;
+ }
+ goto needs_return;
+ }
+
+ if (task_current(rq, p))
+ curr_in_chain = true;
owner = __mutex_owner(mutex);
if (!owner) {
+ /* If the owner is null, we may have some work to do */
+ if (!proxy_can_run_here(rq, p))
+ goto needs_return;
+
__force_blocked_on_runnable(p);
return p;
}
if (!READ_ONCE(owner->on_rq) || owner->se.sched_delayed) {
- /* XXX Don't handle blocked owners/delayed dequeue yet */
- return proxy_deactivate(rq, donor);
+ /* XXX Don't handle blocked owners / delayed dequeue yet */
+ if (!proxy_deactivate(rq, donor)) {
+ if (!proxy_can_run_here(rq, p))
+ goto needs_return;
+ __force_blocked_on_runnable(p);
+ return p;
+ }
+ return NULL;
}
- if (task_cpu(owner) != this_cpu) {
- /* XXX Don't handle migrations yet */
- return proxy_deactivate(rq, donor);
+ owner_cpu = task_cpu(owner);
+ if (owner_cpu != this_cpu) {
+ /*
+ * @owner can disappear, simply migrate to @owner_cpu
+ * and leave that CPU to sort things out.
+ */
+ if (curr_in_chain)
+ return proxy_resched_idle(rq);
+ goto migrate;
}
if (task_on_rq_migrating(owner)) {
@@ -6799,6 +6991,19 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
WARN_ON_ONCE(owner && !owner->on_rq);
return owner;
+
+ /*
+ * NOTE: This logic is down here, because we need to call
+ * the functions with the mutex wait_lock and task
+ * blocked_lock released, so we have to get out of the
+ * guard() scope.
+ */
+migrate:
+ proxy_migrate_task(rq, rf, p, owner_cpu);
+ return NULL;
+needs_return:
+ proxy_force_return(rq, rf, p);
+ return NULL;
}
#else /* SCHED_PROXY_EXEC */
static struct task_struct *
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b173a059315c2..cc531eb939831 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8781,7 +8781,8 @@ pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf
se = &p->se;
#ifdef CONFIG_FAIR_GROUP_SCHED
- if (prev->sched_class != &fair_sched_class)
+ if (prev->sched_class != &fair_sched_class ||
+ rq->curr != rq->donor)
goto simple;
__put_prev_set_next_dl_server(rq, prev, p);
--
2.50.0.727.gbf7dc18ff4-goog
* [RFC][PATCH v20 5/6] sched: Add blocked_donor link to task for smarter mutex handoffs
2025-07-22 7:05 [RFC][PATCH v20 0/6] Donor Migration for Proxy Execution (v20) John Stultz
` (3 preceding siblings ...)
2025-07-22 7:05 ` [RFC][PATCH v20 4/6] sched: Handle blocked-waiter migration (and return migration) John Stultz
@ 2025-07-22 7:05 ` John Stultz
2025-07-22 7:05 ` [RFC][PATCH v20 6/6] sched: Migrate whole chain in proxy_migrate_task() John Stultz
2025-07-23 14:44 ` [RFC][PATCH v20 0/6] Donor Migration for Proxy Execution (v20) Juri Lelli
6 siblings, 0 replies; 9+ messages in thread
From: John Stultz @ 2025-07-22 7:05 UTC (permalink / raw)
To: LKML
Cc: Peter Zijlstra, Juri Lelli, Valentin Schneider,
Connor O'Brien, John Stultz, Joel Fernandes, Qais Yousef,
Ingo Molnar, Vincent Guittot, Dietmar Eggemann,
Valentin Schneider, Steven Rostedt, Ben Segall, Zimuzo Ezeozue,
Mel Gorman, Will Deacon, Waiman Long, Boqun Feng,
Paul E. McKenney, Metin Kaya, Xuewen Yan, K Prateek Nayak,
Thomas Gleixner, Daniel Lezcano, Suleiman Souhlal, kuyo chang,
hupu, kernel-team
From: Peter Zijlstra <peterz@infradead.org>
Add a link to the task this task is proxying for, and use it so
the mutex owner can do an intelligent hand-off of the mutex to
the task on whose behalf the owner is running.
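[ed: To illustrate the intent, here is a minimal standalone C toy
model of the hand-off preference; the toy_task/toy_mutex types and
pick_next_owner() below are made up for illustration and are not the
kernel code in this patch.]

  /* Toy userspace model of the blocked_donor hand-off preference. */
  #include <stdio.h>

  struct toy_mutex;

  struct toy_task {
      const char *name;
      struct toy_mutex *blocked_on;   /* lock this task is waiting for */
      struct toy_task *blocked_donor; /* task boosting this task, if any */
  };

  struct toy_mutex {
      struct toy_task *owner;
      struct toy_task *waiters[4];    /* simplified wait list */
      int nr_waiters;
  };

  /* On unlock, prefer the donor that was boosting the owner through
   * this lock; otherwise fall back to the first waiter. */
  static struct toy_task *pick_next_owner(struct toy_mutex *lock)
  {
      struct toy_task *donor = lock->owner->blocked_donor;

      if (donor && donor->blocked_on == lock) {
          lock->owner->blocked_donor = NULL;
          return donor;
      }
      return lock->nr_waiters ? lock->waiters[0] : NULL;
  }

  int main(void)
  {
      struct toy_mutex m = { 0 };
      struct toy_task owner = { .name = "owner" };
      struct toy_task a = { .name = "A", .blocked_on = &m };
      struct toy_task b = { .name = "B", .blocked_on = &m };

      m.owner = &owner;
      m.waiters[0] = &a;
      m.waiters[1] = &b;
      m.nr_waiters = 2;

      /* B was selected by the scheduler and is boosting the owner. */
      owner.blocked_donor = &b;

      printf("hand off to: %s\n", pick_next_owner(&m)->name); /* B */
      return 0;
  }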
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Connor O'Brien <connoro@google.com>
[jstultz: This patch was split out from larger proxy patch]
Signed-off-by: John Stultz <jstultz@google.com>
---
v5:
* Split out from larger proxy patch
v6:
* Moved proxied value from earlier patch to this one where it
is actually used
* Rework logic to check sched_proxy_exec() instead of using ifdefs
* Moved comment change to this patch where it makes sense
v7:
* Use a more descriptive term than "us" in comments, as suggested
by Metin Kaya.
* Minor typo fixup from Metin Kaya
* Reworked proxied variable to prev_not_proxied to simplify usage
v8:
* Use helper for donor blocked_on_state transition
v9:
* Re-add mutex lock handoff in the unlock path, but only when we
have a blocked donor
* Slight reword of commit message suggested by Metin
v18:
* Add init_task initialization for blocked_donor, suggested by
Suleiman
Cc: Joel Fernandes <joelagnelf@nvidia.com>
Cc: Qais Yousef <qyousef@layalina.io>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Zimuzo Ezeozue <zezeozue@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Metin Kaya <Metin.Kaya@arm.com>
Cc: Xuewen Yan <xuewen.yan94@gmail.com>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: kuyo chang <kuyo.chang@mediatek.com>
Cc: hupu <hupu.gm@gmail.com>
Cc: kernel-team@android.com
---
include/linux/sched.h | 1 +
init/init_task.c | 1 +
kernel/fork.c | 1 +
kernel/locking/mutex.c | 41 ++++++++++++++++++++++++++++++++++++++---
kernel/sched/core.c | 18 ++++++++++++++++--
5 files changed, 57 insertions(+), 5 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index ced001f889519..675e2f89ec0f8 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1239,6 +1239,7 @@ struct task_struct {
enum blocked_on_state blocked_on_state;
struct mutex *blocked_on; /* lock we're blocked on */
+ struct task_struct *blocked_donor; /* task that is boosting this task */
raw_spinlock_t blocked_lock;
#ifdef CONFIG_DETECT_HUNG_TASK_BLOCKER
diff --git a/init/init_task.c b/init/init_task.c
index 6d72ec23410a6..627bbd8953e88 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -175,6 +175,7 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
&init_task.alloc_lock),
#endif
.blocked_on_state = BO_RUNNABLE,
+ .blocked_donor = NULL,
#ifdef CONFIG_RT_MUTEXES
.pi_waiters = RB_ROOT_CACHED,
.pi_top_task = NULL,
diff --git a/kernel/fork.c b/kernel/fork.c
index 5eacb25a0c5ab..61a2ac850faf0 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2126,6 +2126,7 @@ __latent_entropy struct task_struct *copy_process(
p->blocked_on_state = BO_RUNNABLE;
p->blocked_on = NULL; /* not blocked yet */
+ p->blocked_donor = NULL; /* nobody is boosting p yet */
#ifdef CONFIG_BCACHE
p->sequential_io = 0;
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index b5145ddaec242..da6e964498ad0 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -926,7 +926,7 @@ EXPORT_SYMBOL_GPL(ww_mutex_lock_interruptible);
*/
static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigned long ip)
{
- struct task_struct *next = NULL;
+ struct task_struct *donor, *next = NULL;
DEFINE_WAKE_Q(wake_q);
unsigned long owner;
unsigned long flags;
@@ -945,6 +945,12 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
MUTEX_WARN_ON(__owner_task(owner) != current);
MUTEX_WARN_ON(owner & MUTEX_FLAG_PICKUP);
+ if (sched_proxy_exec() && current->blocked_donor) {
+ /* force handoff if we have a blocked_donor */
+ owner = MUTEX_FLAG_HANDOFF;
+ break;
+ }
+
if (owner & MUTEX_FLAG_HANDOFF)
break;
@@ -958,7 +964,34 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
raw_spin_lock_irqsave(&lock->wait_lock, flags);
debug_mutex_unlock(lock);
- if (!list_empty(&lock->wait_list)) {
+
+ if (sched_proxy_exec()) {
+ raw_spin_lock(&current->blocked_lock);
+ /*
+ * If we have a task boosting current, and that task was boosting
+ * current through this lock, hand the lock to that task, as that
+ * is the highest waiter, as selected by the scheduling function.
+ */
+ donor = current->blocked_donor;
+ if (donor) {
+ struct mutex *next_lock;
+
+ raw_spin_lock_nested(&donor->blocked_lock, SINGLE_DEPTH_NESTING);
+ next_lock = __get_task_blocked_on(donor);
+ if (next_lock == lock) {
+ next = donor;
+ __set_blocked_on_waking(donor);
+ wake_q_add(&wake_q, donor);
+ current->blocked_donor = NULL;
+ }
+ raw_spin_unlock(&donor->blocked_lock);
+ }
+ }
+
+ /*
+ * Failing that, pick any on the wait list.
+ */
+ if (!next && !list_empty(&lock->wait_list)) {
/* get the first entry from the wait-list: */
struct mutex_waiter *waiter =
list_first_entry(&lock->wait_list,
@@ -966,7 +999,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
next = waiter->task;
- raw_spin_lock(&next->blocked_lock);
+ raw_spin_lock_nested(&next->blocked_lock, SINGLE_DEPTH_NESTING);
debug_mutex_wake_waiter(lock, waiter);
WARN_ON_ONCE(__get_task_blocked_on(next) != lock);
__set_blocked_on_waking(next);
@@ -977,6 +1010,8 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
if (owner & MUTEX_FLAG_HANDOFF)
__mutex_handoff(lock, next);
+ if (sched_proxy_exec())
+ raw_spin_unlock(&current->blocked_lock);
raw_spin_unlock_irqrestore_wake(&lock->wait_lock, flags, &wake_q);
}
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1c249d1d62f5a..2c3a4b9518927 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6823,7 +6823,17 @@ static inline bool proxy_can_run_here(struct rq *rq, struct task_struct *p)
* Find runnable lock owner to proxy for mutex blocked donor
*
* Follow the blocked-on relation:
- * task->blocked_on -> mutex->owner -> task...
+ *
+ * ,-> task
+ * | | blocked-on
+ * | v
+ * blocked_donor | mutex
+ * | | owner
+ * | v
+ * `-- task
+ *
+ * and set the blocked_donor relation; the latter is used by the mutex
+ * code to find which (blocked) task to hand the lock off to.
*
* Lock order:
*
@@ -6987,6 +6997,7 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
* rq, therefore holding @rq->lock is sufficient to
* guarantee its existence, as per ttwu_remote().
*/
+ owner->blocked_donor = p;
}
WARN_ON_ONCE(owner && !owner->on_rq);
@@ -7083,6 +7094,7 @@ static void __sched notrace __schedule(int sched_mode)
unsigned long prev_state;
struct rq_flags rf;
struct rq *rq;
+ bool prev_not_proxied;
int cpu;
trace_sched_entry_tp(preempt, CALLER_ADDR0);
@@ -7154,9 +7166,11 @@ static void __sched notrace __schedule(int sched_mode)
switch_count = &prev->nvcsw;
}
+ prev_not_proxied = !prev->blocked_donor;
pick_again:
next = pick_next_task(rq, rq->donor, &rf);
rq_set_donor(rq, next);
+ next->blocked_donor = NULL;
if (unlikely(task_is_blocked(next))) {
next = find_proxy_task(rq, next, &rf);
if (!next) {
@@ -7220,7 +7234,7 @@ static void __sched notrace __schedule(int sched_mode)
rq = context_switch(rq, prev, next, &rf);
} else {
/* In case next was already curr but just got blocked_donor */
- if (!task_current_donor(rq, next))
+ if (prev_not_proxied && next->blocked_donor)
proxy_tag_curr(rq, next);
rq_unpin_lock(rq, &rf);
--
2.50.0.727.gbf7dc18ff4-goog
* [RFC][PATCH v20 6/6] sched: Migrate whole chain in proxy_migrate_task()
2025-07-22 7:05 [RFC][PATCH v20 0/6] Donor Migration for Proxy Execution (v20) John Stultz
` (4 preceding siblings ...)
2025-07-22 7:05 ` [RFC][PATCH v20 5/6] sched: Add blocked_donor link to task for smarter mutex handoffs John Stultz
@ 2025-07-22 7:05 ` John Stultz
2025-07-23 14:44 ` [RFC][PATCH v20 0/6] Donor Migration for Proxy Execution (v20) Juri Lelli
6 siblings, 0 replies; 9+ messages in thread
From: John Stultz @ 2025-07-22 7:05 UTC (permalink / raw)
To: LKML
Cc: John Stultz, Joel Fernandes, Qais Yousef, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Valentin Schneider, Steven Rostedt, Ben Segall, Zimuzo Ezeozue,
Mel Gorman, Will Deacon, Waiman Long, Boqun Feng,
Paul E. McKenney, Metin Kaya, Xuewen Yan, K Prateek Nayak,
Thomas Gleixner, Daniel Lezcano, Suleiman Souhlal, kuyo chang,
hupu, kernel-team
Instead of migrating one task each time through find_proxy_task(),
we can walk up the blocked_donor ptrs and migrate the entire
current chain in one go.
This was broken out of earlier patches and held back while the
series was being stabilized, but I wanted to re-introduce it.
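[ed: As a rough standalone illustration of the chain walk (toy_task
and collect_chain() are made-up names, not the kernel code changed
below), following blocked_donor from the blocked task gathers every
task that was boosting it onto one list, which can then be activated
on the target CPU in a single pass.]

  /* Toy model of collecting a blocked_donor chain for migration. */
  #include <stdio.h>
  #include <stddef.h>

  struct toy_task {
      const char *name;
      struct toy_task *blocked_donor; /* task boosting this one */
      struct toy_task *next_migrate;  /* stand-in for migration_node */
  };

  /* Walk p and everyone boosting it (directly or transitively) and
   * link them onto a single migration list. */
  static struct toy_task *collect_chain(struct toy_task *p)
  {
      struct toy_task *list = NULL;

      for (; p; p = p->blocked_donor) {
          p->next_migrate = list;
          list = p;
      }
      return list;
  }

  int main(void)
  {
      struct toy_task blocked = { .name = "blocked-on-remote-mutex" };
      struct toy_task middle  = { .name = "middle-donor" };
      struct toy_task top     = { .name = "top-donor" };

      /* middle boosts blocked, top boosts middle */
      blocked.blocked_donor = &middle;
      middle.blocked_donor = &top;

      for (struct toy_task *t = collect_chain(&blocked); t; t = t->next_migrate)
          printf("migrate %s\n", t->name);
      return 0;
  }

The dedicated migration_node list head is used instead of re-using
blocked_node because of the race noted in the v12 entry below.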
Signed-off-by: John Stultz <jstultz@google.com>
---
v12:
* Earlier this was re-using blocked_node, but I hit
a race with activating blocked entities, and to
avoid it introduced a new migration_node listhead
v18:
* Add init_task initialization of migration_node as suggested
by Suleiman
Cc: Joel Fernandes <joelagnelf@nvidia.com>
Cc: Qais Yousef <qyousef@layalina.io>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Zimuzo Ezeozue <zezeozue@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Metin Kaya <Metin.Kaya@arm.com>
Cc: Xuewen Yan <xuewen.yan94@gmail.com>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: kuyo chang <kuyo.chang@mediatek.com>
Cc: hupu <hupu.gm@gmail.com>
Cc: kernel-team@android.com
---
include/linux/sched.h | 1 +
init/init_task.c | 1 +
kernel/fork.c | 1 +
kernel/sched/core.c | 25 +++++++++++++++++--------
4 files changed, 20 insertions(+), 8 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 675e2f89ec0f8..e9242dfa5f271 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1240,6 +1240,7 @@ struct task_struct {
enum blocked_on_state blocked_on_state;
struct mutex *blocked_on; /* lock we're blocked on */
struct task_struct *blocked_donor; /* task that is boosting this task */
+ struct list_head migration_node;
raw_spinlock_t blocked_lock;
#ifdef CONFIG_DETECT_HUNG_TASK_BLOCKER
diff --git a/init/init_task.c b/init/init_task.c
index 627bbd8953e88..65e0f90285966 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -176,6 +176,7 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
#endif
.blocked_on_state = BO_RUNNABLE,
.blocked_donor = NULL,
+ .migration_node = LIST_HEAD_INIT(init_task.migration_node),
#ifdef CONFIG_RT_MUTEXES
.pi_waiters = RB_ROOT_CACHED,
.pi_top_task = NULL,
diff --git a/kernel/fork.c b/kernel/fork.c
index 61a2ac850faf0..892940ea52958 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2127,6 +2127,7 @@ __latent_entropy struct task_struct *copy_process(
p->blocked_on_state = BO_RUNNABLE;
p->blocked_on = NULL; /* not blocked yet */
p->blocked_donor = NULL; /* nobody is boosting p yet */
+ INIT_LIST_HEAD(&p->migration_node);
#ifdef CONFIG_BCACHE
p->sequential_io = 0;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2c3a4b9518927..c1d813a9cde96 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6741,6 +6741,7 @@ static void proxy_migrate_task(struct rq *rq, struct rq_flags *rf,
struct task_struct *p, int target_cpu)
{
struct rq *target_rq = cpu_rq(target_cpu);
+ LIST_HEAD(migrate_list);
lockdep_assert_rq_held(rq);
@@ -6768,19 +6769,27 @@ static void proxy_migrate_task(struct rq *rq, struct rq_flags *rf,
rq_set_donor(rq, rq->idle);
set_next_task(rq, rq->idle);
- WARN_ON(p == rq->curr);
-
- deactivate_task(rq, p, 0);
- proxy_set_task_cpu(p, target_cpu);
-
+ for (; p; p = p->blocked_donor) {
+ WARN_ON(p == rq->curr);
+ deactivate_task(rq, p, 0);
+ proxy_set_task_cpu(p, target_cpu);
+ /*
+ * We can use migration_node to migrate the thing,
+ * because @p was still on the rq.
+ */
+ list_add(&p->migration_node, &migrate_list);
+ }
zap_balance_callbacks(rq);
rq_unpin_lock(rq, rf);
raw_spin_rq_unlock(rq);
raw_spin_rq_lock(target_rq);
+ while (!list_empty(&migrate_list)) {
+ p = list_first_entry(&migrate_list, struct task_struct, migration_node);
+ list_del_init(&p->migration_node);
- activate_task(target_rq, p, 0);
- wakeup_preempt(target_rq, p, 0);
-
+ activate_task(target_rq, p, 0);
+ wakeup_preempt(target_rq, p, 0);
+ }
raw_spin_rq_unlock(target_rq);
raw_spin_rq_lock(rq);
rq_repin_lock(rq, rf);
--
2.50.0.727.gbf7dc18ff4-goog
* Re: [RFC][PATCH v20 0/6] Donor Migration for Proxy Execution (v20)
2025-07-22 7:05 [RFC][PATCH v20 0/6] Donor Migration for Proxy Execution (v20) John Stultz
` (5 preceding siblings ...)
2025-07-22 7:05 ` [RFC][PATCH v20 6/6] sched: Migrate whole chain in proxy_migrate_task() John Stultz
@ 2025-07-23 14:44 ` Juri Lelli
2025-07-23 22:42 ` John Stultz
6 siblings, 1 reply; 9+ messages in thread
From: Juri Lelli @ 2025-07-23 14:44 UTC (permalink / raw)
To: John Stultz
Cc: LKML, Joel Fernandes, Qais Yousef, Ingo Molnar, Peter Zijlstra,
Vincent Guittot, Dietmar Eggemann, Valentin Schneider,
Steven Rostedt, Ben Segall, Zimuzo Ezeozue, Mel Gorman,
Will Deacon, Waiman Long, Boqun Feng, Paul E. McKenney,
Metin Kaya, Xuewen Yan, K Prateek Nayak, Thomas Gleixner,
Daniel Lezcano, Suleiman Souhlal, kuyo chang, hupu, kernel-team
Hi,
On 22/07/25 07:05, John Stultz wrote:
...
> Issues still to address with the full series:
> * There’s a new quirk from recent changes for dl_server that
> is causing the ksched_football test in the full series to hang
> at boot. I’ve bisected and reverted the change for now, but I
> need to better understand what’s going wrong.
After our quick chat on IRC, I remembered that there were two additional
fixes for dl-server posted, but still not on tip.
https://lore.kernel.org/lkml/20250615131129.954975-1-kuyo.chang@mediatek.com/
https://lore.kernel.org/lkml/20250627035420.37712-1-yangyicong@huawei.com/
So I went ahead and pushed them to
git@github.com:jlelli/linux.git upstream/fix-dlserver
Could you please check whether either (or both together) of the two
topmost changes helps with the issue you are seeing?
Thanks!
Juri
* Re: [RFC][PATCH v20 0/6] Donor Migration for Proxy Execution (v20)
2025-07-23 14:44 ` [RFC][PATCH v20 0/6] Donor Migration for Proxy Execution (v20) Juri Lelli
@ 2025-07-23 22:42 ` John Stultz
0 siblings, 0 replies; 9+ messages in thread
From: John Stultz @ 2025-07-23 22:42 UTC (permalink / raw)
To: Juri Lelli
Cc: LKML, Joel Fernandes, Qais Yousef, Ingo Molnar, Peter Zijlstra,
Vincent Guittot, Dietmar Eggemann, Valentin Schneider,
Steven Rostedt, Ben Segall, Zimuzo Ezeozue, Mel Gorman,
Will Deacon, Waiman Long, Boqun Feng, Paul E. McKenney,
Metin Kaya, Xuewen Yan, K Prateek Nayak, Thomas Gleixner,
Daniel Lezcano, Suleiman Souhlal, kuyo chang, hupu, kernel-team
On Wed, Jul 23, 2025 at 7:44 AM Juri Lelli <juri.lelli@redhat.com> wrote:
> On 22/07/25 07:05, John Stultz wrote:
> > Issues still to address with the full series:
> > * There’s a new quirk from recent changes for dl_server that
> > is causing the ksched_football test in the full series to hang
> > at boot. I’ve bisected and reverted the change for now, but I
> > need to better understand what’s going wrong.
>
> After our quick chat on IRC, I remembered that there were additional two
> fixes for dl-server posted, but still not on tip.
>
> https://lore.kernel.org/lkml/20250615131129.954975-1-kuyo.chang@mediatek.com/
> https://lore.kernel.org/lkml/20250627035420.37712-1-yangyicong@huawei.com/
>
> So I went ahead and pushed them to
>
> git@github.com:jlelli/linux.git upstream/fix-dlserver
>
> Could you please check if any (or both together) of the two topmost
> changes do any good to the issue you are seeing?
Thanks for sharing these! Unfortunately they don't seem to help. :/
I'm still digging into the behavior. I'm not 100% sure the problem
isn't just my test logic starving itself (after creating NR_CPU RT
spinners, it's not surprising that creating new threads might be tough
if the non-RT kthreadd can't get scheduled), but I don't quite see how
the dl_server patch cccb45d7c429 ("sched/deadline: Less agressive
dl_server handling") would be the cause of the dramatic behavioral
change - especially as this test was also functional prior to the
dl_server logic landing. Also, it's odd that just re-adding the
dl_server_stop() call removed from dequeue_entities() seems to make it
work again. So I clearly need to dig more to understand the behavior.
Thanks again for your suggestions! I'm going to dig further and let
folks know when I figure this detail out.
thanks
-john