* [RFC][PATCH v13 0/7] Single CPU Proxy Execution (v13)
@ 2024-11-06 2:56 John Stultz
2024-11-06 2:56 ` [RFC][PATCH v13 1/7] sched: Add CONFIG_SCHED_PROXY_EXEC & boot argument to enable/disable John Stultz
` (6 more replies)
0 siblings, 7 replies; 8+ messages in thread
From: John Stultz @ 2024-11-06 2:56 UTC (permalink / raw)
To: LKML
Cc: John Stultz, Joel Fernandes, Qais Yousef, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Valentin Schneider, Steven Rostedt, Ben Segall, Zimuzo Ezeozue,
Mel Gorman, Will Deacon, Waiman Long, Boqun Feng,
Paul E. McKenney, Metin Kaya, Xuewen Yan, K Prateek Nayak,
Thomas Gleixner, Daniel Lezcano, kernel-team
Hey All,
Since the earlier proxy-execution preparation patches have
been queued in tip/sched/core, I wanted to send out the next
chunk of Proxy Execution - an approach for a generalized form of
priority inheritance.
In this series, I’m only submitting the logic to support Proxy
Execution as both a build and runtime option, the mutex
blocked_on rework, some small fixes for assumptions that proxy
execution changes, and the initial logic to run lock owners in
place of the waiting task on the same CPU.
With v13 of this series, there have been quite a number of
changes:
* Mostly dealing with collisions from changes that landed in
6.12-rc1
* The most basic handling of delayed-dequeue tasks (just
deactivate for now)
* Renaming “next” to “donor” to clarify things in proxy-related
functions
* Lots of cleanups
I’ve also continued working on the rest of the series, which
you can find here:
https://github.com/johnstultz-work/linux-dev/commits/proxy-exec-v13-6.12-rc6
https://github.com/johnstultz-work/linux-dev.git proxy-exec-v13-6.12-rc6
New changes in the full series include:
* At LPC, Juri suggested that for now we re-add donor
migrations of SCHED_DEADLINE tasks, so I’ve dropped the logic
that previously disabled them.
* Workaround handling for “lost wakeups” issue (see below)
Issues still to address with the full series:
* The new delayed dequeue logic added in 6.12-rc1 conceptually
collides with proxy execution: we now have multiple kinds of
tasks that aren’t runnable for different reasons (one may be
sleeping, another may be blocked on a mutex) yet are left on
the RQ, and the rules for how we handle these un-runnable but
enqueued tasks differ for each case.
Right now my workaround is to just stop proxying if we hit a
sched_delayed task, but I’d like to have a better solution.
My plan is to treat it similarly to sleeping tasks, and do the
same deactivated-owner queuing (queuing the waiters on the
sched_delayed task). The problem is that when a sched_delayed
task gets a wakeup, we won’t hit the logic that does the
blocked-waiters activation, so I’ll need to change that.
Just getting it working won’t address the conceptual
collision, though, so I’d love any thoughts or feedback on how
to generalize these two new forms of unrunnable-on-the-runqueue
states.
* In testing with the full series (again, for clarity, not with
the same-rq proxying series I’m sending out here), I hit some
rare cases of what seem to be lost wakeups, where a task was
marked as BO_WAKING but ttwu never managed to transition it
to BO_RUNNABLE. This can cause us to get stuck either in the
pick-again loop or in an idle resched loop. I’ve added handlers
to detect this and to safely do the BO_WAKING -> BO_RUNNABLE
transition, along with return migration if needed, to avoid
this issue, but this really is papering over the underlying
problem. It has been difficult to diagnose, as by the time the
issue is noticed the wakeup may have been long in the past and
the trace buffer overwritten.
* K Prateek Nayak did some testing with an earlier version of the
series and saw ~3-5% regressions in some cases. I’m hoping to
look into this soon to see if we can reduce those further.
* The chain migration functionality needs further iteration and
better validation to ensure it truly maintains the RT/DL load
balancing invariants (despite these currently being broken in
vanilla upstream with RT_PUSH_IPI).
I’d really appreciate any feedback or review thoughts on this
series. I’m trying to keep the chunks small, reviewable and
iteratively testable, but if you have any suggestions on how to
improve the series, I’m all ears.
Credit/Disclaimer:
------------------
As mentioned previously, this Proxy Execution series has a long
history:
It was first described in a paper[1] by Watkins, Straub, and
Niehaus, then prototyped in patches from Peter Zijlstra, and
extended with lots of work by Juri Lelli, Valentin Schneider,
and Connor O'Brien. (And thank you to Steven Rostedt for
providing additional details here!)
So again, many thanks to those above, as all the credit for this
series really is due to them - while the mistakes are likely mine.
Thanks so much!
-john
[1] https://static.lwn.net/images/conf/rtlws11/papers/proc/p38.pdf
Cc: Joel Fernandes <joelaf@google.com>
Cc: Qais Yousef <qyousef@layalina.io>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Zimuzo Ezeozue <zezeozue@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Metin Kaya <Metin.Kaya@arm.com>
Cc: Xuewen Yan <xuewen.yan94@gmail.com>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: kernel-team@android.com
John Stultz (4):
sched: Add CONFIG_SCHED_PROXY_EXEC & boot argument to enable/disable
sched: Fix runtime accounting w/ split exec & sched contexts
sched: Fix psi_dequeue for Proxy Execution
sched: Add an initial sketch of the find_proxy_task() function
Peter Zijlstra (2):
locking/mutex: Rework task_struct::blocked_on
sched: Start blocked_on chain processing in find_proxy_task()
Valentin Schneider (1):
sched: Fix proxy/current (push,pull)ability
.../admin-guide/kernel-parameters.txt | 5 +
include/linux/sched.h | 79 ++++-
init/Kconfig | 7 +
init/init_task.c | 1 +
kernel/fork.c | 4 +-
kernel/locking/mutex-debug.c | 9 +-
kernel/locking/mutex.c | 40 ++-
kernel/locking/ww_mutex.h | 24 +-
kernel/sched/core.c | 300 +++++++++++++++++-
kernel/sched/fair.c | 31 +-
kernel/sched/rt.c | 15 +-
kernel/sched/sched.h | 22 +-
kernel/sched/stats.h | 6 +-
13 files changed, 507 insertions(+), 36 deletions(-)
--
2.47.0.199.ga7371fff76-goog
* [RFC][PATCH v13 1/7] sched: Add CONFIG_SCHED_PROXY_EXEC & boot argument to enable/disable
2024-11-06 2:56 [RFC][PATCH v13 0/7] Single CPU Proxy Execution (v13) John Stultz
@ 2024-11-06 2:56 ` John Stultz
2024-11-06 2:56 ` [RFC][PATCH v13 2/7] locking/mutex: Rework task_struct::blocked_on John Stultz
` (5 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: John Stultz @ 2024-11-06 2:56 UTC (permalink / raw)
To: LKML
Cc: John Stultz, Joel Fernandes, Qais Yousef, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Valentin Schneider, Steven Rostedt, Ben Segall, Zimuzo Ezeozue,
Mel Gorman, Will Deacon, Waiman Long, Boqun Feng,
Paul E. McKenney, Metin Kaya, Xuewen Yan, K Prateek Nayak,
Thomas Gleixner, Daniel Lezcano, kernel-team
Add a CONFIG_SCHED_PROXY_EXEC option, along with a boot argument
sched_proxy_exec= that can be used to disable the feature at boot
time if CONFIG_SCHED_PROXY_EXEC was enabled.
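For example (an illustrative command line only; how the argument
is passed depends on your bootloader), a kernel built with
CONFIG_SCHED_PROXY_EXEC=y can be booted with the feature disabled
via:

    sched_proxy_exec=off

Any of the kstrtobool() forms noted in the v7 changes below
(yes|no|1|0|true|false|on|off) should work the same way.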
Cc: Joel Fernandes <joelaf@google.com>
Cc: Qais Yousef <qyousef@layalina.io>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Zimuzo Ezeozue <zezeozue@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Metin Kaya <Metin.Kaya@arm.com>
Cc: Xuewen Yan <xuewen.yan94@gmail.com>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: kernel-team@android.com
Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
Signed-off-by: John Stultz <jstultz@google.com>
---
v7:
* Switch to CONFIG_SCHED_PROXY_EXEC/sched_proxy_exec= as
suggested by Metin Kaya.
* Switch boot arg from =disable/enable to use kstrtobool(),
which supports =yes|no|1|0|true|false|on|off, as also
suggested by Metin Kaya, and print a message when a boot
argument is used.
v8:
* Move CONFIG_SCHED_PROXY_EXEC under Scheduler Features as
suggested by Metin
* Minor rework reordering with split sched contexts patch
v12:
* Rework for selected -> donor renaming
---
.../admin-guide/kernel-parameters.txt | 5 ++++
include/linux/sched.h | 13 +++++++++
init/Kconfig | 7 +++++
kernel/sched/core.c | 29 +++++++++++++++++++
kernel/sched/sched.h | 12 ++++++++
5 files changed, 66 insertions(+)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 1518343bbe22..5152935b54b6 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5944,6 +5944,11 @@
sa1100ir [NET]
See drivers/net/irda/sa1100_ir.c.
+ sched_proxy_exec= [KNL]
+ Enables or disables the "proxy execution" style
+ solution to mutex-based priority inversion.
+ Format: <bool>
+
sched_verbose [KNL,EARLY] Enables verbose scheduler debug messages.
schedstats= [KNL,X86] Enable or disable scheduled statistics.
diff --git a/include/linux/sched.h b/include/linux/sched.h
index a76e3d074a2a..2a47228a4808 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1609,6 +1609,19 @@ struct task_struct {
*/
};
+#ifdef CONFIG_SCHED_PROXY_EXEC
+DECLARE_STATIC_KEY_TRUE(__sched_proxy_exec);
+static inline bool sched_proxy_exec(void)
+{
+ return static_branch_likely(&__sched_proxy_exec);
+}
+#else
+static inline bool sched_proxy_exec(void)
+{
+ return false;
+}
+#endif
+
#define TASK_REPORT_IDLE (TASK_REPORT + 1)
#define TASK_REPORT_MAX (TASK_REPORT_IDLE << 1)
diff --git a/init/Kconfig b/init/Kconfig
index c521e1421ad4..a91c51850731 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -863,6 +863,13 @@ config UCLAMP_BUCKETS_COUNT
If in doubt, use the default value.
+config SCHED_PROXY_EXEC
+ bool "Proxy Execution"
+ default n
+ help
+ This option enables proxy execution, a mechanism for mutex-owning
+ tasks to inherit the scheduling context of higher priority waiters.
+
endmenu
#
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c57a79e34911..731ebd8614a9 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -119,6 +119,35 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(sched_compute_energy_tp);
DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
+#ifdef CONFIG_SCHED_PROXY_EXEC
+DEFINE_STATIC_KEY_TRUE(__sched_proxy_exec);
+static int __init setup_proxy_exec(char *str)
+{
+ bool proxy_enable;
+
+ if (kstrtobool(str, &proxy_enable)) {
+ pr_warn("Unable to parse sched_proxy_exec=\n");
+ return 0;
+ }
+
+ if (proxy_enable) {
+ pr_info("sched_proxy_exec enabled via boot arg\n");
+ static_branch_enable(&__sched_proxy_exec);
+ } else {
+ pr_info("sched_proxy_exec disabled via boot arg\n");
+ static_branch_disable(&__sched_proxy_exec);
+ }
+ return 1;
+}
+#else
+static int __init setup_proxy_exec(char *str)
+{
+ pr_warn("CONFIG_SCHED_PROXY_EXEC=n, so it cannot be enabled or disabled at boot time\n");
+ return 0;
+}
+#endif
+__setup("sched_proxy_exec=", setup_proxy_exec);
+
#ifdef CONFIG_SCHED_DEBUG
/*
* Debugging: various feature bits
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index e51bf5a344d3..258db6ef8c70 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1147,10 +1147,15 @@ struct rq {
*/
unsigned int nr_uninterruptible;
+#ifdef CONFIG_SCHED_PROXY_EXEC
+ struct task_struct __rcu *donor; /* Scheduling context */
+ struct task_struct __rcu *curr; /* Execution context */
+#else
union {
struct task_struct __rcu *donor; /* Scheduler context */
struct task_struct __rcu *curr; /* Execution context */
};
+#endif
struct sched_dl_entity *dl_server;
struct task_struct *idle;
struct task_struct *stop;
@@ -1347,10 +1352,17 @@ DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
#define cpu_curr(cpu) (cpu_rq(cpu)->curr)
#define raw_rq() raw_cpu_ptr(&runqueues)
+#ifdef CONFIG_SCHED_PROXY_EXEC
+static inline void rq_set_donor(struct rq *rq, struct task_struct *t)
+{
+ rcu_assign_pointer(rq->donor, t);
+}
+#else
static inline void rq_set_donor(struct rq *rq, struct task_struct *t)
{
/* Do nothing */
}
+#endif
#ifdef CONFIG_SCHED_CORE
static inline struct cpumask *sched_group_span(struct sched_group *sg);
--
2.47.0.199.ga7371fff76-goog
* [RFC][PATCH v13 2/7] locking/mutex: Rework task_struct::blocked_on
2024-11-06 2:56 [RFC][PATCH v13 0/7] Single CPU Proxy Execution (v13) John Stultz
2024-11-06 2:56 ` [RFC][PATCH v13 1/7] sched: Add CONFIG_SCHED_PROXY_EXEC & boot argument to enable/disable John Stultz
@ 2024-11-06 2:56 ` John Stultz
2024-11-06 2:56 ` [RFC][PATCH v13 3/7] sched: Fix runtime accounting w/ split exec & sched contexts John Stultz
` (4 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: John Stultz @ 2024-11-06 2:56 UTC (permalink / raw)
To: LKML
Cc: Peter Zijlstra, Joel Fernandes, Qais Yousef, Ingo Molnar,
Juri Lelli, Vincent Guittot, Dietmar Eggemann, Valentin Schneider,
Steven Rostedt, Ben Segall, Zimuzo Ezeozue, Mel Gorman,
Will Deacon, Waiman Long, Boqun Feng, Paul E. McKenney,
Metin Kaya, Xuewen Yan, K Prateek Nayak, Thomas Gleixner,
Daniel Lezcano, kernel-team, Connor O'Brien, John Stultz
From: Peter Zijlstra <peterz@infradead.org>
Track the blocked-on relation for mutexes, to allow following this
relation at schedule time.
    task
      | blocked-on
      v
    mutex
      | owner
      v
    task
Also add a blocked_on_state value so we can distinguish when a
task is blocked_on a mutex, but is either blocked, waking up, or
runnable (such that it can try to acquire the lock it’s blocked
on).
This avoids some of the subtle & racy games where the blocked_on
state gets cleared, only to have it re-added by the
mutex_lock_slowpath call when it tries to acquire the lock on
wakeup.
Also add blocked_lock to the task_struct so we can safely
serialize the blocked-on state.
Finally add wrappers that are useful to provide correctness
checks. Folded in from a patch by:
Valentin Schneider <valentin.schneider@arm.com>
This all will be used for tracking blocked-task/mutex chains
with the proxy-execution patches, in a similar fashion to how
priority inheritance is done with rt_mutexes.
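As a rough summary (my condensed reading of the patch below, not
part of the patch itself), the intended state transitions are:

    BO_RUNNABLE -> BO_BLOCKED   set_task_blocked_on(current, lock)
                                in __mutex_lock_common()
    BO_BLOCKED  -> BO_WAKING    set_blocked_on_waking() from the
                                mutex unlock and ww die/wound paths
    BO_WAKING   -> BO_RUNNABLE  set_blocked_on_runnable() from
                                try_to_wake_up()

with each transition serialized under the new p->blocked_lock.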
Cc: Joel Fernandes <joelaf@google.com>
Cc: Qais Yousef <qyousef@layalina.io>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Zimuzo Ezeozue <zezeozue@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Metin Kaya <Metin.Kaya@arm.com>
Cc: Xuewen Yan <xuewen.yan94@gmail.com>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: kernel-team@android.com
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
[minor changes while rebasing]
Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
Signed-off-by: Connor O'Brien <connoro@google.com>
[jstultz: Fix blocked_on tracking in __mutex_lock_common in error paths]
Signed-off-by: John Stultz <jstultz@google.com>
---
v2:
* Fixed blocked_on tracking in error paths that was causing crashes
v4:
* Ensure we clear blocked_on when waking ww_mutexes to die or wound.
This is critical so we don't get circular blocked_on relationships
that can't be resolved.
v5:
* Fix potential bug where the skip_wait path might clear blocked_on
when that path never set it
* Slight tweaks to where we set blocked_on to make it consistent,
along with extra WARN_ON correctness checking
* Minor comment changes
v7:
* Minor commit message change suggested by Metin Kaya
* Fix WARN_ON conditionals in unlock path (as blocked_on might already
be cleared), found while looking at issue Metin Kaya raised.
* Minor tweaks to be consistent in what we do under the
blocked_on lock, also tweaked variable name to avoid confusion
with label, and comment typos, as suggested by Metin Kaya
* Minor tweak for CONFIG_SCHED_PROXY_EXEC name change
* Moved unused block of code to later in the series, as suggested
by Metin Kaya
* Switch to a tri-state to be able to distinguish from waking and
runnable so we can later safely do return migration from ttwu
* Folded together with related blocked_on changes
v8:
* Fix issue leaving task BO_BLOCKED when calling into optimistic
spinning path.
* Include helper to better handle BO_BLOCKED->BO_WAKING transitions
v9:
* Typo fixup pointed out by Metin
* Cleanup BO_WAKING->BO_RUNNABLE transitions for the !proxy case
* Many cleanups and simplifications suggested by Metin
v11:
* Whitespace fixup pointed out by Metin
v13:
* Refactor set_blocked_on helpers to clean things up a bit
---
include/linux/sched.h | 66 ++++++++++++++++++++++++++++++++----
init/init_task.c | 1 +
kernel/fork.c | 4 +--
kernel/locking/mutex-debug.c | 9 ++---
kernel/locking/mutex.c | 40 ++++++++++++++++++----
kernel/locking/ww_mutex.h | 24 +++++++++++--
kernel/sched/core.c | 1 +
7 files changed, 125 insertions(+), 20 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 2a47228a4808..abe23791de30 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -34,6 +34,7 @@
#include <linux/sched/prio.h>
#include <linux/sched/types.h>
#include <linux/signal_types.h>
+#include <linux/spinlock.h>
#include <linux/syscall_user_dispatch_types.h>
#include <linux/mm_types_task.h>
#include <linux/netdevice_xmit.h>
@@ -775,6 +776,12 @@ struct kmap_ctrl {
#endif
};
+enum blocked_on_state {
+ BO_RUNNABLE,
+ BO_BLOCKED,
+ BO_WAKING,
+};
+
struct task_struct {
#ifdef CONFIG_THREAD_INFO_IN_TASK
/*
@@ -1195,10 +1202,9 @@ struct task_struct {
struct rt_mutex_waiter *pi_blocked_on;
#endif
-#ifdef CONFIG_DEBUG_MUTEXES
- /* Mutex deadlock detection: */
- struct mutex_waiter *blocked_on;
-#endif
+ enum blocked_on_state blocked_on_state;
+ struct mutex *blocked_on; /* lock we're blocked on */
+ raw_spinlock_t blocked_lock;
#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
int non_block_count;
@@ -2116,6 +2122,56 @@ extern int __cond_resched_rwlock_write(rwlock_t *lock);
__cond_resched_rwlock_write(lock); \
})
+static inline void __set_blocked_on_runnable(struct task_struct *p)
+{
+ lockdep_assert_held(&p->blocked_lock);
+
+ if (p->blocked_on_state == BO_WAKING)
+ p->blocked_on_state = BO_RUNNABLE;
+}
+
+static inline void set_blocked_on_runnable(struct task_struct *p)
+{
+ unsigned long flags;
+
+ if (!sched_proxy_exec())
+ return;
+
+ raw_spin_lock_irqsave(&p->blocked_lock, flags);
+ __set_blocked_on_runnable(p);
+ raw_spin_unlock_irqrestore(&p->blocked_lock, flags);
+}
+
+static inline void set_blocked_on_waking(struct task_struct *p)
+{
+ lockdep_assert_held(&p->blocked_lock);
+
+ if (p->blocked_on_state == BO_BLOCKED)
+ p->blocked_on_state = BO_WAKING;
+}
+
+static inline void set_task_blocked_on(struct task_struct *p, struct mutex *m)
+{
+ lockdep_assert_held(&p->blocked_lock);
+
+ /*
+ * Check that we are either setting blocked_on from NULL to a
+ * mutex, or clearing it from a mutex back to NULL, to ensure
+ * we don't overwrite an existing value or clear an already
+ * cleared one.
+ */
+ WARN_ON((!m && !p->blocked_on) || (m && p->blocked_on));
+
+ p->blocked_on = m;
+ p->blocked_on_state = m ? BO_BLOCKED : BO_RUNNABLE;
+}
+
+static inline struct mutex *get_task_blocked_on(struct task_struct *p)
+{
+ lockdep_assert_held(&p->blocked_lock);
+
+ return p->blocked_on;
+}
+
static __always_inline bool need_resched(void)
{
return unlikely(tif_need_resched());
@@ -2155,8 +2211,6 @@ extern bool sched_task_on_rq(struct task_struct *p);
extern unsigned long get_wchan(struct task_struct *p);
extern struct task_struct *cpu_curr_snapshot(int cpu);
-#include <linux/spinlock.h>
-
/*
* In order to reduce various lock holder preemption latencies provide an
* interface to see if a vCPU is currently running or not.
diff --git a/init/init_task.c b/init/init_task.c
index 136a8231355a..c2da9cce36bd 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -139,6 +139,7 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
.journal_info = NULL,
INIT_CPU_TIMERS(init_task)
.pi_lock = __RAW_SPIN_LOCK_UNLOCKED(init_task.pi_lock),
+ .blocked_lock = __RAW_SPIN_LOCK_UNLOCKED(init_task.blocked_lock),
.timer_slack_ns = 50000, /* 50 usec default slack */
.thread_pid = &init_struct_pid,
.thread_node = LIST_HEAD_INIT(init_signals.thread_head),
diff --git a/kernel/fork.c b/kernel/fork.c
index 7d950e93f080..99ebc8269140 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2229,6 +2229,7 @@ __latent_entropy struct task_struct *copy_process(
ftrace_graph_init_task(p);
rt_mutex_init_task(p);
+ raw_spin_lock_init(&p->blocked_lock);
lockdep_assert_irqs_enabled();
#ifdef CONFIG_PROVE_LOCKING
@@ -2326,9 +2327,8 @@ __latent_entropy struct task_struct *copy_process(
lockdep_init_task(p);
#endif
-#ifdef CONFIG_DEBUG_MUTEXES
+ p->blocked_on_state = BO_RUNNABLE;
p->blocked_on = NULL; /* not blocked yet */
-#endif
#ifdef CONFIG_BCACHE
p->sequential_io = 0;
p->sequential_io_avg = 0;
diff --git a/kernel/locking/mutex-debug.c b/kernel/locking/mutex-debug.c
index 6e6f6071cfa2..1d8cff71f65e 100644
--- a/kernel/locking/mutex-debug.c
+++ b/kernel/locking/mutex-debug.c
@@ -53,17 +53,18 @@ void debug_mutex_add_waiter(struct mutex *lock, struct mutex_waiter *waiter,
{
lockdep_assert_held(&lock->wait_lock);
- /* Mark the current thread as blocked on the lock: */
- task->blocked_on = waiter;
+ /* Current thread can't be already blocked (since it's executing!) */
+ DEBUG_LOCKS_WARN_ON(get_task_blocked_on(task));
}
void debug_mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
struct task_struct *task)
{
+ struct mutex *blocked_on = get_task_blocked_on(task);
+
DEBUG_LOCKS_WARN_ON(list_empty(&waiter->list));
DEBUG_LOCKS_WARN_ON(waiter->task != task);
- DEBUG_LOCKS_WARN_ON(task->blocked_on != waiter);
- task->blocked_on = NULL;
+ DEBUG_LOCKS_WARN_ON(blocked_on && blocked_on != lock);
INIT_LIST_HEAD(&waiter->list);
waiter->task = NULL;
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 3302e52f0c96..8f5d3fe6c102 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -597,6 +597,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
}
raw_spin_lock_irqsave(&lock->wait_lock, flags);
+ raw_spin_lock(¤t->blocked_lock);
/*
* After waiting to acquire the wait_lock, try again.
*/
@@ -627,6 +628,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
goto err_early_kill;
}
+ set_task_blocked_on(current, lock);
set_current_state(state);
trace_contention_begin(lock, LCB_F_MUTEX);
for (;;) {
@@ -639,7 +641,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
* the handoff.
*/
if (__mutex_trylock(lock))
- goto acquired;
+ break; /* acquired */
/*
* Check for signals and kill conditions while holding
@@ -657,6 +659,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
goto err;
}
+ raw_spin_unlock(¤t->blocked_lock);
raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
/* Make sure we do wakeups before calling schedule */
wake_up_q(&wake_q);
@@ -666,6 +669,13 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
first = __mutex_waiter_is_first(lock, &waiter);
+ raw_spin_lock_irqsave(&lock->wait_lock, flags);
+ raw_spin_lock(¤t->blocked_lock);
+
+ /*
+ * Re-set blocked_on_state as unlock path set it to WAKING/RUNNABLE
+ */
+ current->blocked_on_state = BO_BLOCKED;
set_current_state(state);
/*
* Here we order against unlock; we must either see it change
@@ -676,16 +686,26 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
break;
if (first) {
+ bool opt_acquired;
+
+ /*
+ * mutex_optimistic_spin() can schedule, so we need to
+ * release these locks before calling it.
+ */
+ current->blocked_on_state = BO_RUNNABLE;
+ raw_spin_unlock(¤t->blocked_lock);
+ raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
trace_contention_begin(lock, LCB_F_MUTEX | LCB_F_SPIN);
- if (mutex_optimistic_spin(lock, ww_ctx, &waiter))
+ opt_acquired = mutex_optimistic_spin(lock, ww_ctx, &waiter);
+ raw_spin_lock_irqsave(&lock->wait_lock, flags);
+ raw_spin_lock(¤t->blocked_lock);
+ current->blocked_on_state = BO_BLOCKED;
+ if (opt_acquired)
break;
trace_contention_begin(lock, LCB_F_MUTEX);
}
-
- raw_spin_lock_irqsave(&lock->wait_lock, flags);
}
- raw_spin_lock_irqsave(&lock->wait_lock, flags);
-acquired:
+ set_task_blocked_on(current, NULL);
__set_current_state(TASK_RUNNING);
if (ww_ctx) {
@@ -710,16 +730,20 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
if (ww_ctx)
ww_mutex_lock_acquired(ww, ww_ctx);
+ raw_spin_unlock(¤t->blocked_lock);
raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
wake_up_q(&wake_q);
preempt_enable();
return 0;
err:
+ set_task_blocked_on(current, NULL);
__set_current_state(TASK_RUNNING);
__mutex_remove_waiter(lock, &waiter);
err_early_kill:
+ WARN_ON(get_task_blocked_on(current));
trace_contention_end(lock, ret);
+ raw_spin_unlock(¤t->blocked_lock);
raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
debug_mutex_free_waiter(&waiter);
mutex_release(&lock->dep_map, ip);
@@ -928,8 +952,12 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
next = waiter->task;
+ raw_spin_lock(&next->blocked_lock);
debug_mutex_wake_waiter(lock, waiter);
+ WARN_ON(get_task_blocked_on(next) != lock);
+ set_blocked_on_waking(next);
wake_q_add(&wake_q, next);
+ raw_spin_unlock(&next->blocked_lock);
}
if (owner & MUTEX_FLAG_HANDOFF)
diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index 37f025a096c9..d1bec62c6ef0 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -281,10 +281,21 @@ __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
return false;
if (waiter->ww_ctx->acquired > 0 && __ww_ctx_less(waiter->ww_ctx, ww_ctx)) {
+ /* nested as we should hold current->blocked_lock already */
+ raw_spin_lock_nested(&waiter->task->blocked_lock, SINGLE_DEPTH_NESTING);
#ifndef WW_RT
debug_mutex_wake_waiter(lock, waiter);
#endif
+ /*
+ * When waking up the task to die, be sure to set the
+ * blocked_on_state to WAKING. Otherwise we can see
+ * circular blocked_on relationships that can't be
+ * resolved.
+ */
+ WARN_ON(get_task_blocked_on(waiter->task) != lock);
+ set_blocked_on_waking(waiter->task);
wake_q_add(wake_q, waiter->task);
+ raw_spin_unlock(&waiter->task->blocked_lock);
}
return true;
@@ -331,9 +342,18 @@ static bool __ww_mutex_wound(struct MUTEX *lock,
* it's wounded in __ww_mutex_check_kill() or has a
* wakeup pending to re-read the wounded state.
*/
- if (owner != current)
+ if (owner != current) {
+ /* nested as we should hold current->blocked_lock already */
+ raw_spin_lock_nested(&owner->blocked_lock, SINGLE_DEPTH_NESTING);
+ /*
+ * When waking up the task to wound, be sure to set the
+ * blocked_on_state flag. Otherwise we can see circular
+ * blocked_on relationships that can't be resolved.
+ */
+ set_blocked_on_waking(owner);
wake_q_add(wake_q, owner);
-
+ raw_spin_unlock(&owner->blocked_lock);
+ }
return true;
}
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 731ebd8614a9..f040feed9df3 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4309,6 +4309,7 @@ int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
ttwu_queue(p, cpu, wake_flags);
}
out:
+ set_blocked_on_runnable(p);
if (success)
ttwu_stat(p, task_cpu(p), wake_flags);
--
2.47.0.199.ga7371fff76-goog
* [RFC][PATCH v13 3/7] sched: Fix runtime accounting w/ split exec & sched contexts
2024-11-06 2:56 [RFC][PATCH v13 0/7] Single CPU Proxy Execution (v13) John Stultz
2024-11-06 2:56 ` [RFC][PATCH v13 1/7] sched: Add CONFIG_SCHED_PROXY_EXEC & boot argument to enable/disable John Stultz
2024-11-06 2:56 ` [RFC][PATCH v13 2/7] locking/mutex: Rework task_struct::blocked_on John Stultz
@ 2024-11-06 2:56 ` John Stultz
2024-11-06 2:56 ` [RFC][PATCH v13 4/7] sched: Fix psi_dequeue for Proxy Execution John Stultz
` (3 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: John Stultz @ 2024-11-06 2:56 UTC (permalink / raw)
To: LKML
Cc: John Stultz, Joel Fernandes, Qais Yousef, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Valentin Schneider, Steven Rostedt, Ben Segall, Zimuzo Ezeozue,
Mel Gorman, Will Deacon, Waiman Long, Boqun Feng,
Paul E. McKenney, Metin Kaya, Xuewen Yan, K Prateek Nayak,
Thomas Gleixner, Daniel Lezcano, kernel-team
The idea here is we want to charge the scheduler-context task's
vruntime but charge the execution-context task's sum_exec_runtime.
This way cputime accounting goes against the task actually running
but vruntime accounting goes against the rq->donor task so we get
proper fairness.
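Schematically (a condensed sketch of the update_curr_se() change
below, with the schedstats and group-entity details trimmed):

    delta_exec = now - se->exec_start;
    se->exec_start = now;
    if (entity_is_task(se)) {
            /* charge cputime to the task actually running */
            rq->curr->se.exec_start = now;
            rq->curr->se.sum_exec_runtime += delta_exec;
    }
    /* vruntime continues to be charged to se, i.e. rq->donor's entity */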
Cc: Joel Fernandes <joelaf@google.com>
Cc: Qais Yousef <qyousef@layalina.io>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Zimuzo Ezeozue <zezeozue@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Metin Kaya <Metin.Kaya@arm.com>
Cc: Xuewen Yan <xuewen.yan94@gmail.com>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: kernel-team@android.com
Signed-off-by: John Stultz <jstultz@google.com>
---
kernel/sched/fair.c | 21 ++++++++++++++++-----
1 file changed, 16 insertions(+), 5 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6512258dc71f..42043310adfe 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1131,22 +1131,33 @@ static void update_tg_load_avg(struct cfs_rq *cfs_rq)
}
#endif /* CONFIG_SMP */
-static s64 update_curr_se(struct rq *rq, struct sched_entity *curr)
+static s64 update_curr_se(struct rq *rq, struct sched_entity *se)
{
u64 now = rq_clock_task(rq);
s64 delta_exec;
- delta_exec = now - curr->exec_start;
+ delta_exec = now - se->exec_start;
if (unlikely(delta_exec <= 0))
return delta_exec;
- curr->exec_start = now;
- curr->sum_exec_runtime += delta_exec;
+ se->exec_start = now;
+ if (entity_is_task(se)) {
+ struct task_struct *running = rq->curr;
+ /*
+ * If se is a task, we account the time against the running
+ * task, as w/ proxy-exec they may not be the same.
+ */
+ running->se.exec_start = now;
+ running->se.sum_exec_runtime += delta_exec;
+ } else {
+ /* If not task, account the time against se */
+ se->sum_exec_runtime += delta_exec;
+ }
if (schedstat_enabled()) {
struct sched_statistics *stats;
- stats = __schedstats_from_se(curr);
+ stats = __schedstats_from_se(se);
__schedstat_set(stats->exec_max,
max(delta_exec, stats->exec_max));
}
--
2.47.0.199.ga7371fff76-goog
* [RFC][PATCH v13 4/7] sched: Fix psi_dequeue for Proxy Execution
2024-11-06 2:56 [RFC][PATCH v13 0/7] Single CPU Proxy Execution (v13) John Stultz
` (2 preceding siblings ...)
2024-11-06 2:56 ` [RFC][PATCH v13 3/7] sched: Fix runtime accounting w/ split exec & sched contexts John Stultz
@ 2024-11-06 2:56 ` John Stultz
2024-11-06 2:56 ` [RFC][PATCH v13 5/7] sched: Add an initial sketch of the find_proxy_task() function John Stultz
` (2 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: John Stultz @ 2024-11-06 2:56 UTC (permalink / raw)
To: LKML
Cc: John Stultz, Joel Fernandes, Qais Yousef, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Valentin Schneider, Steven Rostedt, Ben Segall, Zimuzo Ezeozue,
Mel Gorman, Will Deacon, Waiman Long, Boqun Feng,
Paul E. McKenney, Metin Kaya, Xuewen Yan, K Prateek Nayak,
Thomas Gleixner, Daniel Lezcano, kernel-team
Currently, if the sleep flag is set, psi_dequeue() doesn't
change any of the psi_flags.
This is because psi_switch_task() will clear TSK_ONCPU as well
as other potential flags (TSK_RUNNING), and the assumption is
that a voluntary sleep always consists of a task being dequeued
followed shortly thereafter by a psi_sched_switch() call.
Proxy Execution changes this expectation, as mutex-blocked tasks
that would normally sleep stay on the runqueue. In the case where
the mutex-owning task goes to sleep, we will then deactivate the
blocked task as well.
However, in that case, the mutex-blocked task will have had
its TSK_ONCPU cleared when it was switched off the CPU, but it
will stay TSK_RUNNING. Then when we later dequeue it because of
a sleeping owner, since the sleep flag is set, psi_dequeue()
won't change any state (leaving it TSK_RUNNING), as it
incorrectly expects a psi_task_switch() call to immediately
follow.
Later, when it gets re-enqueued and psi_flags is set for
TSK_RUNNING, we hit an error as the task is already TSK_RUNNING:
psi: inconsistent task state!
To resolve this, extend the logic in psi_dequeue() so that
if the sleep flag is set, we also check that psi_flags has
TSK_ONCPU set (meaning the psi_task_switch() is imminent) before
we do the shortcut return.
If TSK_ONCPU is not set, that means we've already switched away,
and this psi_dequeue() call needs to clear the flags.
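To make the failure mode concrete, the sequence is roughly:

    1. The mutex owner sleeps, and __schedule() also deactivates
       the mutex-blocked task B with DEQUEUE_SLEEP.
    2. B's TSK_ONCPU was already cleared when B was switched out,
       so B->psi_flags == TSK_RUNNING.
    3. psi_dequeue() sees DEQUEUE_SLEEP and returns early, leaving
       TSK_RUNNING set with no psi_task_switch() call to follow.
    4. When B is later re-enqueued, psi tries to set TSK_RUNNING
       again and complains.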
Cc: Joel Fernandes <joelaf@google.com>
Cc: Qais Yousef <qyousef@layalina.io>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Zimuzo Ezeozue <zezeozue@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Metin Kaya <Metin.Kaya@arm.com>
Cc: Xuewen Yan <xuewen.yan94@gmail.com>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: kernel-team@android.com
Signed-off-by: John Stultz <jstultz@google.com>
---
v13:
* Reworked for collision
---
kernel/sched/stats.h | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
index 8ee0add5a48a..c313fe76a772 100644
--- a/kernel/sched/stats.h
+++ b/kernel/sched/stats.h
@@ -176,8 +176,12 @@ static inline void psi_dequeue(struct task_struct *p, int flags)
* avoid walking all ancestors twice, psi_task_switch() handles
* TSK_RUNNING and TSK_IOWAIT for us when it moves TSK_ONCPU.
* Do nothing here.
+ * In the SCHED_PROXY_EXEC case we may do sleeping
+ * dequeues that are not followed by a task switch, so check
+ * TSK_ONCPU is set to ensure the task switch is imminent.
+ * Otherwise clear the flags as usual.
*/
- if (flags & DEQUEUE_SLEEP)
+ if ((flags & DEQUEUE_SLEEP) && (p->psi_flags & TSK_ONCPU))
return;
/*
--
2.47.0.199.ga7371fff76-goog
* [RFC][PATCH v13 5/7] sched: Add an initial sketch of the find_proxy_task() function
2024-11-06 2:56 [RFC][PATCH v13 0/7] Single CPU Proxy Execution (v13) John Stultz
` (3 preceding siblings ...)
2024-11-06 2:56 ` [RFC][PATCH v13 4/7] sched: Fix psi_dequeue for Proxy Execution John Stultz
@ 2024-11-06 2:56 ` John Stultz
2024-11-06 2:56 ` [RFC][PATCH v13 6/7] sched: Fix proxy/current (push,pull)ability John Stultz
2024-11-06 2:56 ` [RFC][PATCH v13 7/7] sched: Start blocked_on chain processing in find_proxy_task() John Stultz
6 siblings, 0 replies; 8+ messages in thread
From: John Stultz @ 2024-11-06 2:56 UTC (permalink / raw)
To: LKML
Cc: John Stultz, Joel Fernandes, Qais Yousef, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Valentin Schneider, Steven Rostedt, Ben Segall, Zimuzo Ezeozue,
Mel Gorman, Will Deacon, Waiman Long, Boqun Feng,
Paul E. McKenney, Metin Kaya, Xuewen Yan, K Prateek Nayak,
Thomas Gleixner, Daniel Lezcano, kernel-team
Add a find_proxy_task() function which doesn't do much yet.
When we select a blocked task to run, we will just deactivate it
and pick again, the exception being if it has become unblocked
after find_proxy_task() was called.
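Condensed from the __schedule() hunk below, the hook ends up
looking roughly like:

    pick_again:
            next = pick_next_task(rq, rq->donor, &rf);
            rq_set_donor(rq, next);
            if (unlikely(task_is_blocked(next))) {
                    next = find_proxy_task(rq, next, &rf);
                    if (!next) {
                            zap_balance_callbacks(rq);
                            goto pick_again;
                    }
            }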
Greatly simplified from patch by:
Peter Zijlstra (Intel) <peterz@infradead.org>
Juri Lelli <juri.lelli@redhat.com>
Valentin Schneider <valentin.schneider@arm.com>
Connor O'Brien <connoro@google.com>
Cc: Joel Fernandes <joelaf@google.com>
Cc: Qais Yousef <qyousef@layalina.io>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Zimuzo Ezeozue <zezeozue@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Metin Kaya <Metin.Kaya@arm.com>
Cc: Xuewen Yan <xuewen.yan94@gmail.com>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: kernel-team@android.com
[jstultz: Split out from larger proxy patch and simplified
for review and testing.]
Signed-off-by: John Stultz <jstultz@google.com>
---
v5:
* Split out from larger proxy patch
v7:
* Fixed unused function arguments, spelling nits, and tweaks for
clarity, pointed out by Metin Kaya
* Fix build warning Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202311081028.yDLmCWgr-lkp@intel.com/
v8:
* Fixed case where we might return a blocked task from find_proxy_task()
* Continued tweaks to handle avoiding returning blocked tasks
v9:
* Add zap_balance_callbacks helper to unwind balance_callbacks
when we will re-call pick_next_task() again.
* Add extra comment suggested by Metin
* Typo fixes from Metin
* Moved adding proxy_resched_idle earlier in the series, as suggested
by Metin
* Fix to call proxy_resched_idle() *prior* to deactivating next, to avoid
crashes caused by stale references to next
* s/PROXY/SCHED_PROXY_EXEC/ as suggested by Metin
* Number of tweaks and cleanups suggested by Metin
* Simplify proxy_deactivate as suggested by Metin
v11:
* Tweaks for earlier simplification in try_to_deactivate_task
v13:
* Rename "next" to "donor" in find_proxy_task() for clarity
* Similarly use "donor" instead of next in proxy_deactivate
* Refactor/simplify proxy_resched_idle
* Moved up a needed fix from later in the series
---
kernel/sched/core.c | 129 ++++++++++++++++++++++++++++++++++++++++++-
kernel/sched/rt.c | 15 ++++-
kernel/sched/sched.h | 10 +++-
3 files changed, 148 insertions(+), 6 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index f040feed9df3..4e2c51c477b0 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5010,6 +5010,34 @@ static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
}
}
+/*
+ * Only called from __schedule context
+ *
+ * There are some cases where we are going to re-do the action
+ * that added the balance callbacks. We may not be in a state
+ * where we can run them, so just zap them so they can be
+ * properly re-added on the next time around. This is similar
+ * handling to running the callbacks, except we just don't call
+ * them.
+ */
+static void zap_balance_callbacks(struct rq *rq)
+{
+ struct balance_callback *next, *head;
+ bool found = false;
+
+ lockdep_assert_rq_held(rq);
+
+ head = rq->balance_callback;
+ while (head) {
+ if (head == &balance_push_callback)
+ found = true;
+ next = head->next;
+ head->next = NULL;
+ head = next;
+ }
+ rq->balance_callback = found ? &balance_push_callback : NULL;
+}
+
static void balance_push(struct rq *rq);
/*
@@ -6543,7 +6571,7 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
* Otherwise marks the task's __state as RUNNING
*/
static bool try_to_block_task(struct rq *rq, struct task_struct *p,
- unsigned long task_state)
+ unsigned long task_state, bool deactivate_cond)
{
int flags = DEQUEUE_NOCLOCK;
@@ -6552,6 +6580,9 @@ static bool try_to_block_task(struct rq *rq, struct task_struct *p,
return false;
}
+ if (!deactivate_cond)
+ return false;
+
p->sched_contributes_to_load =
(task_state & TASK_UNINTERRUPTIBLE) &&
!(task_state & TASK_NOLOAD) &&
@@ -6575,6 +6606,88 @@ static bool try_to_block_task(struct rq *rq, struct task_struct *p,
return true;
}
+#ifdef CONFIG_SCHED_PROXY_EXEC
+
+static inline struct task_struct *
+proxy_resched_idle(struct rq *rq)
+{
+ put_prev_task(rq, rq->donor);
+ rq_set_donor(rq, rq->idle);
+ set_next_task(rq, rq->idle);
+ set_tsk_need_resched(rq->idle);
+ return rq->idle;
+}
+
+static bool proxy_deactivate(struct rq *rq, struct task_struct *donor)
+{
+ unsigned long state = READ_ONCE(donor->__state);
+
+ /* Don't deactivate if the state has been changed to TASK_RUNNING */
+ if (state == TASK_RUNNING)
+ return false;
+ /*
+ * Because we got donor from pick_next_task, it is *crucial*
+ * that we call proxy_resched_idle before we deactivate it.
+ * As once we deactivate donor, donor->on_rq is set to zero,
+ * which allows ttwu to immediately try to wake the task on
+ * another rq, so we cannot use *any* references to donor
+ * after that point. Things like cfs_rq->curr or rq->donor
+ * therefore need to be switched away from donor *before*
+ * we deactivate it.
+ */
+ proxy_resched_idle(rq);
+ return try_to_block_task(rq, donor, state, true);
+}
+
+/*
+ * Initial simple proxy that just returns the task if it's waking
+ * or deactivates the blocked task so we can pick something that
+ * isn't blocked.
+ */
+static struct task_struct *
+find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
+{
+ struct task_struct *p = donor;
+ struct mutex *mutex;
+
+ mutex = p->blocked_on;
+ /* Something changed in the chain, so pick again */
+ if (!mutex)
+ return NULL;
+ /*
+ * By taking mutex->wait_lock we hold off concurrent mutex_unlock()
+ * and ensure @owner sticks around.
+ */
+ raw_spin_lock(&mutex->wait_lock);
+ raw_spin_lock(&p->blocked_lock);
+
+ /* Check again that p is blocked with blocked_lock held */
+ if (!task_is_blocked(p) || mutex != get_task_blocked_on(p)) {
+ /*
+ * Something changed in the blocked_on chain and
+ * we don't know if only at this level. So, let's
+ * just bail out completely and let __schedule
+ * figure things out (pick_again loop).
+ */
+ goto out;
+ }
+ if (!proxy_deactivate(rq, donor))
+ /* XXX: This hack won't work when we get to migrations */
+ donor->blocked_on_state = BO_RUNNABLE;
+
+out:
+ raw_spin_unlock(&p->blocked_lock);
+ raw_spin_unlock(&mutex->wait_lock);
+ return NULL;
+}
+#else /* SCHED_PROXY_EXEC */
+static struct task_struct *
+find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
+{
+ WARN_ONCE(1, "This should never be called in the !SCHED_PROXY_EXEC case\n");
+ return donor;
+}
+#endif /* SCHED_PROXY_EXEC */
+
/*
* __schedule() is the main scheduler function.
*
@@ -6683,12 +6796,22 @@ static void __sched notrace __schedule(int sched_mode)
goto picked;
}
} else if (!preempt && prev_state) {
- block = try_to_block_task(rq, prev, prev_state);
+ block = try_to_block_task(rq, prev, prev_state,
+ !task_is_blocked(prev));
switch_count = &prev->nvcsw;
}
- next = pick_next_task(rq, prev, &rf);
+pick_again:
+ next = pick_next_task(rq, rq->donor, &rf);
rq_set_donor(rq, next);
+ if (unlikely(task_is_blocked(next))) {
+ next = find_proxy_task(rq, next, &rf);
+ if (!next) {
+ /* zap the balance_callbacks before picking again */
+ zap_balance_callbacks(rq);
+ goto pick_again;
+ }
+ }
picked:
clear_tsk_need_resched(prev);
clear_preempt_need_resched();
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index bd66a46b06ac..fa4d9bf76ad4 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1479,8 +1479,19 @@ enqueue_task_rt(struct rq *rq, struct task_struct *p, int flags)
enqueue_rt_entity(rt_se, flags);
- if (!task_current(rq, p) && p->nr_cpus_allowed > 1)
- enqueue_pushable_task(rq, p);
+ /*
+ * Current can't be pushed away. The donor is tied to current,
+ * so don't push it either.
+ */
+ if (task_current(rq, p) || task_current_donor(rq, p))
+ return;
+ /*
+ * Pinned tasks can't be pushed.
+ */
+ if (p->nr_cpus_allowed == 1)
+ return;
+
+ enqueue_pushable_task(rq, p);
}
static bool dequeue_task_rt(struct rq *rq, struct task_struct *p, int flags)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 258db6ef8c70..529d4f34ea7b 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2271,6 +2271,14 @@ static inline int task_current_donor(struct rq *rq, struct task_struct *p)
return rq->donor == p;
}
+static inline bool task_is_blocked(struct task_struct *p)
+{
+ if (!sched_proxy_exec())
+ return false;
+
+ return !!p->blocked_on && p->blocked_on_state != BO_RUNNABLE;
+}
+
static inline int task_on_cpu(struct rq *rq, struct task_struct *p)
{
#ifdef CONFIG_SMP
@@ -2480,7 +2488,7 @@ static inline void put_prev_set_next_task(struct rq *rq,
struct task_struct *prev,
struct task_struct *next)
{
- WARN_ON_ONCE(rq->curr != prev);
+ WARN_ON_ONCE(rq->donor != prev);
__put_prev_set_next_dl_server(rq, prev, next);
--
2.47.0.199.ga7371fff76-goog
* [RFC][PATCH v13 6/7] sched: Fix proxy/current (push,pull)ability
2024-11-06 2:56 [RFC][PATCH v13 0/7] Single CPU Proxy Execution (v13) John Stultz
` (4 preceding siblings ...)
2024-11-06 2:56 ` [RFC][PATCH v13 5/7] sched: Add an initial sketch of the find_proxy_task() function John Stultz
@ 2024-11-06 2:56 ` John Stultz
2024-11-06 2:56 ` [RFC][PATCH v13 7/7] sched: Start blocked_on chain processing in find_proxy_task() John Stultz
6 siblings, 0 replies; 8+ messages in thread
From: John Stultz @ 2024-11-06 2:56 UTC (permalink / raw)
To: LKML
Cc: Valentin Schneider, Joel Fernandes, Qais Yousef, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Valentin Schneider, Steven Rostedt, Ben Segall, Zimuzo Ezeozue,
Mel Gorman, Will Deacon, Waiman Long, Boqun Feng,
Paul E. McKenney, Metin Kaya, Xuewen Yan, K Prateek Nayak,
Thomas Gleixner, Daniel Lezcano, kernel-team, Connor O'Brien,
John Stultz
From: Valentin Schneider <valentin.schneider@arm.com>
Proxy execution forms atomic pairs of tasks: The waiting donor
task (scheduling context) and a proxy (execution context). The
donor task, along with the rest of the blocked chain, follows
the proxy wrt CPU placement.
They can be the same task, in which case push/pull doesn't need any
modification. When they are different, however,
FIFO1 & FIFO42:

                  ,-> RT42
                  |    | blocked-on
                  |    v
    blocked_donor |  mutex
                  |    | owner
                  |    v
                  `-- RT1

      RT1
      RT42

     CPU0            CPU1
      ^                ^
      |                |
  overloaded      !overloaded
  rq prio = 42    rq prio = 0
RT1 is eligible to be pushed to CPU1, but should that happen it
will "carry" RT42 along. Clearly here neither RT1 nor RT42 should
be seen as push/pullable.
Unfortunately, only the donor task is usually dequeued from the
rq, and the proxy'ed execution context (rq->curr) remains on the
rq. This can cause RT1 to be selected for migration by logic like
the RT pushable_list.
Thus, add a dequeue/enqueue cycle on the proxy task before
__schedule() returns, which allows the sched class logic to avoid
adding the now-current task to the pushable_list.
Furthermore, tasks becoming blocked on a mutex don't need an explicit
dequeue/enqueue cycle to be made (push/pull)able: they have to be running
to block on a mutex, thus they will eventually hit put_prev_task().
Cc: Joel Fernandes <joelaf@google.com>
Cc: Qais Yousef <qyousef@layalina.io>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Zimuzo Ezeozue <zezeozue@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Metin Kaya <Metin.Kaya@arm.com>
Cc: Xuewen Yan <xuewen.yan94@gmail.com>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: kernel-team@android.com
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Connor O'Brien <connoro@google.com>
Signed-off-by: John Stultz <jstultz@google.com>
---
v3:
* Tweaked comments & commit message
v5:
* Minor simplifications to utilize the fix earlier
in the patch series.
* Rework the wording of the commit message to match selected/
proxy terminology and expand a bit to make it more clear how
it works.
v6:
* Dropped now-unused proxied value, to be re-added later in the
series when it is used, as caught by Dietmar
v7:
* Unused function argument fixup
* Commit message nit pointed out by Metin Kaya
* Dropped unproven unlikely() and use sched_proxy_exec()
in proxy_tag_curr, suggested by Metin Kaya
v8:
* More cleanups and typo fixes suggested by Metin Kaya
v11:
* Cleanup of commit message suggested by Metin
v12:
* Rework for rq_selected -> rq->donor renaming
---
kernel/sched/core.c | 25 +++++++++++++++++++++++++
1 file changed, 25 insertions(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 4e2c51c477b0..42ea651d1469 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6688,6 +6688,23 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
}
#endif /* SCHED_PROXY_EXEC */
+static inline void proxy_tag_curr(struct rq *rq, struct task_struct *owner)
+{
+ if (!sched_proxy_exec())
+ return;
+ /*
+ * pick_next_task() calls set_next_task() on the chosen task
+ * at some point, which ensures it is not push/pullable.
+ * However, the chosen/donor task *and* the mutex owner form an
+ * atomic pair wrt push/pull.
+ *
+ * Make sure owner we run is not pushable. Unfortunately we can
+ * only deal with that by means of a dequeue/enqueue cycle. :-/
+ */
+ dequeue_task(rq, owner, DEQUEUE_NOCLOCK | DEQUEUE_SAVE);
+ enqueue_task(rq, owner, ENQUEUE_NOCLOCK | ENQUEUE_RESTORE);
+}
+
/*
* __schedule() is the main scheduler function.
*
@@ -6826,6 +6843,10 @@ static void __sched notrace __schedule(int sched_mode)
* changes to task_struct made by pick_next_task().
*/
RCU_INIT_POINTER(rq->curr, next);
+
+ if (!task_current_donor(rq, next))
+ proxy_tag_curr(rq, next);
+
/*
* The membarrier system call requires each architecture
* to have a full memory barrier after updating
@@ -6859,6 +6880,10 @@ static void __sched notrace __schedule(int sched_mode)
/* Also unlocks the rq: */
rq = context_switch(rq, prev, next, &rf);
} else {
+ /* In case next was already curr but just got blocked_donor */
+ if (!task_current_donor(rq, next))
+ proxy_tag_curr(rq, next);
+
rq_unpin_lock(rq, &rf);
__balance_callbacks(rq);
raw_spin_rq_unlock_irq(rq);
--
2.47.0.199.ga7371fff76-goog
* [RFC][PATCH v13 7/7] sched: Start blocked_on chain processing in find_proxy_task()
2024-11-06 2:56 [RFC][PATCH v13 0/7] Single CPU Proxy Execution (v13) John Stultz
` (5 preceding siblings ...)
2024-11-06 2:56 ` [RFC][PATCH v13 6/7] sched: Fix proxy/current (push,pull)ability John Stultz
@ 2024-11-06 2:56 ` John Stultz
6 siblings, 0 replies; 8+ messages in thread
From: John Stultz @ 2024-11-06 2:56 UTC (permalink / raw)
To: LKML
Cc: Peter Zijlstra, Joel Fernandes, Qais Yousef, Ingo Molnar,
Juri Lelli, Vincent Guittot, Dietmar Eggemann, Valentin Schneider,
Steven Rostedt, Ben Segall, Zimuzo Ezeozue, Mel Gorman,
Will Deacon, Waiman Long, Boqun Feng, Paul E. McKenney,
Metin Kaya, Xuewen Yan, K Prateek Nayak, Thomas Gleixner,
Daniel Lezcano, kernel-team, Valentin Schneider,
Connor O'Brien, John Stultz
From: Peter Zijlstra <peterz@infradead.org>
Start to flesh out the real find_proxy_task() implementation,
but avoid the migration cases for now, in those cases just
deactivate the donor task and pick again.
To ensure the donor task or other blocked tasks in the chain
aren't migrated away while we're running the proxy, also tweak
the CFS logic to avoid migrating donor or mutex blocked tasks.
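As an illustrative example of the chain walk (task and mutex
names hypothetical), with donor D blocked on mutex M1 owned by
T1, and T1 in turn blocked on M2 owned by T2:

    D --blocked_on--> M1 --owner--> T1 --blocked_on--> M2 --owner--> T2

If T2 is runnable on this CPU's runqueue, find_proxy_task()
returns T2 as the execution context while D remains rq->donor.
If instead T2 is on another CPU, not on a runqueue, or delayed,
we hit one of the XXX cases below and simply deactivate the
donor and pick again.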
Cc: Joel Fernandes <joelaf@google.com>
Cc: Qais Yousef <qyousef@layalina.io>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Zimuzo Ezeozue <zezeozue@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Metin Kaya <Metin.Kaya@arm.com>
Cc: Xuewen Yan <xuewen.yan94@gmail.com>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: kernel-team@android.com
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Connor O'Brien <connoro@google.com>
[jstultz: This change was split out from the larger proxy patch]
Signed-off-by: John Stultz <jstultz@google.com>
---
v5:
* Split this out from larger proxy patch
v7:
* Minor refactoring of core find_proxy_task() function
* Minor spelling and corrections suggested by Metin Kaya
* Dropped an added BUG_ON that was frequently tripped
v8:
* Fix issue if proxy_deactivate fails, we don't leave task
BO_BLOCKED
* Switch to WARN_ON from BUG_ON checks
v9:
* Improve comments suggested by Metin
* Minor cleanups
v11:
* Previously we checked next==rq->idle && prev==rq->idle, but I
think we only really care if next==rq->idle from find_proxy_task,
as we will still want to resched regardless of what prev was.
v12:
* Commit message rework for selected -> donor rewording
v13:
* Address new delayed dequeue condition (deactivate donor for now)
* Next to donor renaming in find_proxy_task
* Improved comments for find_proxy_task
* Rework for proxy_deactivate cleanup
---
kernel/sched/core.c | 164 ++++++++++++++++++++++++++++++++++++--------
kernel/sched/fair.c | 10 ++-
2 files changed, 146 insertions(+), 28 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 42ea651d1469..932f49765ddf 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -96,6 +96,7 @@
#include "../workqueue_internal.h"
#include "../../io_uring/io-wq.h"
#include "../smpboot.h"
+#include "../locking/mutex.h"
EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpu);
EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpumask);
@@ -2900,8 +2901,15 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
struct set_affinity_pending my_pending = { }, *pending = NULL;
bool stop_pending, complete = false;
- /* Can the task run on the task's current CPU? If so, we're done */
- if (cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
+ /*
+ * Can the task run on the task's current CPU? If so, we're done.
+ *
+ * We are also done if the task is the current donor, boosting a
+ * lock-holding proxy (and thus may have been migrated outside its
+ * current or previous affinity mask).
+ */
+ if (cpumask_test_cpu(task_cpu(p), &p->cpus_mask) ||
+ (task_current_donor(rq, p) && !task_current(rq, p))) {
struct task_struct *push_task = NULL;
if ((flags & SCA_MIGRATE_ENABLE) &&
@@ -6639,41 +6647,139 @@ static bool proxy_deactivate(struct rq *rq, struct task_struct *donor)
}
/*
- * Initial simple proxy that just returns the task if it's waking
- * or deactivates the blocked task so we can pick something that
- * isn't blocked.
+ * Find runnable lock owner to proxy for mutex blocked donor
+ *
+ * Follow the blocked-on relation:
+ * task->blocked_on -> mutex->owner -> task...
+ *
+ * Lock order:
+ *
+ * p->pi_lock
+ * rq->lock
+ * mutex->wait_lock
+ * p->blocked_lock
+ *
+ * Returns the task that is going to be used as execution context (the one
+ * that is actually going to be run on cpu_of(rq)).
*/
static struct task_struct *
find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
{
- struct task_struct *p = donor;
+ struct task_struct *owner = NULL;
+ struct task_struct *ret = NULL;
+ int this_cpu = cpu_of(rq);
+ struct task_struct *p;
struct mutex *mutex;
- mutex = p->blocked_on;
- /* Something changed in the chain, so pick again */
- if (!mutex)
- return NULL;
- /*
- * By taking mutex->wait_lock we hold off concurrent mutex_unlock()
- * and ensure @owner sticks around.
- */
- raw_spin_lock(&mutex->wait_lock);
- raw_spin_lock(&p->blocked_lock);
+ /* Follow blocked_on chain. */
+ for (p = donor; task_is_blocked(p); p = owner) {
+ mutex = p->blocked_on;
+ /* Something changed in the chain, so pick again */
+ if (!mutex)
+ return NULL;
+ /*
+ * By taking mutex->wait_lock we hold off concurrent mutex_unlock()
+ * and ensure @owner sticks around.
+ */
+ raw_spin_lock(&mutex->wait_lock);
+ raw_spin_lock(&p->blocked_lock);
+
+ /* Check again that p is blocked with blocked_lock held */
+ if (mutex != get_task_blocked_on(p)) {
+ /*
+ * Something changed in the blocked_on chain and
+ * we don't know if only at this level. So, let's
+ * just bail out completely and let __schedule
+ * figure things out (pick_again loop).
+ */
+ goto out;
+ }
+
+ owner = __mutex_owner(mutex);
+ if (!owner) {
+ p->blocked_on_state = BO_RUNNABLE;
+ ret = p;
+ goto out;
+ }
+
+ if (task_cpu(owner) != this_cpu) {
+ /* XXX Don't handle migrations yet */
+ if (!proxy_deactivate(rq, donor))
+ goto deactivate_failed;
+ goto out;
+ }
+
+ if (task_on_rq_migrating(owner)) {
+ /*
+ * One of the chain of mutex owners is currently migrating to this
+ * CPU, but has not yet been enqueued because we are holding the
+ * rq lock. As a simple solution, just schedule rq->idle to give
+ * the migration a chance to complete. Much like the migrate_task
+ * case we should end up back in find_proxy_task(), this time
+ * hopefully with all relevant tasks already enqueued.
+ */
+ raw_spin_unlock(&p->blocked_lock);
+ raw_spin_unlock(&mutex->wait_lock);
+ return proxy_resched_idle(rq);
+ }
+
+ if (!owner->on_rq) {
+ /* XXX Don't handle blocked owners yet */
+ if (!proxy_deactivate(rq, donor))
+ goto deactivate_failed;
+ goto out;
+ }
+
+ if (owner->se.sched_delayed) {
+ /* XXX Don't handle delayed dequeue yet */
+ if (!proxy_deactivate(rq, donor))
+ goto deactivate_failed;
+ goto out;
+ }
+
+ if (owner == p) {
+ /*
+ * It's possible we interleave with mutex_unlock like:
+ *
+ * lock(&rq->lock);
+ * find_proxy_task()
+ * mutex_unlock()
+ * lock(&wait_lock);
+ * donor(owner) = current->blocked_donor;
+ * unlock(&wait_lock);
+ *
+ * wake_up_q();
+ * ...
+ * ttwu_runnable()
+ * __task_rq_lock()
+ * lock(&wait_lock);
+ * owner == p
+ *
+ * Which leaves us to finish the ttwu_runnable() and make it go.
+ *
+ * So schedule rq->idle so that ttwu_runnable can get the rq lock
+ * and mark owner as running.
+ */
+ raw_spin_unlock(&p->blocked_lock);
+ raw_spin_unlock(&mutex->wait_lock);
+ return proxy_resched_idle(rq);
+ }
- /* Check again that p is blocked with blocked_lock held */
- if (!task_is_blocked(p) || mutex != get_task_blocked_on(p)) {
/*
- * Something changed in the blocked_on chain and
- * we don't know if only at this level. So, let's
- * just bail out completely and let __schedule
- * figure things out (pick_again loop).
+ * OK, now we're absolutely sure @owner is on this
+ * rq, therefore holding @rq->lock is sufficient to
+ * guarantee its existence, as per ttwu_remote().
*/
- goto out;
+ raw_spin_unlock(&p->blocked_lock);
+ raw_spin_unlock(&mutex->wait_lock);
}
- if (!proxy_deactivate(rq, donor))
- /* XXX: This hack won't work when we get to migrations */
- donor->blocked_on_state = BO_RUNNABLE;
+ WARN_ON_ONCE(owner && !owner->on_rq);
+ return owner;
+
+deactivate_failed:
+ /* XXX: This hack won't work when we get to migrations */
+ donor->blocked_on_state = BO_RUNNABLE;
out:
raw_spin_unlock(&p->blocked_lock);
raw_spin_unlock(&mutex->wait_lock);
@@ -6758,6 +6864,7 @@ static void __sched notrace __schedule(int sched_mode)
struct rq_flags rf;
struct rq *rq;
int cpu;
+ bool preserve_need_resched = false;
cpu = smp_processor_id();
rq = cpu_rq(cpu);
@@ -6828,9 +6935,12 @@ static void __sched notrace __schedule(int sched_mode)
zap_balance_callbacks(rq);
goto pick_again;
}
+ if (next == rq->idle)
+ preserve_need_resched = true;
}
picked:
- clear_tsk_need_resched(prev);
+ if (!preserve_need_resched)
+ clear_tsk_need_resched(prev);
clear_preempt_need_resched();
#ifdef CONFIG_SCHED_DEBUG
rq->last_seen_need_resched_ns = 0;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 42043310adfe..9625c46aed1e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9372,6 +9372,7 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
* 2) cannot be migrated to this CPU due to cpus_ptr, or
* 3) running (obviously), or
* 4) are cache-hot on their current CPU.
+ * 5) are blocked on mutexes (if SCHED_PROXY_EXEC is enabled)
*/
if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
return 0;
@@ -9380,6 +9381,9 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
if (kthread_is_per_cpu(p))
return 0;
+ if (task_is_blocked(p))
+ return 0;
+
if (!cpumask_test_cpu(env->dst_cpu, p->cpus_ptr)) {
int cpu;
@@ -9416,7 +9420,8 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
/* Record that we found at least one task that could run on dst_cpu */
env->flags &= ~LBF_ALL_PINNED;
- if (task_on_cpu(env->src_rq, p)) {
+ if (task_on_cpu(env->src_rq, p) ||
+ task_current_donor(env->src_rq, p)) {
schedstat_inc(p->stats.nr_failed_migrations_running);
return 0;
}
@@ -9455,6 +9460,9 @@ static void detach_task(struct task_struct *p, struct lb_env *env)
{
lockdep_assert_rq_held(env->src_rq);
+ WARN_ON(task_current(env->src_rq, p));
+ WARN_ON(task_current_donor(env->src_rq, p));
+
deactivate_task(env->src_rq, p, DEQUEUE_NOCLOCK);
set_task_cpu(p, env->dst_cpu);
}
--
2.47.0.199.ga7371fff76-goog