From: John Stultz <jstultz@google.com>
To: LKML <linux-kernel@vger.kernel.org>
Cc: "Connor O'Brien" <connoro@google.com>,
Joel Fernandes <joelaf@google.com>,
Qais Yousef <qyousef@google.com>, Ingo Molnar <mingo@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
Juri Lelli <juri.lelli@redhat.com>,
Vincent Guittot <vincent.guittot@linaro.org>,
Dietmar Eggemann <dietmar.eggemann@arm.com>,
Valentin Schneider <vschneid@redhat.com>,
Steven Rostedt <rostedt@goodmis.org>,
Ben Segall <bsegall@google.com>,
Zimuzo Ezeozue <zezeozue@google.com>,
Youssef Esmat <youssefesmat@google.com>,
Mel Gorman <mgorman@suse.de>,
Daniel Bristot de Oliveira <bristot@redhat.com>,
Will Deacon <will@kernel.org>, Waiman Long <longman@redhat.com>,
Boqun Feng <boqun.feng@gmail.com>,
"Paul E. McKenney" <paulmck@kernel.org>,
Metin Kaya <Metin.Kaya@arm.com>,
Xuewen Yan <xuewen.yan94@gmail.com>,
K Prateek Nayak <kprateek.nayak@amd.com>,
Thomas Gleixner <tglx@linutronix.de>,
kernel-team@android.com, John Stultz <jstultz@google.com>
Subject: [PATCH v7 21/23] sched: Add find_exec_ctx helper
Date: Tue, 19 Dec 2023 16:18:32 -0800
Message-ID: <20231220001856.3710363-22-jstultz@google.com>
In-Reply-To: <20231220001856.3710363-1-jstultz@google.com>

From: Connor O'Brien <connoro@google.com>

Add a helper to find the runnable owner down a chain of blocked waiters.

This patch was broken out from a larger chain migration patch
originally by Connor O'Brien.
Cc: Joel Fernandes <joelaf@google.com>
Cc: Qais Yousef <qyousef@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Zimuzo Ezeozue <zezeozue@google.com>
Cc: Youssef Esmat <youssefesmat@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Daniel Bristot de Oliveira <bristot@redhat.com>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Metin Kaya <Metin.Kaya@arm.com>
Cc: Xuewen Yan <xuewen.yan94@gmail.com>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-team@android.com
Signed-off-by: Connor O'Brien <connoro@google.com>
[jstultz: split out from larger chain migration patch]
Signed-off-by: John Stultz <jstultz@google.com>
---
kernel/sched/core.c | 42 +++++++++++++++++++++++++++++++++++++++++
kernel/sched/cpupri.c | 11 ++++++++---
kernel/sched/deadline.c | 15 +++++++++++++--
kernel/sched/rt.c | 9 ++++++++-
kernel/sched/sched.h | 10 ++++++++++
5 files changed, 81 insertions(+), 6 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 0c212dcd4b7a..77a79d5f829a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3896,6 +3896,48 @@ static void activate_blocked_entities(struct rq *target_rq,
}
raw_spin_unlock_irqrestore(&owner->blocked_lock, flags);
}
+
+static inline bool task_queued_on_rq(struct rq *rq, struct task_struct *task)
+{
+ if (!task_on_rq_queued(task))
+ return false;
+ smp_rmb();
+ if (task_rq(task) != rq)
+ return false;
+ smp_rmb();
+ if (!task_on_rq_queued(task))
+ return false;
+ return true;
+}
+
+/*
+ * Returns the unblocked task at the end of the blocked chain starting with p
+ * if that chain is composed entirely of tasks enqueued on rq, or NULL otherwise.
+ */
+struct task_struct *find_exec_ctx(struct rq *rq, struct task_struct *p)
+{
+ struct task_struct *exec_ctx, *owner;
+ struct mutex *mutex;
+
+ if (!sched_proxy_exec())
+ return p;
+
+ lockdep_assert_rq_held(rq);
+
+ for (exec_ctx = p; task_is_blocked(exec_ctx) && !task_on_cpu(rq, exec_ctx);
+ exec_ctx = owner) {
+ mutex = exec_ctx->blocked_on;
+ owner = __mutex_owner(mutex);
+ if (owner == exec_ctx)
+ break;
+
+ if (!task_queued_on_rq(rq, owner) || task_current_selected(rq, owner)) {
+ exec_ctx = NULL;
+ break;
+ }
+ }
+ return exec_ctx;
+}
#else /* !CONFIG_SCHED_PROXY_EXEC */
static inline void do_activate_task(struct rq *rq, struct task_struct *p,
int en_flags)
diff --git a/kernel/sched/cpupri.c b/kernel/sched/cpupri.c
index 15e947a3ded7..53be78afdd07 100644
--- a/kernel/sched/cpupri.c
+++ b/kernel/sched/cpupri.c
@@ -96,12 +96,17 @@ static inline int __cpupri_find(struct cpupri *cp, struct task_struct *p,
if (skip)
return 0;
- if (cpumask_any_and(&p->cpus_mask, vec->mask) >= nr_cpu_ids)
+ if ((p && cpumask_any_and(&p->cpus_mask, vec->mask) >= nr_cpu_ids) ||
+ (!p && cpumask_any(vec->mask) >= nr_cpu_ids))
return 0;
if (lowest_mask) {
- cpumask_and(lowest_mask, &p->cpus_mask, vec->mask);
- cpumask_and(lowest_mask, lowest_mask, cpu_active_mask);
+ if (p) {
+ cpumask_and(lowest_mask, &p->cpus_mask, vec->mask);
+ cpumask_and(lowest_mask, lowest_mask, cpu_active_mask);
+ } else {
+ cpumask_copy(lowest_mask, vec->mask);
+ }
/*
* We have to ensure that we have at least one bit
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 999bd17f11c4..21e56ac58e32 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1866,6 +1866,8 @@ static void migrate_task_rq_dl(struct task_struct *p, int new_cpu __maybe_unused
static void check_preempt_equal_dl(struct rq *rq, struct task_struct *p)
{
+ struct task_struct *exec_ctx;
+
/*
* Current can't be migrated, useless to reschedule,
* let's hope p can move out.
@@ -1874,12 +1876,16 @@ static void check_preempt_equal_dl(struct rq *rq, struct task_struct *p)
!cpudl_find(&rq->rd->cpudl, rq_selected(rq), rq->curr, NULL))
return;
+ exec_ctx = find_exec_ctx(rq, p);
+ if (task_current(rq, exec_ctx))
+ return;
+
/*
* p is migratable, so let's not schedule it and
* see if it is pushed or pulled somewhere else.
*/
if (p->nr_cpus_allowed != 1 &&
- cpudl_find(&rq->rd->cpudl, p, p, NULL))
+ cpudl_find(&rq->rd->cpudl, p, exec_ctx, NULL))
return;
resched_curr(rq);
@@ -2169,12 +2175,17 @@ static int find_later_rq(struct task_struct *sched_ctx, struct task_struct *exec
/* Locks the rq it finds */
static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq)
{
+ struct task_struct *exec_ctx;
struct rq *later_rq = NULL;
int tries;
int cpu;
for (tries = 0; tries < DL_MAX_TRIES; tries++) {
- cpu = find_later_rq(task, task);
+ exec_ctx = find_exec_ctx(rq, task);
+ if (!exec_ctx)
+ break;
+
+ cpu = find_later_rq(task, exec_ctx);
if ((cpu == -1) || (cpu == rq->cpu))
break;
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 6371b0fca4ad..f8134d062fa3 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1640,6 +1640,11 @@ static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p)
!cpupri_find(&rq->rd->cpupri, rq_selected(rq), rq->curr, NULL))
return;
+ /* No reason to preempt since rq->curr wouldn't change anyway */
+ exec_ctx = find_exec_ctx(rq, p);
+ if (task_current(rq, exec_ctx))
+ return;
+
/*
* p is migratable, so let's not schedule it and
* see if it is pushed or pulled somewhere else.
@@ -1933,12 +1938,14 @@ static int find_lowest_rq(struct task_struct *sched_ctx, struct task_struct *exe
/* Will lock the rq it finds */
static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
{
+ struct task_struct *exec_ctx;
struct rq *lowest_rq = NULL;
int tries;
int cpu;
for (tries = 0; tries < RT_MAX_TRIES; tries++) {
- cpu = find_lowest_rq(task, task);
+ exec_ctx = find_exec_ctx(rq, task);
+ cpu = find_lowest_rq(task, exec_ctx);
if ((cpu == -1) || (cpu == rq->cpu))
break;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index ef3d327e267c..6cd473224cfe 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3564,6 +3564,16 @@ int task_is_pushable(struct rq *rq, struct task_struct *p, int cpu)
return 0;
}
+
+#ifdef CONFIG_SCHED_PROXY_EXEC
+struct task_struct *find_exec_ctx(struct rq *rq, struct task_struct *p);
+#else /* !CONFIG_SCHED_PROXY_EXEC */
+static inline
+struct task_struct *find_exec_ctx(struct rq *rq, struct task_struct *p)
+{
+ return p;
+}
+#endif /* CONFIG_SCHED_PROXY_EXEC */
#endif
#endif /* _KERNEL_SCHED_SCHED_H */
--
2.43.0.472.g3155946c3a-goog