From: John Stultz <jstultz@google.com>
To: LKML <linux-kernel@vger.kernel.org>
Cc: John Stultz <jstultz@google.com>,
Joel Fernandes <joelaf@google.com>,
Qais Yousef <qyousef@google.com>, Ingo Molnar <mingo@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
Juri Lelli <juri.lelli@redhat.com>,
Vincent Guittot <vincent.guittot@linaro.org>,
Dietmar Eggemann <dietmar.eggemann@arm.com>,
Valentin Schneider <vschneid@redhat.com>,
Steven Rostedt <rostedt@goodmis.org>,
Ben Segall <bsegall@google.com>,
Zimuzo Ezeozue <zezeozue@google.com>,
Youssef Esmat <youssefesmat@google.com>,
Mel Gorman <mgorman@suse.de>,
Daniel Bristot de Oliveira <bristot@redhat.com>,
Will Deacon <will@kernel.org>, Waiman Long <longman@redhat.com>,
Boqun Feng <boqun.feng@gmail.com>,
"Paul E . McKenney" <paulmck@kernel.org>,
kernel-team@android.com
Subject: [PATCH v5 08/19] locking/mutex: Split blocked_on logic into two states (blocked_on and blocked_on_waking)
Date: Sat, 19 Aug 2023 06:08:42 +0000
Message-ID: <20230819060915.3001568-9-jstultz@google.com>
In-Reply-To: <20230819060915.3001568-1-jstultz@google.com>
This patch adds a blocked_on_waking flag so we can track, separately
from the lock a task is blocked on, whether the task has been woken
and should be allowed to try to acquire that lock.

This avoids some of the subtle magic where the blocked_on state gets
cleared, only to be re-set by the mutex_lock slowpath when the task
tries to acquire the lock on wakeup.

This should make dealing with the ww_mutex issue cleaner.
Cc: Joel Fernandes <joelaf@google.com>
Cc: Qais Yousef <qyousef@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Zimuzo Ezeozue <zezeozue@google.com>
Cc: Youssef Esmat <youssefesmat@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Daniel Bristot de Oliveira <bristot@redhat.com>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: "Paul E . McKenney" <paulmck@kernel.org>
Cc: kernel-team@android.com
Signed-off-by: John Stultz <jstultz@google.com>
---
include/linux/sched.h | 2 ++
kernel/fork.c | 1 +
kernel/locking/mutex.c | 7 ++++---
kernel/locking/ww_mutex.h | 12 ++++++------
kernel/sched/sched.h | 12 ++++++++++++
5 files changed, 25 insertions(+), 9 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 0f32bea47e5e..3b7f26df2496 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1141,6 +1141,7 @@ struct task_struct {
#endif
struct mutex *blocked_on; /* lock we're blocked on */
+ bool blocked_on_waking; /* blocked on, but waking */
raw_spinlock_t blocked_lock;
#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
@@ -2241,6 +2242,7 @@ static inline void set_task_blocked_on(struct task_struct *p, struct mutex *m)
WARN_ON((!m && !p->blocked_on) || (m && p->blocked_on));
p->blocked_on = m;
+ p->blocked_on_waking = false;
}
static inline struct mutex *get_task_blocked_on(struct task_struct *p)
diff --git a/kernel/fork.c b/kernel/fork.c
index 8bad899b6c6e..5b11ead90b12 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2460,6 +2460,7 @@ __latent_entropy struct task_struct *copy_process(
#endif
p->blocked_on = NULL; /* not blocked yet */
+ p->blocked_on_waking = false; /* not blocked yet */
#ifdef CONFIG_BCACHE
p->sequential_io = 0;
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 04b0ea45cc01..687009eca2d1 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -666,10 +666,11 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
raw_spin_lock_irqsave(&lock->wait_lock, flags);
raw_spin_lock(&current->blocked_lock);
+
/*
- * Gets reset by unlock path().
+ * Clear blocked_on_waking flag set by the unlock path().
*/
- set_task_blocked_on(current, lock);
+ current->blocked_on_waking = false;
set_current_state(state);
/*
* Here we order against unlock; we must either see it change
@@ -948,7 +949,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
debug_mutex_wake_waiter(lock, waiter);
raw_spin_lock(&next->blocked_lock);
WARN_ON(next->blocked_on != lock);
- set_task_blocked_on(current, NULL);
+ next->blocked_on_waking = true;
raw_spin_unlock(&next->blocked_lock);
wake_q_add(&wake_q, next);
}
diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index 44a532dda927..3b0a68d7e308 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -287,12 +287,12 @@ __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
debug_mutex_wake_waiter(lock, waiter);
#endif
/*
- * When waking up the task to die, be sure to clear the
- * blocked_on pointer. Otherwise we can see circular
+ * When waking up the task to die, be sure to set the
+ * blocked_on_waking flag. Otherwise we can see circular
* blocked_on relationships that can't resolve.
*/
WARN_ON(waiter->task->blocked_on != lock);
- set_task_blocked_on(waiter->task, NULL);
+ waiter->task->blocked_on_waking = true;
wake_q_add(wake_q, waiter->task);
raw_spin_unlock(&waiter->task->blocked_lock);
}
@@ -345,11 +345,11 @@ static bool __ww_mutex_wound(struct MUTEX *lock,
/* nested as we should hold current->blocked_lock already */
raw_spin_lock_nested(&owner->blocked_lock, SINGLE_DEPTH_NESTING);
/*
- * When waking up the task to wound, be sure to clear the
- * blocked_on pointer. Otherwise we can see circular
+ * When waking up the task to wound, be sure to set the
+ * blocked_on_waking flag. Otherwise we can see circular
* blocked_on relationships that can't resolve.
*/
- set_task_blocked_on(owner, NULL);
+ owner->blocked_on_waking = true;
wake_q_add(wake_q, owner);
raw_spin_unlock(&owner->blocked_lock);
}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 33ad47a093ae..95900ccaaf82 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2111,6 +2111,18 @@ static inline int task_current(struct rq *rq, struct task_struct *p)
return rq->curr == p;
}
+#ifdef CONFIG_PROXY_EXEC
+static inline bool task_is_blocked(struct task_struct *p)
+{
+ return !!p->blocked_on && !p->blocked_on_waking;
+}
+#else /* !PROXY_EXEC */
+static inline bool task_is_blocked(struct task_struct *p)
+{
+ return false;
+}
+#endif /* PROXY_EXEC */
+
static inline int task_on_cpu(struct rq *rq, struct task_struct *p)
{
#ifdef CONFIG_SMP
--
2.42.0.rc1.204.g551eb34607-goog