From: Peter Zijlstra <peterz@infradead.org>
To: mingo@kernel.org, tglx@linutronix.de, juri.lelli@arm.com,
rostedt@goodmis.org, xlpang@redhat.com, bigeasy@linutronix.de
Cc: linux-kernel@vger.kernel.org, mathieu.desnoyers@efficios.com,
jdesfossez@efficios.com, bristot@redhat.com,
peterz@infradead.org
Subject: [PATCH -v2 9/9] rtmutex: Fix more prio comparisons
Date: Mon, 26 Sep 2016 14:32:22 +0200
Message-ID: <20160926124128.400602969@infradead.org>
In-Reply-To: <20160926123213.851818224@infradead.org>
[-- Attachment #1: peterz-cleanup-rt-mutex-6.patch --]
[-- Type: text/plain, Size: 2701 bytes --]
There was a pure ->prio comparison left in try_to_take_rt_mutex();
convert it to use rt_mutex_waiter_less(), noting that greater-or-equal
is not-less (both in the kernel priority view).

This necessitated the introduction of cmp_task(), which creates a
pointer to an unnamed stack variable of struct rt_mutex_waiter type to
compare against tasks.

With this, we can now also create and employ rt_mutex_waiter_equal().
Reviewed-and-tested-by: Juri Lelli <juri.lelli@arm.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
kernel/locking/rtmutex.c | 32 +++++++++++++++++++++++++++++---
1 file changed, 29 insertions(+), 3 deletions(-)
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -158,6 +158,12 @@ static inline bool unlock_rt_mutex_safe(
}
#endif
+/*
+ * Only use with rt_mutex_waiter_{less,equal}()
+ */
+#define cmp_task(p) \
+ &(struct rt_mutex_waiter){ .prio = (p)->prio, .deadline = (p)->dl.deadline }
+
static inline int
rt_mutex_waiter_less(struct rt_mutex_waiter *left,
struct rt_mutex_waiter *right)
@@ -177,6 +183,25 @@ rt_mutex_waiter_less(struct rt_mutex_wai
return 0;
}
+static inline int
+rt_mutex_waiter_equal(struct rt_mutex_waiter *left,
+ struct rt_mutex_waiter *right)
+{
+ if (left->prio != right->prio)
+ return 0;
+
+ /*
+ * If both waiters have dl_prio(), we check the deadlines of the
+ * associated tasks.
+ * If left waiter has a dl_prio(), and we didn't return 0 above,
+ * then right waiter has a dl_prio() too.
+ */
+ if (dl_prio(left->prio))
+ return left->deadline == right->deadline;
+
+ return 1;
+}
+
static void
rt_mutex_enqueue(struct rt_mutex *lock, struct rt_mutex_waiter *waiter)
{
@@ -487,7 +512,7 @@ static int rt_mutex_adjust_prio_chain(st
* enabled we continue, but stop the requeueing in the chain
* walk.
*/
- if (waiter->prio == task->prio && !dl_task(task)) {
+ if (rt_mutex_waiter_equal(waiter, cmp_task(task))) {
if (!detect_deadlock)
goto out_unlock_pi;
else
@@ -790,7 +815,8 @@ static int try_to_take_rt_mutex(struct r
* the top waiter priority (kernel view),
* @task lost.
*/
- if (task->prio >= rt_mutex_top_waiter(lock)->prio)
+ if (!rt_mutex_waiter_less(cmp_task(task),
+ rt_mutex_top_waiter(lock)))
return 0;
/*
@@ -1055,7 +1081,7 @@ void rt_mutex_adjust_pi(struct task_stru
raw_spin_lock_irqsave(&task->pi_lock, flags);
waiter = task->pi_blocked_on;
- if (!waiter || (waiter->prio == task->prio && !dl_prio(task->prio))) {
+ if (!waiter || rt_mutex_waiter_equal(waiter, cmp_task(task))) {
raw_spin_unlock_irqrestore(&task->pi_lock, flags);
return;
}