From: Steven Rostedt <rostedt@goodmis.org>
To: linux-kernel@vger.kernel.org,
linux-rt-users <linux-rt-users@vger.kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>,
Carsten Emde <C.Emde@osadl.org>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
John Kacur <jkacur@redhat.com>,
Paul Gortmaker <paul.gortmaker@windriver.com>,
Julia Cartwright <julia@ni.com>,
Daniel Wagner <daniel.wagner@siemens.com>,
tom.zanussi@linux.intel.com, Alex Shi <alex.shi@linaro.org>,
Mike Galbraith <efault@gmx.de>
Subject: [PATCH RT 03/15] rtmutex: Fix lock stealing logic
Date: Fri, 01 Dec 2017 19:01:52 -0500 [thread overview]
Message-ID: <20171202000427.155008831@goodmis.org> (raw)
In-Reply-To: <20171202000149.842718953@goodmis.org>
[-- Attachment #1: 0003-rtmutex-Fix-lock-stealing-logic.patch --]
[-- Type: text/plain, Size: 5570 bytes --]
4.9.65-rt57-rc2 stable review patch.
If anyone has any objections, please let me know.
------------------
From: Mike Galbraith <efault@gmx.de>
1. When trying to acquire an rtmutex, we first try to grab it without
queueing the waiter, and explicitly check for that initial attempt
in the !waiter path of __try_to_take_rt_mutex(). Checking whether
the lock taker is top waiter before allowing a steal attempt in that
path is a thinko: the lock taker has not yet blocked.
2. It seems wrong to change the definition of rt_mutex_waiter_less()
to mean less or perhaps equal when we have an rt_mutex_waiter_equal().
Remove the thinko, restore rt_mutex_waiter_less(), implement and use
rt_mutex_steal() based upon rt_mutex_waiter_less/equal(), moving all
qualification criteria into the function itself.
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
kernel/locking/rtmutex.c | 73 ++++++++++++++++++++++++------------------------
1 file changed, 36 insertions(+), 37 deletions(-)
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index b73cd7c87551..5dbf6789383b 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -235,25 +235,18 @@ static inline bool unlock_rt_mutex_safe(struct rt_mutex *lock,
}
#endif
-#define STEAL_NORMAL 0
-#define STEAL_LATERAL 1
/*
* Only use with rt_mutex_waiter_{less,equal}()
*/
-#define task_to_waiter(p) \
- &(struct rt_mutex_waiter){ .prio = (p)->prio, .deadline = (p)->dl.deadline }
+#define task_to_waiter(p) &(struct rt_mutex_waiter) \
+ { .prio = (p)->prio, .deadline = (p)->dl.deadline, .task = (p) }
static inline int
rt_mutex_waiter_less(struct rt_mutex_waiter *left,
- struct rt_mutex_waiter *right, int mode)
+ struct rt_mutex_waiter *right)
{
- if (mode == STEAL_NORMAL) {
- if (left->prio < right->prio)
- return 1;
- } else {
- if (left->prio <= right->prio)
- return 1;
- }
+ if (left->prio < right->prio)
+ return 1;
/*
* If both waiters have dl_prio(), we check the deadlines of the
@@ -286,6 +279,27 @@ rt_mutex_waiter_equal(struct rt_mutex_waiter *left,
return 1;
}
+#define STEAL_NORMAL 0
+#define STEAL_LATERAL 1
+
+static inline int
+rt_mutex_steal(struct rt_mutex *lock, struct rt_mutex_waiter *waiter, int mode)
+{
+ struct rt_mutex_waiter *top_waiter = rt_mutex_top_waiter(lock);
+
+ if (waiter == top_waiter || rt_mutex_waiter_less(waiter, top_waiter))
+ return 1;
+
+ /*
+ * Note that RT tasks are excluded from lateral-steals
+ * to prevent the introduction of an unbounded latency.
+ */
+ if (mode == STEAL_NORMAL || rt_task(waiter->task))
+ return 0;
+
+ return rt_mutex_waiter_equal(waiter, top_waiter);
+}
+
static void
rt_mutex_enqueue(struct rt_mutex *lock, struct rt_mutex_waiter *waiter)
{
@@ -297,7 +311,7 @@ rt_mutex_enqueue(struct rt_mutex *lock, struct rt_mutex_waiter *waiter)
while (*link) {
parent = *link;
entry = rb_entry(parent, struct rt_mutex_waiter, tree_entry);
- if (rt_mutex_waiter_less(waiter, entry, STEAL_NORMAL)) {
+ if (rt_mutex_waiter_less(waiter, entry)) {
link = &parent->rb_left;
} else {
link = &parent->rb_right;
@@ -336,7 +350,7 @@ rt_mutex_enqueue_pi(struct task_struct *task, struct rt_mutex_waiter *waiter)
while (*link) {
parent = *link;
entry = rb_entry(parent, struct rt_mutex_waiter, pi_tree_entry);
- if (rt_mutex_waiter_less(waiter, entry, STEAL_NORMAL)) {
+ if (rt_mutex_waiter_less(waiter, entry)) {
link = &parent->rb_left;
} else {
link = &parent->rb_right;
@@ -847,6 +861,7 @@ static int rt_mutex_adjust_prio_chain(struct task_struct *task,
* @task: The task which wants to acquire the lock
* @waiter: The waiter that is queued to the lock's wait tree if the
* callsite called task_blocked_on_lock(), otherwise NULL
+ * @mode: Lock steal mode (STEAL_NORMAL, STEAL_LATERAL)
*/
static int __try_to_take_rt_mutex(struct rt_mutex *lock,
struct task_struct *task,
@@ -886,14 +901,11 @@ static int __try_to_take_rt_mutex(struct rt_mutex *lock,
*/
if (waiter) {
/*
- * If waiter is not the highest priority waiter of
- * @lock, give up.
+ * If waiter is not the highest priority waiter of @lock,
+ * or its peer when lateral steal is allowed, give up.
*/
- if (waiter != rt_mutex_top_waiter(lock)) {
- /* XXX rt_mutex_waiter_less() ? */
+ if (!rt_mutex_steal(lock, waiter, mode))
return 0;
- }
-
/*
* We can acquire the lock. Remove the waiter from the
* lock waiters tree.
@@ -910,25 +922,12 @@ static int __try_to_take_rt_mutex(struct rt_mutex *lock,
* not need to be dequeued.
*/
if (rt_mutex_has_waiters(lock)) {
- struct task_struct *pown = rt_mutex_top_waiter(lock)->task;
-
- if (task != pown)
- return 0;
-
- /*
- * Note that RT tasks are excluded from lateral-steals
- * to prevent the introduction of an unbounded latency.
- */
- if (rt_task(task))
- mode = STEAL_NORMAL;
/*
- * If @task->prio is greater than or equal to
- * the top waiter priority (kernel view),
- * @task lost.
+ * If @task->prio is greater than the top waiter
+ * priority (kernel view), or equal to it when a
+ * lateral steal is forbidden, @task lost.
*/
- if (!rt_mutex_waiter_less(task_to_waiter(task),
- rt_mutex_top_waiter(lock),
- mode))
+ if (!rt_mutex_steal(lock, task_to_waiter(task), mode))
return 0;
/*
* The current top waiter stays enqueued. We
--
2.13.2
Thread overview: 20+ messages
2017-12-02 0:01 [PATCH RT 00/15] Linux 4.9.65-rt57-rc2 Steven Rostedt
2017-12-02 0:01 ` [PATCH RT 01/15] Revert "memcontrol: Prevent scheduling while atomic in cgroup code" Steven Rostedt
2017-12-02 0:01 ` [PATCH RT 02/15] Revert "fs: jbd2: pull your plug when waiting for space" Steven Rostedt
2017-12-04 8:37 ` Sebastian Andrzej Siewior
2017-12-08 18:13 ` Steven Rostedt
2017-12-02 0:01 ` Steven Rostedt [this message]
2017-12-02 0:01 ` [PATCH RT 04/15] cpu_pm: replace raw_notifier to atomic_notifier Steven Rostedt
2017-12-02 0:01 ` [PATCH RT 05/15] PM / CPU: replace raw_notifier with atomic_notifier (fixup) Steven Rostedt
2017-12-02 0:01 ` [PATCH RT 06/15] kernel/hrtimer: migrate deferred timer on CPU down Steven Rostedt
2017-12-02 0:01 ` [PATCH RT 07/15] net: take the tcp_sk_lock lock with BH disabled Steven Rostedt
2017-12-02 0:01 ` [PATCH RT 08/15] kernel/hrtimer: dont wakeup a process while holding the hrtimer base lock Steven Rostedt
2017-12-02 0:01 ` [PATCH RT 09/15] kernel/hrtimer/hotplug: dont wake ktimersoftd " Steven Rostedt
2017-12-02 0:01 ` [PATCH RT 10/15] Bluetooth: avoid recursive locking in hci_send_to_channel() Steven Rostedt
2017-12-02 0:02 ` [PATCH RT 11/15] iommu/amd: Use raw_cpu_ptr() instead of get_cpu_ptr() for ->flush_queue Steven Rostedt
2017-12-02 0:02 ` [PATCH RT 12/15] rt/locking: allow recursive local_trylock() Steven Rostedt
2017-12-02 0:02 ` [PATCH RT 13/15] locking/rtmutex: dont drop the wait_lock twice Steven Rostedt
2017-12-02 0:02 ` [PATCH RT 14/15] net: use trylock in icmp_sk Steven Rostedt
2017-12-02 0:02 ` [PATCH RT 15/15] Linux 4.9.65-rt57-rc2 Steven Rostedt