From: Steven Rostedt <rostedt@goodmis.org>
To: linux-kernel@vger.kernel.org,
linux-rt-users <linux-rt-users@vger.kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>,
Carsten Emde <C.Emde@osadl.org>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
John Kacur <jkacur@redhat.com>,
Paul Gortmaker <paul.gortmaker@windriver.com>
Subject: [PATCH RT 04/36] rtmutex: Simplify and document try_to_take_rtmutex()
Date: Thu, 12 Mar 2015 15:21:43 -0400
Message-ID: <20150312192155.119505708@goodmis.org>
In-Reply-To: <20150312192139.799127123@goodmis.org>
[-- Attachment #1: 0004-rtmutex-Simplify-and-document-try_to_take_rtmutex.patch --]
[-- Type: text/plain, Size: 6423 bytes --]
3.12.38-rt53-rc1 stable review patch.
If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner <tglx@linutronix.de>
upstream commit: 358c331f391f3e0432f4f96f25017d12ac8d10b1
The current implementation of try_to_take_rtmutex() is correct, but
requires more than a single brain twist to understand the cleverly
encoded conditionals.
Untangle it and document the cases properly.
This looks less efficient at first glance, but it actually reduces the
binary code size on x86_64 by 80 bytes.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Conflicts:
kernel/rtmutex.c
---
kernel/rtmutex.c | 134 ++++++++++++++++++++++++++++++++++++-------------------
1 file changed, 89 insertions(+), 45 deletions(-)
diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
index 85e03e15c7f9..27420e448a9f 100644
--- a/kernel/rtmutex.c
+++ b/kernel/rtmutex.c
@@ -501,78 +501,122 @@ static inline int lock_is_stealable(struct task_struct *task,
*
* Must be called with lock->wait_lock held.
*
- * @lock: the lock to be acquired.
- * @task: the task which wants to acquire the lock
- * @waiter: the waiter that is queued to the lock's wait list. (could be NULL)
+ * @lock: The lock to be acquired.
+ * @task: The task which wants to acquire the lock
+ * @waiter: The waiter that is queued to the lock's wait list if the
+ * callsite called task_blocked_on_lock(), otherwise NULL
*/
static int
__try_to_take_rt_mutex(struct rt_mutex *lock, struct task_struct *task,
struct rt_mutex_waiter *waiter, int mode)
{
+ unsigned long flags;
+
/*
- * We have to be careful here if the atomic speedups are
- * enabled, such that, when
- * - no other waiter is on the lock
- * - the lock has been released since we did the cmpxchg
- * the lock can be released or taken while we are doing the
- * checks and marking the lock with RT_MUTEX_HAS_WAITERS.
+ * Before testing whether we can acquire @lock, we set the
+ * RT_MUTEX_HAS_WAITERS bit in @lock->owner. This forces all
+ * other tasks which try to modify @lock into the slow path
+ * and they serialize on @lock->wait_lock.
*
- * The atomic acquire/release aware variant of
- * mark_rt_mutex_waiters uses a cmpxchg loop. After setting
- * the WAITERS bit, the atomic release / acquire can not
- * happen anymore and lock->wait_lock protects us from the
- * non-atomic case.
+ * The RT_MUTEX_HAS_WAITERS bit can have a transitional state
+ * as explained at the top of this file if and only if:
*
- * Note, that this might set lock->owner =
- * RT_MUTEX_HAS_WAITERS in the case the lock is not contended
- * any more. This is fixed up when we take the ownership.
- * This is the transitional state explained at the top of this file.
+ * - There is a lock owner. The caller must fixup the
+ * transient state if it does a trylock or leaves the lock
+ * function due to a signal or timeout.
+ *
+ * - @task acquires the lock and there are no other
+ * waiters. This is undone in rt_mutex_set_owner(@task) at
+ * the end of this function.
*/
mark_rt_mutex_waiters(lock);
+ /*
+ * If @lock has an owner, give up.
+ */
if (rt_mutex_owner(lock))
return 0;
/*
- * It will get the lock because of one of these conditions:
- * 1) there is no waiter
- * 2) higher priority than waiters
- * 3) it is top waiter
+ * If @waiter != NULL, @task has already enqueued the waiter
+ * into @lock waiter list. If @waiter == NULL then this is a
+ * trylock attempt.
*/
- if (rt_mutex_has_waiters(lock)) {
- struct task_struct *pown = rt_mutex_top_waiter(lock)->task;
-
- if (task != pown && !lock_is_stealable(task, pown, mode))
+ if (waiter) {
+ /*
+ * If waiter is not the highest priority waiter of
+ * @lock, give up.
+ */
+ if (waiter != rt_mutex_top_waiter(lock))
return 0;
- }
-
- /* We got the lock. */
- if (waiter || rt_mutex_has_waiters(lock)) {
- unsigned long flags;
- struct rt_mutex_waiter *top;
-
- raw_spin_lock_irqsave(&task->pi_lock, flags);
-
- /* remove the queued waiter. */
- if (waiter) {
- rt_mutex_dequeue(lock, waiter);
- task->pi_blocked_on = NULL;
- }
+ /*
+ * We can acquire the lock. Remove the waiter from the
+ * lock waiters list.
+ */
+ rt_mutex_dequeue(lock, waiter);
+ } else {
/*
- * We have to enqueue the top waiter(if it exists) into
- * task->pi_waiters list.
+ * If the lock already has waiters, we check whether @task is
+ * eligible to take over the lock.
+ *
+ * If there are no other waiters, @task can acquire
+ * the lock. @task->pi_blocked_on is NULL, so it does
+ * not need to be dequeued.
*/
if (rt_mutex_has_waiters(lock)) {
- top = rt_mutex_top_waiter(lock);
- rt_mutex_enqueue_pi(task, top);
+ struct task_struct *pown = rt_mutex_top_waiter(lock)->task;
+
+ /*
+ * If @task->prio is greater than or equal to
+ * the top waiter priority (kernel view),
+ * @task lost.
+ */
+ if (task != pown && !lock_is_stealable(task, pown, mode))
+ return 0;
+
+ /*
+ * The current top waiter stays enqueued. We
+ * don't have to change anything in the lock
+ * waiters order.
+ */
+ } else {
+ /*
+ * No waiters. Take the lock without the
+ * pi_lock dance. @task->pi_blocked_on is NULL
+ * and we have no waiters to enqueue in @task
+ * pi waiters list.
+ */
+ goto takeit;
}
- raw_spin_unlock_irqrestore(&task->pi_lock, flags);
}
+ /*
+ * Clear @task->pi_blocked_on. Requires protection by
+ * @task->pi_lock. Redundant operation for the @waiter == NULL
+ * case, but conditionals are more expensive than a redundant
+ * store.
+ */
+ raw_spin_lock_irqsave(&task->pi_lock, flags);
+ task->pi_blocked_on = NULL;
+ /*
+ * Finish the lock acquisition. @task is the new owner. If
+ * other waiters exist we have to insert the highest priority
+ * waiter into @task->pi_waiters list.
+ */
+ if (rt_mutex_has_waiters(lock))
+ rt_mutex_enqueue_pi(task, rt_mutex_top_waiter(lock));
+ raw_spin_unlock_irqrestore(&task->pi_lock, flags);
+
+takeit:
+ /* We got the lock. */
debug_rt_mutex_lock(lock);
+ /*
+ * This either preserves the RT_MUTEX_HAS_WAITERS bit if there
+ * are still waiters or clears it.
+ */
rt_mutex_set_owner(lock, task);
rt_mutex_deadlock_account_lock(lock, task);

return 1;
}
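
For reference, the RT_MUTEX_HAS_WAITERS marking described in the new
comments boils down to a cmpxchg loop on the owner word. Below is a
minimal standalone model using C11 atomics; it is illustrative only and
not the kernel implementation (the kernel variant operates on
lock->owner via cmpxchg()):

#include <stdatomic.h>
#include <stdint.h>

#define RT_MUTEX_HAS_WAITERS	1UL

struct rt_mutex_model {
	/* The low bit of the owner word doubles as the waiters flag. */
	_Atomic uintptr_t owner;
};

/*
 * Model of mark_rt_mutex_waiters(): set RT_MUTEX_HAS_WAITERS with a
 * compare-and-swap loop. Once the bit is set, a fast-path cmpxchg of
 * the full owner word fails, so acquire/release are forced into the
 * slow path where they serialize on the wait_lock.
 */
static void mark_waiters_model(struct rt_mutex_model *lock)
{
	uintptr_t owner = atomic_load(&lock->owner);

	/* On failure the CAS reloads 'owner' with the current value. */
	while (!atomic_compare_exchange_weak(&lock->owner, &owner,
					     owner | RT_MUTEX_HAS_WAITERS))
		;
}

That serialization property is exactly what the new comments in
__try_to_take_rt_mutex() rely on.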
--
2.1.4
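
The "kernel view" priority comparison mentioned in the comments can be
modeled the same way. This sketch is not the lock_is_stealable() helper
from the -rt series; STEAL_LATERAL merely stands in for the -rt mode
that also allows taking the lock from an equal-priority top waiter
(remember that a numerically lower ->prio is a higher priority):

#define STEAL_NORMAL	0
#define STEAL_LATERAL	1

/*
 * May a task with priority 'task_prio' take the lock ahead of the
 * current top waiter with priority 'top_prio'?  Lower value means
 * higher priority, so ">=" is how the top waiter wins a tie under
 * STEAL_NORMAL.
 */
static int can_steal_model(int task_prio, int top_prio, int mode)
{
	if (mode == STEAL_LATERAL)
		return task_prio <= top_prio;
	return task_prio < top_prio;
}

With STEAL_NORMAL this matches the comment in the patch: if @task->prio
is greater than or equal to the top waiter priority, @task lost.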