From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: RT-users <linux-rt-users@vger.kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
Steven Rostedt <rostedt@goodmis.org>,
Sebastian Siewior <bigeasy@linutronix.de>,
Julia Cartwright <julia@ni.com>, Wargreen <wargreen@lebib.org>
Subject: [patch RT 3/4] rtmutex: Provide locked slowpath
Date: Sat, 01 Apr 2017 12:51:01 +0200
Message-ID: <20170401112758.211070565@linutronix.de>
In-Reply-To: <20170401105058.958246042@linutronix.de>
[-- Attachment #1: rtmutex-Provide-locked-slowpath.patch --]
[-- Type: text/plain, Size: 4724 bytes --]
The new rt rwsem implementation needs rtmutex::wait_lock to protect struct
rw_semaphore. Dropping the lock and reacquiring it for locking the rtmutex
would open a race window.
Split out the inner workings of the locked slowpath so it can be called with
wait_lock held.
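For illustration, a minimal sketch (not part of this series) of how a caller
that already serializes on rtmutex::wait_lock could use the new entry point.
The helper name and the TASK_UNINTERRUPTIBLE / RT_MUTEX_MIN_CHAINWALK
arguments are picked for the example only; the actual consumer is the rwsem
rework in the follow-up patch.

/* Hypothetical example, not in the tree */
static int example_lock_under_wait_lock(struct rt_mutex *lock)
{
        struct rt_mutex_waiter waiter;
        unsigned long flags;
        int ret;

        rt_mutex_init_waiter(&waiter, false);

        /*
         * Take wait_lock once and keep it held across the slowpath entry,
         * so updates to caller private state and the enqueue on the rtmutex
         * happen atomically versus other lockers.
         */
        raw_spin_lock_irqsave(&lock->wait_lock, flags);

        /* ... evaluate/update caller private state here, race free ... */

        ret = rt_mutex_slowlock_locked(lock, TASK_UNINTERRUPTIBLE, NULL,
                                       RT_MUTEX_MIN_CHAINWALK, NULL, &waiter);

        raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
        debug_rt_mutex_free_waiter(&waiter);

        return ret;
}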
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
kernel/locking/rtmutex.c | 72 +++++++++++++++++++++++-----------------
kernel/locking/rtmutex_common.h | 9 +++++
2 files changed, 51 insertions(+), 30 deletions(-)
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1758,36 +1758,18 @@ static void ww_mutex_account_lock(struct
}
#endif
-/*
- * Slow path lock function:
- */
-static int __sched
-rt_mutex_slowlock(struct rt_mutex *lock, int state,
- struct hrtimer_sleeper *timeout,
- enum rtmutex_chainwalk chwalk,
- struct ww_acquire_ctx *ww_ctx)
+int __sched rt_mutex_slowlock_locked(struct rt_mutex *lock, int state,
+ struct hrtimer_sleeper *timeout,
+ enum rtmutex_chainwalk chwalk,
+ struct ww_acquire_ctx *ww_ctx,
+ struct rt_mutex_waiter *waiter)
{
- struct rt_mutex_waiter waiter;
- unsigned long flags;
- int ret = 0;
-
- rt_mutex_init_waiter(&waiter, false);
-
- /*
- * Technically we could use raw_spin_[un]lock_irq() here, but this can
- * be called in early boot if the cmpxchg() fast path is disabled
- * (debug, no architecture support). In this case we will acquire the
- * rtmutex with lock->wait_lock held. But we cannot unconditionally
- * enable interrupts in that early boot case. So we need to use the
- * irqsave/restore variants.
- */
- raw_spin_lock_irqsave(&lock->wait_lock, flags);
+ int ret;
/* Try to acquire the lock again: */
if (try_to_take_rt_mutex(lock, current, NULL)) {
if (ww_ctx)
ww_mutex_account_lock(lock, ww_ctx);
- raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
return 0;
}
@@ -1797,13 +1779,13 @@ rt_mutex_slowlock(struct rt_mutex *lock,
if (unlikely(timeout))
hrtimer_start_expires(&timeout->timer, HRTIMER_MODE_ABS);
- ret = task_blocks_on_rt_mutex(lock, &waiter, current, chwalk);
+ ret = task_blocks_on_rt_mutex(lock, waiter, current, chwalk);
- if (likely(!ret))
+ if (likely(!ret)) {
/* sleep on the mutex */
- ret = __rt_mutex_slowlock(lock, state, timeout, &waiter,
+ ret = __rt_mutex_slowlock(lock, state, timeout, waiter,
ww_ctx);
- else if (ww_ctx) {
+ } else if (ww_ctx) {
/* ww_mutex received EDEADLK, let it become EALREADY */
ret = __mutex_lock_check_stamp(lock, ww_ctx);
BUG_ON(!ret);
@@ -1812,10 +1794,10 @@ rt_mutex_slowlock(struct rt_mutex *lock,
if (unlikely(ret)) {
__set_current_state(TASK_RUNNING);
if (rt_mutex_has_waiters(lock))
- remove_waiter(lock, &waiter);
+ remove_waiter(lock, waiter);
/* ww_mutex want to report EDEADLK/EALREADY, let them */
if (!ww_ctx)
- rt_mutex_handle_deadlock(ret, chwalk, &waiter);
+ rt_mutex_handle_deadlock(ret, chwalk, waiter);
} else if (ww_ctx) {
ww_mutex_account_lock(lock, ww_ctx);
}
@@ -1825,6 +1807,36 @@ rt_mutex_slowlock(struct rt_mutex *lock,
* unconditionally. We might have to fix that up.
*/
fixup_rt_mutex_waiters(lock);
+ return ret;
+}
+
+/*
+ * Slow path lock function:
+ */
+static int __sched
+rt_mutex_slowlock(struct rt_mutex *lock, int state,
+ struct hrtimer_sleeper *timeout,
+ enum rtmutex_chainwalk chwalk,
+ struct ww_acquire_ctx *ww_ctx)
+{
+ struct rt_mutex_waiter waiter;
+ unsigned long flags;
+ int ret = 0;
+
+ rt_mutex_init_waiter(&waiter, false);
+
+ /*
+ * Technically we could use raw_spin_[un]lock_irq() here, but this can
+ * be called in early boot if the cmpxchg() fast path is disabled
+ * (debug, no architecture support). In this case we will acquire the
+ * rtmutex with lock->wait_lock held. But we cannot unconditionally
+ * enable interrupts in that early boot case. So we need to use the
+ * irqsave/restore variants.
+ */
+ raw_spin_lock_irqsave(&lock->wait_lock, flags);
+
+ ret = rt_mutex_slowlock_locked(lock, state, timeout, chwalk, ww_ctx,
+ &waiter);
raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
--- a/kernel/locking/rtmutex_common.h
+++ b/kernel/locking/rtmutex_common.h
@@ -129,6 +129,15 @@ extern bool __rt_mutex_futex_unlock(stru
extern void rt_mutex_adjust_prio(struct task_struct *task);
+/* RW semaphore special interface */
+struct ww_acquire_ctx;
+
+int __sched rt_mutex_slowlock_locked(struct rt_mutex *lock, int state,
+ struct hrtimer_sleeper *timeout,
+ enum rtmutex_chainwalk chwalk,
+ struct ww_acquire_ctx *ww_ctx,
+ struct rt_mutex_waiter *waiter);
+
#ifdef CONFIG_DEBUG_RT_MUTEXES
# include "rtmutex-debug.h"
#else