public inbox for linux-kernel@vger.kernel.org
From: Peter Zijlstra <peterz@infradead.org>
To: bigeasy@linutronix.de, tglx@linutronix.de
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org,
	bsegall@google.com, boqun.feng@gmail.com, swood@redhat.com,
	bristot@redhat.com, dietmar.eggemann@arm.com, mingo@redhat.com,
	jstultz@google.com, juri.lelli@redhat.com, mgorman@suse.de,
	rostedt@goodmis.org, vschneid@redhat.com,
	vincent.guittot@linaro.org, longman@redhat.com, will@kernel.org
Subject: [PATCH 5/6] locking/rtmutex: Use rt_mutex specific scheduler helpers
Date: Tue, 15 Aug 2023 13:01:26 +0200	[thread overview]
Message-ID: <20230815111430.421408298@infradead.org> (raw)
In-Reply-To: <20230815110121.117752409@infradead.org>

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Have rt_mutex use the rt_mutex specific scheduler helpers to avoid
recursion vs rtlock on the PI state.

[[ peterz: adapted to new names ]]

Reported-by: Crystal Wood <swood@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/locking/rtmutex.c     |   14 ++++++++++++--
 kernel/locking/rwbase_rt.c   |    2 ++
 kernel/locking/rwsem.c       |    8 +++++++-
 kernel/locking/spinlock_rt.c |    4 ++++
 4 files changed, 25 insertions(+), 3 deletions(-)

--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1636,7 +1636,7 @@ static int __sched rt_mutex_slowlock_blo
 		raw_spin_unlock_irq(&lock->wait_lock);
 
 		if (!owner || !rtmutex_spin_on_owner(lock, waiter, owner))
-			schedule();
+			rt_mutex_schedule();
 
 		raw_spin_lock_irq(&lock->wait_lock);
 		set_current_state(state);
@@ -1665,7 +1665,7 @@ static void __sched rt_mutex_handle_dead
 	WARN(1, "rtmutex deadlock detected\n");
 	while (1) {
 		set_current_state(TASK_INTERRUPTIBLE);
-		schedule();
+		rt_mutex_schedule();
 	}
 }
 
@@ -1761,6 +1761,15 @@ static int __sched rt_mutex_slowlock(str
 	int ret;
 
 	/*
+	 * Do all pre-schedule work here, before we queue a waiter and invoke
+	 * PI -- any such work that trips on rtlock (PREEMPT_RT spinlock) would
+	 * otherwise recurse back into task_blocks_on_rt_mutex() through
+	 * rtlock_slowlock() and will then enqueue a second waiter for this
+	 * same task and things get really confusing real fast.
+	 */
+	rt_mutex_pre_schedule();
+
+	/*
 	 * Technically we could use raw_spin_[un]lock_irq() here, but this can
 	 * be called in early boot if the cmpxchg() fast path is disabled
 	 * (debug, no architecture support). In this case we will acquire the
@@ -1771,6 +1780,7 @@ static int __sched rt_mutex_slowlock(str
 	raw_spin_lock_irqsave(&lock->wait_lock, flags);
 	ret = __rt_mutex_slowlock_locked(lock, ww_ctx, state);
 	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
+	rt_mutex_post_schedule();
 
 	return ret;
 }
--- a/kernel/locking/rwbase_rt.c
+++ b/kernel/locking/rwbase_rt.c
@@ -71,6 +71,7 @@ static int __sched __rwbase_read_lock(st
 	struct rt_mutex_base *rtm = &rwb->rtmutex;
 	int ret;
 
+	rwbase_pre_schedule();
 	raw_spin_lock_irq(&rtm->wait_lock);
 
 	/*
@@ -125,6 +126,7 @@ static int __sched __rwbase_read_lock(st
 		rwbase_rtmutex_unlock(rtm);
 
 	trace_contention_end(rwb, ret);
+	rwbase_post_schedule();
 	return ret;
 }
 
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -1427,8 +1427,14 @@ static inline void __downgrade_write(str
 #define rwbase_signal_pending_state(state, current)	\
 	signal_pending_state(state, current)
 
+#define rwbase_pre_schedule()				\
+	rt_mutex_pre_schedule()
+
 #define rwbase_schedule()				\
-	schedule()
+	rt_mutex_schedule()
+
+#define rwbase_post_schedule()				\
+	rt_mutex_post_schedule()
 
 #include "rwbase_rt.c"
 
--- a/kernel/locking/spinlock_rt.c
+++ b/kernel/locking/spinlock_rt.c
@@ -184,9 +184,13 @@ static __always_inline int  rwbase_rtmut
 
 #define rwbase_signal_pending_state(state, current)	(0)
 
+#define rwbase_pre_schedule()
+
 #define rwbase_schedule()				\
 	schedule_rtlock()
 
+#define rwbase_post_schedule()
+
 #include "rwbase_rt.c"
 /*
  * The common functions which get wrapped into the rwlock API.



Thread overview: 25+ messages
2023-08-15 11:01 [PATCH 0/6] locking/rtmutex: Avoid PI state recursion through sched_submit_work() Peter Zijlstra
2023-08-15 11:01 ` [PATCH 1/6] sched: Constrain locks in sched_submit_work() Peter Zijlstra
2023-08-15 11:01 ` [PATCH 2/6] locking/rtmutex: Avoid unconditional slowpath for DEBUG_RT_MUTEXES Peter Zijlstra
2023-08-15 11:01 ` [PATCH 3/6] sched: Extract __schedule_loop() Peter Zijlstra
2023-08-15 22:33   ` Phil Auld
2023-08-15 22:39     ` Peter Zijlstra
2023-08-16 14:14       ` Phil Auld
2023-08-15 22:42     ` Phil Auld
2023-08-16 10:01     ` Sebastian Andrzej Siewior
2023-08-16 11:39       ` Phil Auld
2023-08-16 12:20         ` Sebastian Andrzej Siewior
2023-08-16 12:48           ` Phil Auld
2023-08-15 11:01 ` [PATCH 4/6] sched: Provide rt_mutex specific scheduler helpers Peter Zijlstra
2023-08-15 11:01 ` Peter Zijlstra [this message]
2023-08-15 11:01 ` [PATCH 6/6] locking/rtmutex: Add a lockdep assert to catch potential nested blocking Peter Zijlstra
2023-08-15 16:15 ` [PATCH 0/6] locking/rtmutex: Avoid PI state recursion through sched_submit_work() Peter Zijlstra
2023-08-16  8:58   ` Sebastian Andrzej Siewior
2023-08-16  9:42     ` Peter Zijlstra
2023-08-16 10:19       ` Sebastian Andrzej Siewior
2023-08-16 13:46         ` Sebastian Andrzej Siewior
2023-08-16 14:58           ` Peter Zijlstra
2023-08-16 15:22             ` Peter Zijlstra
2023-08-16 15:25               ` Sebastian Andrzej Siewior
2023-08-17  6:59             ` Sebastian Andrzej Siewior
2023-08-17  8:26               ` Peter Zijlstra
