stable.vger.kernel.org archive mirror
* FAILED: patch "[PATCH] rtmutex: Drop rt_mutex::wait_lock before scheduling" failed to apply to 4.19-stable tree
@ 2024-09-08 10:30 gregkh
  2024-09-09  6:16 ` [PATCH 5.10.y, 5.4.y, 4.19.y] rtmutex: Drop rt_mutex::wait_lock before scheduling Thomas Gleixner
  0 siblings, 1 reply; 3+ messages in thread
From: gregkh @ 2024-09-08 10:30 UTC (permalink / raw)
  To: mu001999, tglx; +Cc: stable


The patch below does not apply to the 4.19-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id, to <stable@vger.kernel.org>.

To reproduce the conflict and resubmit, you may use the following commands:

git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-4.19.y
git checkout FETCH_HEAD
git cherry-pick -x d33d26036a0274b472299d7dcdaa5fb34329f91b
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to '<stable@vger.kernel.org>' --in-reply-to '2024090850-nuclear-radar-ea2b@gregkh' --subject-prefix 'PATCH 4.19.y' HEAD^..

Possible dependencies:

d33d26036a02 ("rtmutex: Drop rt_mutex::wait_lock before scheduling")
add461325ec5 ("locking/rtmutex: Extend the rtmutex core to support ww_mutex")
1c143c4b65da ("locking/rtmutex: Provide the spin/rwlock core lock function")
e17ba59b7e8e ("locking/rtmutex: Guard regular sleeping locks specific functions")
7980aa397cc0 ("locking/rtmutex: Use rt_mutex_wake_q_head")
c014ef69b3ac ("locking/rtmutex: Add wake_state to rt_mutex_waiter")
42254105dfe8 ("locking/rwsem: Add rtmutex based R/W semaphore implementation")
ebbdc41e90ff ("locking/rtmutex: Provide rt_mutex_slowlock_locked()")
830e6acc8a1c ("locking/rtmutex: Split out the inner parts of 'struct rtmutex'")
531ae4b06a73 ("locking/rtmutex: Split API from implementation")
785159301bed ("locking/rtmutex: Convert macros to inlines")
b41cda037655 ("locking/rtmutex: Set proper wait context for lockdep")
2f064a59a11f ("sched: Change task_struct::state")
d6c23bb3a2ad ("sched: Add get_current_state()")
b03fbd4ff24c ("sched: Introduce task_is_running()")
a9e906b71f96 ("Merge branch 'sched/urgent' into sched/core, to pick up fixes")

thanks,

greg k-h

------------------ original commit in Linus's tree ------------------

From d33d26036a0274b472299d7dcdaa5fb34329f91b Mon Sep 17 00:00:00 2001
From: Roland Xu <mu001999@outlook.com>
Date: Thu, 15 Aug 2024 10:58:13 +0800
Subject: [PATCH] rtmutex: Drop rt_mutex::wait_lock before scheduling

rt_mutex_handle_deadlock() is called with rt_mutex::wait_lock held.  In the
good case it returns with the lock held; in the deadlock case it emits a
warning and goes into an endless scheduling loop with the lock held, which
triggers the 'scheduling while atomic' warning.

Unlock rt_mutex::wait_lock in the deadlock case before issuing the warning
and dropping into the schedule-forever loop.

[ tglx: Moved unlock before the WARN(), removed the pointless comment,
  	massaged changelog, added Fixes tag ]

Fixes: 3d5c9340d194 ("rtmutex: Handle deadlock detection smarter")
Signed-off-by: Roland Xu <mu001999@outlook.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/all/ME0P300MB063599BEF0743B8FA339C2CECC802@ME0P300MB0635.AUSP300.PROD.OUTLOOK.COM

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 88d08eeb8bc0..fba1229f1de6 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1644,6 +1644,7 @@ static int __sched rt_mutex_slowlock_block(struct rt_mutex_base *lock,
 }
 
 static void __sched rt_mutex_handle_deadlock(int res, int detect_deadlock,
+					     struct rt_mutex_base *lock,
 					     struct rt_mutex_waiter *w)
 {
 	/*
@@ -1656,10 +1657,10 @@ static void __sched rt_mutex_handle_deadlock(int res, int detect_deadlock,
 	if (build_ww_mutex() && w->ww_ctx)
 		return;
 
-	/*
-	 * Yell loudly and stop the task right here.
-	 */
+	raw_spin_unlock_irq(&lock->wait_lock);
+
 	WARN(1, "rtmutex deadlock detected\n");
+
 	while (1) {
 		set_current_state(TASK_INTERRUPTIBLE);
 		rt_mutex_schedule();
@@ -1713,7 +1714,7 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
 	} else {
 		__set_current_state(TASK_RUNNING);
 		remove_waiter(lock, waiter);
-		rt_mutex_handle_deadlock(ret, chwalk, waiter);
+		rt_mutex_handle_deadlock(ret, chwalk, lock, waiter);
 	}
 
 	/*

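For readers without the tree at hand, here is how rt_mutex_handle_deadlock()
reads with the fix applied, reconstructed from the hunks above. The early-exit
checks hidden behind the hunk context are elided, so treat it as an
illustrative sketch rather than a verbatim excerpt:

static void __sched rt_mutex_handle_deadlock(int res, int detect_deadlock,
					     struct rt_mutex_base *lock,
					     struct rt_mutex_waiter *w)
{
	/* ... early-exit checks elided ... */
	if (build_ww_mutex() && w->ww_ctx)
		return;

	/*
	 * wait_lock is held with interrupts disabled, so drop it before
	 * warning and parking the task; otherwise the WARN() and the
	 * schedule loop below would run in atomic context.
	 */
	raw_spin_unlock_irq(&lock->wait_lock);

	WARN(1, "rtmutex deadlock detected\n");

	while (1) {
		set_current_state(TASK_INTERRUPTIBLE);
		rt_mutex_schedule();
	}
}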


* [PATCH 5.10.y, 5.4.y, 4.19.y] rtmutex: Drop rt_mutex::wait_lock before scheduling
  2024-09-08 10:30 FAILED: patch "[PATCH] rtmutex: Drop rt_mutex::wait_lock before scheduling" failed to apply to 4.19-stable tree gregkh
@ 2024-09-09  6:16 ` Thomas Gleixner
  2024-09-10  7:34   ` Greg KH
  0 siblings, 1 reply; 3+ messages in thread
From: Thomas Gleixner @ 2024-09-09  6:16 UTC (permalink / raw)
  To: gregkh, mu001999; +Cc: stable


From: Roland Xu <mu001999@outlook.com>

commit d33d26036a0274b472299d7dcdaa5fb34329f91b upstream.

rt_mutex_handle_deadlock() is called with rt_mutex::wait_lock held.  In the
good case it returns with the lock held; in the deadlock case it emits a
warning and goes into an endless scheduling loop with the lock held, which
triggers the 'scheduling while atomic' warning.

Unlock rt_mutex::wait_lock in the deadlock case before issuing the warning
and dropping into the schedule-forever loop.

[ tglx: Moved unlock before the WARN(), removed the pointless comment,
  	massaged changelog, added Fixes tag ]

Fixes: 3d5c9340d194 ("rtmutex: Handle deadlock detection smarter")
Signed-off-by: Roland Xu <mu001999@outlook.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/all/ME0P300MB063599BEF0743B8FA339C2CECC802@ME0P300MB0635.AUSP300.PROD.OUTLOOK.COM
---
Backport to 5.10.y, 5.4.y, 4.19.y
---
 kernel/locking/rtmutex.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1205,6 +1205,7 @@ static int __sched
 }
 
 static void rt_mutex_handle_deadlock(int res, int detect_deadlock,
+				     struct rt_mutex *lock,
 				     struct rt_mutex_waiter *w)
 {
 	/*
@@ -1214,6 +1215,7 @@ static void rt_mutex_handle_deadlock(int
 	if (res != -EDEADLOCK || detect_deadlock)
 		return;
 
+	raw_spin_unlock_irq(&lock->wait_lock);
 	/*
 	 * Yell lowdly and stop the task right here.
 	 */
@@ -1269,7 +1271,7 @@ rt_mutex_slowlock(struct rt_mutex *lock,
 	if (unlikely(ret)) {
 		__set_current_state(TASK_RUNNING);
 		remove_waiter(lock, &waiter);
-		rt_mutex_handle_deadlock(ret, chwalk, &waiter);
+		rt_mutex_handle_deadlock(ret, chwalk, lock, &waiter);
 	}
 
 	/*

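The caller side mirrors the upstream change: the failure path of
rt_mutex_slowlock() now threads the lock pointer through to the deadlock
handler, which is what allows the handler to drop wait_lock itself. Pieced
together from the last hunk above (a sketch, surrounding code elided):

	if (unlikely(ret)) {
		__set_current_state(TASK_RUNNING);
		remove_waiter(lock, &waiter);
		/*
		 * Hand the lock to rt_mutex_handle_deadlock() so it can
		 * drop lock->wait_lock before WARN()ing and scheduling
		 * forever in the deadlock case.
		 */
		rt_mutex_handle_deadlock(ret, chwalk, lock, &waiter);
	}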

* Re: [PATCH 5.10.y, 5.4.y, 4.19.y] rtmutex: Drop rt_mutex::wait_lock before scheduling
  2024-09-09  6:16 ` [PATCH 5.10.y, 5.4.y, 4.19.y] rtmutex: Drop rt_mutex::wait_lock before scheduling Thomas Gleixner
@ 2024-09-10  7:34   ` Greg KH
  0 siblings, 0 replies; 3+ messages in thread
From: Greg KH @ 2024-09-10  7:34 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: mu001999, stable

On Mon, Sep 09, 2024 at 08:16:48AM +0200, Thomas Gleixner wrote:
> 
> From: Roland Xu <mu001999@outlook.com>
> 
> commit d33d26036a0274b472299d7dcdaa5fb34329f91b upstream.
> 
> rt_mutex_handle_deadlock() is called with rt_mutex::wait_lock held.  In the
> good case it returns with the lock held; in the deadlock case it emits a
> warning and goes into an endless scheduling loop with the lock held, which
> triggers the 'scheduling while atomic' warning.
> 
> Unlock rt_mutex::wait_lock in the deadlock case before issuing the warning
> and dropping into the schedule-forever loop.
> 
> [ tglx: Moved unlock before the WARN(), removed the pointless comment,
>   	massaged changelog, added Fixes tag ]
> 
> Fixes: 3d5c9340d194 ("rtmutex: Handle deadlock detection smarter")
> Signed-off-by: Roland Xu <mu001999@outlook.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: stable@vger.kernel.org
> Link: https://lore.kernel.org/all/ME0P300MB063599BEF0743B8FA339C2CECC802@ME0P300MB0635.AUSP300.PROD.OUTLOOK.COM
> ---
> Backport to 5.10.y, 5.4.y, 4.19.y

Now queued up, thanks!

greg k-h

