public inbox for linux-kernel@vger.kernel.org
* [PATCH] locking/mutexes: Revert "locking/mutexes: Add extra reschedule point"
@ 2014-07-31 10:16 Ilya Dryomov
  2014-07-31 11:57 ` Peter Zijlstra
  0 siblings, 1 reply; 14+ messages in thread
From: Ilya Dryomov @ 2014-07-31 10:16 UTC (permalink / raw)
  To: linux-kernel; +Cc: Peter Zijlstra, Ingo Molnar, ceph-devel

This reverts commit 34c6bc2c919a55e5ad4e698510a2f35ee13ab900.

This commit can lead to deadlocks by way of what, at a high level,
appears to be a missing wakeup on mutex_unlock() when
CONFIG_MUTEX_SPIN_ON_OWNER is set, which is how most distributions ship
their kernels.  In particular, it causes reproducible deadlocks in
libceph/rbd code under moderate-to-heavy load, with the evidence
pointing into the bowels of mutex_lock().

kernel/locking/mutex.c, __mutex_lock_common():
476         osq_unlock(&lock->osq);
477 slowpath:
478         /*
479          * If we fell out of the spin path because of need_resched(),
480          * reschedule now, before we try-lock the mutex. This avoids getting
481          * scheduled out right after we obtained the mutex.
482          */
483         if (need_resched())
484                 schedule_preempt_disabled(); <-- never returns
485 #endif
486         spin_lock_mutex(&lock->wait_lock, flags);

We started bumping into deadlocks in QA the day our branch was rebased
onto 3.15 (the release this commit went into), but then as part of the
debugging effort I enabled all lock debugging options, which also
disabled CONFIG_MUTEX_SPIN_ON_OWNER and made the problem disappear,
which is why it hasn't been looked into until now.  Reverting makes the
problem go away, as confirmed by our users.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: stable@vger.kernel.org # 3.15
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
---
 kernel/locking/mutex.c |    7 -------
 1 file changed, 7 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index acca2c1a3c5e..746ff280a2fc 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -475,13 +475,6 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	}
 	osq_unlock(&lock->osq);
 slowpath:
-	/*
-	 * If we fell out of the spin path because of need_resched(),
-	 * reschedule now, before we try-lock the mutex. This avoids getting
-	 * scheduled out right after we obtained the mutex.
-	 */
-	if (need_resched())
-		schedule_preempt_disabled();
 #endif
 	spin_lock_mutex(&lock->wait_lock, flags);
 
-- 
1.7.10.4




Thread overview: 14+ messages
-- links below jump to the message on this page --
2014-07-31 10:16 [PATCH] locking/mutexes: Revert "locking/mutexes: Add extra reschedule point" Ilya Dryomov
2014-07-31 11:57 ` Peter Zijlstra
2014-07-31 12:37   ` Ilya Dryomov
2014-07-31 13:13     ` Peter Zijlstra
2014-07-31 13:25       ` Ilya Dryomov
2014-07-31 13:44       ` Ingo Molnar
2014-07-31 13:56         ` Peter Zijlstra
2014-08-02 20:04           ` [RFC][PATCH] locking: Debug nested wait/locking primitives Peter Zijlstra
2014-07-31 14:30       ` [PATCH] locking/mutexes: Revert "locking/mutexes: Add extra reschedule point" Mike Galbraith
2014-07-31 14:37         ` Ilya Dryomov
2014-07-31 14:39         ` Peter Zijlstra
2014-08-01 12:56           ` Ilya Dryomov
2014-08-01 13:27             ` Peter Zijlstra
2014-08-01 13:50               ` Ilya Dryomov
