From: Waiman Long <longman@redhat.com>
To: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>, Will Deacon <will@kernel.org>,
	Boqun Feng <boqun.feng@gmail.com>
Cc: linux-kernel@vger.kernel.org, "Xu,
	Yanfei" <yanfei.xu@windriver.com>,
	Waiman Long <longman@redhat.com>
Subject: [PATCH] locking/mutex: Reduce chance of setting HANDOFF bit on unlocked mutex
Date: Tue, 29 Jun 2021 16:11:38 -0400
Message-ID: <20210629201138.31507-1-longman@redhat.com>

The current mutex code may set the HANDOFF bit right after wakeup
without checking whether the mutex is unlocked. The chance of setting
the HANDOFF bit on an unlocked mutex can be relatively high. In that
case the bit does not actually prevent other waiters from acquiring
the lock, so the atomic operation is wasted.
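
For reference, a mutex's owner field packs the owning task_struct
pointer together with a few low-order flag bits. The snippet below is
an abridged copy of the definitions in kernel/locking/mutex.c, shown
here only to make the flag manipulation in the patch easier to follow:

	#define MUTEX_FLAG_WAITERS	0x01
	#define MUTEX_FLAG_HANDOFF	0x02
	#define MUTEX_FLAG_PICKUP	0x04
	#define MUTEX_FLAGS		0x07

	/* Mask off the flag bits to recover the owner (NULL if unlocked). */
	static inline struct task_struct *__owner_task(unsigned long owner)
	{
		return (struct task_struct *)(owner & ~MUTEX_FLAGS);
	}

An unlocked mutex thus has a NULL task pointer, which is why the patch
below tests (owner & ~MUTEX_FLAGS) to decide whether a lock holder is
present.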

To reduce that chance, do a trylock before setting the HANDOFF bit.
In addition, optimistic spinning on the mutex is only done when the
HANDOFF bit has been set on a locked mutex, which guarantees that no
one else can steal the lock.
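
Condensed, the reordered slow-path wait loop after this patch looks
roughly as follows. This is a sketch distilled from the diff below
with unrelated details elided, not a drop-in replacement:

	for (;;) {
		...
		schedule_preempt_disabled();

		/* Trylock first: the mutex may have just been unlocked. */
		if (__mutex_trylock(lock))
			break;

		set_current_state(state);
		...
		/* Only then set HANDOFF, fetching the old owner value. */
		if (first)
			owner = __mutex_fetch_set_flag(lock, MUTEX_FLAG_HANDOFF);

		/* Spin only if a holder was seen, so HANDOFF cannot be lost. */
		if ((owner & ~MUTEX_FLAGS) &&
		    mutex_optimistic_spin(lock, ww_ctx, &waiter))
			break;

		spin_lock(&lock->wait_lock);
	}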

Reported-by: Xu, Yanfei <yanfei.xu@windriver.com>
Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/locking/mutex.c | 42 +++++++++++++++++++++++++++++-------------
 1 file changed, 29 insertions(+), 13 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index d2df5e68b503..472ab21b5b8e 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -118,9 +118,9 @@ static inline struct task_struct *__mutex_trylock_or_owner(struct mutex *lock)
 		}
 
 		/*
-		 * We set the HANDOFF bit, we must make sure it doesn't live
-		 * past the point where we acquire it. This would be possible
-		 * if we (accidentally) set the bit on an unlocked mutex.
+		 * Always clear the HANDOFF bit before acquiring the lock.
+		 * Note that if the bit is accidentally set on an unlocked
+		 * mutex, anyone can acquire it.
 		 */
 		flags &= ~MUTEX_FLAG_HANDOFF;
 
@@ -180,6 +180,11 @@ static inline void __mutex_set_flag(struct mutex *lock, unsigned long flag)
 	atomic_long_or(flag, &lock->owner);
 }
 
+static inline long __mutex_fetch_set_flag(struct mutex *lock, unsigned long flag)
+{
+	return atomic_long_fetch_or_relaxed(flag, &lock->owner);
+}
+
 static inline void __mutex_clear_flag(struct mutex *lock, unsigned long flag)
 {
 	atomic_long_andnot(flag, &lock->owner);
@@ -1007,6 +1012,8 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 
 	set_current_state(state);
 	for (;;) {
+		long owner = 0L;
+
 		/*
 		 * Once we hold wait_lock, we're serialized against
 		 * mutex_unlock() handing the lock off to us, do a trylock
@@ -1035,24 +1042,33 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 		spin_unlock(&lock->wait_lock);
 		schedule_preempt_disabled();
 
+		/*
+		 * Here we order against unlock; we must either see it change
+		 * state back to RUNNING and fall through the next schedule(),
+		 * or we must see its unlock and acquire.
+		 */
+		if (__mutex_trylock(lock))
+			break;
+
+		set_current_state(state);
+
 		/*
 		 * ww_mutex needs to always recheck its position since its waiter
 		 * list is not FIFO ordered.
 		 */
-		if (ww_ctx || !first) {
+		if (ww_ctx || !first)
 			first = __mutex_waiter_is_first(lock, &waiter);
-			if (first)
-				__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
-		}
 
-		set_current_state(state);
+		if (first)
+			owner = __mutex_fetch_set_flag(lock, MUTEX_FLAG_HANDOFF);
+
 		/*
-		 * Here we order against unlock; we must either see it change
-		 * state back to RUNNING and fall through the next schedule(),
-		 * or we must see its unlock and acquire.
+		 * If a lock holder is present with the HANDOFF bit set, it
+		 * guarantees that no one else can steal the lock. We may
+		 * spin on the lock to acquire it earlier.
 		 */
-		if (__mutex_trylock(lock) ||
-		    (first && mutex_optimistic_spin(lock, ww_ctx, &waiter)))
+		if ((owner & ~MUTEX_FLAGS) &&
+		     mutex_optimistic_spin(lock, ww_ctx, &waiter))
 			break;
 
 		spin_lock(&lock->wait_lock);
-- 
2.18.1

