From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <linux-kernel-owner@vger.kernel.org>
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1754553AbcB2LVh (ORCPT );
	Mon, 29 Feb 2016 06:21:37 -0500
Received: from torg.zytor.com ([198.137.202.12]:54724 "EHLO terminus.zytor.com"
	rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP
	id S1753508AbcB2LVf (ORCPT );
	Mon, 29 Feb 2016 06:21:35 -0500
Date: Mon, 29 Feb 2016 03:20:57 -0800
From: tip-bot for Waiman Long
Message-ID: 
Cc: scott.norton@hpe.com, akpm@linux-foundation.org, hpa@zytor.com,
	torvalds@linux-foundation.org, peterz@infradead.org,
	doug.hatch@hpe.com, Waiman.Long@hpe.com, paulmck@linux.vnet.ibm.com,
	linux-kernel@vger.kernel.org, mingo@kernel.org, tglx@linutronix.de
Reply-To: tglx@linutronix.de, mingo@kernel.org, linux-kernel@vger.kernel.org,
	paulmck@linux.vnet.ibm.com, Waiman.Long@hpe.com, doug.hatch@hpe.com,
	peterz@infradead.org, hpa@zytor.com, torvalds@linux-foundation.org,
	akpm@linux-foundation.org, scott.norton@hpe.com
In-Reply-To: <1449778666-13593-1-git-send-email-Waiman.Long@hpe.com>
References: <1449778666-13593-1-git-send-email-Waiman.Long@hpe.com>
To: linux-tip-commits@vger.kernel.org
Subject: [tip:locking/core] locking/qspinlock: Use smp_cond_acquire() in pending code
Git-Commit-ID: cb037fdad6772df2d49fe61c97d7c0d8265bc918
X-Mailer: tip-git-log-daemon
Robot-ID: 
Robot-Unsubscribe: Contact to get blacklisted from these emails
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8
Content-Disposition: inline
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

Commit-ID:  cb037fdad6772df2d49fe61c97d7c0d8265bc918
Gitweb:     http://git.kernel.org/tip/cb037fdad6772df2d49fe61c97d7c0d8265bc918
Author:     Waiman Long <Waiman.Long@hpe.com>
AuthorDate: Thu, 10 Dec 2015 15:17:44 -0500
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 29 Feb 2016 10:02:42 +0100

locking/qspinlock: Use smp_cond_acquire() in pending code

The newly introduced smp_cond_acquire() was used to replace the
slowpath lock acquisition loop. Similarly, the new function can also
be applied to the pending bit locking loop. This patch uses the new
function in that loop.

Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Douglas Hatch <doug.hatch@hpe.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Scott J Norton <scott.norton@hpe.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1449778666-13593-1-git-send-email-Waiman.Long@hpe.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/locking/qspinlock.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 393d187..ce2f75e 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -358,8 +358,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 * sequentiality; this is because not all clear_pending_set_locked()
 	 * implementations imply full barriers.
 	 */
-	while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK)
-		cpu_relax();
+	smp_cond_acquire(!(atomic_read(&lock->val) & _Q_LOCKED_MASK));
 
 	/*
 	 * take ownership and clear the pending bit.
@@ -435,7 +434,7 @@ queue:
 	 *
 	 * The PV pv_wait_head_or_lock function, if active, will acquire
 	 * the lock and return a non-zero value. So we have to skip the
-	 * smp_load_acquire() call. As the next PV queue head hasn't been
+	 * smp_cond_acquire() call. As the next PV queue head hasn't been
 	 * designated yet, there is no way for the locked value to become
 	 * _Q_SLOW_VAL. So both the set_locked() and the
 	 * atomic_cmpxchg_relaxed() calls will be safe.
@@ -466,7 +465,7 @@ locked:
 			break;
 		}
 		/*
-		 * The smp_load_acquire() call above has provided the necessary
+		 * The smp_cond_acquire() call above has provided the necessary
 		 * acquire semantics required for locking. At most two
 		 * iterations of this loop may be ran.
 		 */
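
A note on the barrier pattern for readers outside the kernel tree:
smp_cond_acquire(cond) spins on cond with plain reads and issues a
single smp_rmb() once the condition holds, so the completed wait
provides ACQUIRE ordering without paying for an acquire load on every
poll of the lock word; the second and third hunks above are
comment-only updates to keep references to the old loop accurate.
Below is a minimal, hypothetical userspace C11 sketch of the same
pattern. SPIN_COND_ACQUIRE, lock_val and Q_LOCKED_MASK are
illustrative stand-ins, not the kernel's code.

#include <stdatomic.h>
#include <stdio.h>

/*
 * Sketch of the smp_cond_acquire() idea: busy-wait with relaxed reads,
 * then one acquire fence. The loop's control dependency plus the fence
 * give LOAD->{LOAD,STORE} ordering -- i.e. ACQUIRE -- for everything
 * that follows the macro, without an acquire barrier per iteration.
 */
#define SPIN_COND_ACQUIRE(cond)					\
do {								\
	while (!(cond))						\
		;	/* a cpu_relax()-style pause would go here */ \
	atomic_thread_fence(memory_order_acquire);		\
} while (0)

#define Q_LOCKED_MASK	0xffU		/* stand-in for _Q_LOCKED_MASK */

static atomic_uint lock_val;		/* stand-in for lock->val */

/*
 * Before the patch each poll was an acquire load:
 *	while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK)
 *		cpu_relax();
 * After it, polls are plain reads and the acquire ordering is
 * established once, when the wait completes:
 */
static void wait_for_locked_clear(void)
{
	SPIN_COND_ACQUIRE(!(atomic_load_explicit(&lock_val,
				memory_order_relaxed) & Q_LOCKED_MASK));
}

int main(void)
{
	atomic_store_explicit(&lock_val, 0, memory_order_relaxed);
	wait_for_locked_clear();	/* condition already true; returns at once */
	puts("pending wait satisfied with acquire ordering");
	return 0;
}

The benefit of the rewrite shows up on weakly ordered architectures,
where one read barrier after the loop is cheaper than an acquire load
per iteration; on x86 both forms compile to plain loads, so the change
there is mainly uniformity with the slowpath wait.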