Date: Mon, 29 Feb 2016 03:20:33 -0800
From: tip-bot for Waiman Long
To: linux-tip-commits@vger.kernel.org
Cc: akpm@linux-foundation.org, paulmck@linux.vnet.ibm.com, hpa@zytor.com,
    peterz@infradead.org, scott.norton@hpe.com, mingo@kernel.org,
    torvalds@linux-foundation.org, tglx@linutronix.de,
    linux-kernel@vger.kernel.org, doug.hatch@hpe.com, Waiman.Long@hpe.com
In-Reply-To: <1449778666-13593-3-git-send-email-Waiman.Long@hpe.com>
References: <1449778666-13593-3-git-send-email-Waiman.Long@hpe.com>
Subject: [tip:locking/core] locking/pvqspinlock: Move lock stealing count tracking code into pv_queued_spin_steal_lock()

Commit-ID:  eaff0e7003cca6c2748b67ead2d4b1a8ad858fc7
Gitweb:     http://git.kernel.org/tip/eaff0e7003cca6c2748b67ead2d4b1a8ad858fc7
Author:     Waiman Long <Waiman.Long@hpe.com>
AuthorDate: Thu, 10 Dec 2015 15:17:46 -0500
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 29 Feb 2016 10:02:41 +0100

locking/pvqspinlock: Move lock stealing count tracking code into pv_queued_spin_steal_lock()

This patch moves the lock stealing count tracking code into
pv_queued_spin_steal_lock() instead of going through a jacket function,
simplifying the code.

Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Douglas Hatch <doug.hatch@hpe.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Scott J Norton <scott.norton@hpe.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1449778666-13593-3-git-send-email-Waiman.Long@hpe.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/locking/qspinlock_paravirt.h | 16 +++++++++-------
 kernel/locking/qspinlock_stat.h     | 13 -------------
 2 files changed, 9 insertions(+), 20 deletions(-)

diff --git a/kernel/locking/qspinlock_paravirt.h b/kernel/locking/qspinlock_paravirt.h
index 87bb235..78f04a2 100644
--- a/kernel/locking/qspinlock_paravirt.h
+++ b/kernel/locking/qspinlock_paravirt.h
@@ -55,6 +55,11 @@ struct pv_node {
 };
 
 /*
+ * Include queued spinlock statistics code
+ */
+#include "qspinlock_stat.h"
+
+/*
  * By replacing the regular queued_spin_trylock() with the function below,
  * it will be called once when a lock waiter enter the PV slowpath before
  * being queued. By allowing one lock stealing attempt here when the pending
@@ -65,9 +70,11 @@ struct pv_node {
 static inline bool pv_queued_spin_steal_lock(struct qspinlock *lock)
 {
 	struct __qspinlock *l = (void *)lock;
+	int ret = !(atomic_read(&lock->val) & _Q_LOCKED_PENDING_MASK) &&
+		   (cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0);
 
-	return !(atomic_read(&lock->val) & _Q_LOCKED_PENDING_MASK) &&
-	       (cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0);
+	qstat_inc(qstat_pv_lock_stealing, ret);
+	return ret;
 }
 
 /*
@@ -138,11 +145,6 @@ static __always_inline int trylock_clear_pending(struct qspinlock *lock)
 #endif /* _Q_PENDING_BITS == 8 */
 
 /*
- * Include queued spinlock statistics code
- */
-#include "qspinlock_stat.h"
-
-/*
  * Lock and MCS node addresses hash table for fast lookup
  *
  * Hashing is done on a per-cacheline basis to minimize the need to access
diff --git a/kernel/locking/qspinlock_stat.h b/kernel/locking/qspinlock_stat.h
index 640dcec..869988d 100644
--- a/kernel/locking/qspinlock_stat.h
+++ b/kernel/locking/qspinlock_stat.h
@@ -279,19 +279,6 @@ static inline void __pv_wait(u8 *ptr, u8 val)
 #define pv_kick(c)			__pv_kick(c)
 #define pv_wait(p, v)			__pv_wait(p, v)
 
-/*
- * PV unfair trylock count tracking function
- */
-static inline int qstat_spin_steal_lock(struct qspinlock *lock)
-{
-	int ret = pv_queued_spin_steal_lock(lock);
-
-	qstat_inc(qstat_pv_lock_stealing, ret);
-	return ret;
-}
-#undef queued_spin_trylock
-#define queued_spin_trylock(l)	qstat_spin_steal_lock(l)
-
 #else /* CONFIG_QUEUED_LOCK_STAT */
 
 static inline void qstat_inc(enum qlock_stats stat, bool cond)	{ }
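For readers outside the kernel tree, the shape of the change is: the
statistics update moves into the trylock function itself, computing the
result first, counting it, then returning it, so the
qstat_spin_steal_lock() jacket and the queued_spin_trylock() re-#define
become unnecessary. Below is a minimal userspace C sketch of the same
pattern, not kernel code; the names my_lock, steal_count and my_trylock
are hypothetical.

/*
 * Illustrative userspace sketch of folding success counting into a
 * trylock so that no wrapper ("jacket") function is needed.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int my_lock;          /* 0 = unlocked, 1 = locked  */
static atomic_long steal_count;     /* successful steal attempts */

static bool my_trylock(void)
{
	int unlocked = 0;
	/* Compute the result first, count it, then return it -- the
	 * same ordering as the patched pv_queued_spin_steal_lock(). */
	bool ret = atomic_compare_exchange_strong(&my_lock, &unlocked, 1);

	if (ret)
		atomic_fetch_add(&steal_count, 1);
	return ret;
}

int main(void)
{
	printf("first  trylock: %d\n", my_trylock()); /* 1: acquired   */
	printf("second trylock: %d\n", my_trylock()); /* 0: still held */
	printf("steals: %ld\n", atomic_load(&steal_count));
	return 0;
}

The counting lives next to the event it counts, and callers invoke the
trylock directly instead of through a macro indirection.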