From: Alex Kogan <alex.kogan@oracle.com>
Subject: [PATCH 1/3] locking/qspinlock: Make arch_mcs_spin_unlock_contended more generic
Date: Wed, 30 Jan 2019 22:01:33 -0500
Message-ID: <20190131030136.56999-2-alex.kogan@oracle.com>
In-Reply-To: <20190131030136.56999-1-alex.kogan@oracle.com>
References: <20190131030136.56999-1-alex.kogan@oracle.com>
To: linux@armlinux.org.uk, peterz@infradead.org, mingo@redhat.com,
    will.deacon@arm.com, arnd@arndb.de, longman@redhat.com,
    linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org
Cc: steven.sistare@oracle.com, daniel.m.jordan@oracle.com,
    alex.kogan@oracle.com, dave.dice@oracle.com, rahul.x.yadav@oracle.com

The arch_mcs_spin_unlock_contended macro should accept the value to be
stored into the lock as an additional argument. This allows the same
macro to be used in cases where the value to be stored is different
from 1.

Signed-off-by: Alex Kogan <alex.kogan@oracle.com>
Reviewed-by: Steve Sistare <steven.sistare@oracle.com>
---
 arch/arm/include/asm/mcs_spinlock.h | 4 ++--
 kernel/locking/mcs_spinlock.h       | 6 +++---
 kernel/locking/qspinlock.c          | 2 +-
 3 files changed, 6 insertions(+), 6 deletions(-)
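For illustration: with the extra argument, an unlock path can hand the
spinning waiter any non-zero value, not just 1. A minimal hypothetical
sketch of such a caller (the pass_lock() helper and enc_val parameter
below are made up for illustration and are not part of this patch):

	/*
	 * Hypothetical caller of the generalized macro: pass the next
	 * waiter an encoded, non-zero value instead of the fixed 1.
	 */
	static inline void pass_lock(struct mcs_spinlock *next, int enc_val)
	{
		/*
		 * enc_val must be non-zero, since contended waiters
		 * spin until node->locked becomes non-zero.
		 */
		arch_mcs_spin_unlock_contended(&next->locked, enc_val);
	}
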
diff --git a/arch/arm/include/asm/mcs_spinlock.h b/arch/arm/include/asm/mcs_spinlock.h
index 529d2cf4d06f..ae6d763477f4 100644
--- a/arch/arm/include/asm/mcs_spinlock.h
+++ b/arch/arm/include/asm/mcs_spinlock.h
@@ -14,9 +14,9 @@ do {									\
 		wfe();							\
 } while (0)								\
 
-#define arch_mcs_spin_unlock_contended(lock)				\
+#define arch_mcs_spin_unlock_contended(lock, val)			\
 do {									\
-	smp_store_release(lock, 1);					\
+	smp_store_release(lock, (val));					\
 	dsb_sev();							\
 } while (0)
 
diff --git a/kernel/locking/mcs_spinlock.h b/kernel/locking/mcs_spinlock.h
index 5e10153b4d3c..bc6d3244e1af 100644
--- a/kernel/locking/mcs_spinlock.h
+++ b/kernel/locking/mcs_spinlock.h
@@ -41,8 +41,8 @@ do {									\
  * operations in the critical section has been completed before
  * unlocking.
  */
-#define arch_mcs_spin_unlock_contended(l)				\
-	smp_store_release((l), 1)
+#define arch_mcs_spin_unlock_contended(l, val)				\
+	smp_store_release((l), (val))
 #endif
 
 /*
@@ -115,7 +115,7 @@ void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
 	}
 
 	/* Pass lock to next waiter. */
-	arch_mcs_spin_unlock_contended(&next->locked);
+	arch_mcs_spin_unlock_contended(&next->locked, 1);
 }
 
 #endif /* __LINUX_MCS_SPINLOCK_H */
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 8a8c3c208c5e..fc88e9685bdf 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -545,7 +545,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	if (!next)
 		next = smp_cond_load_relaxed(&node->next, (VAL));
 
-	arch_mcs_spin_unlock_contended(&next->locked);
+	arch_mcs_spin_unlock_contended(&next->locked, 1);
 	pv_kick_node(lock, next);
 
 release:
-- 
2.11.0 (Apple Git-81)