From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753479AbcDNHZH (ORCPT );
	Thu, 14 Apr 2016 03:25:07 -0400
Received: from g2t4619.austin.hp.com ([15.73.212.82]:46899 "EHLO
	g2t4619.austin.hp.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751710AbcDNHZF (ORCPT );
	Thu, 14 Apr 2016 03:25:05 -0400
Message-ID: <1460618018.2871.25.camel@j-VirtualBox>
Subject: [RFC] arm64: Implement WFE based spin wait for MCS spinlocks
From: Jason Low
To: Will Deacon
Cc: Peter Zijlstra, Linus Torvalds, "linux-kernel@vger.kernel.org",
	"mingo@redhat.com", "paulmck@linux.vnet.ibm.com", terry.rudd@hpe.com,
	"Long, Wai Man", "boqun.feng@gmail.com", "dave@stgolabs.net",
	jason.low2@hp.com
Date: Thu, 14 Apr 2016 00:13:38 -0700
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.10.4-0ubuntu2
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Use WFE to avoid most spinning with MCS spinlocks. This is implemented
with the new cmpwait() mechanism for comparing and waiting for the MCS
locked value to change using LDXR + WFE.

Signed-off-by: Jason Low
---
 arch/arm64/include/asm/mcs_spinlock.h | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)
 create mode 100644 arch/arm64/include/asm/mcs_spinlock.h

diff --git a/arch/arm64/include/asm/mcs_spinlock.h b/arch/arm64/include/asm/mcs_spinlock.h
new file mode 100644
index 0000000..d295d9d
--- /dev/null
+++ b/arch/arm64/include/asm/mcs_spinlock.h
@@ -0,0 +1,21 @@
+#ifndef __ASM_MCS_SPINLOCK_H
+#define __ASM_MCS_SPINLOCK_H
+
+#define arch_mcs_spin_lock_contended(l)				\
+do {								\
+	int locked_val;						\
+	for (;;) {						\
+		locked_val = READ_ONCE(*l);			\
+		if (locked_val)					\
+			break;					\
+		cmpwait(l, locked_val);				\
+	}							\
+	smp_rmb();						\
+} while (0)
+
+#define arch_mcs_spin_unlock_contended(l)			\
+do {								\
+	smp_store_release(l, 1);				\
+} while (0)
+
+#endif /* __ASM_MCS_SPINLOCK_H */
-- 
2.1.4
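
Note: cmpwait() is not defined by this patch; it is assumed to come from a
separate arm64 series. As a rough sketch of the idea (illustrative only,
the helper name, signature and details below are assumptions rather than
the actual proposed implementation), a 32-bit variant could look roughly
like:

/*
 * Illustrative sketch only: wait for *ptr to change from @val.
 * LDXR loads the current value and arms the exclusive monitor;
 * if the value still equals @val, WFE puts the CPU into a
 * low-power wait until the monitor is cleared (for example by
 * the unlocker's store) or another event arrives.  The caller
 * re-checks the value in its own loop, so a spurious wakeup is
 * harmless.
 */
static inline void cmpwait_sketch(volatile int *ptr, int val)
{
	unsigned long tmp;

	asm volatile(
	"	ldxr	%w[tmp], %[v]\n"
	"	eor	%w[tmp], %w[tmp], %w[val]\n"
	"	cbnz	%w[tmp], 1f\n"
	"	wfe\n"
	"1:"
	: [tmp] "=&r" (tmp), [v] "+Q" (*ptr)
	: [val] "r" (val));
}

With something like that in place, arch_mcs_spin_lock_contended() above
only enters the WFE wait while the node's locked value is still 0, and
drops out of the loop once the unlocker's smp_store_release() publishes 1.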