From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Paul E. McKenney"
Subject: [PATCH tools/memory-model 05/17] arm64: Remove smp_mb() from arch_spin_is_locked()
Date: Mon, 16 Apr 2018 09:12:33 -0700
Message-ID: <1523895165-17576-5-git-send-email-paulmck@linux.vnet.ibm.com>
References: <20180416161209.GA6895@linux.vnet.ibm.com>
In-Reply-To: <20180416161209.GA6895@linux.vnet.ibm.com>
Sender: linux-kernel-owner@vger.kernel.org
To: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org
Cc: mingo@kernel.org, stern@rowland.harvard.edu, parri.andrea@gmail.com, will.deacon@arm.com, peterz@infradead.org, boqun.feng@gmail.com, npiggin@gmail.com, dhowells@redhat.com, j.alglave@ucl.ac.uk, luc.maranget@inria.fr, akiyks@gmail.com, Andrea Parri, Catalin Marinas, Ingo Molnar, "Paul E. McKenney", Linus Torvalds
List-Id: linux-arch.vger.kernel.org

From: Andrea Parri

Commit 38b850a73034f ("arm64: spinlock: order spin_{is_locked,unlock_wait}
against local locks") added an smp_mb() to arch_spin_is_locked(), in order
"to ensure that the lock value is always loaded after any other locks have
been taken by the current CPU", and reported one example (the "insane case"
in ipc/sem.c) relying on such a guarantee.

It is however understood that spin_is_locked() is not required to provide
such an ordering guarantee (a guarantee that is currently not provided by
all implementations/architectures), and that callers relying on such
ordering should instead insert suitable memory barriers before acting on
the result of spin_is_locked().

Following a recent audit [1] of the callers of {,raw_}spin_is_locked(),
which revealed that none of them rely on this ordering guarantee anymore,
this commit removes the leading smp_mb() from the primitive, thus
reverting 38b850a73034f.

[1] https://marc.info/?l=linux-kernel&m=151981440005264&w=2
    https://marc.info/?l=linux-kernel&m=152042843808540&w=2
    https://marc.info/?l=linux-kernel&m=152043346110262&w=2

Signed-off-by: Andrea Parri
Acked-by: Will Deacon
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: "Paul E. McKenney"
Cc: Linus Torvalds
Signed-off-by: Paul E. McKenney
---
 arch/arm64/include/asm/spinlock.h | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index ebdae15d665d..26c5bd7d88d8 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -122,11 +122,6 @@ static inline int arch_spin_value_unlocked(arch_spinlock_t lock)
 
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
-	/*
-	 * Ensure prior spin_lock operations to other locks have completed
-	 * on this CPU before we test whether "lock" is locked.
-	 */
-	smp_mb(); /* ^^^ */
 	return !arch_spin_value_unlocked(READ_ONCE(*lock));
 }
-- 
2.5.2
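[Editor's note: the sketch below is not part of the patch. It is a minimal,
hypothetical caller, with made-up names (foo_lock, peer_lock,
foo_peer_busy()), illustrating the commit message's guidance that a caller
which does depend on ordering a prior lock acquisition before the
is-locked test must now supply the barrier itself.]

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(foo_lock);
static DEFINE_SPINLOCK(peer_lock);

/*
 * Illustrative only: report whether the peer's lock is held while we
 * hold foo_lock.  With the smp_mb() gone from arch_spin_is_locked(),
 * spin_is_locked() no longer orders the acquisition of foo_lock before
 * the load of peer_lock's state, so the caller adds the barrier.
 */
static bool foo_peer_busy(void)
{
	bool busy;

	spin_lock(&foo_lock);
	smp_mb();	/* order the spin_lock() above before the test below */
	busy = spin_is_locked(&peer_lock);
	spin_unlock(&foo_lock);

	return busy;
}

Callers with no such ordering requirement need no change; the plain
spin_is_locked() test remains correct for them.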