From: Will Deacon
Subject: Re: Behaviour of smp_mb__{before,after}_spin* and acquire/release
Date: Tue, 20 Jan 2015 10:38:40 +0000
Message-ID: <20150120103840.GB24303@arm.com>
References: <20150113163353.GE31784@arm.com>
 <20150120093443.GA11596@twins.programming.kicks-ass.net>
In-Reply-To: <20150120093443.GA11596@twins.programming.kicks-ass.net>
To: Peter Zijlstra
Cc: paulmck@linux.vnet.ibm.com, torvalds@linux-foundation.org,
 oleg@redhat.com, benh@kernel.crashing.org,
 linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org

On Tue, Jan 20, 2015 at 09:34:43AM +0000, Peter Zijlstra wrote:
> On Tue, Jan 13, 2015 at 04:33:54PM +0000, Will Deacon wrote:
> > I started dusting off a series I've been working on to implement a
> > relaxed atomic API in Linux (i.e. things like atomic_read(v, ACQUIRE)),
> > but I'm having trouble making sense of the ordering semantics we have
> > in mainline today:
> 
> > 2. Does smp_mb__after_unlock_lock order smp_store_release against
> >    smp_load_acquire? Again, Documentation/memory-barriers.txt puts
> >    these operations into the RELEASE and ACQUIRE classes respectively,
> >    but since smp_mb__after_unlock_lock is a NOP everywhere other than
> >    PowerPC, I don't think this is enforced by the current code.
> 
> Yeah, wasn't Paul going to talk to Ben about that? PPC is the only arch
> that has the weak ACQUIRE/RELEASE for its spinlocks.

Indeed, and I'd love to kill that, especially as it's really confusing
when we have other ACQUIRE/RELEASE functions (like your smp_* accessors)
that do need explicit barriers for general RELEASE->ACQUIRE ordering. If
people start using smp_mb__after_unlock_lock for *that*, then other
architectures will need to implement it as a barrier and penalise their
spinlocks in doing so.

> > Most architectures follow the pattern used by asm-generic/barrier.h:
> >
> >   release: smp_mb(); STORE
> >   acquire: LOAD; smp_mb();
> >
> > which doesn't provide any release -> acquire ordering afaict.
> 
> Only when combined on the same address: if the LOAD observes the result
> of the STORE, we can guarantee the rest of the ordering. And if you
> build a locking primitive with them (or circular lists or whatnot), you
> have that extra condition.
> 
> But yes, I see your argument that this implementation is weak like the
> PPC one.

I'm absolutely fine with that; I'd just like to make sure it's documented
so that people don't use smp_mb__after_unlock_lock() to order
smp_store_release -> smp_load_acquire. The sketches below spell out the
reordering I'm worried about.

I'll have a crack at a Documentation patch if you don't beat me to it...

Will
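
P.S. To make the missing ordering concrete: with the asm-generic pattern
above, a RELEASE followed by an ACQUIRE to a *different* variable expands
to roughly this (s and t are made-up variables, purely for illustration):

  smp_store_release(&s, 1);     /* expands to: smp_mb(); STORE s   */
  r = smp_load_acquire(&t);     /* expands to: LOAD t;  smp_mb();  */

There's no barrier *between* the store to s and the load from t, so the
CPU is free to reorder them, which is exactly what permits the classic
store-buffering outcome:

  /* Initially s == 0 && t == 0. */

  CPU 0                            CPU 1
  -----                            -----
  smp_store_release(&s, 1);        smp_store_release(&t, 1);
  r0 = smp_load_acquire(&t);       r1 = smp_load_acquire(&s);

  /* r0 == 0 && r1 == 0 is a permitted outcome. */

When the ACQUIRE reads from the RELEASE on the *same* address, the chain
does hold -- that's the extra condition Peter mentions above.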
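
P.P.S. And this is the (ab)use I'd want the documentation to rule out,
versus the intended UNLOCK+LOCK usage (again, s, t, A and B are made up):

  /* WRONG: must not be relied on to order RELEASE -> ACQUIRE;
   * smp_mb__after_unlock_lock() is a NOP everywhere except PowerPC.
   */
  smp_store_release(&s, 1);
  smp_mb__after_unlock_lock();
  r = smp_load_acquire(&t);

  /* Intended usage: promote an UNLOCK+LOCK pair to a full barrier. */
  spin_unlock(&A);
  spin_lock(&B);
  smp_mb__after_unlock_lock();

If the first form crept into the tree, every architecture using the
asm-generic pattern would have to turn smp_mb__after_unlock_lock() into
a real barrier -- the spinlock penalty I'd like to avoid.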