From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 1 Apr 2015 08:26:05 -0700
From: "Paul E. McKenney"
To: Will Deacon
Cc: linux-kernel@vger.kernel.org, Oleg Nesterov, Peter Zijlstra,
	linuxppc-dev@lists.ozlabs.org
Subject: Re: [RESEND PATCH] documentation: memory-barriers: fix smp_mb__before_spinlock() semantics
Message-ID: <20150401152605.GP9023@linux.vnet.ibm.com>
Reply-To: paulmck@linux.vnet.ibm.com
In-Reply-To: <1427791181-21952-1-git-send-email-will.deacon@arm.com>
References: <1427791181-21952-1-git-send-email-will.deacon@arm.com>

On Tue, Mar 31, 2015 at 09:39:41AM +0100, Will Deacon wrote:
> Our current documentation claims that, when followed by an ACQUIRE,
> smp_mb__before_spinlock() orders prior loads against subsequent loads
> and stores, which isn't actually true.
>
> Fix the documentation to state that this sequence orders only prior
> stores against subsequent loads and stores.
>
> Cc: Oleg Nesterov
> Cc: "Paul E. McKenney"
> Cc: Peter Zijlstra
> Signed-off-by: Will Deacon
> ---
>
> Could somebody pick this up please? I guess I could route it via the arm64
> tree with an Ack, but I'd rather it went through Paul or -tip.

Queued for 4.2, along with a separate patch for PowerPC that makes it so
that PowerPC actually behaves as described below.
;-)

							Thanx, Paul

>  Documentation/memory-barriers.txt | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
> index ca2387ef27ab..fa28a0c1e2b1 100644
> --- a/Documentation/memory-barriers.txt
> +++ b/Documentation/memory-barriers.txt
> @@ -1768,10 +1768,9 @@ for each construct.  These operations all imply certain barriers:
>
>      Memory operations issued before the ACQUIRE may be completed after
>      the ACQUIRE operation has completed.  An smp_mb__before_spinlock(),
> -    combined with a following ACQUIRE, orders prior loads against
> -    subsequent loads and stores and also orders prior stores against
> -    subsequent stores.  Note that this is weaker than smp_mb()!  The
> -    smp_mb__before_spinlock() primitive is free on many architectures.
> +    combined with a following ACQUIRE, orders prior stores against
> +    subsequent loads and stores.  Note that this is weaker than smp_mb()!
> +    The smp_mb__before_spinlock() primitive is free on many architectures.
>
>  (2) RELEASE operation implication:

------------------------------------------------------------------------

powerpc: Fix smp_mb__before_spinlock()

Currently, smp_mb__before_spinlock() is defined to be smp_wmb() in core
code, but this is not sufficient on PowerPC.  This patch therefore
supplies an override for the generic definition to strengthen
smp_mb__before_spinlock() to smp_mb(), as is needed on PowerPC.

Signed-off-by: Paul E. McKenney
Cc:

diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
index a3bf5be111ff..1124f59b8df4 100644
--- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -89,5 +89,6 @@ do { \
 #define smp_mb__before_atomic()     smp_mb()
 #define smp_mb__after_atomic()      smp_mb()
+#define smp_mb__before_spinlock()   smp_mb()

 #endif /* _ASM_POWERPC_BARRIER_H */
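For readers who want a concrete picture of the pattern being documented, here is a minimal userspace sketch using C11 atomics and pthreads. It is an analogue, not kernel code: the writer()/demo() names, the flag/shared variables, and the value 42 are invented for illustration. The point the documentation fix makes is that a full barrier before an ACQUIRE orders prior *stores* against the loads and stores inside the critical section, and promises nothing about prior loads.

```c
/*
 * Userspace analogue of the smp_mb__before_spinlock() + spin_lock()
 * pattern.  Hypothetical names throughout; this only sketches the
 * ordering the barrier is supposed to provide.
 */
#include <stdatomic.h>
#include <pthread.h>

static atomic_int flag;                          /* store made before the lock */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int shared;                               /* written inside the lock */

static void *writer(void *unused)
{
	(void)unused;

	/* Prior store: this is what the barrier orders. */
	atomic_store_explicit(&flag, 1, memory_order_relaxed);

	/*
	 * Analogue of smp_mb__before_spinlock(): a full fence, so the
	 * store to flag cannot be reordered past the critical section.
	 * This is why smp_wmb() is not enough on PowerPC: a write
	 * barrier alone does not order the store against the later
	 * loads, hence the override to smp_mb() in the patch above.
	 */
	atomic_thread_fence(memory_order_seq_cst);

	pthread_mutex_lock(&lock);                /* ACQUIRE */
	shared = 42;                              /* subsequent store */
	pthread_mutex_unlock(&lock);              /* RELEASE */
	return NULL;
}

/* Run the writer to completion and return what it published. */
int demo(void)
{
	pthread_t t;

	pthread_create(&t, NULL, writer, NULL);
	pthread_join(t, NULL);
	return shared;
}
```

Note that the prior *load* case is exactly what the old documentation text wrongly promised: nothing in the sequence above stops a load issued before the fence from being satisfied late relative to the critical section on a weakly ordered machine.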