From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 1 Apr 2015 17:50:55 +0200
From: Oleg Nesterov
To: "Paul E. McKenney"
Cc: Will Deacon, linux-kernel@vger.kernel.org, Peter Zijlstra
Subject: Re: [RESEND PATCH] documentation: memory-barriers: fix smp_mb__before_spinlock() semantics
Message-ID: <20150401155055.GC30586@redhat.com>
References: <1427791181-21952-1-git-send-email-will.deacon@arm.com> <20150331175050.GA14778@redhat.com> <20150401153108.GQ9023@linux.vnet.ibm.com>
In-Reply-To: <20150401153108.GQ9023@linux.vnet.ibm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 04/01, Paul E. McKenney wrote:
>
> If Will agrees, like the following?

Looks good to me, thanks ;)

> documentation: memory-barriers: Fix smp_mb__before_spinlock() semantics
>
> Our current documentation claims that, when followed by an ACQUIRE,
> smp_mb__before_spinlock() orders prior loads against subsequent loads
> and stores, which isn't the intent.  This commit therefore fixes the
> documentation to state that this sequence orders only prior stores
> against subsequent loads and stores.
>
> In addition, the original intent of smp_mb__before_spinlock() was to only
> order prior loads against subsequent stores, however, people have started
> using it as if it ordered prior loads against subsequent loads and stores.
> This commit therefore also updates smp_mb__before_spinlock()'s header
> comment to reflect this new reality.
>
> Cc: Oleg Nesterov
> Cc: "Paul E. McKenney"
> Cc: Peter Zijlstra
> Signed-off-by: Will Deacon
> Signed-off-by: Paul E. McKenney
>
> diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
> index 6974f1c2b4e1..52c320e3f107 100644
> --- a/Documentation/memory-barriers.txt
> +++ b/Documentation/memory-barriers.txt
> @@ -1784,10 +1784,9 @@ for each construct.  These operations all imply certain barriers:
>
>      Memory operations issued before the ACQUIRE may be completed after
>      the ACQUIRE operation has completed.  An smp_mb__before_spinlock(),
> -     combined with a following ACQUIRE, orders prior loads against
> -     subsequent loads and stores and also orders prior stores against
> -     subsequent stores.  Note that this is weaker than smp_mb()!  The
> -     smp_mb__before_spinlock() primitive is free on many architectures.
> +     combined with a following ACQUIRE, orders prior stores against
> +     subsequent loads and stores.  Note that this is weaker than smp_mb()!
> +     The smp_mb__before_spinlock() primitive is free on many architectures.
>
>  (2) RELEASE operation implication:
>
> diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
> index 3e18379dfa6f..0063b24b4f36 100644
> --- a/include/linux/spinlock.h
> +++ b/include/linux/spinlock.h
> @@ -120,7 +120,7 @@ do { \
>  /*
>   * Despite its name it doesn't necessarily has to be a full barrier.
>   * It should only guarantee that a STORE before the critical section
> - * can not be reordered with a LOAD inside this section.
> + * can not be reordered with LOADs and STOREs inside this section.
>   * spin_lock() is the one-way barrier, this LOAD can not escape out
>   * of the region.  So the default implementation simply ensures that
>   * a STORE can not move into the critical section, smp_wmb() should
>