From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Paul E. McKenney"
Subject: Re: [PATCH v5 tip/core/locking 6/7] locking: Add an smp_mb__after_unlock_lock() for UNLOCK+LOCK barrier
Date: Tue, 10 Dec 2013 12:11:54 -0800
Message-ID: <20131210201154.GF4208@linux.vnet.ibm.com>
References: <20131210012738.GA24317@linux.vnet.ibm.com>
 <1386638883-25379-1-git-send-email-paulmck@linux.vnet.ibm.com>
 <1386638883-25379-6-git-send-email-paulmck@linux.vnet.ibm.com>
 <20131210123726.GE13532@twins.programming.kicks-ass.net>
 <20131210174508.GC10311@leaf>
Reply-To: paulmck@linux.vnet.ibm.com
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20131210174508.GC10311@leaf>
To: Josh Triplett
Cc: Peter Zijlstra, linux-kernel@vger.kernel.org, mingo@kernel.org,
 laijs@cn.fujitsu.com, dipankar@in.ibm.com, akpm@linux-foundation.org,
 mathieu.desnoyers@efficios.com, niv@us.ibm.com, tglx@linutronix.de,
 rostedt@goodmis.org, dhowells@redhat.com, edumazet@google.com,
 darren@dvhart.com, fweisbec@gmail.com, sbw@mit.edu, Linux-Arch,
 Ingo Molnar, Oleg Nesterov, Linus Torvalds

On Tue, Dec 10, 2013 at 09:45:08AM -0800, Josh Triplett wrote:
> On Tue, Dec 10, 2013 at 01:37:26PM +0100, Peter Zijlstra wrote:
> > On Mon, Dec 09, 2013 at 05:28:02PM -0800, Paul E.
McKenney wrote:
> > > diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
> > > index f89da808ce31..abf645799991 100644
> > > --- a/arch/powerpc/include/asm/barrier.h
> > > +++ b/arch/powerpc/include/asm/barrier.h
> > > @@ -84,4 +84,6 @@ do { \
> > >  	___p1; \
> > > })
> > > 
> > > +#define smp_mb__after_unlock_lock()	do { } while (0)
> > > +
> > >  #endif /* _ASM_POWERPC_BARRIER_H */
> > 
> > Didn't Ben say that ppc actually violates the current unlock+lock
> > assumption, and that this barrier therefore wouldn't actually be a
> > nop on ppc?
> 
> Or, ppc could fix its lock primitives to preserve the unlock+lock
> assumption, and avoid subtle breakage across half the kernel.

Indeed.  However, another motivation for this change was the difficulty
of proving that x86 really provided the equivalent of a full barrier
for the MCS lock handoff case:

http://www.spinics.net/lists/linux-mm/msg65653.html

							Thanx, Paul