From mboxrd@z Thu Jan 1 00:00:00 1970
From: Nicholas Mc Guire
Subject: Re: allow preemption in check_task_state
Date: Mon, 10 Feb 2014 19:12:03 +0100
Message-ID: <20140210181203.GA17914@opentech.at>
References: <20140210153856.GD20017@opentech.at>
 <20140210111106.67438d1e@gandalf.local.home>
 <20140210171712.GA17517@opentech.at>
 <20140210173833.GQ9987@twins.programming.kicks-ass.net>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: Steven Rostedt, linux-rt-users@vger.kernel.org, LKML,
 Sebastian Andrzej Siewior, Carsten Emde, Thomas Gleixner,
 Andreas Platschek
To: Peter Zijlstra
Return-path:
Content-Disposition: inline
In-Reply-To: <20140210173833.GQ9987@twins.programming.kicks-ass.net>
Sender: linux-kernel-owner@vger.kernel.org
List-Id: linux-rt-users.vger.kernel.org

On Mon, 10 Feb 2014, Peter Zijlstra wrote:

> On Mon, Feb 10, 2014 at 06:17:12PM +0100, Nicholas Mc Guire wrote:
> > maybe I'm missing/misunderstanding something here but
> > pi_unlock -> arch_spin_unlock is a full mb()
>
> Nope, arch_spin_unlock() on x86 is a single add[wb] without LOCK prefix.
>
> The lock and unlock primitives are in general specified to have ACQUIRE
> resp. RELEASE semantics.
>
> See Documentation/memory-barriers.txt for far too much head-hurting
> details.

I did check that - but reading the code it seems to me that it is using
a lock prefix in the fast __add() path and an explicit add_smp() in the
slow path (arch/x86/include/asm/spinlock.h:arch_spin_unlock).

the __add from arch/x86/include/asm/cmpxchg.h does lock - or am I
misinterpreting this?

the other archs, I believe, all do an explicit mb()/smp_mb() in their
arch_spin_unlock - will go check this again.

thx!
hofrat
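
PS: for reference, the code I was looking at - condensed from what I
believe the 3.12-era x86 headers contain (quoted from memory, so the
exact details may differ in other trees):

(arch/x86/include/asm/spinlock.h)

#if defined(CONFIG_X86_32) && \
	(defined(CONFIG_X86_OOSTORE) || defined(CONFIG_X86_PPRO_FENCE))
/* PPro errata 66, 92: need a locked op to unlock */
# define UNLOCK_LOCK_PREFIX LOCK_PREFIX
#else
# define UNLOCK_LOCK_PREFIX	/* empty: plain unlocked add */
#endif

static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
{
	if (TICKET_SLOWPATH_FLAG &&
	    static_key_false(&paravirt_ticketlocks_enabled)) {
		arch_spinlock_t prev;

		prev = *lock;
		/* add_smp() passes LOCK_PREFIX, so this one is locked */
		add_smp(&lock->tickets.head, TICKET_LOCK_INC);

		if (unlikely(prev.tickets.tail & TICKET_SLOWPATH_FLAG))
			__ticket_unlock_slowpath(lock, prev);
	} else
		/* fast path: prefix is UNLOCK_LOCK_PREFIX, i.e. empty
		 * unless the PPro-errata config above kicks in */
		__add(&lock->tickets.head, TICKET_LOCK_INC,
		      UNLOCK_LOCK_PREFIX);
}

(arch/x86/include/asm/cmpxchg.h - note the "lock" argument is just the
prefix string handed in by the caller, __add itself does not lock)

#define __add(ptr, inc, lock)						\
	({								\
		__typeof__(*(ptr)) __ret = (inc);			\
		switch (sizeof(*(ptr))) {				\
		case __X86_CASE_B:					\
			asm volatile (lock "addb %b1, %0\n"		\
				      : "+m" (*(ptr)) : "qi" (inc)	\
				      : "memory", "cc");		\
			break;						\
		case __X86_CASE_W:					\
			asm volatile (lock "addw %w1, %0\n"		\
				      : "+m" (*(ptr)) : "ri" (inc)	\
				      : "memory", "cc");		\
			break;						\
		/* ... 32 and 64 bit cases analogous ... */		\
		}							\
		__ret;							\
	})

#define add_smp(ptr, inc)	__add((ptr), (inc), LOCK_PREFIX)

so whether the unlock ends up with a LOCK prefix depends entirely on
what gets passed in for "lock" - which would confirm Peter's point for
the fast path if UNLOCK_LOCK_PREFIX really is empty here.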