From: Peter Zijlstra <peterz@infradead.org>
To: Manfred Spraul <manfred@colorfullife.com>
Cc: Will Deacon <will.deacon@arm.com>,
	benh@kernel.crashing.org, paulmck@linux.vnet.ibm.com,
	Ingo Molnar <mingo@elte.hu>, Boqun Feng <boqun.feng@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	LKML <linux-kernel@vger.kernel.org>,
	1vier1@web.de, Davidlohr Bueso <dave@stgolabs.net>
Subject: Re: [PATCH 1/4] spinlock: Document memory barrier rules
Date: Thu, 1 Sep 2016 10:44:35 +0200
Message-ID: <20160901084435.GN10153@twins.programming.kicks-ass.net>
In-Reply-To: <80de24e3-fa01-a6d6-99e9-afd1e831e07b@colorfullife.com>

On Wed, Aug 31, 2016 at 08:32:18PM +0200, Manfred Spraul wrote:
> On 08/31/2016 06:40 PM, Will Deacon wrote:

> >The litmus test then looks a bit like:
> >
> >CPUm:
> >
> >LOCK(x)
> >smp_mb();
> >RyAcq=0
> >
> >
> >CPUn:
> >
> >Wy=1
> >smp_mb();
> >UNLOCK_WAIT(x)
> Correct.
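
For reference, that first test spelled out with the usual kernel
primitives looks roughly like the sketch below; the names and the
trivial bodies are made up for illustration, not lifted from any real
caller:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(x);
static int y;

/* CPUm: LOCK(x); smp_mb(); RyAcq */
static void cpu_m(void)
{
	int r;

	spin_lock(&x);
	smp_mb();
	r = smp_load_acquire(&y);	/* RyAcq = 0 on the fast path */
	/* ... act on r while still holding x ... */
	spin_unlock(&x);		/* elided in the litmus test above */
}

/* CPUn: Wy = 1; smp_mb(); UNLOCK_WAIT(x) */
static void cpu_n(void)
{
	WRITE_ONCE(y, 1);
	smp_mb();
	spin_unlock_wait(&x);
	/* ... take the slow path ... */
}
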
> >
> >which I think can be simplified to:
> >
> >
> >LOCK(x)
> I thought that a barrier is required here, because the Ry=0 could be
> reordered before the store of the lock.
> >Ry=0
> RyAcq instead of Ry would be required due to the unlock at the end of
> the critical section.
> CpuN: <...>
>           WyRelease=0
> is irrelevant for the litmus test.
> >Wy=1
> >smp_mb(); // Note that this is implied by spin_unlock_wait on PPC and arm64
> >LOCK(x)   // spin_unlock_wait behaves like lock; unlock
> >UNLOCK(x)
> 
> >[I've removed a bunch of barriers here, that I don't think are necessary
> >  for the guarantees you're after]
> >
> >and the question is "Can both CPUs proceed?".
> >
> >Looking at the above, then I don't think that they can. Whilst CPUm can
> >indeed speculate the Ry=0 before successfully taking the lock, if CPUn
> >observes CPUm's read, then it must also observe the lock being held wrt
> >the spin_lock API. That is because a successful LOCK operation by CPUn
> >would force CPUm to replay its LL/SC loop and therefore discard its
> >speculation of y.
> >
> >What am I missing? The code snippet seems to have too many barriers to me!

> spin_unlock_wait() is not necessarily lock()+unlock().
> It can be a simple Rx, or now RxAcq.

It can be, normally, yes. But power and arm64, the only architectures on
which the ACQUIRE is 'funny', do the 'pointless' ll/sc cycle in
spin_unlock_wait() to 'fix' things.

So for both power and arm64, you can in fact model spin_unlock_wait()
as LOCK+UNLOCK.
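
In other words, as a mental model (explicitly not the actual arm64 or
Power implementation), something like:

/*
 * Model only: the ll/sc cycle makes spin_unlock_wait() on power/arm64
 * order like this degenerate lock+unlock pair.
 */
static inline void model_spin_unlock_wait(spinlock_t *lock)
{
	spin_lock(lock);	/* competes in the same ll/sc game as real lockers */
	spin_unlock(lock);
}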

All the other archs have (so far) 'sensible' ACQUIRE semantics and all
this is moot.

[ MIPS _could_ possibly do the 'interesting' ACQUIRE too, but so far
  hasn't introduced their fancy barriers. ]
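
For contrast, the 'simple Rx, or now RxAcq' flavour Manfred mentions is
roughly the following (again a sketch, not any particular architecture's
implementation):

/* Rx: poll the lock word until it reads unlocked ... */
static inline void rx_acq_spin_unlock_wait(raw_spinlock_t *lock)
{
	while (raw_spin_is_locked(lock))
		cpu_relax();

	/* ... RxAcq: upgrade that final read to ACQUIRE */
	smp_acquire__after_ctrl_dep();
}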

The other interesting case is qspinlock, which does the unordered store
in software, but only after a necessary atomic (ACQUIRE) operation to
enqueue, which queued_spin_unlock_wait() exploits to order against. So
that too is good, assuming the ACQUIRE is good.

Once Power/ARM64 use qspinlock, they'll need to be careful again.
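
Very roughly, the shape of that argument is the sketch below. This is
not the kernel's queued_spin_unlock_wait(); the real one also has to
deal with the pending bit and with lockers whose lock store is not yet
visible. It only illustrates what the enqueue atomic buys us:

static void unlock_wait_sketch(struct qspinlock *lock)
{
	/*
	 * Any locker must first hit the lock word with an atomic
	 * (ACQUIRE) operation, so polling that word orders us against
	 * it; the unordered software store is irrelevant here.
	 */
	while (atomic_read(&lock->val) & _Q_LOCKED_MASK)
		cpu_relax();

	smp_acquire__after_ctrl_dep();
}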

Thread overview: 25+ messages
2016-08-28 11:56 [PATCH 0/4] Clarify/standardize memory barriers for lock/unlock Manfred Spraul
2016-08-28 11:56 ` [PATCH 1/4] spinlock: Document memory barrier rules Manfred Spraul
2016-08-28 11:56   ` [PATCH 2/4] barrier.h: Move smp_mb__after_unlock_lock to barrier.h Manfred Spraul
2016-08-28 11:56     ` [PATCH 3/4] net/netfilter/nf_conntrack_core: update memory barriers Manfred Spraul
2016-08-28 11:56       ` [PATCH 4/4] qspinlock for x86: smp_mb__after_spin_lock() is free Manfred Spraul
2016-08-29 10:52         ` Peter Zijlstra
2016-08-29 10:51       ` [PATCH 3/4] net/netfilter/nf_conntrack_core: update memory barriers Peter Zijlstra
2016-08-28 13:43     ` [PATCH 2/4] barrier.h: Move smp_mb__after_unlock_lock to barrier.h Paul E. McKenney
2016-08-28 16:31       ` Manfred Spraul
2016-08-28 18:00       ` Manfred Spraul
2016-08-28 14:41     ` kbuild test robot
2016-08-28 17:43       ` [PATCH 2/4 v3] spinlock.h: Move smp_mb__after_unlock_lock to spinlock.h Manfred Spraul
2016-08-29 10:48   ` [PATCH 1/4] spinlock: Document memory barrier rules Peter Zijlstra
2016-08-29 12:54     ` Manfred Spraul
2016-08-29 13:44       ` Peter Zijlstra
2016-08-31  4:59         ` Manfred Spraul
2016-08-31 15:40           ` Peter Zijlstra
2016-08-31 16:40             ` Will Deacon
2016-08-31 18:32               ` Manfred Spraul
2016-09-01  8:44                 ` Peter Zijlstra [this message]
2016-09-01 11:04                   ` Manfred Spraul
2016-09-01 11:19                     ` Will Deacon
2016-09-01 11:51                     ` Peter Zijlstra
2016-09-01 14:05                       ` Boqun Feng
2016-08-29 10:53 ` [PATCH 0/4] Clarify/standardize memory barriers for lock/unlock Peter Zijlstra
