From: Peter Zijlstra <peterz@infradead.org>
To: Will Deacon <will.deacon@arm.com>
Cc: Waiman Long <longman@redhat.com>,
linux-kernel@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, mingo@kernel.org,
boqun.feng@gmail.com, paulmck@linux.vnet.ibm.com,
catalin.marinas@arm.com
Subject: Re: [PATCH 02/10] locking/qspinlock: Remove unbounded cmpxchg loop from locking slowpath
Date: Thu, 20 Sep 2018 18:08:32 +0200 [thread overview]
Message-ID: <20180920160832.GZ24124@hirez.programming.kicks-ass.net> (raw)
In-Reply-To: <20180409171959.GB9661@arm.com>
On Mon, Apr 09, 2018 at 06:19:59PM +0100, Will Deacon wrote:
> On Mon, Apr 09, 2018 at 05:54:20PM +0200, Peter Zijlstra wrote:
> > On Mon, Apr 09, 2018 at 03:54:09PM +0100, Will Deacon wrote:
> > > +/**
> > > + * set_pending_fetch_acquire - set the pending bit and return the old lock
> > > + * value with acquire semantics.
> > > + * @lock: Pointer to queued spinlock structure
> > > + *
> > > + * *,*,* -> *,1,*
> > > + */
> > > +static __always_inline u32 set_pending_fetch_acquire(struct qspinlock *lock)
> > > +{
> > > + u32 val = xchg_relaxed(&lock->pending, 1) << _Q_PENDING_OFFSET;
	smp_mb();
> > > + val |= (atomic_read_acquire(&lock->val) & ~_Q_PENDING_MASK);
> > > + return val;
> > > +}
> > > @@ -289,18 +315,26 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> > > return;
> > >
> > > /*
> > > - * If we observe any contention; queue.
> > > + * If we observe queueing, then queue ourselves.
> > > */
> > > - if (val & ~_Q_LOCKED_MASK)
> > > + if (val & _Q_TAIL_MASK)
> > > goto queue;
> > >
> > > /*
> > > + * We didn't see any queueing, so have one more try at snatching
> > > + * the lock in case it became available whilst we were taking the
> > > + * slow path.
> > > + */
> > > + if (queued_spin_trylock(lock))
> > > + return;
> > > +
> > > + /*
> > > * trylock || pending
> > > *
> > > * 0,0,0 -> 0,0,1 ; trylock
> > > * 0,0,1 -> 0,1,1 ; pending
> > > */
> > > + val = set_pending_fetch_acquire(lock);
> > > if (!(val & ~_Q_LOCKED_MASK)) {
> >
> > So, if I remember that partial paper correctly, the atomic_read_acquire()
> > can see 'arbitrary' old values for everything except the pending byte,
> > which it just wrote and will fwd into our load, right?
> >
> > But I think coherence requires the read to not be older than the one
> > observed by the trylock before (since it uses c-cas its acquire can be
> > elided).
> >
> > I think this means we can miss a concurrent unlock vs the fetch_or. And
> > I think that's fine; if we still see the lock set we'll needlessly 'wait'
> > for it to become unlocked.
>
> Ah, but there is a related case that doesn't work. If the lock becomes
> free just before we set pending, then another CPU can succeed on the
> fastpath. We'll then set pending, but the lockword we get back may still
> have the locked byte of 0, so two people end up holding the lock.
>
> I think it's worth giving this a go with the added trylock, but I can't
> see a way to avoid the atomic_fetch_or at the moment.
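Trying to spell that race out (a sketch, assuming per-byte coherence as
in the mixed-size model; the timeline is illustrative only):

	/*
	 * initial: (tail, pending, locked) == (0, 0, 0), i.e. just unlocked
	 *
	 * CPU0 (slowpath)                      CPU1 (fastpath)
	 * ---------------                      ---------------
	 * xchg_relaxed(&lock->pending, 1)
	 *                                      trylock: 0,0,0 -> 0,0,1 succeeds
	 * atomic_read_acquire(&lock->val)
	 *   must see pending == 1 (own store),
	 *   but may still see locked == 0
	 *
	 * CPU0 then concludes the lock is free, sets the locked byte, and
	 * both CPUs think they own the lock.
	 */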
So IIRC the addition of the smp_mb() above should ensure the @val load
is later than the @pending store.
Which makes the thing work again, right?
Now, obviously you don't actually want that on ARM64, but I can do that
on x86 just fine (our xchg() implies smp_mb() after all).
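IOW, an x86-friendly variant could simply use the fully ordered xchg(),
which already includes that barrier (a sketch, not tested):

	static __always_inline u32 set_pending_fetch_acquire(struct qspinlock *lock)
	{
		/*
		 * Fully ordered xchg(): on x86 this is the LOCK-prefixed
		 * XCHG we emit anyway, so the implied smp_mb() is free.
		 */
		u32 val = xchg(&lock->pending, 1) << _Q_PENDING_OFFSET;

		val |= (atomic_read_acquire(&lock->val) & ~_Q_PENDING_MASK);
		return val;
	}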
Another approach might be to use something like:
	val = xchg_relaxed(&lock->locked_pending, _Q_PENDING_VAL | _Q_LOCKED_VAL);
	val |= atomic_read_acquire(&lock->val) & _Q_TAIL_MASK;
combined with something like:
	/* 0,0,0 -> 0,1,1 - we won trylock */
	if (!(val & _Q_LOCKED_MASK)) {
		clear_pending(lock);
		return;
	}

	/* 0,0,1 -> 0,1,1 - we won pending */
	if (!(val & ~_Q_LOCKED_MASK)) {
		...
	}

	/* *,0,1 -> *,1,1 - we won pending, but there's queueing */
	if (!(val & _Q_PENDING_VAL))
		clear_pending(lock);

	...
Hmmm?
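(For reference, the @locked_pending access relies on the struct
__qspinlock overlay of the lock word; roughly, little-endian shown,
the big-endian layout mirrors it:)

	struct __qspinlock {
		union {
			atomic_t val;
			struct {
				u8	locked;
				u8	pending;
			};
			struct {
				u16	locked_pending;
				u16	tail;
			};
		};
	};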
Thread overview: 47+ messages
2018-04-05 16:58 [PATCH 00/10] kernel/locking: qspinlock improvements Will Deacon
2018-04-05 16:58 ` [PATCH 01/10] locking/qspinlock: Don't spin on pending->locked transition in slowpath Will Deacon
2018-04-05 16:58 ` [PATCH 02/10] locking/qspinlock: Remove unbounded cmpxchg loop from locking slowpath Will Deacon
2018-04-05 17:07 ` Peter Zijlstra
2018-04-06 15:08 ` Will Deacon
2018-04-05 17:13 ` Peter Zijlstra
2018-04-05 21:16 ` Waiman Long
2018-04-06 15:08 ` Will Deacon
2018-04-06 20:50 ` Waiman Long
2018-04-06 21:09 ` Paul E. McKenney
2018-04-07 8:47 ` Peter Zijlstra
2018-04-07 23:37 ` Paul E. McKenney
2018-04-09 10:58 ` Will Deacon
2018-04-07 9:07 ` Peter Zijlstra
2018-04-09 10:58 ` Will Deacon
2018-04-09 14:54 ` Will Deacon
2018-04-09 15:54 ` Peter Zijlstra
2018-04-09 17:19 ` Will Deacon
2018-04-10 9:35 ` Peter Zijlstra
2018-09-20 16:08 ` Peter Zijlstra [this message]
2018-09-20 16:22 ` Peter Zijlstra
2018-04-09 19:33 ` Waiman Long
2018-04-09 17:55 ` Waiman Long
2018-04-10 13:49 ` Sasha Levin
2018-04-05 16:59 ` [PATCH 03/10] locking/qspinlock: Kill cmpxchg loop when claiming lock from head of queue Will Deacon
2018-04-05 17:19 ` Peter Zijlstra
2018-04-06 10:54 ` Will Deacon
2018-04-05 16:59 ` [PATCH 04/10] locking/qspinlock: Use atomic_cond_read_acquire Will Deacon
2018-04-05 16:59 ` [PATCH 05/10] locking/mcs: Use smp_cond_load_acquire() in mcs spin loop Will Deacon
2018-04-05 16:59 ` [PATCH 06/10] barriers: Introduce smp_cond_load_relaxed and atomic_cond_read_relaxed Will Deacon
2018-04-05 17:22 ` Peter Zijlstra
2018-04-06 10:55 ` Will Deacon
2018-04-05 16:59 ` [PATCH 07/10] locking/qspinlock: Use smp_cond_load_relaxed to wait for next node Will Deacon
2018-04-05 16:59 ` [PATCH 08/10] locking/qspinlock: Merge struct __qspinlock into struct qspinlock Will Deacon
2018-04-07 5:23 ` Boqun Feng
2018-04-05 16:59 ` [PATCH 09/10] locking/qspinlock: Make queued_spin_unlock use smp_store_release Will Deacon
2018-04-05 16:59 ` [PATCH 10/10] locking/qspinlock: Elide back-to-back RELEASE operations with smp_wmb() Will Deacon
2018-04-05 17:28 ` Peter Zijlstra
2018-04-06 11:34 ` Will Deacon
2018-04-06 13:05 ` Andrea Parri
2018-04-06 15:27 ` Will Deacon
2018-04-06 15:49 ` Andrea Parri
2018-04-07 5:47 ` Boqun Feng
2018-04-09 10:47 ` Will Deacon
2018-04-06 13:22 ` [PATCH 00/10] kernel/locking: qspinlock improvements Andrea Parri
2018-04-11 10:20 ` Catalin Marinas
2018-04-11 15:39 ` Andrea Parri