From: Will Deacon <will.deacon@arm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: mingo@kernel.org, linux-kernel@vger.kernel.org,
longman@redhat.com, andrea.parri@amarulasolutions.com,
tglx@linutronix.de
Subject: Re: [RFC][PATCH 2/3] locking/qspinlock: Rework some comments
Date: Mon, 1 Oct 2018 18:17:08 +0100
Message-ID: <20181001171707.GE13918@arm.com>
In-Reply-To: <20180926111307.457488877@infradead.org>

On Wed, Sep 26, 2018 at 01:01:19PM +0200, Peter Zijlstra wrote:
> While working my way through the code again; I felt the comments could
> use help.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
> kernel/locking/qspinlock.c | 40 ++++++++++++++++++++++++++++------------
> 1 file changed, 28 insertions(+), 12 deletions(-)
>
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -326,16 +326,23 @@ void queued_spin_lock_slowpath(struct qs
> /*
> * trylock || pending
> *
> - * 0,0,0 -> 0,0,1 ; trylock
> - * 0,0,1 -> 0,1,1 ; pending
> + * 0,0,* -> 0,1,* -> 0,0,1 pending, trylock
> */
> val = atomic_fetch_or_acquire(_Q_PENDING_VAL, &lock->val);
> +
> /*
> - * If we observe any contention; undo and queue.
> + * If we observe contention, there was a concurrent lock.
Nit: I think "concurrent lock" is confusing here, because that implies to
me that the lock was actually taken behind our back, which isn't necessarily
the case. How about "there is a concurrent locker"?
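
(Aside, for anyone decoding the triples: they read as (tail, pending,
locked). Assuming NR_CPUS < 16K, the fields sit in the 32-bit lock word
as sketched below; qspinlock_types.h has the authoritative layout.)

	/*
	 * Sketch of the qspinlock word, assuming NR_CPUS < 16K so that
	 * pending occupies a full byte:
	 *
	 *  bits  0- 7: locked byte
	 *  bits  8-15: pending
	 *  bits 16-17: tail index (which per-CPU MCS node is in use)
	 *  bits 18-31: tail CPU number + 1
	 */
	#define _Q_LOCKED_MASK	0x000000ffU
	#define _Q_PENDING_MASK	0x0000ff00U
	#define _Q_TAIL_MASK	0xffff0000U
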
> + *
> + * Undo and queue; our setting of PENDING might have made the
> + * n,0,0 -> 0,0,0 transition fail and it will now be waiting
> + * on @next to become !NULL.
> */
Hmm, but it could also make another concurrent attempt to set PENDING
fail (and the lock could just be held the entire time).
> if (unlikely(val & ~_Q_LOCKED_MASK)) {
> +
> + /* Undo PENDING if we set it. */
> if (!(val & _Q_PENDING_MASK))
> clear_pending(lock);
> +
> goto queue;
> }
>
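
(As an aside, the pending step above can be modelled in userspace with
C11 atomics; a minimal sketch, not the kernel code, with names
mirroring the kernel's:)

	#include <stdatomic.h>
	#include <stdint.h>

	#define _Q_LOCKED_MASK		0x000000ffU
	#define _Q_PENDING_VAL		0x00000100U
	#define _Q_PENDING_MASK		0x0000ff00U

	static _Atomic uint32_t lock_val;

	/* Return 1 if we became the pending waiter, 0 if we must queue. */
	static int try_set_pending(void)
	{
		uint32_t val = atomic_fetch_or_explicit(&lock_val,
				_Q_PENDING_VAL, memory_order_acquire);

		if (val & ~_Q_LOCKED_MASK) {
			/*
			 * Tail or pending was already set: undo PENDING,
			 * but only if we were the ones to set it.
			 */
			if (!(val & _Q_PENDING_MASK))
				atomic_fetch_and_explicit(&lock_val,
					~_Q_PENDING_MASK,
					memory_order_relaxed);
			return 0;
		}
		return 1;	/* now spin until the owner releases */
	}
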
> @@ -466,7 +473,7 @@ void queued_spin_lock_slowpath(struct qs
> * claim the lock:
> *
> * n,0,0 -> 0,0,1 : lock, uncontended
> - * *,*,0 -> *,*,1 : lock, contended
> + * *,0,0 -> *,0,1 : lock, contended
Pending can be set behind our back in the contended case, in which case
we take the lock with a single byte store and don't clear pending. You
mention this in the updated comment below, but I think we should leave this
comment alone.
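
(For context, the single byte store in question is set_locked(), which
in kernel/locking/qspinlock.c is essentially:)

	static __always_inline void set_locked(struct qspinlock *lock)
	{
		/*
		 * Store only the locked byte; a concurrently set PENDING
		 * bit in the adjacent byte is left intact.
		 */
		WRITE_ONCE(lock->locked, _Q_LOCKED_VAL);
	}
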
Will
> *
> * If the queue head is the only one in the queue (lock value == tail)
> * and nobody is pending, clear the tail code and grab the lock.
> @@ -474,16 +481,25 @@ void queued_spin_lock_slowpath(struct qs
> */
>
> /*
> - * In the PV case we might already have _Q_LOCKED_VAL set.
> + * In the PV case we might already have _Q_LOCKED_VAL set, because
> + * of lock stealing; therefore we must also allow:
> *
> - * The atomic_cond_read_acquire() call above has provided the
> - * necessary acquire semantics required for locking.
> - */
> - if (((val & _Q_TAIL_MASK) == tail) &&
> - atomic_try_cmpxchg_relaxed(&lock->val, &val, _Q_LOCKED_VAL))
> - goto release; /* No contention */
> + * n,0,1 -> 0,0,1
> + *
> + * Note: at this point: (val & _Q_PENDING_MASK) == 0, because of the
> + * above wait condition, therefore any concurrent setting of
> + * PENDING will make the uncontended transition fail.
> + */
> + if ((val & _Q_TAIL_MASK) == tail) {
> + if (atomic_try_cmpxchg_relaxed(&lock->val, &val, _Q_LOCKED_VAL))
> + goto release; /* No contention */
> + }
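
(Continuing the userspace sketch from above: the uncontended claim is a
single compare-and-swap from "just our tail" to "just locked";
try_claim_uncontended() is a hypothetical helper name:)

	#define _Q_LOCKED_VAL	0x00000001U

	/* Return 1 on the uncontended n,0,0 -> 0,0,1 transition. */
	static int try_claim_uncontended(uint32_t tail)
	{
		uint32_t val = tail;	/* expected: only our tail encoded */

		/*
		 * Any concurrent PENDING setter or new queuer changes
		 * lock_val and makes this CAS fail.
		 */
		return atomic_compare_exchange_strong_explicit(&lock_val,
				&val, _Q_LOCKED_VAL, memory_order_relaxed,
				memory_order_relaxed);
	}
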
>
> - /* Either somebody is queued behind us or _Q_PENDING_VAL is set */
> + /*
> + * Either somebody is queued behind us or _Q_PENDING_VAL got set
> + * which will then detect the remaining tail and queue behind us
> + * ensuring we'll see a @next.
> + */
> set_locked(lock);
>
> /*
>
>