From: Will Deacon <will.deacon@arm.com>
To: Waiman Long <waiman.long@hp.com>
Cc: "linux-arch@vger.kernel.org" <linux-arch@vger.kernel.org>,
"peterz@infradead.org" <peterz@infradead.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"paulmck@linux.vnet.ibm.com" <paulmck@linux.vnet.ibm.com>,
"mingo@kernel.org" <mingo@kernel.org>
Subject: Re: [PATCH v4 6/8] locking/qrwlock: make use of acquire/release/relaxed atomics
Date: Tue, 4 Aug 2015 12:20:25 +0100
Message-ID: <20150804112025.GA10067@arm.com>
In-Reply-To: <55BFD3D6.8000905@hp.com>
Hi Waiman,
Thanks for having a look.
On Mon, Aug 03, 2015 at 09:49:26PM +0100, Waiman Long wrote:
> On 08/03/2015 01:02 PM, Will Deacon wrote:
> > The qrwlock implementation is slightly heavy in its use of memory
> > barriers, mainly through the use of cmpxchg and _return atomics, which
> > imply full barrier semantics.
> >
> > This patch modifies the qrwlock code to use the more relaxed atomic
> > routines so that we can reduce the unnecessary barrier overhead on
> > weakly-ordered architectures.
> >
> > Signed-off-by: Will Deacon <will.deacon@arm.com>
[...]
> > @@ -74,8 +74,9 @@ void queued_read_lock_slowpath(struct qrwlock *lock, u32 cnts)
> > * Readers in interrupt context will get the lock immediately
> > * if the writer is just waiting (not holding the lock yet).
> > * The rspin_until_writer_unlock() function returns immediately
> > - * in this case. Otherwise, they will spin until the lock
> > - * is available without waiting in the queue.
> > + * in this case. Otherwise, they will spin (with ACQUIRE
> > + * semantics) until the lock is available without waiting in
> > + * the queue.
> > */
> > rspin_until_writer_unlock(lock, cnts);
> > return;
> > @@ -97,7 +98,13 @@ void queued_read_lock_slowpath(struct qrwlock *lock, u32 cnts)
> > while (atomic_read(&lock->cnts) & _QW_WMASK)
> > cpu_relax_lowlatency();
> >
> > - cnts = atomic_add_return(_QR_BIAS, &lock->cnts) - _QR_BIAS;
> > + cnts = atomic_add_return_relaxed(_QR_BIAS, &lock->cnts) - _QR_BIAS;
> > +
> > + /*
> > + * The ACQUIRE semantics of the spinning code ensure that
> > + * accesses can't leak upwards out of our subsequent critical
> > + * section.
> > + */
>
> Maybe you should be more specific and mention the arch_spin_lock() call
> above. Other than that,
Actually, I think you've uncovered a bug! Initially, I based this on top
of my qrwlock series that made the acquire unconditional in
rspin_until_writer_unlock, but you (reasonably) objected to the extra
overhead on the interrupt path, so now we only get an acquire if the
initial test of ((cnts & _QW_WMASK) == _QW_LOCKED) succeeds.
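For reference, rspin_until_writer_unlock() looks roughly like this at this
point in the series (a sketch, not a verbatim copy of
kernel/locking/qrwlock.c); the ACQUIRE load is only reached when that
initial test succeeds:

static __always_inline void
rspin_until_writer_unlock(struct qrwlock *lock, u32 cnts)
{
	/*
	 * If the writer isn't actually holding the lock, the loop body
	 * (and therefore the ACQUIRE) never executes.
	 */
	while ((cnts & _QW_WMASK) == _QW_LOCKED) {
		cpu_relax_lowlatency();
		cnts = smp_load_acquire((u32 *)&lock->cnts);
	}
}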
So actually, the atomic_add_return needs to be made an
atomic_add_return_acquire. I'll make that change and adjust the comment
accordingly.
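As an aside, for architectures that don't provide a native _acquire
variant, the generic wrappers introduced in patch 1/8 build it from the
_relaxed form plus a barrier, roughly like so (sketch, not verbatim):

/* Run the relaxed op, then upgrade it with an ACQUIRE-or-stronger barrier. */
#define __atomic_op_acquire(op, args...)				\
({									\
	typeof(op##_relaxed(args)) __ret = op##_relaxed(args);		\
	smp_mb__after_atomic();						\
	__ret;								\
})

#define atomic_add_return_acquire(...)					\
	__atomic_op_acquire(atomic_add_return, __VA_ARGS__)

Architectures remain free to override these with native acquire
instructions where they have them.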
Fixup below.
Cheers,
Will
--->8
diff --git a/kernel/locking/qrwlock.c b/kernel/locking/qrwlock.c
index fb4ef2d636f2..1724eac4c84b 100644
--- a/kernel/locking/qrwlock.c
+++ b/kernel/locking/qrwlock.c
@@ -98,13 +98,12 @@ void queued_read_lock_slowpath(struct qrwlock *lock, u32 cnts)
while (atomic_read(&lock->cnts) & _QW_WMASK)
cpu_relax_lowlatency();
- cnts = atomic_add_return_relaxed(_QR_BIAS, &lock->cnts) - _QR_BIAS;
-
/*
- * The ACQUIRE semantics of the spinning code ensure that
- * accesses can't leak upwards out of our subsequent critical
- * section.
+ * The ACQUIRE semantics of the following spinning code ensure
+ * that accesses can't leak upwards out of our subsequent critical
+ * section in the case that the lock is currently held for write.
*/
+ cnts = atomic_add_return_acquire(_QR_BIAS, &lock->cnts) - _QR_BIAS;
rspin_until_writer_unlock(lock, cnts);
/*
Thread overview: 14+ messages
2015-08-03 17:02 [PATCH v4 0/8] Add generic support for relaxed atomics Will Deacon
2015-08-03 17:02 ` [PATCH v4 1/8] atomics: add acquire/release/relaxed variants of some atomic operations Will Deacon
2015-08-03 17:26 ` Peter Zijlstra
2015-08-03 18:21 ` Will Deacon
2015-08-03 17:02 ` [PATCH v4 2/8] asm-generic: rework atomic-long.h to avoid bulk code duplication Will Deacon
2015-08-03 17:02 ` [PATCH v4 3/8] asm-generic: add relaxed/acquire/release variants for atomic_long_t Will Deacon
2015-08-03 17:02 ` [PATCH v4 4/8] lockref: remove homebrew cmpxchg64_relaxed macro definition Will Deacon
2015-08-03 17:02 ` [PATCH v4 5/8] locking/qrwlock: implement queue_write_unlock using smp_store_release Will Deacon
2015-08-03 20:44 ` Waiman Long
2015-08-03 17:02 ` [PATCH v4 6/8] locking/qrwlock: make use of acquire/release/relaxed atomics Will Deacon
2015-08-03 20:49 ` Waiman Long
2015-08-04 11:20 ` Will Deacon [this message]
2015-08-03 17:02 ` [PATCH v4 7/8] include/llist: use linux/atomic.h instead of asm/cmpxchg.h Will Deacon
2015-08-03 17:02 ` [PATCH v4 8/8] ARM: atomics: define our SMP atomics in terms of _relaxed operations Will Deacon