From: andrea.parri@amarulasolutions.com (Andrea Parri)
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH 00/10] kernel/locking: qspinlock improvements
Date: Fri, 6 Apr 2018 15:22:49 +0200
Message-ID: <20180406132249.GA7071@andrea>
In-Reply-To: <1522947547-24081-1-git-send-email-will.deacon@arm.com>
On Thu, Apr 05, 2018 at 05:58:57PM +0100, Will Deacon wrote:
> Hi all,
>
> I've been kicking the tyres further on qspinlock and with this set of patches
> I'm happy with the performance and fairness properties. In particular, the
> locking algorithm now guarantees forward progress whereas the implementation
> in mainline can starve threads indefinitely in cmpxchg loops.
>
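(An aside for readers joining the thread here: the problem being removed is
the classic one with unbounded compare-and-swap retry loops.  A minimal
userspace illustration of the pattern, written with C11 atomics rather than
the kernel's actual qspinlock code (all names below are made up for the
sketch), looks roughly like this:

	#include <stdatomic.h>

	/*
	 * Hypothetical sketch of an unbounded cmpxchg-style acquisition:
	 * nothing bounds how often a thread can lose the race below, so a
	 * persistently unlucky thread can starve.
	 */
	static void lock_unbounded(atomic_int *lock)
	{
		int expected;

		for (;;) {
			/* Wait until the lock at least looks free. */
			while (atomic_load_explicit(lock, memory_order_relaxed))
				;

			expected = 0;
			if (atomic_compare_exchange_weak_explicit(lock, &expected, 1,
								  memory_order_acquire,
								  memory_order_relaxed))
				return;	/* acquired */
			/* Lost the race to another thread; retry, unboundedly. */
		}
	}

As I understand it, replacing that open race with an explicit queue/handover
in the slowpath is where the forward-progress guarantee mentioned above
comes from.)
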
> Catalin has also implemented a model of this using TLA to prove that the
> lock is fair, although this doesn't take the memory model into account:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/kernel-tla.git/commit/

Nice! I'll dig into this formalization, but my guess is that our model
(and axiomatic models "à la herd", in general) is not well-suited to
studying properties such as fairness and liveness...

Have you already thought about this?

Andrea
>
> I'd still like to get more benchmark numbers and wider exposure before
> enabling this for arm64, but my current testing is looking very promising.
> This series, along with the arm64-specific patches, is available at:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/will/linux.git/log/?h=qspinlock
>
> Cheers,
>
> Will
>
> --->8
>
> Jason Low (1):
>   locking/mcs: Use smp_cond_load_acquire() in mcs spin loop
>
> Will Deacon (9):
>   locking/qspinlock: Don't spin on pending->locked transition in
>     slowpath
>   locking/qspinlock: Remove unbounded cmpxchg loop from locking slowpath
>   locking/qspinlock: Kill cmpxchg loop when claiming lock from head of
>     queue
>   locking/qspinlock: Use atomic_cond_read_acquire
>   barriers: Introduce smp_cond_load_relaxed and atomic_cond_read_relaxed
>   locking/qspinlock: Use smp_cond_load_relaxed to wait for next node
>   locking/qspinlock: Merge struct __qspinlock into struct qspinlock
>   locking/qspinlock: Make queued_spin_unlock use smp_store_release
>   locking/qspinlock: Elide back-to-back RELEASE operations with
>     smp_wmb()
>
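(Another aside, on the barrier-helper conversions named in the shortlog
above: the general pattern, sketched very roughly here with field and macro
names as in the tree rather than taken from the actual hunks, is to replace
open-coded acquire-load spin loops with smp_cond_load_acquire()/_relaxed()
and the atomic-RMW unlock with a plain RELEASE store:

	/* Open-coded wait: spin with an acquire load until handover. */
	while (!(smp_load_acquire(&node->locked)))
		cpu_relax();

	/*
	 * With the helper: semantically the same wait, but architectures can
	 * implement the polling more efficiently (e.g. arm64 can use a
	 * WFE-based wait), and a _relaxed variant exists for the cases where
	 * ACQUIRE ordering is not needed at that point.
	 */
	smp_cond_load_acquire(&node->locked, VAL);

	/*
	 * Unlock side, once the locked byte is directly addressable in
	 * struct qspinlock: a single RELEASE store instead of an atomic RMW.
	 */
	smp_store_release(&lock->locked, 0);

The exact before/after hunks are of course in the individual patches.)
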
>  arch/x86/include/asm/qspinlock.h          |  19 ++-
>  arch/x86/include/asm/qspinlock_paravirt.h |   3 +-
>  include/asm-generic/barrier.h             |  27 ++++-
>  include/asm-generic/qspinlock.h           |   2 +-
>  include/asm-generic/qspinlock_types.h     |  32 ++++-
>  include/linux/atomic.h                    |   2 +
>  kernel/locking/mcs_spinlock.h             |  10 +-
>  kernel/locking/qspinlock.c                | 191 ++++++++++--------------------
>  kernel/locking/qspinlock_paravirt.h       |  34 ++----
>  9 files changed, 141 insertions(+), 179 deletions(-)
>
> --
> 2.1.4
>
Thread overview: 47+ messages
2018-04-05 16:58 [PATCH 00/10] kernel/locking: qspinlock improvements Will Deacon
2018-04-05 16:58 ` [PATCH 01/10] locking/qspinlock: Don't spin on pending->locked transition in slowpath Will Deacon
2018-04-05 16:58 ` [PATCH 02/10] locking/qspinlock: Remove unbounded cmpxchg loop from locking slowpath Will Deacon
2018-04-05 17:07 ` Peter Zijlstra
2018-04-06 15:08 ` Will Deacon
2018-04-05 17:13 ` Peter Zijlstra
2018-04-05 21:16 ` Waiman Long
2018-04-06 15:08 ` Will Deacon
2018-04-06 20:50 ` Waiman Long
2018-04-06 21:09 ` Paul E. McKenney
2018-04-07 8:47 ` Peter Zijlstra
2018-04-07 23:37 ` Paul E. McKenney
2018-04-09 10:58 ` Will Deacon
2018-04-07 9:07 ` Peter Zijlstra
2018-04-09 10:58 ` Will Deacon
2018-04-09 14:54 ` Will Deacon
2018-04-09 15:54 ` Peter Zijlstra
2018-04-09 17:19 ` Will Deacon
2018-04-10 9:35 ` Peter Zijlstra
2018-09-20 16:08 ` Peter Zijlstra
2018-09-20 16:22 ` Peter Zijlstra
2018-04-09 19:33 ` Waiman Long
2018-04-09 17:55 ` Waiman Long
2018-04-10 13:49 ` Sasha Levin
2018-04-05 16:59 ` [PATCH 03/10] locking/qspinlock: Kill cmpxchg loop when claiming lock from head of queue Will Deacon
2018-04-05 17:19 ` Peter Zijlstra
2018-04-06 10:54 ` Will Deacon
2018-04-05 16:59 ` [PATCH 04/10] locking/qspinlock: Use atomic_cond_read_acquire Will Deacon
2018-04-05 16:59 ` [PATCH 05/10] locking/mcs: Use smp_cond_load_acquire() in mcs spin loop Will Deacon
2018-04-05 16:59 ` [PATCH 06/10] barriers: Introduce smp_cond_load_relaxed and atomic_cond_read_relaxed Will Deacon
2018-04-05 17:22 ` Peter Zijlstra
2018-04-06 10:55 ` Will Deacon
2018-04-05 16:59 ` [PATCH 07/10] locking/qspinlock: Use smp_cond_load_relaxed to wait for next node Will Deacon
2018-04-05 16:59 ` [PATCH 08/10] locking/qspinlock: Merge struct __qspinlock into struct qspinlock Will Deacon
2018-04-07 5:23 ` Boqun Feng
2018-04-05 16:59 ` [PATCH 09/10] locking/qspinlock: Make queued_spin_unlock use smp_store_release Will Deacon
2018-04-05 16:59 ` [PATCH 10/10] locking/qspinlock: Elide back-to-back RELEASE operations with smp_wmb() Will Deacon
2018-04-05 17:28 ` Peter Zijlstra
2018-04-06 11:34 ` Will Deacon
2018-04-06 13:05 ` Andrea Parri
2018-04-06 15:27 ` Will Deacon
2018-04-06 15:49 ` Andrea Parri
2018-04-07 5:47 ` Boqun Feng
2018-04-09 10:47 ` Will Deacon
2018-04-06 13:22 ` Andrea Parri [this message]
2018-04-11 10:20 ` [PATCH 00/10] kernel/locking: qspinlock improvements Catalin Marinas
2018-04-11 15:39 ` Andrea Parri