From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Waiman Long <Waiman.Long@hp.com>
Cc: Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
Arnd Bergmann <arnd@arndb.de>,
linux-arch@vger.kernel.org, x86@kernel.org,
linux-kernel@vger.kernel.org,
Peter Zijlstra <peterz@infradead.org>,
Steven Rostedt <rostedt@goodmis.org>,
Andrew Morton <akpm@linux-foundation.org>,
Michel Lespinasse <walken@google.com>,
Andi Kleen <andi@firstfloor.org>, Rik van Riel <riel@redhat.com>,
Linus Torvalds <torvalds@linux-foundation.org>,
Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
George Spelvin <linux@horizon.com>,
Tim Chen <tim.c.chen@linux.intel.com>,
"Aswin Chandramouleeswaran" <aswin@hp.com>,
Scott J Norton <scott.norton@hp.com>
Subject: Re: [PATCH v8 0/4] qrwlock: Introducing a queue read/write lock implementation
Date: Thu, 9 Jan 2014 22:42:36 -0800 [thread overview]
Message-ID: <20140110064236.GV10038@linux.vnet.ibm.com> (raw)
In-Reply-To: <1389200376-62792-1-git-send-email-Waiman.Long@hp.com>
On Wed, Jan 08, 2014 at 11:59:32AM -0500, Waiman Long wrote:
> v7->v8:
> - Use atomic_t functions (which are implemented in all arch's) to
> modify reader counts.
> - Use smp_load_acquire() & smp_store_release() for barriers.
> - Further tuning in slowpath performance.
This version looks good to me. You now have my Reviewed-by on all
the patches.
Thanx, Paul
> v6->v7:
> - Remove support for unfair lock, so only fair qrwlock will be provided.
> - Move qrwlock.c to the kernel/locking directory.
>
> v5->v6:
> - Modify queue_read_can_lock() to avoid a false positive result.
> - Move the two slowpath functions' performance tuning change from
> patch 4 to patch 1.
> - Add a new optional patch to use the new smp_store_release() function
> if that is merged.
>
> v4->v5:
> - Fix wrong definitions for QW_MASK_FAIR & QW_MASK_UNFAIR macros.
> - Add an optional patch 4 which should only be applied after the
> mcs_spinlock.h header file is merged.
>
> v3->v4:
> - Optimize the fast path with better cold cache behavior and
> performance.
> - Remove some testing code.
> - Make x86 use queue rwlock with no user configuration.
>
> v2->v3:
> - Make read lock stealing the default and fair rwlock an option with
> a different initializer.
> - In queue_read_lock_slowpath(), check irq_count() and force spinning
> and lock stealing in interrupt context.
> - Unify the fair and classic read-side code path, and make write-side
> to use cmpxchg with 2 different writer states. This slows down the
> write lock fastpath to make the read side more efficient, but is
> still slightly faster than a spinlock.
>
> v1->v2:
> - Improve lock fastpath performance.
> - Optionally provide classic read/write lock behavior for backward
> compatibility.
> - Use xadd instead of cmpxchg for the fair reader code path to make
> it immune to reader contention.
> - Run more performance testing.
>
> As mentioned in the LWN article http://lwn.net/Articles/364583/,
> the read/write lock suffers from an unfairness problem: a stream of
> incoming readers can block a waiting writer from getting the lock
> for a long time. In addition, a waiting reader/writer contending for
> a rwlock in local memory has a higher chance of acquiring the lock
> than a reader/writer on a remote node.
>
> This patch set introduces a queue-based read/write lock implementation
> that can largely solve this unfairness problem.
>
> The read lock slowpath will check if the reader is in an interrupt
> context. If so, it will force lock spinning and stealing without
> waiting in a queue. This is to ensure the read lock will be granted
> as soon as possible.
>
> The queue write lock can also be used as a replacement for highly
> contended ticket spinlocks, provided that the increase in lock size
> is not an issue.
>
> This patch set currently provides queue read/write lock support on
> x86 architecture only. Support for other architectures can be added
> later on once architecture-specific support infrastructure is added
> and proper testing is done.
>
> The optional patch 3 has a dependency on the mcs_spinlock.h header
> file, which has not been merged yet. So this patch should only be
> applied after the mcs_spinlock.h header file is merged.
>
> The optional patch 4 uses the new smp_store_release() and
> smp_load_acquire() functions, which are still under review and have
> not been merged yet.
>
> Waiman Long (4):
> qrwlock: A queue read/write lock implementation
> qrwlock x86: Enable x86 to use queue read/write lock
> qrwlock: Use the mcs_spinlock helper functions for MCS queuing
> qrwlock: Use smp_store_release() in write_unlock()
>
> arch/x86/Kconfig | 1 +
> arch/x86/include/asm/spinlock.h | 2 +
> arch/x86/include/asm/spinlock_types.h | 4 +
> include/asm-generic/qrwlock.h | 203 +++++++++++++++++++++++++++++++++
> kernel/Kconfig.locks | 7 +
> kernel/locking/Makefile | 1 +
> kernel/locking/qrwlock.c | 191 +++++++++++++++++++++++++++++++
> 7 files changed, 409 insertions(+), 0 deletions(-)
> create mode 100644 include/asm-generic/qrwlock.h
> create mode 100644 kernel/locking/qrwlock.c
>
Thread overview: 9+ messages
2014-01-08 16:59 [PATCH v8 0/4] qrwlock: Introducing a queue read/write lock implementation Waiman Long
2014-01-08 16:59 ` [PATCH v8 1/4] qrwlock: A " Waiman Long
2014-01-10 6:41 ` Paul E. McKenney
2014-01-08 16:59 ` [PATCH v8 2/4] qrwlock x86: Enable x86 to use queue read/write lock Waiman Long
2014-01-08 16:59 ` [PATCH v8 3/4] qrwlock: Use the mcs_spinlock helper functions for MCS queuing Waiman Long
2014-01-08 16:59 ` [PATCH v8 4/4] qrwlock: Use smp_store_release() in write_unlock() Waiman Long
2014-01-08 16:59 ` Waiman Long
2014-01-10 6:42 ` Paul E. McKenney [this message]
2014-01-10 15:34 ` [PATCH v8 0/4] qrwlock: Introducing a queue read/write lock implementation Waiman Long