From: Will Deacon <will.deacon@arm.com>
To: Waiman Long <waiman.long@hp.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>, Arnd Bergmann <arnd@arndb.de>,
"linux-arch@vger.kernel.org" <linux-arch@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Scott J Norton <scott.norton@hp.com>,
Douglas Hatch <doug.hatch@hp.com>
Subject: Re: [PATCH v5 3/3] locking/qrwlock: Don't contend with readers when setting _QW_WAITING
Date: Tue, 23 Jun 2015 09:37:56 +0100 [thread overview]
Message-ID: <20150623083756.GD31504@arm.com> (raw)
In-Reply-To: <5588CB2C.108@hp.com>
On Tue, Jun 23, 2015 at 03:57:48AM +0100, Waiman Long wrote:
> On 06/22/2015 12:21 PM, Will Deacon wrote:
> > On Fri, Jun 19, 2015 at 04:50:02PM +0100, Waiman Long wrote:
> >> The current cmpxchg() loop used to set the _QW_WAITING flag for
> >> writers in queue_write_lock_slowpath() contends with incoming
> >> readers, possibly causing extra, wasteful cmpxchg() operations.
> >> This patch changes the code to do a byte-sized cmpxchg() to
> >> eliminate contention with new readers.
> > [...]
> >
> >> diff --git a/arch/x86/include/asm/qrwlock.h b/arch/x86/include/asm/qrwlock.h
> >> index a8810bf..5678b0a 100644
> >> --- a/arch/x86/include/asm/qrwlock.h
> >> +++ b/arch/x86/include/asm/qrwlock.h
> >> @@ -7,8 +7,7 @@
> >> #define queued_write_unlock queued_write_unlock
> >> static inline void queued_write_unlock(struct qrwlock *lock)
> >> {
> >> - barrier();
> >> - ACCESS_ONCE(*(u8 *)&lock->cnts) = 0;
> >> + smp_store_release(&lock->wmode, 0);
> >> }
> >> #endif
> > I reckon you could actually use this in the asm-generic header and remove
> > the x86 arch version altogether. Most architectures support single-copy
> > atomic byte access and those that don't (alpha?) can just not use qrwlock
> > (or override write_unlock with atomic_sub).
> >
> > I already have a patch making this change, so I'm happy either way.
>
> Yes, I am aware of that. If you have a patch to make that change, I am
> fine with that too.
Tell you what: I'll rebase my patches on top of yours and post them after
the merge window.
Will
Thread overview: 16+ messages
2015-06-19 15:49 [PATCH v5 0/3] locking/qrwlock: More optimizations in qrwlock Waiman Long
2015-06-19 15:50 ` [PATCH v5 1/3] locking/qrwlock: Rename functions to queued_*() Waiman Long
2015-06-19 15:50 ` [PATCH v5 2/3] locking/qrwlock: Better optimization for interrupt context readers Waiman Long
2015-06-19 15:50 ` [PATCH v5 3/3] locking/qrwlock: Don't contend with readers when setting _QW_WAITING Waiman Long
2015-06-22 16:21 ` Will Deacon
2015-06-23 2:57 ` Waiman Long
2015-06-23 8:37 ` Will Deacon [this message]
2015-06-25 18:35 ` Peter Zijlstra
2015-06-25 20:33 ` Waiman Long
2015-06-26 11:14 ` Will Deacon