From: Waiman Long <waiman.long@hp.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>, Arnd Bergmann <arnd@arndb.de>,
Thomas Gleixner <tglx@linutronix.de>,
linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org,
Will Deacon <will.deacon@arm.com>,
Scott J Norton <scott.norton@hp.com>,
Douglas Hatch <doug.hatch@hp.com>
Subject: Re: [PATCH 4/4] locking/qrwlock: Use direct MCS lock/unlock in slowpath
Date: Tue, 07 Jul 2015 17:59:59 -0400
Message-ID: <559C4BDF.3020605@hp.com>
In-Reply-To: <20150707112449.GR3644@twins.programming.kicks-ass.net>
On 07/07/2015 07:24 AM, Peter Zijlstra wrote:
> On Mon, Jul 06, 2015 at 11:43:06AM -0400, Waiman Long wrote:
>> Lock waiting in the qrwlock uses the spinlock (qspinlock for x86)
>> as the waiting queue. This is slower than using the MCS lock directly,
>> because the extra level of indirection causes more atomic operations
>> to be used, and it leaves two waiting threads spinning on the lock
>> cacheline instead of only one.
> This needs a better explanation. Didn't we find with the qspinlock thing
> that the pending spinner improved performance on light loads?
>
> Taking it out seems counter-intuitive; we would very much like these two
> to be the same.
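To make the cacheline argument from the patch description concrete: in a
lock where waiters spin on one shared word, every blocked thread pulls the
same cacheline back and forth. Below is a minimal userspace sketch (C11
atomics, illustrative names only; this is not the kernel's qspinlock,
which additionally has a pending bit that lets a single waiter spin
directly on the lock word under light contention, which is where the
light-load win comes from):

/* ttas.c - illustrative test-and-test-and-set spinlock (C11), not
 * kernel code.  All waiters spin on the one shared `locked' word, so
 * two blocked threads keep bouncing the same cacheline between CPUs.
 */
#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
	atomic_bool locked;
} ttas_lock_t;

static void ttas_lock(ttas_lock_t *l)
{
	for (;;) {
		/* Spin read-only until the lock looks free... */
		while (atomic_load_explicit(&l->locked, memory_order_relaxed))
			;
		/* ...then try to take it with a single atomic exchange. */
		if (!atomic_exchange_explicit(&l->locked, true,
					      memory_order_acquire))
			return;
	}
}

static void ttas_unlock(ttas_lock_t *l)
{
	atomic_store_explicit(&l->locked, false, memory_order_release);
}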
Yes, for the lightly loaded case, using raw_spin_lock should have an
advantage. It is a different matter when the lock is highly contended;
in that case, the indirection in qspinlock makes it slower. I struggled
over whether to duplicate the locking code in qrwlock, so I sent this
patch out to test the water. I won't insist if you think this is not a
good idea, but I do want to get the previous two patches in, which
should not be controversial.
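For reference, here is a minimal self-contained MCS queue lock sketch
(C11 atomics; illustrative only, not the kernel's mcs_spinlock code).
Each waiter spins on a flag in its own queue node, so at most one thread
spins on any given cacheline - the property the direct MCS slowpath is
after:

/* mcs.c - illustrative MCS queue lock (C11), not kernel code. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct mcs_node {
	_Atomic(struct mcs_node *) next;
	atomic_bool locked;		/* true while this waiter must spin */
};

typedef _Atomic(struct mcs_node *) mcs_lock_t;	/* tail of the queue */

static void mcs_lock(mcs_lock_t *lock, struct mcs_node *node)
{
	struct mcs_node *prev;

	atomic_store_explicit(&node->next, NULL, memory_order_relaxed);
	atomic_store_explicit(&node->locked, true, memory_order_relaxed);

	/* One atomic swap enqueues us at the tail of the wait queue. */
	prev = atomic_exchange_explicit(lock, node, memory_order_acq_rel);
	if (!prev)
		return;		/* queue was empty: lock is ours */

	/* Link behind the old tail, then spin on OUR OWN node only. */
	atomic_store_explicit(&prev->next, node, memory_order_release);
	while (atomic_load_explicit(&node->locked, memory_order_acquire))
		;		/* local spinning, no cacheline ping-pong */
}

static void mcs_unlock(mcs_lock_t *lock, struct mcs_node *node)
{
	struct mcs_node *next =
		atomic_load_explicit(&node->next, memory_order_acquire);

	if (!next) {
		/* No visible successor: try to swing the tail to empty. */
		struct mcs_node *expected = node;
		if (atomic_compare_exchange_strong_explicit(lock, &expected,
				NULL, memory_order_acq_rel,
				memory_order_acquire))
			return;
		/* A successor is mid-enqueue; wait for it to link in. */
		while (!(next = atomic_load_explicit(&node->next,
						     memory_order_acquire)))
			;
	}
	/* Hand off by clearing the successor's flag. */
	atomic_store_explicit(&next->locked, false, memory_order_release);
}

The cost is carrying a queue node per waiter plus the explicit handoff,
which is part of why the lightly loaded case can still favor the plain
spinlock path.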
Cheers,
Longman
Thread overview: 23+ messages
2015-07-06 15:43 [PATCH 0/4] locking/qrwlock: Improve qrwlock performance Waiman Long
2015-07-06 15:43 ` [PATCH 1/4] locking/qrwlock: Better optimization for interrupt context readers Waiman Long
2015-07-06 15:43 ` [PATCH 2/4] locking/qrwlock: Reduce reader/writer to reader lock transfer latency Waiman Long
2015-07-06 18:23 ` Will Deacon
2015-07-06 19:49 ` Waiman Long
2015-07-07 9:17 ` Will Deacon
2015-07-07 11:17 ` Peter Zijlstra
2015-07-07 11:49 ` Will Deacon
2015-07-07 14:30 ` Waiman Long
2015-07-07 17:27 ` Will Deacon
2015-07-07 18:10 ` Will Deacon
2015-07-07 21:29 ` Waiman Long
2015-07-08 9:52 ` Peter Zijlstra
2015-07-08 17:19 ` Will Deacon
2015-07-06 15:43 ` [PATCH 3/4] locking/qrwlock: Reduce writer to writer lock transfer latency Waiman Long
2015-07-06 15:43 ` [PATCH 4/4] locking/qrwlock: Use direct MCS lock/unlock in slowpath Waiman Long
2015-07-07 11:24 ` Peter Zijlstra
2015-07-07 21:59 ` Waiman Long [this message]
2015-07-07 22:13 ` Peter Zijlstra