From: Gregory Haskins <gregory.haskins.ml@gmail.com>
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Gregory Haskins <gregory.haskins@gmail.com>,
Nick Piggin <npiggin@suse.de>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Andi Kleen <ak@suse.de>, Ingo Molnar <mingo@elte.hu>
Subject: Re: [patch 1/4] x86: FIFO ticket spinlocks
Date: Fri, 02 Nov 2007 10:24:22 -0400 [thread overview]
Message-ID: <472B3316.2070802@gmail.com> (raw)
In-Reply-To: <alpine.LFD.0.999.0711010935060.3342@woody.linux-foundation.org>
Linus Torvalds wrote:
>
> On Thu, 1 Nov 2007, Gregory Haskins wrote:
>> I had observed this phenomenon on some 8-ways here as well, but I didn't
>> have the bandwidth to code something up. Thumbs up!
>
> Can you test under interesting loads?
Sure thing. I'll try this next week.
>
> We're interested in:
> - is the unfairness fix really noticeable (or does it just move the
> problem somewhere else, and there is no real change in behaviour)
> - what is the performance impact?
>
> In particular, unfair spinlocks have the potential to perform much better.
I see where you are going here, and I mostly agree. I think the key is
that "given equal contention, let the guy with the hottest cache win".
The problem with the current implementation is that the spinlocks have
no way to gauge the details of the contention. They can only gauge
instantaneous snapshots of state as viewed by each TSL (test-and-set
lock) invocation, which effectively resets your position each time.
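To illustrate the "snapshot" problem, here is a minimal userspace sketch of a classic test-and-set spinlock (my own C11 sketch, not the kernel's actual implementation): every acquisition attempt is an independent atomic exchange, so a waiter carries no record of how long it has been spinning.

```c
#include <stdatomic.h>

/* Hypothetical sketch: a classic test-and-set (TSL) spinlock. */
typedef struct { atomic_flag locked; } tas_lock;

static void tas_lock_init(tas_lock *l)
{
    atomic_flag_clear(&l->locked);
}

static void tas_lock_acquire(tas_lock *l)
{
    /* Each iteration is a fresh race: whoever's test-and-set lands
     * first wins, regardless of arrival order.  A CPU that just
     * released the lock (hot cache line) often re-wins immediately,
     * while a remote waiter's "position" is reset on every attempt --
     * which is exactly how starvation arises. */
    while (atomic_flag_test_and_set_explicit(&l->locked,
                                             memory_order_acquire))
        ;  /* spin */
}

static void tas_lock_release(tas_lock *l)
{
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}
```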
On the flip side, Nick's patches take the opposite extreme. If a lock
is contended, get in line. ;) This has the desirable property of
avoiding starvation. However, it will also tend to cause more bouncing
since you are virtually guaranteed not to re-win the contended lock, as
you point out next.
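The ticket idea behind Nick's patches can be sketched the same way (again my own userspace C11 sketch, not the actual x86 patch): contenders take a ticket in arrival order and spin until the "now serving" counter reaches it, which gives strict FIFO service and rules out starvation.

```c
#include <stdatomic.h>

/* Hypothetical sketch of a FIFO ticket spinlock. */
typedef struct {
    atomic_uint next;   /* next ticket to hand out */
    atomic_uint owner;  /* ticket currently being served */
} ticket_lock;

static void ticket_lock_init(ticket_lock *l)
{
    atomic_init(&l->next, 0);
    atomic_init(&l->owner, 0);
}

static void ticket_lock_acquire(ticket_lock *l)
{
    unsigned int me = atomic_fetch_add_explicit(&l->next, 1,
                                                memory_order_relaxed);
    /* Strict FIFO: a CPU that releases and immediately recontends
     * draws a new, later ticket, so it is virtually guaranteed not to
     * re-win -- hence the extra cache-line bouncing described above. */
    while (atomic_load_explicit(&l->owner, memory_order_acquire) != me)
        ;  /* spin until it is our turn */
}

static void ticket_lock_release(ticket_lock *l)
{
    atomic_fetch_add_explicit(&l->owner, 1, memory_order_release);
}
```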
> Not so much because the spinlock itself acts all that differently, but
> because being unfair also fundamentally tends to keep the data structures
> that are *protected* by the spinlock on just one CPU.
My issue here is that this behavior can also be described as precisely
part of the problem being addressed: that is, both CPUs presumably
*want/need* access to the data or they wouldn't be taking the spinlock
to begin with. So it's really not a question of keeping the structures
on one CPU per se (at least, not for unbounded durations, or the system
won't operate properly).
Rather, I think the key is to minimize the impact by bouncing things
intelligently. ;) I.e., if all things are equal, favor the hottest task
so the data only bounces once instead of twice. Outside of this
condition, operate strict FIFO. If we can reasonably calculate when
this optimization is possible, we will have the best of both worlds. I
have some ideas about ways to extend Nick's algorithm to support this,
which I will submit ASAP.
I think the rest of what you said is very fair: Prove that it's a
problem, this concept helps, and we don't make things worse ;)
Will do, ASAP.
Regards,
-Greg