From: Joel Schopp <jschopp@austin.ibm.com>
To: Ingo Molnar <mingo@elte.hu>
Cc: Olof Johansson <olof@lixom.net>,
lkml <linux-kernel@vger.kernel.org>,
Linus Torvalds <torvalds@osdl.org>, Andrew Morton <akpm@osdl.org>,
Arjan van de Ven <arjan@infradead.org>,
Nicolas Pitre <nico@cam.org>,
Jes Sorensen <jes@trained-monkey.org>,
Al Viro <viro@ftp.linux.org.uk>, Oleg Nesterov <oleg@tv-sign.ru>,
David Howells <dhowells@redhat.com>,
Alan Cox <alan@lxorguk.ukuu.org.uk>,
Christoph Hellwig <hch@infradead.org>, Andi Kleen <ak@suse.de>,
Russell King <rmk+lkml@arm.linux.org.uk>,
Anton Blanchard <anton@samba.org>,
PPC64-dev <linuxppc64-dev@ozlabs.org>
Subject: Re: PowerPC fastpaths for mutex subsystem
Date: Wed, 11 Jan 2006 11:44:47 -0600
Message-ID: <43C5440F.2060503@austin.ibm.com>
In-Reply-To: <20060110230917.GA25285@elte.hu>
> ok. I'll really need to look at "vmstat" output from these. We could
> easily make the mutex slowpath behave like ppc64 semaphores, via the
> attached (untested) patch, but i really think it's the wrong thing to
> do, because it overloads the system with runnable tasks in an
> essentially unlimited fashion [== overscheduling] - they'll all contend
> for the same single mutex.
>
> in synthetic workloads on idle systems such overscheduling can help,
> because the 'luck factor' of the 'thundering herd' of tasks can generate
> a higher total throughput - at the expense of system efficiency. At 8
> CPUs i already measured a net performance loss at 3 tasks! So i think
> the current 'at most 2 tasks runnable' approach of mutexes is the right
> one on a broad range of hardware.
>
> still, i'll try a different patch tomorrow, to keep the number of 'in
> flight' tasks within a certain limit (say at 2) - i suspect that would
> close the performance gap too, on this test.
The fundamental problem is that there is significant latency in waking a waiter
up and getting it actually running so it can acquire the lock. In an ideal
world there would always be exactly one waiter running and trying to acquire
the lock at the moment it is unlocked, and none running before then.

There are better solutions than just madly throwing more waiters in flight on an
unlock. Here are four possibilities off the top of my head:
1) Use a hybrid lock that spins a single waiting thread and sleeps waiters
2..n, so there is always exactly one waiter running and trying to acquire the
lock. This solves the latency problem whenever the lock is held at least as
long as it takes to wake the next waiter, at the cost of the spinning waiter
burning some CPU to buy the decreased latency.
2) Use the classic spin-for-a-while-then-sleep method. This essentially turns
low-latency locks into spinlocks, while still sleeping on locks that are held
longer and/or are more heavily contended.
3) Look at how idle the current CPU is and decide whether to spin or sleep
based on that.
4) Accept that we have a CPU-efficient, high-latency lock and use it appropriately.
I'm not saying any of these four is what we should do. I'm just trying to say
there are options out there that don't involve thundering herds and luck to
address the problem.
Thread overview: 34+ messages
2006-01-04 14:41 [patch 00/21] mutex subsystem, -V14 Ingo Molnar
2006-01-04 23:45 ` Joel Schopp
2006-01-05 2:38 ` Nicolas Pitre
2006-01-05 2:51 ` Linus Torvalds
2006-01-05 3:21 ` Nick Piggin
2006-01-05 3:39 ` Anton Blanchard
2006-01-05 18:04 ` Jesse Barnes
2006-01-05 14:40 ` Ingo Molnar
2006-01-05 16:21 ` Linus Torvalds
2006-01-05 22:03 ` Ingo Molnar
2006-01-05 22:17 ` Linus Torvalds
2006-01-05 22:43 ` Ingo Molnar
2006-01-06 3:49 ` Keith Owens
2006-01-06 7:34 ` Denis Vlasenko
2006-01-05 14:35 ` Ingo Molnar
2006-01-05 16:42 ` Joel Schopp
2006-01-05 22:21 ` Ingo Molnar
2006-01-05 23:06 ` Joel Schopp
2006-01-05 23:26 ` Linus Torvalds
2006-01-05 23:36 ` Joel Schopp
2006-01-05 23:42 ` Ingo Molnar
2006-01-06 0:29 ` Olof Johansson
2006-01-07 17:49 ` PowerPC fastpaths for mutex subsystem Joel Schopp
2006-01-07 22:37 ` Andrew Morton
2006-01-08 7:43 ` Anton Blanchard
2006-01-08 8:00 ` Andrew Morton
2006-01-08 8:23 ` Anton Blanchard
2006-01-09 11:13 ` David Howells
2006-01-08 9:48 ` Ingo Molnar
2006-01-10 22:31 ` Joel Schopp
2006-01-10 23:09 ` Ingo Molnar
2006-01-11 10:52 ` Ingo Molnar
2006-01-11 17:44 ` Joel Schopp [this message]
2006-01-08 10:43 ` Ingo Molnar