From: Ingo Molnar <mingo@elte.hu>
To: Linus Torvalds <torvalds@osdl.org>
Cc: Andi Kleen <ak@suse.de>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Andrew Morton <akpm@osdl.org>,
	Arjan van de Ven <arjanv@infradead.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Alan Cox <alan@lxorguk.ukuu.org.uk>,
	Christoph Hellwig <hch@infradead.org>,
	David Howells <dhowells@redhat.com>,
	Alexander Viro <viro@ftp.linux.org.uk>,
	Oleg Nesterov <oleg@tv-sign.ru>,
	Benjamin LaHaise <bcrl@kvack.org>
Subject: Re: [patch 00/15] Generic Mutex Subsystem
Date: Mon, 19 Dec 2005 20:55:53 +0100	[thread overview]
Message-ID: <20051219195553.GA14155@elte.hu> (raw)
In-Reply-To: <Pine.LNX.4.64.0512191053400.4827@g5.osdl.org>


* Linus Torvalds <torvalds@osdl.org> wrote:

> On Mon, 19 Dec 2005, Ingo Molnar wrote:
> > 
> > in fact, generic mutexes are _more_ fair than struct semaphore in their 
> > wait logic. In the stock semaphore implementation, when a waiter is 
> > woken up, it will retry the lock, and if it fails, it goes back to the 
> > _tail_ of the queue again - waiting one full cycle again.
> 
> Ingo, I don't think that is true.
> 
> It shouldn't be true, at least. The whole point with the "sleeper" 
> count was to not have that happen. Of course, bugs happen, so I won't 
> guarantee that's actually true, but ..

you are right, i based my comments on observed behavior, not on the 
code's intentions.

i haven't actually traced the behavior of semaphores (being mostly 
interested in mutexes ;-), but fairness algorithms always show up as 
heavy context-switchers on SMP (because other CPUs queue themselves as 
waiters, and wakeups go across CPUs all the time), and i'm quite sure 
that contended scenarios with the current semaphore implementation never 
overscheduled. Hence the current semaphores are very likely not fair, 
and sleepers round-robin back to the queue quite often.

but i've got some measurements of how fairness plays out in practice.  
The mutex-based ops go:

 mars:~> ./test-mutex V 16 10
 8 CPUs, running 16 parallel test-tasks.
 checking VFS performance.
 avg ops/sec:               85130
 average cost per op:       206.59 usecs
 average deviance per op:   319.08 usecs

note the 'average cost per op' (in absolute time), and the 'average 
deviance' (measured by summing up the absolute deltas between 
subsequent latencies, and dividing by the total number of ops - a mean 
fluctuation, not a true standard deviation).

With semaphores the results go like this:

 mars:~> ./test-mutex V 16 10
 8 CPUs, running 16 parallel test-tasks.
 checking VFS performance.
 avg ops/sec:               34381
 average cost per op:       512.13 usecs
 average deviance per op:   573.10 usecs

note that even though the ratio between the per-op cost and the 
deviance looks a bit better for semaphores, the pure fact that they 
were much slower lengthened their deviance to well above that of the 
mutexes.

So even if they are fairer, if the system ends up being slower, fairness 
(== observed fluctuations, and the resulting injustices) suffers more 
from that than it gains from the queueing logic. I'd choose the 200 +/- 
320 usecs behavior over the 500 +/- 570 usecs behavior - even though 
the latter has smaller relative fluctuations.

(although i'm still unsure which one is fairer, because i couldn't 
create a scenario that is comparable in terms of fairness: the 
mutex-based workloads are always more efficient, and as a result they 
schedule into the idle thread more often, which creates additional 
noise and may be a reason why their deviance is higher. The semaphore 
workloads are more saturated, which may flatten their deviance.)

> If you are woken up as a waiter on a semaphore, you shouldn't fail to 
> get it. You will be woken first, and nobody else will get at it, 
> because the count has been kept negative or zero even by the waiters, 
> so that a fast-path user shouldn't be able to get the lock without 
> going through the slow path and adding itself (last) to the list.
> 
> But hey, somebody should test it with <n> kernel threads that just do 
> down()/up() and some make-believe work in between to make sure there 
> really is contention, and count how many times they actually get the 
> semaphore. That code has been changed so many times that it may not 
> work the way it is advertized ;)
> 
> [ Oh.  I'm looking at the semaphore code, and I realize that we have a 
>   "wake_up(&sem->wait)" in the __down() path because we had some race long 
>   ago that we fixed by band-aiding over it. Which means that we wake up 
>   sleepers that shouldn't be woken up. THAT may well be part of the 
>   performance problem.. The semaphores are really meant to wake up just 
>   one at a time, but because of that race hack they'll wake up _two_ at a 
>   time - once by up(), once by down().
> 
>   That also destroys the fairness. Does anybody remember why it's that 
>   way? ]
> 
> Ho humm.. That's interesting.

hm, removing that wakeup quickly causes hung test-tasks. (i booted an 
all-mutexes kernel, and did the testing on arch_semaphores, using the 
modified semaphore-sleepers.c code. The test ran for a few seconds, so 
semaphores weren't _totally_ broken, but there's some clear race in 
terms of not waking up a sleeper.)

and even granting that the current semaphore implementation may have a 
fairness bug, i cannot imagine that making it more fair would also 
speed it up. So it could end up with an even larger performance gap to 
the mutex implementation. But in any case, it should be an interesting 
comparison! I personally find the semaphore implementation clever but 
too complex - maybe that's one reason why such bugs can stay hidden 
there. (possibly for many years already ...)

In any case, i concur with you that the fairness design of the two is 
not comparable, and that semaphores _should_ be fairer.

	Ingo

