linux-kernel.vger.kernel.org archive mirror
From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
To: Rik van Riel <riel@redhat.com>
Cc: linux-kernel@vger.kernel.org, aquini@redhat.com,
	walken@google.com, eric.dumazet@gmail.com, lwoodman@redhat.com,
	jeremy@goop.org, Jan Beulich <JBeulich@novell.com>,
	knoel@redhat.com, chegu_vinod@hp.com, mingo@redhat.com
Subject: Re: [PATCH 0/5] x86,smp: make ticket spinlock proportional backoff w/ auto tuning
Date: Wed, 09 Jan 2013 18:20:35 +0530	[thread overview]
Message-ID: <50ED679B.9020306@linux.vnet.ibm.com> (raw)
In-Reply-To: <20130108172632.1126898a@annuminas.surriel.com>

On 01/09/2013 03:56 AM, Rik van Riel wrote:
> Many spinlocks are embedded in data structures; having many CPUs
> pounce on the cache line the lock is in will slow down the lock
> holder, and can cause system performance to fall off a cliff.
>
> The paper "Non-scalable locks are dangerous" is a good reference:
>
> 	http://pdos.csail.mit.edu/papers/linux:lock.pdf
>
> In the Linux kernel, spinlocks are optimized for the case of
> there not being contention. After all, if there is contention,
> the data structure can be improved to reduce or eliminate
> lock contention.
>
> Likewise, the spinlock API should remain simple, and the
> common case of the lock not being contended should remain
> as fast as ever.
>
> However, since spinlock contention should be fairly uncommon,
> we can add functionality into the spinlock slow path that keeps
> system performance from falling off a cliff when there is lock
> contention.
>
> Proportional delay in ticket locks is delaying the time between
> checking the ticket based on a delay factor, and the number of
> CPUs ahead of us in the queue for this lock. Checking the lock
> less often allows the lock holder to continue running, resulting
> in better throughput and preventing performance from dropping
> off a cliff.
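The delay scheme described above can be sketched in userspace C roughly as follows. This is a hedged illustration only, not the actual patch: the struct, function names, and the DELAY_FACTOR constant are all made up for the example, and the kernel code uses per-cpu state and cpu_relax() rather than an empty loop.

```c
#include <stdatomic.h>

/* Illustrative ticket lock with proportional backoff (not the
 * kernel's implementation). */
struct ticket_lock {
	atomic_uint next;   /* next ticket to hand out */
	atomic_uint owner;  /* ticket currently being served */
};

#define DELAY_FACTOR 50		/* illustrative per-waiter spin count */

static void ticket_lock_acquire(struct ticket_lock *lk)
{
	unsigned int me = atomic_fetch_add(&lk->next, 1);

	for (;;) {
		unsigned int cur = atomic_load(&lk->owner);

		if (cur == me)
			return;
		/* Spin longer the further back we are in the queue,
		 * so the contended cache line is polled less often. */
		for (volatile unsigned int i = 0;
		     i < (me - cur) * DELAY_FACTOR; i++)
			;	/* the kernel would use cpu_relax() here */
	}
}

static void ticket_lock_release(struct ticket_lock *lk)
{
	atomic_fetch_add(&lk->owner, 1);
}
```

The key point is that a waiter with N tickets ahead of it delays roughly N times longer between polls, so on average each waiter re-reads the lock word only around the time its turn comes up.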
>
> The test case has a number of threads locking and unlocking a
> semaphore. With just one thread, everything sits in the CPU
> cache and throughput is around 2.6 million operations per
> second, with a 5-10% variation.
>
> Once a second thread gets involved, data structures bounce
> from CPU to CPU, and performance deteriorates to about 1.25
> million operations per second, with a 5-10% variation.
>
> However, as more and more threads get added to the mix,
> performance with the vanilla kernel continues to deteriorate.
> Once I hit 24 threads, on a 24 CPU, 4 node test system,
> performance is down to about 290k operations/second.
>
> With a proportional backoff delay added to the spinlock
> code, performance with 24 threads goes up to about 400k
> operations/second with a 50x delay, and about 900k operations/second
> with a 250x delay. However, with a 250x delay, performance with
> 2-5 threads is worse than with a 50x delay.
>
> Making the code auto-tune the delay factor results in a system
> that performs well with both light and heavy lock contention,
> and should also protect against the (likely) case of the fixed
> delay factor being wrong for other hardware.
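One possible auto-tuning rule in the spirit of the paragraph above can be sketched like this. To be clear, this is a guess at the shape of such a heuristic, not the code from the series: the constants, the variable name, and the grow/shrink ratios are invented, and the real value lives in per-cpu (and in later patches per-lock-hash) storage.

```c
/* Hypothetical delay auto-tuning: grow the estimate aggressively
 * when a poll finds the lock still held, shrink it slowly otherwise,
 * so the value can adapt to different hardware and workloads. */

#define MIN_SPINLOCK_DELAY	1
#define MAX_SPINLOCK_DELAY	16000

static unsigned int spinlock_delay = 50;	/* per-cpu in real code */

static void tune_delay(int lock_was_still_held)
{
	if (lock_was_still_held) {
		/* Polled too early: lengthen the delay. */
		if (spinlock_delay < MAX_SPINLOCK_DELAY)
			spinlock_delay += spinlock_delay / 7 + 1;
	} else {
		/* Got the lock promptly: slowly shorten the delay. */
		if (spinlock_delay > MIN_SPINLOCK_DELAY)
			spinlock_delay--;
	}
}
```

The asymmetry (fast increase, slow decrease) matches the "simple, aggressive 'try to make the delay longer' policy" mentioned for v3: under contention the delay ramps up quickly, while a run of uncontended acquisitions lets it drift back down.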
>
> The attached graph shows the performance of the multi threaded
> semaphore lock/unlock test case, with 1-24 threads, on the
> vanilla kernel, with 10x, 50x, and 250x proportional delay,
> as well as the v1 patch series with autotuning for 2x and 2.7x
> spinning before the lock is obtained, and with the v2 series.
>
> The v2 series integrates several ideas from Michel Lespinasse
> and Eric Dumazet, which should result in better throughput and
> nicer behaviour in situations with contention on multiple locks.
>
> For the v3 series, I tried out all the ideas suggested by
> Michel. They made perfect sense, but in the end it turned
> out they did not work as well as the simple, aggressive
> "try to make the delay longer" policy I have now. Several
> small bug fixes and cleanups have been integrated.
>
> Performance is within the margin of error of v2, so the graph
> has not been updated.
>
> Please let me know if you manage to break this code in any way,
> so I can fix it...
>

The patch series no longer shows the weird behaviour caused by the
underflow (pointed out by Michel) and looks fine.

I ran kernbench on a 32 core (mx3850) machine with a 3.8-rc2 base.
x base_3.8rc2
+ rik_backoff
     N           Min           Max        Median           Avg        Stddev
x   8       222.977        231.16       227.735       227.388     3.1512986
+   8        218.75       232.347      229.1035     228.25425     4.2730225
No difference proven at 95.0% confidence

The run did not show much difference, but I believe a dedicated
spinlock stress test would have shown the benefit.
I'll start running benchmarks on KVM guests now and come back with a
report.



Thread overview: 27+ messages
2013-01-08 22:26 [PATCH 0/5] x86,smp: make ticket spinlock proportional backoff w/ auto tuning Rik van Riel
2013-01-08 22:30 ` [PATCH 3/5] x86,smp: auto tune spinlock backoff delay factor Rik van Riel
2013-01-10  3:13   ` Rafael Aquini
2013-01-10 12:49   ` Michel Lespinasse
2013-01-08 22:31 ` [PATCH 4/5] x86,smp: keep spinlock delay values per hashed spinlock address Rik van Riel
2013-01-10  3:14   ` Rafael Aquini
2013-01-10 13:01   ` Michel Lespinasse
2013-01-10 13:05     ` Rik van Riel
2013-01-10 13:15       ` Michel Lespinasse
2013-01-08 22:32 ` [DEBUG PATCH 5/5] x86,smp: add debugging code to track spinlock delay value Rik van Riel
2013-01-08 22:32 ` [PATCH 2/5] x86,smp: proportional backoff for ticket spinlocks Rik van Riel
2013-01-08 22:50   ` Eric Dumazet
2013-01-08 22:54     ` Rik van Riel
2013-01-10  2:30   ` Rafael Aquini
2013-01-08 22:32 ` [PATCH 1/5] x86,smp: move waiting on contended ticket lock out of line Rik van Riel
2013-01-08 22:43   ` Eric Dumazet
2013-01-10 17:38   ` Raghavendra K T
2013-01-09 12:50 ` Raghavendra K T [this message]
2013-01-10  2:27   ` [PATCH 0/5] x86,smp: make ticket spinlock proportional backoff w/ auto tuning Rafael Aquini
2013-01-10 17:36     ` Raghavendra K T
2013-01-11 20:11       ` Rik van Riel
2013-01-13 18:07         ` Raghavendra K T
2013-01-10 15:19 ` Mike Galbraith
2013-01-10 15:31   ` Rik van Riel
2013-01-10 19:30     ` Mike Galbraith
2013-01-24 13:28       ` Ingo Molnar
2013-01-10 22:24 ` Chegu Vinod
