linux-kernel.vger.kernel.org archive mirror
From: Rik van Riel <riel@redhat.com>
To: Eric Dumazet <eric.dumazet@gmail.com>
Cc: linux-kernel@vger.kernel.org, aquini@redhat.com,
	walken@google.com, lwoodman@redhat.com, jeremy@goop.org,
	Jan Beulich <JBeulich@novell.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [RFC PATCH 3/3 -v2] x86,smp: auto tune spinlock backoff delay factor
Date: Fri, 21 Dec 2012 21:57:40 -0500	[thread overview]
Message-ID: <50D521A4.7050509@redhat.com> (raw)
In-Reply-To: <1356137332.21834.8155.camel@edumazet-glaptop>

On 12/21/2012 07:48 PM, Eric Dumazet wrote:
> On Fri, 2012-12-21 at 18:56 -0500, Rik van Riel wrote:
>> Argh, the first one had a typo in it that did not influence
>> performance with fewer threads running, but that made things
>> worse with more than a dozen threads...
>>
>> Please let me know if you can break these patches.
>> ---8<---
>> Subject: x86,smp: auto tune spinlock backoff delay factor
>
>> +#define MIN_SPINLOCK_DELAY 1
>> +#define MAX_SPINLOCK_DELAY 1000
>> +DEFINE_PER_CPU(int, spinlock_delay) = { MIN_SPINLOCK_DELAY };
>
> Using a single spinlock_delay per cpu assumes there is a single
> contended spinlock on the machine, or that contended
> spinlocks protect the same critical section.

The goal is to reduce bus traffic, and keep total
system performance from falling through the floor.

If we have one lock that takes N cycles to acquire,
and a second contended lock that takes N*2 cycles
to acquire, checking the first lock fewer times
before acquisition, and the second lock more times,
should still result in similar average system
throughput.

I suspect this approach should work well if we have
multiple contended locks in the system.
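The idea can be sketched in user-space C. This is a rough, hypothetical approximation of the patch, not the actual kernel code: a thread-local variable stands in for the per-CPU spinlock_delay, and a plain counted loop stands in for calibrated delay cycles.

```c
#include <pthread.h>
#include <stdatomic.h>

#define MIN_SPINLOCK_DELAY 1
#define MAX_SPINLOCK_DELAY 1000

/* Thread-local stand-in for the patch's per-CPU spinlock_delay. */
static _Thread_local unsigned int spinlock_delay = MIN_SPINLOCK_DELAY;

struct ticketlock {
	atomic_uint next;	/* next ticket to hand out */
	atomic_uint owner;	/* ticket currently being served */
};

static void ticket_lock(struct ticketlock *lk)
{
	unsigned int me = atomic_fetch_add(&lk->next, 1);

	for (;;) {
		unsigned int cur = atomic_load(&lk->owner);
		unsigned int ahead = me - cur;	/* waiters in front of us */

		if (ahead == 0)
			return;			/* our turn */

		/*
		 * Back off proportionally to our position in the queue,
		 * scaled by the auto-tuned delay, without re-reading the
		 * contended cache line in the meantime.
		 */
		for (unsigned int i = 0; i < ahead * spinlock_delay; i++)
			atomic_signal_fence(memory_order_seq_cst);

		/*
		 * Auto-tune: if the owner did not advance at all while we
		 * backed off, the delay is too short for this hold time, so
		 * grow it; if it did advance, shrink it so we do not
		 * overshoot our own ticket.
		 */
		if (atomic_load(&lk->owner) == cur) {
			if (spinlock_delay < MAX_SPINLOCK_DELAY)
				spinlock_delay++;
		} else if (spinlock_delay > MIN_SPINLOCK_DELAY) {
			spinlock_delay--;
		}
	}
}

static void ticket_unlock(struct ticketlock *lk)
{
	atomic_fetch_add(&lk->owner, 1);
}

/* Small demo: hammer one lock from several threads. */
static struct ticketlock demo_lock;
static long demo_count;

static void *demo_worker(void *arg)
{
	long iters = (long)arg;
	for (long i = 0; i < iters; i++) {
		ticket_lock(&demo_lock);
		demo_count++;
		ticket_unlock(&demo_lock);
	}
	return NULL;
}

long run_demo(int nthreads, long iters)
{
	pthread_t tid[64];

	demo_count = 0;
	atomic_store(&demo_lock.next, 0);
	atomic_store(&demo_lock.owner, 0);
	for (int i = 0; i < nthreads; i++)
		pthread_create(&tid[i], NULL, demo_worker, (void *)iters);
	for (int i = 0; i < nthreads; i++)
		pthread_join(tid[i], NULL);
	return demo_count;
}
```

Because the tuned delay converges toward the observed hold time of whatever lock this CPU keeps hitting, a mix of short-hold and long-hold contended locks should pull the delay toward a workable average rather than break the scheme.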

> Given that we probably know where the contended spinlocks are, couldn't
> we use a real scalable implementation for them ?

The scalable locks tend to have a more complex
locking API, which results in slightly higher
overhead in the uncontended (common) case.  That
means we cannot use them everywhere.

Also, scalable locks merely make sure that N+1
CPUs perform the same as N CPUs when there is
lock contention.  They do not cause the system
to actually scale.

For actual scalability, the data structures
themselves would need to be changed, so that
their locking requirements become less strict.

> A known contended one is the Qdisc lock in network layer. We added a
> second lock (busylock) to lower a bit the pressure on a separate cache
> line, but a scalable lock would be much better...

My locking patches are meant for dealing with the
offenders we do not know about, to make sure that
system performance does not fall off a cliff when
we run into a surprise.

Known scalability bugs we can fix.

Unknown ones should not cause somebody's system
to fail.

> I guess there are patent issues...

At least one of the scalable lock implementations (the
MCS lock) has been published since 1991, so there should
not be any patent issues with that one.



Thread overview: 53+ messages
2012-12-21 23:49 [RFC PATCH 0/3] x86,smp: make ticket spinlock proportional backoff w/ auto tuning Rik van Riel
2012-12-21 23:50 ` [RFC PATCH 1/3] x86,smp: move waiting on contended lock out of line Rik van Riel
2012-12-22  3:05   ` Steven Rostedt
2012-12-22  4:40   ` Michel Lespinasse
2012-12-22  4:48     ` Rik van Riel
2012-12-23 22:52   ` Rafael Aquini
2012-12-21 23:51 ` [RFC PATCH 2/3] x86,smp: proportional backoff for ticket spinlocks Rik van Riel
2012-12-22  3:07   ` Steven Rostedt
2012-12-22  3:14     ` Steven Rostedt
2012-12-22  3:47       ` Rik van Riel
2012-12-22  4:44   ` Michel Lespinasse
2012-12-23 22:55   ` Rafael Aquini
2012-12-21 23:51 ` [RFC PATCH 3/3] x86,smp: auto tune spinlock backoff delay factor Rik van Riel
2012-12-21 23:56   ` [RFC PATCH 3/3 -v2] " Rik van Riel
2012-12-22  0:18     ` Eric Dumazet
2012-12-22  2:43       ` Rik van Riel
2012-12-22  0:48     ` Eric Dumazet
2012-12-22  2:57       ` Rik van Riel [this message]
2012-12-22  3:29     ` Eric Dumazet
2012-12-22  3:44       ` Rik van Riel
2012-12-22  3:33     ` Steven Rostedt
2012-12-22  3:50       ` Rik van Riel
2012-12-26 19:10         ` Eric Dumazet
2012-12-26 19:27           ` Eric Dumazet
2012-12-26 19:51           ` Rik van Riel
2012-12-27  6:07             ` Michel Lespinasse
2012-12-27 14:27               ` Eric Dumazet
2012-12-27 14:35                 ` Rik van Riel
2012-12-27 18:41                   ` Jan Beulich
2012-12-27 19:09                     ` Rik van Riel
2013-01-03  9:05                       ` Jan Beulich
2013-01-03 13:24                         ` Steven Rostedt
2013-01-03 13:35                           ` Eric Dumazet
2013-01-03 15:32                             ` Steven Rostedt
2013-01-03 16:10                               ` Eric Dumazet
2013-01-03 16:45                                 ` Steven Rostedt
2013-01-03 17:54                                   ` Eric Dumazet
2012-12-27 18:49                   ` Eric Dumazet
2012-12-27 19:31                     ` Rik van Riel
2012-12-29  0:42                       ` Eric Dumazet
2012-12-29 10:27           ` Michel Lespinasse
2013-01-03 18:17             ` Eric Dumazet
2012-12-22  0:47   ` [RFC PATCH 3/3] " David Daney
2012-12-22  2:51     ` Rik van Riel
2012-12-22  3:49       ` Steven Rostedt
2012-12-22  3:58         ` Rik van Riel
2012-12-23 23:08           ` Rafael Aquini
2012-12-22  5:42   ` Michel Lespinasse
2012-12-22 14:32     ` Rik van Riel
2013-01-02  0:06 ` ticket spinlock proportional backoff experiments Michel Lespinasse
2013-01-02  0:09   ` [PATCH 1/2] x86,smp: simplify __ticket_spin_lock Michel Lespinasse
2013-01-02 15:31     ` Rik van Riel
2013-01-02  0:10   ` [PATCH 2/2] x86,smp: proportional backoff for ticket spinlocks Michel Lespinasse
