public inbox for linux-kernel@vger.kernel.org
From: Nick Piggin <npiggin@suse.de>
To: Valdis.Kletnieks@vt.edu
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Andi Kleen <ak@suse.de>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Ingo Molnar <mingo@elte.hu>,
	linux-arch@vger.kernel.org,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: [patch 2/2] x86_64: ticket lock spinlock
Date: Thu, 9 Aug 2007 03:40:12 +0200	[thread overview]
Message-ID: <20070809014012.GA12539@wotan.suse.de> (raw)
In-Reply-To: <20906.1186594318@turing-police.cc.vt.edu>

On Wed, Aug 08, 2007 at 01:31:58PM -0400, Valdis.Kletnieks@vt.edu wrote:
> On Wed, 08 Aug 2007 06:24:44 +0200, Nick Piggin said:
> 
> > After this, we can no longer spin on any locks with preempt enabled,
> > and cannot reenable interrupts when spinning on an irq safe lock, because
> > at that point we have already taken a ticket, and it would deadlock if
> > the same CPU tries to take the lock again.  These are hackish anyway: if
> > the lock happens to be taken under a preempt or interrupt disabled section,
> > then it will just have the same latency problems. The real fix is to keep
> > critical sections short, and ensure locks are reasonably fair (which this
> > patch does).
> 
> Any guesstimates how often we do that sort of hackish thing currently, and
> how hard it will be to debug each one?  "Deadlock if the same CPU tries to
> take the lock again" is pretty easy to notice - are there more subtle failure
> modes (larger loops of locks, etc)?

I'll try to explain better:

The old spinlocks re-enable preemption and interrupts while they spin
waiting for a held lock. This was done because people noticed some
long latencies while spinning. The problem, however, is that preemption
and interrupts can only be re-enabled if they were enabled before the
spin_lock call. So if you have code that perhaps takes nested locks,
or locks while interrupts are already disabled, then you get the latency
problems back.

So the non-hack fix is to keep critical sections short (which is what
we've been working at forever), and to have relatively fair locks
(which is what this patch does).

A side-effect of this patch is that it can no longer enable preemption
or interrupts while spinning, so my changelog is a rationale for why
that shouldn't be a big problem.




Thread overview: 9+ messages
2007-08-08  4:22 [patch 1/2] spinlock: lockbreak cleanup Nick Piggin
2007-08-08  4:24 ` [patch 2/2] x86_64: ticket lock spinlock Nick Piggin
2007-08-08 10:26   ` Andi Kleen
2007-08-09  1:42     ` Nick Piggin
2007-08-09  9:54       ` Andi Kleen
2007-08-08 17:31   ` Valdis.Kletnieks
2007-08-09  1:40     ` Nick Piggin [this message]
2007-08-11  0:07 ` [patch 1/2] spinlock: lockbreak cleanup Andi Kleen
2007-08-13  7:52   ` Nick Piggin
