From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Michel Lespinasse <walken@google.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>,
	Rik van Riel <riel@redhat.com>, Ingo Molnar <mingo@redhat.com>,
	David Howells <dhowells@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Eric Dumazet <edumazet@google.com>,
	"Eric W. Biederman" <ebiederm@xmission.com>,
	Manfred Spraul <manfred@colorfullife.com>,
	linux-kernel@vger.kernel.org, john.stultz@linaro.org
Subject: Re: [RFC PATCH 1/6] kernel: implement queue spinlock API
Date: Thu, 7 Feb 2013 21:03:42 -0800	[thread overview]
Message-ID: <20130208050342.GA23362@linux.vnet.ibm.com> (raw)
In-Reply-To: <20130208043643.GN2545@linux.vnet.ibm.com>

On Thu, Feb 07, 2013 at 08:36:43PM -0800, Paul E. McKenney wrote:
> On Thu, Feb 07, 2013 at 07:48:33PM -0800, Michel Lespinasse wrote:
> > On Thu, Feb 7, 2013 at 4:40 PM, Paul E. McKenney
> > <paulmck@linux.vnet.ibm.com> wrote:
> > > On Thu, Feb 07, 2013 at 04:03:54PM -0800, Eric Dumazet wrote:
> > >> It adds yet another memory write to store the node pointer in the
> > >> lock...
> > >>
> > >> I suspect it's going to increase false sharing.
> > >
> > > On the other hand, compared to straight MCS, it reduces the need to
> > > pass the node address around.  Furthermore, the node pointer is likely
> > > to be in the same cache line as the lock word itself, and finally
> > > some architectures can do a double-pointer store.
> > >
> > > Of course, it might well be slower, but it seems like it is worth
> > > giving it a try.
> > 
> > Right. Another nice point about this approach is that there needs to
> > be only one node per spinning CPU, so the node pointers (both tail and
> > next) might be replaced with CPU identifiers, which would bring the
> > spinlock size down to the same as with the ticket spinlock (which in
> > turn makes it that much more likely that we'll have atomic stores of
> > that size).
> 
> Good point!  I must admit that this is one advantage of having the
> various _irq spinlock acquisition primitives disable irqs before
> spinning.  ;-)
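
To make the idea above concrete, here is a rough, untested sketch of an
MCS-style queue spinlock with one node per CPU and the tail stored in the
lock word as a CPU number rather than a pointer.  The names and helpers
below are illustrative only (and use present-day kernel primitives), not
anything from the actual patch series:

#include <linux/atomic.h>
#include <linux/percpu.h>
#include <linux/smp.h>

struct q_node {
        struct q_node *next;    /* successor in the wait queue */
        int locked;             /* set to 1 by our predecessor on hand-off */
};

struct q_spinlock {
        atomic_t tail;          /* 0 = unlocked, otherwise (tail CPU + 1) */
};

static DEFINE_PER_CPU(struct q_node, q_spin_node);

/* Caller must have preemption (or irqs) disabled, as usual for spinlocks. */
static inline void q_spin_lock(struct q_spinlock *lock)
{
        struct q_node *node = this_cpu_ptr(&q_spin_node);
        int prev;

        node->next = NULL;
        node->locked = 0;

        /* Make ourselves the new tail; xchg implies a full barrier. */
        prev = atomic_xchg(&lock->tail, smp_processor_id() + 1);
        if (!prev)
                return;         /* lock was free, we now own it */

        /* Link behind the old tail, then spin on our own cache line. */
        WRITE_ONCE(per_cpu_ptr(&q_spin_node, prev - 1)->next, node);
        while (!smp_load_acquire(&node->locked))
                cpu_relax();
}

static inline void q_spin_unlock(struct q_spinlock *lock)
{
        struct q_node *node = this_cpu_ptr(&q_spin_node);
        struct q_node *next = READ_ONCE(node->next);

        if (!next) {
                /* No successor visible: try to mark the lock free. */
                int me = smp_processor_id() + 1;

                if (atomic_cmpxchg(&lock->tail, me, 0) == me)
                        return;
                /* A new waiter raced in; wait for it to link itself. */
                while (!(next = READ_ONCE(node->next)))
                        cpu_relax();
        }
        smp_store_release(&next->locked, 1);    /* hand the lock over */
}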

Right...  For spinlocks that -don't- disable irqs, you need to deal with
the possibility that a CPU gets interrupted while spinning, and the
interrupt handler also tries to acquire a queued lock.  One way to deal
with this is to have a node per CPU per irq nesting level.  Of course, if
interrupt handlers always disable irqs when acquiring a spinlock, then you
only need two nodes per CPU.
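
A minimal sketch of that per-CPU, per-context node idea, building on the
sketch earlier in this message; the context list, names, and tail encoding
are assumptions for illustration, not the actual patches.  The
two-nodes-per-CPU case would simply shrink the array to two entries:

#include <linux/hardirq.h>

/*
 * Purely illustrative: one queue node per CPU per context, so that an
 * interrupt (or NMI) arriving while task-level code is spinning can
 * safely queue on another -- or even the same -- queued lock.
 */
enum q_ctx {
        Q_CTX_TASK,
        Q_CTX_SOFTIRQ,
        Q_CTX_HARDIRQ,
        Q_CTX_NMI,
        Q_NR_CTX,
};

static DEFINE_PER_CPU(struct q_node, q_ctx_nodes[Q_NR_CTX]);

static inline struct q_node *q_this_node(void)
{
        int ctx = Q_CTX_TASK;

        if (in_nmi())
                ctx = Q_CTX_NMI;
        else if (in_irq())
                ctx = Q_CTX_HARDIRQ;
        else if (in_serving_softirq())
                ctx = Q_CTX_SOFTIRQ;

        return this_cpu_ptr(&q_ctx_nodes[ctx]);
}

/*
 * The lock word would then encode both pieces of the tail, for example
 * tail = cpu * Q_NR_CTX + ctx + 1, with 0 still meaning "unlocked".
 */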

							Thanx, Paul


Thread overview: 25+ messages
2013-01-22 23:13 [RFC PATCH 0/6] fast queue spinlocks Michel Lespinasse
2013-01-22 23:13 ` [RFC PATCH 1/6] kernel: implement queue spinlock API Michel Lespinasse
2013-02-07 22:34   ` Paul E. McKenney
2013-02-07 22:56     ` Eric Dumazet
2013-02-07 23:53       ` Paul E. McKenney
2013-02-07 23:58       ` Michel Lespinasse
2013-02-08  0:03         ` Eric Dumazet
2013-02-08  0:40           ` Paul E. McKenney
2013-02-08  3:48             ` Michel Lespinasse
2013-02-08  4:36               ` Paul E. McKenney
2013-02-08  5:03                 ` Paul E. McKenney [this message]
2013-02-08  5:11                   ` Michel Lespinasse
2013-02-08 16:17                     ` Paul E. McKenney
2013-02-07 23:14     ` John Stultz
2013-02-08  0:35     ` Michel Lespinasse
2013-01-22 23:13 ` [RFC PATCH 2/6] net: convert qdisc busylock to use " Michel Lespinasse
2013-01-22 23:13 ` [RFC PATCH 3/6] ipc: convert ipc objects " Michel Lespinasse
2013-01-22 23:13 ` [RFC PATCH 4/6] kernel: faster queue spinlock implementation Michel Lespinasse
2013-01-23 21:55   ` Rik van Riel
2013-01-23 23:52     ` Michel Lespinasse
2013-01-24  0:18   ` Eric Dumazet
2013-01-25 20:30   ` [RFC PATCH 7/6] kernel: document fast queue spinlocks Rik van Riel
2013-01-22 23:13 ` [RFC PATCH 5/6] net: qdisc busylock updates to account for queue spinlock api change Michel Lespinasse
2013-01-22 23:13 ` [RFC PATCH 6/6] ipc: object locking " Michel Lespinasse
2013-01-22 23:17 ` [RFC PATCH 0/6] fast queue spinlocks Michel Lespinasse
