public inbox for linux-kernel@vger.kernel.org
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Christoph Lameter <cl@linux-foundation.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Pekka Enberg <penberg@cs.helsinki.fi>,
	Ingo Molnar <mingo@elte.hu>,
	Jeremy Fitzhardinge <jeremy@goop.org>,
	Andi Kleen <andi@firstfloor.org>,
	"Pallipadi, Venkatesh" <venkatesh.pallipadi@intel.com>,
	Suresh Siddha <suresh.b.siddha@intel.com>,
	Jens Axboe <jens.axboe@oracle.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 2/2] smp_call_function: use rwlocks on queues rather than rcu
Date: Tue, 26 Aug 2008 06:40:30 -0700	[thread overview]
Message-ID: <20080826134030.GD7097@linux.vnet.ibm.com> (raw)
In-Reply-To: <200808261513.40586.nickpiggin@yahoo.com.au>

On Tue, Aug 26, 2008 at 03:13:40PM +1000, Nick Piggin wrote:
> On Tuesday 26 August 2008 01:46, Christoph Lameter wrote:
> > Peter Zijlstra wrote:
> > > If we combine these two cases, and flip the counter as soon as we've
> > > enqueued one callback, unless we're already waiting for a grace period
> > > to end - which gives us a longer window to collect callbacks.
> > >
> > > And then the rcu_read_unlock() can do:
> > >
> > >   if (dec_and_zero(my_counter) && my_index == dying)
> > >     raise_softirq(RCU)
> > >
> > > to fire off the callback stuff.
> > >
> > > /me ponders - there must be something wrong with that...
> > >
> > > Aaah, yes, the dec_and_zero is non-trivial due to the fact that it's a
> > > distributed counter. Bugger..
> >
> > Then let's make it per-cpu. If we get the cpu ops in, then dec_and_zero
> > would be very cheap.
> 
> Let's be very careful before making rcu read locks costly. Any reduction
> in grace periods would be great, but IMO RCU should not be used in cases
> where performance depends on the freed data remaining in cache.

Indeed!

But if you were in a situation where read-side overhead was irrelevant
(perhaps a mythical machine with zero-cost atomics and cache misses),
then one approach would be to combine Oleg Nesterov's QRCU with the
callback processing from Andrea Arcangeli's implementation from the 2001
timeframe.  Of course, if your cache misses really were zero cost,
then you wouldn't care about the data remaining in cache.  So maybe
a machine where cache misses to other CPUs' caches are free, but misses
to main memory are horribly expensive?

Anyway, the trick would be to adapt QRCU (http://lkml.org/lkml/2007/2/25/18)
to store the index in the task structure (as opposed to returning it
from rcu_read_lock()), and have a single global queue of callbacks,
guarded by a global lock.  Then rcu_read_unlock() can initiate callback
processing if the counter decrements down to zero, and call_rcu() would
also initiate a counter switch in the case where the non-current counter
was zero -- and this operation would be guarded by the same lock that
guards the callback queue.

But I doubt that this would be satisfactory on 4,096-CPU machines.
At least not in most cases.  ;-)

							Thanx, Paul


Thread overview: 33+ messages
2008-08-22  0:29 [PATCH 2/2] smp_call_function: use rwlocks on queues rather than rcu Jeremy Fitzhardinge
2008-08-22  1:53 ` Nick Piggin
2008-08-22  6:28 ` Ingo Molnar
2008-08-22  7:06   ` Pekka Enberg
2008-08-22  7:12     ` Ingo Molnar
2008-08-22  9:12       ` Nick Piggin
2008-08-22 14:01     ` Christoph Lameter
2008-08-22 15:11       ` Paul E. McKenney
2008-08-22 17:14         ` Christoph Lameter
2008-08-22 18:29           ` Paul E. McKenney
2008-08-22 18:33             ` Andi Kleen
2008-08-22 18:35               ` Jeremy Fitzhardinge
2008-08-23  7:34                 ` Andi Kleen
2008-08-24  4:55                   ` Jeremy Fitzhardinge
2008-08-24  9:01                     ` Andi Kleen
2008-08-22 22:40               ` Paul E. McKenney
2008-08-22 18:36             ` Christoph Lameter
2008-08-22 19:52               ` Paul E. McKenney
2008-08-22 20:03                 ` Christoph Lameter
2008-08-22 20:53                   ` Paul E. McKenney
2008-08-25 10:31                     ` Peter Zijlstra
2008-08-25 15:12                       ` Paul E. McKenney
2008-08-25 15:22                         ` Peter Zijlstra
2008-08-25 15:46                           ` Christoph Lameter
2008-08-25 15:51                             ` Peter Zijlstra
2008-08-26 13:43                               ` Paul E. McKenney
2008-08-26 14:07                                 ` Peter Zijlstra
2008-08-27 15:16                                   ` Paul E. McKenney
2008-08-25 20:04                             ` Paul E. McKenney
2008-08-26  5:13                             ` Nick Piggin
2008-08-26 13:40                               ` Paul E. McKenney [this message]
2008-08-25 15:44                         ` Christoph Lameter
2008-08-25 20:05                           ` Paul E. McKenney
