public inbox for linux-kernel@vger.kernel.org
From: Jeremy Fitzhardinge <jeremy@goop.org>
To: Andi Kleen <andi@firstfloor.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Christoph Lameter <cl@linux-foundation.org>,
	Pekka Enberg <penberg@cs.helsinki.fi>,
	Ingo Molnar <mingo@elte.hu>,
	Nick Piggin <nickpiggin@yahoo.com.au>,
	"Pallipadi, Venkatesh" <venkatesh.pallipadi@intel.com>,
	Suresh Siddha <suresh.b.siddha@intel.com>,
	Jens Axboe <jens.axboe@oracle.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 2/2] smp_call_function: use rwlocks on queues rather than rcu
Date: Sat, 23 Aug 2008 21:55:38 -0700	[thread overview]
Message-ID: <48B0E9CA.4040808@goop.org> (raw)
In-Reply-To: <20080823073457.GV23334@one.firstfloor.org>

Andi Kleen wrote:
> On Fri, Aug 22, 2008 at 11:35:46AM -0700, Jeremy Fitzhardinge wrote:
>   
>> Andi Kleen wrote:
>>     
>>> Right now my impression is that it is not well understood why
>>> the kmalloc makes the IPI that much slower. In theory a kmalloc
>>> shouldn't be all that slow, it's essentially just a 
>>> "disable interrupts; unlink object from cpu cache; enable interrupts"
>>> with some window dressing. kfree() is similar.
>>>
>>> Does it bounce a cache line on freeing perhaps?
>>>       
>> I think it's just an assumption that it would be slower.  Has anyone
>> measured it?
>>     
>
> It's likely slower than no kmalloc, because more instructions will be
> executed; the question is just how much.
>
>   
>> (Note: The measurements I posted do not cover this path, because it was
>> on a two cpu system, and it was always using the call-single path.)
>>     
>
> Ah, so it was already 25% slower even without the kmalloc? I thought
> that figure included it. That doesn't sound good. Any idea where that
> slowdown comes from?

Just a longer code path, I think.  It calls the generic
smp_call_function_mask(), which does a popcount on the cpu mask
(which it needs to do anyway), sees only one bit set, and then punts to
the smp_call_function_single() path.  If we maintained an online-CPU
count, we could fast-path the call to smp_call_function_single() in
the two-CPU case more efficiently (though we would still need to scan
the mask to extract the cpu number).

Or alternatively, maybe it isn't actually worth special casing
smp_call_function_single() with a multi-queue smp_call_function_mask()
implementation?

    J


Thread overview: 33+ messages
2008-08-22  0:29 [PATCH 2/2] smp_call_function: use rwlocks on queues rather than rcu Jeremy Fitzhardinge
2008-08-22  1:53 ` Nick Piggin
2008-08-22  6:28 ` Ingo Molnar
2008-08-22  7:06   ` Pekka Enberg
2008-08-22  7:12     ` Ingo Molnar
2008-08-22  9:12       ` Nick Piggin
2008-08-22 14:01     ` Christoph Lameter
2008-08-22 15:11       ` Paul E. McKenney
2008-08-22 17:14         ` Christoph Lameter
2008-08-22 18:29           ` Paul E. McKenney
2008-08-22 18:33             ` Andi Kleen
2008-08-22 18:35               ` Jeremy Fitzhardinge
2008-08-23  7:34                 ` Andi Kleen
2008-08-24  4:55                   ` Jeremy Fitzhardinge [this message]
2008-08-24  9:01                     ` Andi Kleen
2008-08-22 22:40               ` Paul E. McKenney
2008-08-22 18:36             ` Christoph Lameter
2008-08-22 19:52               ` Paul E. McKenney
2008-08-22 20:03                 ` Christoph Lameter
2008-08-22 20:53                   ` Paul E. McKenney
2008-08-25 10:31                     ` Peter Zijlstra
2008-08-25 15:12                       ` Paul E. McKenney
2008-08-25 15:22                         ` Peter Zijlstra
2008-08-25 15:46                           ` Christoph Lameter
2008-08-25 15:51                             ` Peter Zijlstra
2008-08-26 13:43                               ` Paul E. McKenney
2008-08-26 14:07                                 ` Peter Zijlstra
2008-08-27 15:16                                   ` Paul E. McKenney
2008-08-25 20:04                             ` Paul E. McKenney
2008-08-26  5:13                             ` Nick Piggin
2008-08-26 13:40                               ` [PATCH 2/2] smp_call_function: use rwlocks on queues rather than rcu Paul E. McKenney
2008-08-25 15:44                         ` [PATCH 2/2] smp_call_function: use rwlocks on queues rather than rcu Christoph Lameter
2008-08-25 20:05                           ` Paul E. McKenney
