linuxppc-dev.lists.ozlabs.org archive mirror
From: Benjamin Herrenschmidt <benh@kernel.crashing.org>
To: Liu Dave-r63238 <DaveLiu@freescale.com>
Cc: ppc-dev <linuxppc-dev@ozlabs.org>,
	Paul Mackerras <paulus@samba.org>,
	Kumar Gala <galak@gate.crashing.org>
Subject: RE: fsl booke MM vs. SMP questions
Date: Mon, 28 May 2007 19:24:08 +1000	[thread overview]
Message-ID: <1180344249.19517.74.camel@localhost.localdomain> (raw)
In-Reply-To: <995B09A8299C2C44B59866F6391D263595EF5F@zch01exm21.fsl.freescale.net>

On Mon, 2007-05-28 at 17:05 +0800, Liu Dave-r63238 wrote:
> Ben,
> 
> > You never "lose" IPIs in the sense that you always get at least one
> > interrupt for N IPIs; it's up to software to make sure not to lose
> > any event. The Linux kernel arch code usually handles that with a
> > synchronous IPI mechanism.
> 
> The synchronous IPI mechanism for TLB invalidation is quite
> expensive: there is the interrupt overhead plus the time spent
> waiting for the sync.

Yup, there is, though you can try to optimize it such that you only sync
the CPUs involved with the IPIs, which are often only a few.

> I also noticed that TLB invalidation on PowerPC 750 SMP systems
> uses the IPI mechanism; that is because the 750 cannot broadcast
> TLB invalidation ops.

Do we support that in Linux?
 
> Is the broadcast tlbivax instruction more efficient than the IPI
> mechanism?
> 
> Did you evaluate the performance of the two different approaches?

Not really... it depends on bus traffic, plus the need to spinlock the
broadcast tlbivax as well, etc.

I'm not working on real HW at the moment. I don't know what the exact
characteristics of your target HW are...

Ben.

Thread overview: 20+ messages
2007-05-21  7:06 fsl booke MM vs. SMP questions Benjamin Herrenschmidt
     [not found] ` <1179741447.3660.7.camel@localhost.localdomain>
     [not found]   ` <1179742083.32247.689.camel@localhost.localdomain>
2007-05-21 11:37     ` Dave Liu
2007-05-21 22:07       ` Benjamin Herrenschmidt
2007-05-22  3:09         ` Benjamin Herrenschmidt
2007-05-22 10:56           ` Dave Liu
2007-05-22 22:42             ` Benjamin Herrenschmidt
2007-05-23  2:38               ` Dave Liu
2007-05-23  3:08                 ` Benjamin Herrenschmidt
2007-05-28  9:05                   ` Liu Dave-r63238
2007-05-28  9:24                     ` Benjamin Herrenschmidt [this message]
2007-05-28  9:37                       ` Liu Dave-r63238
2007-05-28 10:00                         ` Benjamin Herrenschmidt
2007-05-28 10:23                           ` Gabriel Paubert
2007-05-28 10:28                             ` Benjamin Herrenschmidt
2007-05-22  8:46         ` Gabriel Paubert
2007-05-22  9:14           ` Benjamin Herrenschmidt
2007-05-22 10:02             ` Gabriel Paubert
2007-05-22 10:05               ` Benjamin Herrenschmidt
2007-05-23  9:12                 ` Gabriel Paubert
2007-05-22  3:03 ` Kumar Gala
