From: Segher Boessenkool <segher@kernel.crashing.org>
To: Arnd Bergmann <arnd@arndb.de>
Cc: linuxppc-dev@ozlabs.org, Olaf Hering <olaf@aepfle.de>,
	cbe-oss-dev@ozlabs.org
Subject: Re: [Cbe-oss-dev] [patch 3/3] cell: prevent alignment interrupt on local store
Date: Thu, 12 Apr 2007 21:57:45 +0200
Message-ID: <117b47cdd5a232f9cb57f421e285558d@kernel.crashing.org>
In-Reply-To: <200704122055.05048.arnd@arndb.de>

> I don't know how many versions of libc you are currently building, but it
> probably makes sense to have at least one that uses altivec, and one for
> in-order (e.g. cell) and out-of-order (e.g. power5) pipelines each.
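
As a rough sketch only of what such per-pipeline builds could look
like -- the exact flag spellings depend on the GCC version, and
-mcpu=cell in particular only exists in newer compilers:

  gcc -O2 -mcpu=970    -maltivec   # OoO core with Altivec (PPC970/G5)
  gcc -O2 -mcpu=power5             # OoO core, no Altivec on POWER5
  gcc -O2 -mcpu=cell   -maltivec   # in-order Cell PPU, VMX enabled

one variant per library, with the dynamic linker picking one at run
time (e.g. via its platform-specific library subdirectories).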

Something compiled for one in-order CPU will not run
very well on any other in-order CPU; each core has its
own specific hazards (as does any CPU core, but on an
in-order core it tends to _hurt_ when you hit one).
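
A concrete case: the classic hazard on the Cell PPU is load-hit-store,
a load that hits a still-pending store. A minimal C sketch, assuming
the two pointers may alias:

  /* If a == b at run time, the load of *b hits the still-pending
     store to *a.  An OoO core hides most of that latency; an
     in-order core like the Cell PPU eats the full penalty.  */
  int touch(int *a, int *b, int x)
  {
          *a = x;         /* store */
          return *b + 1;  /* load right behind it */
  }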

(Almost) all "generic" optimisations for in-order cores
(schedule dependent insns far apart, ...) help even
*more* on OoOE cores since those tend to be wider.
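
The textbook example of such an optimisation: break one long
dependency chain into several independent ones, so dependent insns
end up far apart. A minimal C sketch (note it reassociates the FP
adds, so results can differ in the last bits):

  /* One accumulator: every add waits on the previous one.  */
  float sum1(const float *a, int n)
  {
          float s = 0.0f;
          int i;
          for (i = 0; i < n; i++)
                  s += a[i];
          return s;
  }

  /* Four accumulators: four independent chains, so dependent adds
     are four insns apart.  Helps an in-order core a lot, and a
     wide OoO core exploits the extra parallelism even better.  */
  float sum4(const float *a, int n)
  {
          float s0 = 0, s1 = 0, s2 = 0, s3 = 0;
          int i;
          for (i = 0; i + 3 < n; i += 4) {
                  s0 += a[i];
                  s1 += a[i+1];
                  s2 += a[i+2];
                  s3 += a[i+3];
          }
          for (; i < n; i++)
                  s0 += a[i];
          return (s0 + s1) + (s2 + s3);
  }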

The big issue on the Cell PPU is that it simply cannot
execute half of the insns in the PowerPC architecture
at a reasonable speed.


Segher

Thread overview: 33+ messages
2007-04-10 11:15 [patch 3/3] cell: prevent alignment interrupt on local store Akinobu Mita
2007-04-10 12:52 ` Segher Boessenkool
2007-04-11  3:06   ` Akinobu Mita
2007-04-10 21:22 ` Benjamin Herrenschmidt
2007-04-11  2:56   ` Akinobu Mita
2007-04-11  3:30     ` Benjamin Herrenschmidt
2007-04-11 21:03       ` Segher Boessenkool
2007-04-12  4:23         ` Olaf Hering
2007-04-12  5:26           ` Benjamin Herrenschmidt
2007-04-12  6:33             ` Olaf Hering
2007-04-12  6:38               ` Benjamin Herrenschmidt
2007-04-12  8:31                 ` Gabriel Paubert
2007-04-12  8:48                   ` Benjamin Herrenschmidt
2007-04-12  6:50           ` Segher Boessenkool
2007-04-12  6:57             ` [Cbe-oss-dev] " Michael Ellerman
2007-04-12  7:07               ` Segher Boessenkool
2007-04-12 18:43           ` Arnd Bergmann
2007-04-12 18:55             ` Arnd Bergmann
2007-04-12 19:57               ` Segher Boessenkool [this message]
2007-04-12 19:52             ` Segher Boessenkool
2007-04-12 13:01   ` [RFC, PATCH] selection of CPU optimization Arnd Bergmann
2007-04-12 16:45     ` Kumar Gala
2007-04-12 17:26       ` [Cbe-oss-dev] " Arnd Bergmann
2007-04-12 18:17         ` Kumar Gala
2007-04-12 19:25           ` Arnd Bergmann
2007-04-12 20:04           ` Olof Johansson
2007-04-12 20:01             ` Segher Boessenkool
2007-04-12 20:22               ` Olof Johansson
2007-04-12 20:22                 ` Segher Boessenkool
2007-04-12 19:50         ` Segher Boessenkool
2007-04-13  0:10           ` Arnd Bergmann
2007-04-13  2:03             ` Olof Johansson
2007-04-13 18:43             ` Segher Boessenkool
