From: Gabriel Paubert <paubert@iram.es>
To: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Bryan Rittmeyer <bryan@staidm.org>,
	Linux/PPC Development <linuxppc-dev@lists.linuxppc.org>
Subject: Re: IBM 750GX SMP on Marvell Discovery II or III?
Date: Fri, 14 May 2004 11:11:56 +0200
Message-ID: <20040514091156.GA30680@iram.es>
In-Reply-To: <Pine.GSO.4.58.0405141001001.27660@waterleaf.sonytel.be>


On Fri, May 14, 2004 at 10:02:10AM +0200, Geert Uytterhoeven wrote:
>
> On Wed, 12 May 2004, Bryan Rittmeyer wrote:
> > On Wed, May 12, 2004 at 04:21:04PM +0200, Geert Uytterhoeven wrote:
> > > Solution: divide memory in pieces, run multiple instances of Linux, each on its
> > > own CPU and memory piece, and use a piece of uncached RAM for implementing
> > > communication channels between CPUs ;-)
> >
> > Non-cacheable I/O throughput on the 60x bus is horrid; might be better to put a
> > 1000Mbps NIC on each CPU and cable em together ;-\
>
> You can always put the real data in cacheable memory, and keep only some
> control descriptors in uncached memory. Needs some explicit cache handling, but
> should be faster.
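
(For concreteness, such a channel might look roughly like the sketch
below. Every name and the layout are invented for illustration, and it
assumes the payload actually needs flushing, which a snooping bus may
make unnecessary:)

    /* A sketch only: names and layout are made up, not from any real
     * driver.  The descriptor sits in an uncached mapping; the payload
     * is ordinary cacheable RAM, flushed explicitly before publishing. */
    #include <stdint.h>

    #define CACHE_LINE 32                 /* 750-class L1 line size */

    struct chan_desc {
            volatile uint32_t ready;      /* producer sets, consumer clears */
            uint32_t len;
            uint32_t paddr;               /* physical address of payload */
            uint32_t pad[CACHE_LINE / 4 - 3]; /* one descriptor per line */
    };

    /* Flush a cacheable buffer out to memory, one line at a time. */
    static void dcbf_range(void *p, unsigned long len)
    {
            unsigned long a = (unsigned long)p & ~(CACHE_LINE - 1);
            unsigned long end = (unsigned long)p + len;

            for (; a < end; a += CACHE_LINE)
                    asm volatile("dcbf 0,%0" : : "r"(a) : "memory");
            asm volatile("sync" : : : "memory");  /* wait for the flushes */
    }

    /* Producer side: flush the data, then publish through uncached RAM. */
    static void chan_send(struct chan_desc *d, void *buf, uint32_t len)
    {
            dcbf_range(buf, len);
            d->len = len;
            asm volatile("eieio" : : : "memory"); /* order the uncached stores */
            d->ready = 1;
    }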

No, the problem was the coherency of the instruction and data caches.
The data caches are coherent; it is just that MEI has no Shared state,
so you'd rather avoid having two processors actively reading from the
same cache lines, but that's about all. Just map the shared pages
through a non-execute segment so that you are sure that instructions
are never fetched from them.
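
(For pages you do execute from, the usual 60x recipe has to be done by
hand, roughly as below; the kernel's flush_icache_range() does this
job. The 32-byte line size is an assumption for 750-class parts:)

    /* A sketch of the classic sequence for making freshly written code
     * visible to the instruction fetcher; assumes 32-byte cache lines. */
    static void sync_icache_range(unsigned long start, unsigned long len)
    {
            unsigned long end = start + len;
            unsigned long a;

            for (a = start & ~31UL; a < end; a += 32)
                    asm volatile("dcbst 0,%0" : : "r"(a)); /* dcache -> memory */
            asm volatile("sync");                  /* stores complete */
            for (a = start & ~31UL; a < end; a += 32)
                    asm volatile("icbi 0,%0" : : "r"(a));  /* drop stale icache lines */
            asm volatile("sync; isync");           /* discard prefetch, refetch */
    }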

Hmmm, now that I think of it, this means that one processor fetching
an instruction line will invalidate the same cache line in the L2 cache
of the other processor. Which means that the L2 cache is actually
useless for sharing code, and you might as well force it to cache only
data by fiddling with L2CR.
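
(Roughly like this, assuming a 750-style L2CR; the L2DO "data only" bit
value and the SPR number are quoted from memory, so check the manual.
It needs supervisor mode, and the L2 should really be flushed first:)

    /* A sketch: make the L2 cache data only via L2CR[L2DO].  The SPR
     * number and bit value are assumptions (750 manuals / the kernel's
     * reg.h); run in supervisor mode with the L2 flushed first. */
    #define SPRN_L2CR   1017
    #define L2CR_L2DO   0x00400000  /* cache data only, no instructions */

    static inline void l2_data_only(void)
    {
            unsigned long l2cr;

            asm volatile("mfspr %0,%1" : "=r"(l2cr) : "i"(SPRN_L2CR));
            l2cr |= L2CR_L2DO;
            asm volatile("sync; mtspr %1,%0; sync; isync"
                         : : "r"(l2cr), "i"(SPRN_L2CR) : "memory");
    }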

Well, MEI caches are actually worse for SMP than I believed. They
work well enough for UP with DMA.

	Gabriel
