From: Gabriel Paubert <paubert@iram.es>
To: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>,
Dan Malek <dan@embeddededge.com>, Amit Shah <amitshah@gmx.net>,
linuxppc-dev list <linuxppc-dev@lists.linuxppc.org>
Subject: Re: IBM 750GX SMP on Marvell Discovery II or III?
Date: Wed, 12 May 2004 13:53:30 +0200
Message-ID: <20040512115330.GA30295@iram.es>
In-Reply-To: <1084357578.1933.28.camel@gaston>

On Wed, May 12, 2004 at 08:26:19PM +1000, Benjamin Herrenschmidt wrote:
>
> >
> > Are you sure? Since the cache lines are in the other processor memory,
> > they will be flushed to RAM when they are fetched by the processor,
> > provided that you can force the coherence bit on instruction fetches
> > (this is possible IIRC).
>
> Coherency of the data cache lines is one thing... getting the icbi
> broadcast is another. Normal coherency will not help if you don't get
> the icache of the other CPU to snoop your icbi and invalidate the trash
> it has in its icache.
>
> > As I said, I believe the real problem is multithreaded applications.
>
> Which isn't a simple problem...

Indeed, it is not really solvable in any reasonable way, since disabling
the icache is far too unreasonable ;-)
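
To make the icbi point concrete, here is roughly the per-line sequence
that freshly written code relies on. This is only my own illustration,
not anything lifted from the kernel; on SMP the icbi only cleans the
other CPU's icache if that CPU snoops it, which is exactly what the
750GX does not do:

#include <stdint.h>

#define L1_LINE 32	/* L1 cache line size on 750-class parts */

static void sync_icache_range(uintptr_t start, uintptr_t stop)
{
	uintptr_t p;

	/* Push the modified data cache lines out to memory... */
	for (p = start & ~(uintptr_t)(L1_LINE - 1); p < stop; p += L1_LINE)
		asm volatile("dcbst 0,%0" : : "r" (p) : "memory");
	asm volatile("sync");	/* ...and wait for the stores to land. */

	/* Then drop the (possibly stale) instruction cache lines. */
	for (p = start & ~(uintptr_t)(L1_LINE - 1); p < stop; p += L1_LINE)
		asm volatile("icbi 0,%0" : : "r" (p) : "memory");
	asm volatile("sync; isync");	/* discard prefetched instructions */
}
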
But my point was that Paul's example, a process being rescheduled on
another processor, is actually quite solvable (provided it is the sole
owner of its MM context). You don't lose much by flushing the whole
icache on an MEI system (no shared state in the 750's coherency
protocol), compared with the hardware overhead of all the invalidations
and flushes that the process switch causes anyway.
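
For what it's worth, a whole-icache invalidate is cheap on a 750-class
CPU. A rough sketch of what I have in mind, purely my own illustration
(the function name, and the idea of calling it from the context switch
path when such a process lands on a new CPU, are assumptions; only the
HID0/ICFI encoding comes from the 750 manuals, and of course this has
to run in supervisor mode, i.e. inside the kernel):

#define SPRN_HID0  1008		/* HID0 special purpose register */
#define HID0_ICFI  0x00000800	/* instruction cache flash invalidate */

/* Invalidate the entire L1 icache of the CPU we are running on. */
static inline void icache_flash_invalidate(void)
{
	unsigned long hid0;

	asm volatile(
		"mfspr	%0,%1\n\t"
		"ori	%0,%0,%2\n\t"	/* set ICFI: drop every icache entry */
		"mtspr	%1,%0\n\t"
		"xori	%0,%0,%2\n\t"	/* clear the bit again */
		"mtspr	%1,%0\n\t"
		"isync"
		: "=&r" (hid0)
		: "i" (SPRN_HID0), "i" (HID0_ICFI)
		: "memory");
}

I believe the boot-time cache setup for these CPUs already pulses ICFI
in much the same way.
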
Regards,
Gabriel
** Sent via the linuxppc-dev mail list. See http://lists.linuxppc.org/

Thread overview: 22+ messages
2004-05-10 7:28 IBM 750GX SMP on Marvell Discovery II or III? Amit Shah
2004-05-10 23:36 ` Paul Mackerras
2004-05-11 2:09 ` Dan Malek
2004-05-11 3:03 ` Paul Mackerras
2004-05-11 15:46 ` Dan Malek
2004-05-11 17:23 ` Huailin Chen
2004-05-11 17:31 ` Amit Shah
2004-05-11 20:51 ` Huailin Chen
2004-05-12 0:17 ` Paul Mackerras
2004-05-12 0:12 ` Paul Mackerras
2004-05-12 7:57 ` Giuliano Pochini
2004-05-12 8:00 ` Gabriel Paubert
2004-05-12 10:26 ` Benjamin Herrenschmidt
2004-05-12 11:53 ` Gabriel Paubert [this message]
2004-05-12 11:46 ` Paul Mackerras
2004-05-12 13:45 ` Gabriel Paubert
2004-05-12 14:21 ` Geert Uytterhoeven
2004-05-12 14:30 ` Amit Shah
2004-05-13 4:30 ` Bryan Rittmeyer
2004-05-14 8:02 ` Geert Uytterhoeven
2004-05-14 9:11 ` Gabriel Paubert
2004-05-11 3:08 ` Huailin Chen