From: Benjamin Herrenschmidt <benh@kernel.crashing.org>
To: Eugene Surovegin <ebs@ebshome.net>
Cc: debian-powerpc@lists.debian.org, linuxppc-dev@ozlabs.org
Subject: Re: Not coherent cache DMA for G3/G4 CPUs: clarification needed
Date: Fri, 21 Apr 2006 08:40:05 +1000
Message-ID: <1145572806.4517.10.camel@localhost.localdomain>
In-Reply-To: <20060420211908.GC25755@gate.ebshome.net>

On Thu, 2006-04-20 at 14:19 -0700, Eugene Surovegin wrote:
> On Thu, Apr 20, 2006 at 02:13:21PM -0700, Eugene Surovegin wrote:
> > On Fri, Apr 21, 2006 at 07:06:13AM +1000, Benjamin Herrenschmidt wrote:
> > > Unfortunately, he has to do things a bit differently. He can't afford to
> > > have the kernel BAT mapping cover his non-cacheable pages. Thus he needs
> > > a reserved pool. Last I looked at our coherent code, it didn't reserve
> > > memory at all, just address space, thus assuming the CPU can handle
> > > having both a cacheable and a non-cacheable mapping of the same pages...
> > > (On 6xx this is deadly even if you don't access those cacheable pages
> > > because the CPU prefetch may do it for you).
> >
> > Ben, is this _real_ problem on 6xx or just a theory? Does 6xx actually
> > prefetch beyond page boundary?
> >
> > So far, all "prefetching" I saw which broke non-coherent DMA was not
> > due to the CPU doing prefetching, but _software_ prefetching being
> > too aggressive.
>
> Even if this "prefetching" problem is real, instead of implementing
> separate pool for allocations which will be quite rare at best, just
> allocate guard space before your consistent memory and stop worrying
> about it.
That won't necessarily help with the speculative execution problem, and
in fact, how do you do that in practice?
Ben.
Thread overview: 30+ messages
2006-04-20 18:57 Not coherent cache DMA for G3/G4 CPUs: clarification needed Gerhard Pircher
2006-04-20 20:38 ` Eugene Surovegin
2006-04-20 20:56 ` Gerhard Pircher
2006-04-20 21:02 ` Eugene Surovegin
2006-04-20 21:10 ` Gerhard Pircher
2006-04-20 21:55 ` Eugene Surovegin
2006-04-20 22:08 ` Gerhard Pircher
2006-04-24 19:21 ` Mark A. Greer
2006-04-21 4:38 ` Benjamin Herrenschmidt
2006-04-21 8:03 ` Gerhard Pircher
2006-04-21 14:33 ` Brent Cook
2006-04-21 21:51 ` Benjamin Herrenschmidt
2006-04-27 21:31 ` Mark A. Greer
2006-04-27 21:53 ` Benjamin Herrenschmidt
2006-04-27 22:08 ` Mark A. Greer
2006-04-29 17:57 ` Gerhard Pircher
2006-04-20 21:06 ` Benjamin Herrenschmidt
2006-04-20 21:13 ` Eugene Surovegin
2006-04-20 21:19 ` Eugene Surovegin
2006-04-20 22:40 ` Benjamin Herrenschmidt [this message]
2006-04-20 22:39 ` Benjamin Herrenschmidt
2006-04-20 23:46 ` Gabriel Paubert
2006-04-21 0:09 ` Benjamin Herrenschmidt
2006-04-20 21:33 ` Eugene Surovegin
2006-04-20 22:41 ` Benjamin Herrenschmidt
2006-04-21 8:21 ` Gerhard Pircher
2006-04-20 21:03 ` Benjamin Herrenschmidt
2006-04-20 21:33 ` Gerhard Pircher
2006-04-20 22:07 ` Gabriel Paubert
2006-04-20 22:26 ` Gerhard Pircher