From: Matt Porter <mporter@kernel.crashing.org>
To: Bret Indrelee <Bret.Indrelee@qlogic.com>
Cc: Roland Dreier <roland@topspin.com>,
Matt Porter <mporter@kernel.crashing.org>,
Linux PPC Embedded mailing list
<linuxppc-embedded@lists.linuxppc.org>
Subject: Re: Any restrictions on DMA address boundry?
Date: Thu, 25 Sep 2003 12:56:53 -0700
Message-ID: <20030925125653.A16002@home.com>
In-Reply-To: <Pine.LNX.4.44.0309251306450.12966-100000@spider.ancor.com>; from Bret.Indrelee@qlogic.com on Thu, Sep 25, 2003 at 01:15:15PM -0500
On Thu, Sep 25, 2003 at 01:15:15PM -0500, Bret Indrelee wrote:
> I've read through the old thread about short DMAs, but still there are
> things that aren't clear to me.
>
> What exactly is the issue?
The issue is confined to processors that do not support hardware
snooping of the external bus. On those parts, the caches must be
managed in software, and software cache management has a
granularity of the processor's cache line size. When a buffer is
allocated using the allowed methods (as defined in
DMA-mapping.txt) to obtain memory for use in DMA, there is no
guarantee that the buffer is cacheline aligned. Imagine one
buffer allocated for internal driver management and another
buffer allocated for inbound DMA transactions from a bus master;
they can share the same cacheline. When an inbound DMA read
transaction occurs, the usual streaming DMA API call is made, and
the platform-specific implementation invalidates the entire
cacheline. This can corrupt the contents of the management
buffer, which is accessed by the CPU only.
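To make that concrete, here is a minimal sketch (mine, not code
from DMA-mapping.txt; the struct and function names are
hypothetical) of two small buffers that can end up sharing a
cache line:

#include <linux/types.h>
#include <linux/dma-mapping.h>

/* Hypothetical driver state: two small adjacent buffers.  Nothing
 * here prevents them from sharing a cache line. */
struct foo_priv {
	u8 cpu_only[16];	/* CPU bookkeeping, never DMA'd */
	u8 rx_buf[16];		/* written by the PCI bus master */
};

static void foo_start_rx(struct device *dev, struct foo_priv *priv)
{
	dma_addr_t handle;

	/* On a software-coherent CPU this invalidates every cache
	 * line covering rx_buf.  If a line also holds cpu_only[],
	 * any dirty CPU stores to cpu_only[] are silently
	 * discarded. */
	handle = dma_map_single(dev, priv->rx_buf,
				sizeof(priv->rx_buf), DMA_FROM_DEVICE);

	/* ... hand "handle" to the device; dma_unmap_single() when
	 * the transfer completes ... */
}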
> As I understand it, if there is a DMA to the same cache line as something
> that the CPU is referencing, you've got a cache problem.
On that type of processor (no hardware snooping), yes.
> Does it matter what type of transfer the PCI device is doing? If it
> always does 32-bit burst memory write transfers instead of memory
> write & invalidate does that make a difference?
>
> Right now, I'm interested in the PPC and x86 compatible (Pentium,
> Pentium 4, Geode) systems, trying to understand the differences and
> requirements of each.
Which PPCs? Classic PPC != PPC8xx != PPC40x != Book E PPC. :)
PPC8xx and PPC4xx require software cache coherency. If you want
your products to work on PPC44x (I hope you do; they are targeted
at markets where QLogic storage controllers are used), ensuring
that your DMA buffers are cacheline aligned at the head and tail
is important.
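For example, a sketch (mine, not from this thread) using the
annotations in <linux/cache.h>:

#include <linux/types.h>
#include <linux/cache.h>

/* Hypothetical layout: ____cacheline_aligned starts rx_buf on its
 * own cache line, and since the struct's alignment becomes
 * L1_CACHE_BYTES, sizeof() rounds up so rx_buf's last line is not
 * shared either -- provided the allocation itself starts on a
 * cache line boundary. */
struct foo_priv {
	u8 cpu_only[16];			/* CPU-only data */
	u8 rx_buf[64] ____cacheline_aligned;	/* DMA target */
};

Alternatively, buffers that can live in coherent memory can be
allocated with pci_alloc_consistent()/dma_alloc_coherent(), which
sidesteps the alignment problem at the cost of uncached access on
these parts.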
-Matt