From: karl.beldan@gmail.com (Karl Beldan)
To: linux-arm-kernel@lists.infradead.org
Subject: About dma_sync_single_for_{cpu,device}
Date: Tue, 31 Jul 2012 08:45:57 +0200 [thread overview]
Message-ID: <20120731064557.GA4676@gobelin> (raw)
In-Reply-To: <20120730202401.GA4947@gobelin>
Hi,
(This is an email originally addressed to the linux-kernel
mailing-list.)
On our board we have an MV78200 SoC and a network device between which we
transfer memory chunks through DDRAM using an external DMA controller.
We use the DMA API to handle these transfers.
To send a chunk of data from the SoC to the network device, we:
- prepare a buffer with a leading header embedding a pattern,
- trigger the xfer and wait for an irq
// The device updates the pattern and then triggers an irq
- upon irq we check the pattern for the xfer completion
I was expecting the following to work (note that the sync calls take the
dma_addr_t handle returned by dma_map_single(), not the virtual address):

    addr = dma_map_single(dev, buffer, size, DMA_TO_DEVICE);
    dma_sync_single_for_device(dev, addr, pattern_size, DMA_FROM_DEVICE);
    dev_send(buffer);
    // wait for irq (don't peek in the buffer) ... got irq
    dma_sync_single_for_cpu(dev, addr, pattern_size, DMA_FROM_DEVICE);
    if (!xfer_done(buffer)) // not the RAM value
            dma_sync_single_for_device(dev, addr, pattern_size, DMA_FROM_DEVICE);
    [...]
[...]
But this does not work: the buffer pattern does not reflect the DDRAM
value.
On the other hand, the following works:

    [...]
    // wait for irq (don't peek in the buffer) ... got irq
    dma_sync_single_for_device(dev, addr, pattern_size, DMA_FROM_DEVICE);
    if (!xfer_done(buffer)) // RAM value
    [...]
Looking at
dma-mapping.c:__dma_page_cpu_to_{dev,cpu}() and
proc-feroceon.S:feroceon_dma_{,un}map_area,
this behavior is not surprising: sync_single_for_cpu goes through the
unmap path, which only invalidates the outer cache, while
sync_single_for_device invalidates both the inner and outer caches.
It seems that:
- we need to invalidate after the RAM has been updated,
- we need to invalidate with dma_sync_single_for_device() rather than
  dma_sync_single_for_cpu() before checking the value.
Is that correct?
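If that reading is right, the whole sequence reduces to: clean for the
device at map time, then invalidate with sync_single_for_device once the
device has written the completion pattern. A sketch (dev_send() and
xfer_done() are our local helpers, not kernel API):

```c
/* Sketch only -- streaming mapping of a buffer the device writes back into. */
addr = dma_map_single(dev, buffer, size, DMA_TO_DEVICE); /* cleans caches  */
dev_send(buffer);                       /* device owns the buffer from here */
/* ... wait for irq; the device has updated the pattern in DDRAM ...        */
dma_sync_single_for_device(dev, addr, pattern_size, DMA_FROM_DEVICE);
/* inner + outer caches invalidated: xfer_done() now reads the RAM value    */
done = xfer_done(buffer);
```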
Maybe the following comment in dma-mapping.c explains the situation:
/*
* The DMA API is built upon the notion of "buffer ownership". A buffer
* is either exclusively owned by the CPU (and therefore may be accessed
* by it) or exclusively owned by the DMA device. These helper functions
* represent the transitions between these two ownership states.
*
* Note, however, that on later ARMs, this notion does not work due to
* speculative prefetches. We model our approach on the assumption that
* the CPU does do speculative prefetches, which means we clean caches
* before transfers and delay cache invalidation until transfer completion.
*
*/
Thanks for your input,
Regards,
Karl
Thread overview: 9+ messages
[not found] <20120730202401.GA4947@gobelin>
2012-07-31 6:45 ` Karl Beldan [this message]
2012-07-31 6:59 ` About dma_sync_single_for_{cpu,device} Clemens Ladisch
2012-07-31 7:27 ` Karl Beldan
2012-07-31 7:34 ` Clemens Ladisch
2012-07-31 8:30 ` Karl Beldan
2012-07-31 9:09 ` Russell King - ARM Linux
2012-07-31 19:31 ` Karl Beldan
2012-07-31 20:08 ` Russell King - ARM Linux
2012-08-01 6:50 ` Karl Beldan