From: Felix Radensky <felix@embedded-sol.com>
To: "linuxppc-dev@ozlabs.org" <linuxppc-dev@ozlabs.org>
Subject: FSL DMA engine transfer to PCI memory
Date: Mon, 24 Jan 2011 23:47:22 +0200
Message-ID: <4D3DF36A.5050609@embedded-sol.com>

Hi,

I'm trying to use the FSL DMA engine to perform a DMA transfer from a
memory buffer obtained by kmalloc() to PCI memory. This is on a
custom board based on P2020 running linux-2.6.35. The PCI device is an
Altera FPGA, connected directly to the SoC PCI-E controller.

01:00.0 Unassigned class [ff00]: Altera Corporation Unknown device 0004 (rev 01)
         Subsystem: Altera Corporation Unknown device 0004
         Control: I/O- Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B-
         Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR-
         Interrupt: pin A routed to IRQ 16
         Region 0: Memory at c0000000 (32-bit, non-prefetchable) [size=128K]
         Capabilities: [50] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable-
                 Address: 0000000000000000  Data: 0000
         Capabilities: [78] Power Management version 3
                 Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                 Status: D0 PME-Enable- DSel=0 DScale=0 PME-
         Capabilities: [80] Express Endpoint IRQ 0
                 Device: Supported: MaxPayload 256 bytes, PhantFunc 0, ExtTag-
                 Device: Latency L0s <64ns, L1 <1us
                 Device: AtnBtn- AtnInd- PwrInd-
                 Device: Errors: Correctable- Non-Fatal- Fatal- Unsupported-
                 Device: RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
                 Device: MaxPayload 128 bytes, MaxReadReq 512 bytes
                 Link: Supported Speed 2.5Gb/s, Width x1, ASPM L0s, Port 1
                 Link: Latency L0s unlimited, L1 unlimited
                 Link: ASPM Disabled RCB 64 bytes CommClk- ExtSynch-
                 Link: Speed 2.5Gb/s, Width x1
         Capabilities: [100] Virtual Channel
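
The DMA channel is obtained through the dmaengine API, roughly like this
(simplified sketch, error handling trimmed):

     dma_cap_mask_t mask;
     struct dma_chan *chan;

     dma_cap_zero(mask);
     dma_cap_set(DMA_MEMCPY, mask);

     /* take any channel that can do memcpy; on P2020 this is fsldma */
     chan = dma_request_channel(mask, NULL, NULL);
     if (!chan)
         return -ENODEV;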


I can successfully writel() to PCI memory via the address obtained from
pci_ioremap_bar().
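Roughly, that sanity test looks like this (simplified sketch; pdev stands
for the Altera device, the value is arbitrary):

     void __iomem *fpga_mem;

     fpga_mem = pci_ioremap_bar(pdev, 0);   /* BAR0, 128K, non-prefetchable */
     if (!fpga_mem)
         return -ENOMEM;

     writel(0x12345678, fpga_mem);          /* value can be read back */
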
Here's my DMA transfer routine:

static int dma_transfer(struct dma_chan *chan, void *dst, void *src, size_t len)
{
     int rc = 0;
     dma_addr_t dma_src;
     dma_addr_t dma_dst;
     dma_cookie_t cookie;
     struct completion cmp;
     enum dma_status status;
     enum dma_ctrl_flags flags = 0;
     struct dma_device *dev = chan->device;
     struct dma_async_tx_descriptor *tx = NULL;
     unsigned long tmo = msecs_to_jiffies(FPGA_DMA_TIMEOUT_MS);

     dma_src = dma_map_single(dev->dev, src, len, DMA_TO_DEVICE);
     if (dma_mapping_error(dev->dev, dma_src)) {
         printk(KERN_ERR "Failed to map src for DMA\n");
         return -EIO;
     }

     /* destination is the ioremap()ed PCI BAR address, used directly */
     dma_dst = (dma_addr_t)dst;

     flags = DMA_CTRL_ACK |
         DMA_COMPL_SRC_UNMAP_SINGLE |
         DMA_COMPL_SKIP_DEST_UNMAP |
         DMA_PREP_INTERRUPT;

     tx = dev->device_prep_dma_memcpy(chan, dma_dst, dma_src, len, flags);
     if (!tx) {
         printk(KERN_ERR "%s: Failed to prepare DMA transfer\n", __FUNCTION__);
         dma_unmap_single(dev->dev, dma_src, len, DMA_TO_DEVICE);
         return -ENOMEM;
     }

     init_completion(&cmp);
     tx->callback = dma_callback;
     tx->callback_param = &cmp;
     cookie = tx->tx_submit(tx);

     if (dma_submit_error(cookie)) {
         printk(KERN_ERR "%s: Failed to start DMA transfer\n", __FUNCTION__);
         return -ENOMEM;
     }

     dma_async_issue_pending(chan);

     tmo = wait_for_completion_timeout(&cmp, tmo);
     status = dma_async_is_tx_complete(chan, cookie, NULL, NULL);

     if (tmo == 0) {
         printk(KERN_ERR "%s: Transfer timed out\n", __FUNCTION__);
         rc = -ETIMEDOUT;
     } else if (status != DMA_SUCCESS) {
         printk(KERN_ERR "%s: Transfer failed: status is %s\n", __FUNCTION__,
                status == DMA_ERROR ? "error" : "in progress");

         dev->device_control(chan, DMA_TERMINATE_ALL, 0);
         rc = -EIO;
     }

     return rc;
}
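
dma_callback() is not shown above; it just signals the completion, roughly:

static void dma_callback(void *arg)
{
     struct completion *cmp = arg;

     complete(cmp);
}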

The destination address is the PCI memory address returned by
pci_ioremap_bar(). The transfer silently fails: the destination buffer
contents don't change, but no error condition is reported.
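
For reference, the call site looks roughly like this (simplified sketch;
pdev, chan and FPGA_DMA_BUF_SIZE are placeholders):

     void __iomem *fpga_mem = pci_ioremap_bar(pdev, 0);   /* BAR0 of the FPGA */
     void *buf = kmalloc(FPGA_DMA_BUF_SIZE, GFP_KERNEL);

     /* dst is the ioremap() cookie; dma_transfer() casts it to dma_addr_t */
     rc = dma_transfer(chan, (void *)fpga_mem, buf, FPGA_DMA_BUF_SIZE);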

What am I doing wrong?

Thanks a lot in advance.

Felix.
