linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH v3 0/4]  dmaengine: add support mmp-pdma
@ 2012-08-14  4:11 Zhangfei Gao
  2012-08-14  4:11 ` [PATCH v3 1/4] dmaengine: mmp-pdma support Zhangfei Gao
                   ` (3 more replies)
  0 siblings, 4 replies; 14+ messages in thread
From: Zhangfei Gao @ 2012-08-14  4:11 UTC (permalink / raw)
  To: linux-arm-kernel

v2->v3
Use "#dma-channels" instead of "dma-channels", as suggested by Arnd

v1->v2
Synced with Arnd and Haojian; changed the compatible name to "marvell,pdma-1.0"
Used platforms: pxa25x, pxa27x, pxa3xx, pxa93x, pxa168, pxa910, pxa688.
Use the IP name rather than the platform name

v0->v1
Updated the dt members and description according to Arnd's suggestion
Use dma_slave_config.slave_id to pass the drcmr value, following Vinod's suggestion

Patch 4 is uploaded as an example:
mtd: pxa3xx-nand: replace pxa_request_dma with dmaengine
We would like to hold this patch for now,
since pdma will grab the irq and disable pxa_init_dma.


mmp-pdma is added under the dmaengine framework.
The final goal is to replace arch/arm/plat-pxa/dma.c

Tested on pxa910 with pxa3xx-nand and dmatest.ko

Zhangfei Gao (4):
  dmaengine: mmp-pdma support
  dmaengine: mmp_tdma: add dt support
  dmatest: add dmaengine_slave_config for DMA_MEMCPY
  mtd: pxa3xx-nand: replace pxa_request_dma with dmaengine

 Documentation/devicetree/bindings/dma/mmp-dma.txt |   75 ++
 drivers/dma/Kconfig                               |    7 +
 drivers/dma/Makefile                              |    1 +
 drivers/dma/dmatest.c                             |    4 +
 drivers/dma/mmp_pdma.c                            |  873 +++++++++++++++++++++
 drivers/dma/mmp_tdma.c                            |   51 +-
 drivers/mtd/nand/pxa3xx_nand.c                    |  113 ++--
 include/linux/platform_data/mmp_dma.h             |   19 +
 8 files changed, 1073 insertions(+), 70 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/dma/mmp-dma.txt
 create mode 100644 drivers/dma/mmp_pdma.c
 create mode 100644 include/linux/platform_data/mmp_dma.h

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v3 1/4] dmaengine: mmp-pdma support
  2012-08-14  4:11 [PATCH v3 0/4] dmaengine: add support mmp-pdma Zhangfei Gao
@ 2012-08-14  4:11 ` Zhangfei Gao
  2012-08-14  8:30   ` Russell King - ARM Linux
  2012-08-14  9:33   ` Arnd Bergmann
  2012-08-14  4:11 ` [PATCH v3 2/4] dmaengine: mmp_tdma: add dt support Zhangfei Gao
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 14+ messages in thread
From: Zhangfei Gao @ 2012-08-14  4:11 UTC (permalink / raw)
  To: linux-arm-kernel

1. virtual channel vs. physical channel
Virtual channels are managed by the dmaengine framework.
Physical channels own the hardware resources, such as the irq.
A physical channel is allocated dynamically in descending priority order
and freed as soon as its irq is handled, so the highest-priority
physical channel available is always the one allocated.

Issue pending list -> alloc highest-priority physical channel available -> dma done -> free physical channel

2. lists: running list & pending list
submit: desc list -> pending list
issue_pending_list: if (IDLE) pending list -> running list; free pending list (RUN)
irq: free running list (IDLE)
     check pending list -> pending list -> running list; free pending list (RUN)

3. irq:
Each list generates one irq, which triggers the callback.
One list may contain several desc chains; in that case, make sure only the last desc generates the irq.

4. async
submit adds a desc chain to the pending list and may be called multiple times.
If multiple desc chains are submitted, only the last desc generates the irq -> callback.
If IDLE, issue_pending_list starts the pending list, turning it into the running list.
If RUN, the irq handler starts the pending list.

5. test
5.1 pxa3xx_nand on pxa910
5.2 insmod dmatest.ko (threads_per_chan=y)
By default drivers/dma/dmatest.c tests every channel, exercising memcpy with one thread per channel

Signed-off-by: Zhangfei Gao <zhangfei.gao@marvell.com>
---
 Documentation/devicetree/bindings/dma/mmp-dma.txt |   45 ++
 drivers/dma/Kconfig                               |    7 +
 drivers/dma/Makefile                              |    1 +
 drivers/dma/mmp_pdma.c                            |  873 +++++++++++++++++++++
 include/linux/platform_data/mmp_dma.h             |   19 +
 5 files changed, 945 insertions(+), 0 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/dma/mmp-dma.txt
 create mode 100644 drivers/dma/mmp_pdma.c
 create mode 100644 include/linux/platform_data/mmp_dma.h

diff --git a/Documentation/devicetree/bindings/dma/mmp-dma.txt b/Documentation/devicetree/bindings/dma/mmp-dma.txt
new file mode 100644
index 0000000..cbf40d6
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/mmp-dma.txt
@@ -0,0 +1,45 @@
+* MARVELL MMP DMA controller
+
+Marvell Peripheral DMA Controller
+Used platforms: pxa688, pxa910, pxa3xx, etc.
+
+Required properties:
+- compatible: Should be "marvell,pdma-1.0"
+- reg: Should contain DMA registers location and length.
+- interrupts: Should either contain all of the per-channel DMA interrupts
+		or one irq for the whole pdma device
+- #dma-channels: Number of DMA channels supported by the controller.
+
+"marvell,pdma-1.0"
+Used platforms: pxa25x, pxa27x, pxa3xx, pxa93x, pxa168, pxa910, pxa688.
+
+Examples:
+
+/*
+ * Each channel has a dedicated irq.
+ * The ICU can tell the irq channels apart via its own registers,
+ * while the DMA controller may not be able to distinguish them.
+ * In this case, interrupt-parent is required to act as the demuxer.
+ * For example, in the pxa688 ICU register 0x128, bits 0~15 are the
+ * PDMA channel irqs and bits 18~21 are the ADMA irqs.
+ */
+pdma: dma-controller@d4000000 {
+	      compatible = "marvell,pdma-1.0";
+	      reg = <0xd4000000 0x10000>;
+	      interrupts = <0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15>;
+	      interrupt-parent = <&intcmux32>;
+	      #dma-channels = <16>;
+      };
+
+/*
+ * One irq for all channels.
+ * The dmaengine driver (DMA controller) distinguishes the irq channel
+ * by parsing its internal registers.
+ */
+pdma: dma-controller@d4000000 {
+	      compatible = "marvell,pdma-1.0";
+	      reg = <0xd4000000 0x10000>;
+	      interrupts = <47>;
+	      #dma-channels = <16>;
+      };
+
diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index 8904bb6..663e26b 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -292,6 +292,13 @@ config MMP_TDMA
 
 	  Say Y here if you enabled MMP ADMA, otherwise say N.
 
+config MMP_PDMA
+	bool "MMP PDMA support"
+	depends on (ARCH_MMP || ARCH_PXA)
+	select DMA_ENGINE
+	help
+	  Support the MMP PDMA engine for PXA and MMP platforms.
+
 config DMA_ENGINE
 	bool
 
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index eabe13c..27bd86e 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -32,3 +32,4 @@ obj-$(CONFIG_EP93XX_DMA) += ep93xx_dma.o
 obj-$(CONFIG_DMA_SA11X0) += sa11x0-dma.o
 obj-$(CONFIG_DMA_OMAP) += omap-dma.o
 obj-$(CONFIG_MMP_TDMA) += mmp_tdma.o
+obj-$(CONFIG_MMP_PDMA) += mmp_pdma.o
diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
new file mode 100644
index 0000000..ee82170
--- /dev/null
+++ b/drivers/dma/mmp_pdma.c
@@ -0,0 +1,873 @@
+/*
+ * Copyright 2012 Marvell International Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/interrupt.h>
+#include <linux/dma-mapping.h>
+#include <linux/slab.h>
+#include <linux/dmaengine.h>
+#include <linux/platform_device.h>
+#include <linux/device.h>
+#include <linux/platform_data/mmp_dma.h>
+#include <linux/dmapool.h>
+#include <linux/of_device.h>
+#include <linux/of.h>
+
+#include "dmaengine.h"
+
+#define DCSR		0x0000
+#define DALGN		0x00a0  /* DMA Alignment Register */
+#define DINT		0x00f0  /* DMA Interrupt Register */
+#define DDADR		0x0200
+#define DSADR		0x0204
+#define DTADR		0x0208
+#define DCMD		0x020c
+
+#define DCSR_RUN	(1 << 31)	/* Run Bit (read / write) */
+#define DCSR_NODESC	(1 << 30)	/* No-Descriptor Fetch (read / write) */
+#define DCSR_STOPIRQEN	(1 << 29)	/* Stop Interrupt Enable (read / write) */
+#define DCSR_REQPEND	(1 << 8)	/* Request Pending (read-only) */
+#define DCSR_STOPSTATE	(1 << 3)	/* Stop State (read-only) */
+#define DCSR_ENDINTR	(1 << 2)	/* End Interrupt (read / write) */
+#define DCSR_STARTINTR	(1 << 1)	/* Start Interrupt (read / write) */
+#define DCSR_BUSERR	(1 << 0)	/* Bus Error Interrupt (read / write) */
+
+#define DCSR_EORIRQEN	(1 << 28)       /* End of Receive Interrupt Enable (R/W) */
+#define DCSR_EORJMPEN	(1 << 27)       /* Jump to next descriptor on EOR */
+#define DCSR_EORSTOPEN	(1 << 26)       /* STOP on an EOR */
+#define DCSR_SETCMPST	(1 << 25)       /* Set Descriptor Compare Status */
+#define DCSR_CLRCMPST	(1 << 24)       /* Clear Descriptor Compare Status */
+#define DCSR_CMPST	(1 << 10)       /* The Descriptor Compare Status */
+#define DCSR_EORINTR	(1 << 9)        /* The end of Receive */
+
+#define DRCMR_MAPVLD	(1 << 7)	/* Map Valid (read / write) */
+#define DRCMR_CHLNUM	0x1f		/* mask for Channel Number (read / write) */
+
+#define DDADR_DESCADDR	0xfffffff0	/* Address of next descriptor (mask) */
+#define DDADR_STOP	(1 << 0)	/* Stop (read / write) */
+
+#define DCMD_INCSRCADDR	(1 << 31)	/* Source Address Increment Setting. */
+#define DCMD_INCTRGADDR	(1 << 30)	/* Target Address Increment Setting. */
+#define DCMD_FLOWSRC	(1 << 29)	/* Flow Control by the source. */
+#define DCMD_FLOWTRG	(1 << 28)	/* Flow Control by the target. */
+#define DCMD_STARTIRQEN	(1 << 22)	/* Start Interrupt Enable */
+#define DCMD_ENDIRQEN	(1 << 21)	/* End Interrupt Enable */
+#define DCMD_ENDIAN	(1 << 18)	/* Device Endian-ness. */
+#define DCMD_BURST8	(1 << 16)	/* 8 byte burst */
+#define DCMD_BURST16	(2 << 16)	/* 16 byte burst */
+#define DCMD_BURST32	(3 << 16)	/* 32 byte burst */
+#define DCMD_WIDTH1	(1 << 14)	/* 1 byte width */
+#define DCMD_WIDTH2	(2 << 14)	/* 2 byte width (HalfWord) */
+#define DCMD_WIDTH4	(3 << 14)	/* 4 byte width (Word) */
+#define DCMD_LENGTH	0x01fff		/* length mask (max = 8K - 1) */
+
+#define PDMA_ALIGNMENT		3
+#define PDMA_MAX_DESC_BYTES	0x1000
+
+struct mmp_pdma_desc_hw {
+	u32 ddadr;	/* Points to the next descriptor + flags */
+	u32 dsadr;	/* DSADR value for the current transfer */
+	u32 dtadr;	/* DTADR value for the current transfer */
+	u32 dcmd;	/* DCMD value for the current transfer */
+} __aligned(32);
+
+struct mmp_pdma_desc_sw {
+	struct mmp_pdma_desc_hw desc;
+	struct list_head node;
+	struct list_head tx_list;
+	struct dma_async_tx_descriptor async_tx;
+};
+
+struct mmp_pdma_phy;
+
+struct mmp_pdma_chan {
+	struct device *dev;		/* Channel device */
+	struct dma_chan chan;		/* DMA common channel */
+	struct dma_async_tx_descriptor desc;
+	struct mmp_pdma_phy *phy;
+	enum dma_transfer_direction dir;
+
+	/* channel's basic info */
+	struct tasklet_struct tasklet;
+	u32 dcmd;
+	u32 drcmr;
+	u32 dev_addr;
+
+	/* list for desc */
+	spinlock_t desc_lock;		/* Descriptor list lock */
+	struct list_head chain_pending;	/* Link descriptors queue for pending */
+	struct list_head chain_running;	/* Link descriptors queue for running */
+	bool idle;			/* channel state machine */
+
+	struct dma_pool *desc_pool;	/* Descriptors pool */
+};
+
+struct mmp_pdma_phy {
+	int idx;
+	void __iomem *base;
+	struct mmp_pdma_chan *vchan;
+};
+
+struct mmp_pdma_device {
+	int				dma_channels;
+	void __iomem			*base;
+	struct device			*dev;
+	struct dma_device		device;
+	struct mmp_pdma_phy		*phy;
+};
+
+#define tx_to_mmp_pdma_desc(tx) container_of(tx, struct mmp_pdma_desc_sw, async_tx)
+#define to_mmp_pdma_desc(lh) container_of(lh, struct mmp_pdma_desc_sw, node)
+#define to_mmp_pdma_chan(dchan) container_of(dchan, struct mmp_pdma_chan, chan)
+#define to_mmp_pdma_dev(dmadev) container_of(dmadev, struct mmp_pdma_device, device)
+
+static void set_desc(struct mmp_pdma_phy *phy, dma_addr_t addr)
+{
+	u32 reg = (phy->idx << 4) + DDADR;
+
+	writel(addr, phy->base + reg);
+}
+
+static void enable_chan(struct mmp_pdma_phy *phy)
+{
+	u32 reg;
+
+	if (!phy->vchan)
+		return;
+
+	reg = phy->vchan->drcmr;
+	reg = (((reg) < 64) ? 0x0100 : 0x1100) + (((reg) & 0x3f) << 2);
+	writel(DRCMR_MAPVLD | phy->idx, phy->base + reg);
+
+	reg = (phy->idx << 2) + DCSR;
+	writel(readl(phy->base + reg) | DCSR_RUN,
+					phy->base + reg);
+}
+
+static void disable_chan(struct mmp_pdma_phy *phy)
+{
+	u32 reg;
+
+	if (phy) {
+		reg = (phy->idx << 2) + DCSR;
+		writel(readl(phy->base + reg) & ~DCSR_RUN,
+						phy->base + reg);
+	}
+}
+
+static int clear_chan_irq(struct mmp_pdma_phy *phy)
+{
+	u32 dcsr;
+	u32 dint = readl(phy->base + DINT);
+	u32 reg = (phy->idx << 2) + DCSR;
+
+	if (dint & BIT(phy->idx)) {
+		/* clear irq */
+		dcsr = readl(phy->base + reg);
+		writel(dcsr, phy->base + reg);
+		if ((dcsr & DCSR_BUSERR) && (phy->vchan))
+			dev_warn(phy->vchan->dev, "DCSR_BUSERR\n");
+		return 0;
+	}
+	return -EAGAIN;
+}
+
+static irqreturn_t mmp_pdma_chan_handler(int irq, void *dev_id)
+{
+	struct mmp_pdma_phy *phy = dev_id;
+
+	if (clear_chan_irq(phy) == 0) {
+		tasklet_schedule(&phy->vchan->tasklet);
+		return IRQ_HANDLED;
+	}
+	return IRQ_NONE;
+}
+
+static irqreturn_t mmp_pdma_int_handler(int irq, void *dev_id)
+{
+	struct mmp_pdma_device *pdev = dev_id;
+	struct mmp_pdma_phy *phy;
+	u32 dint = readl(pdev->base + DINT);
+	int i, ret;
+	int irq_num = 0;
+
+	while (dint) {
+		i = __ffs(dint);
+		dint &= (dint - 1);
+		phy = &pdev->phy[i];
+		ret = mmp_pdma_chan_handler(irq, phy);
+		if (ret == IRQ_HANDLED)
+			irq_num++;
+	}
+
+	if (irq_num)
+		return IRQ_HANDLED;
+	else
+		return IRQ_NONE;
+}
+
+/* look up a free phy channel in descending priority order */
+static struct mmp_pdma_phy *lookup_phy(struct mmp_pdma_chan *pchan)
+{
+	int prio, i;
+	struct mmp_pdma_device *pdev = to_mmp_pdma_dev(pchan->chan.device);
+	struct mmp_pdma_phy *phy;
+
+	/*
+	 * dma channel priorities
+	 * ch 0 - 3,  16 - 19  <--> (0)
+	 * ch 4 - 7,  20 - 23  <--> (1)
+	 * ch 8 - 11, 24 - 27  <--> (2)
+	 * ch 12 - 15, 28 - 31  <--> (3)
+	 */
+	for (prio = 0; prio <= (((pdev->dma_channels - 1) & 0xf) >> 2); prio++) {
+		for (i = 0; i < pdev->dma_channels; i++) {
+			if (prio != ((i & 0xf) >> 2))
+				continue;
+			phy = &pdev->phy[i];
+			if (!phy->vchan) {
+				phy->vchan = pchan;
+				return phy;
+			}
+		}
+	}
+
+	return NULL;
+}
+
+/* desc->tx_list ==> pending list */
+static void append_pending_queue(struct mmp_pdma_chan *chan,
+					struct mmp_pdma_desc_sw *desc)
+{
+	struct mmp_pdma_desc_sw *tail =
+				to_mmp_pdma_desc(chan->chain_pending.prev);
+
+	if (list_empty(&chan->chain_pending))
+		goto out_splice;
+
+	/* one irq per queue, even appended */
+	tail->desc.ddadr = desc->async_tx.phys;
+	tail->desc.dcmd &= ~DCMD_ENDIRQEN;
+
+	/* softly link to pending list */
+out_splice:
+	list_splice_tail_init(&desc->tx_list, &chan->chain_pending);
+}
+
+/**
+ * start_pending_queue - transfer any pending transactions
+ * pending list ==> running list
+ */
+static void start_pending_queue(struct mmp_pdma_chan *chan)
+{
+	struct mmp_pdma_desc_sw *desc;
+
+	/* still in running, irq will start the pending list */
+	if (!chan->idle) {
+		dev_dbg(chan->dev, "DMA controller still busy\n");
+		return;
+	}
+
+	if (list_empty(&chan->chain_pending)) {
+		/* chance to re-fetch phy channel with higher prio */
+		if (chan->phy) {
+			chan->phy->vchan = NULL;
+			chan->phy = NULL;
+		}
+		dev_dbg(chan->dev, "no pending list\n");
+		return;
+	}
+
+	if (!chan->phy) {
+		chan->phy = lookup_phy(chan);
+		if (!chan->phy) {
+			dev_dbg(chan->dev, "no free dma channel\n");
+			return;
+		}
+	}
+
+	/*
+	 * pending -> running
+	 * reinitialize the pending list
+	 */
+	desc = list_first_entry(&chan->chain_pending,
+				struct mmp_pdma_desc_sw, node);
+	list_splice_tail_init(&chan->chain_pending, &chan->chain_running);
+
+	/*
+	 * Program the descriptor's address into the DMA controller,
+	 * then start the DMA transaction
+	 */
+	set_desc(chan->phy, desc->async_tx.phys);
+	enable_chan(chan->phy);
+	chan->idle = false;
+}
+
+/* desc->tx_list ==> pending list */
+static dma_cookie_t mmp_pdma_tx_submit(struct dma_async_tx_descriptor *tx)
+{
+	struct mmp_pdma_chan *chan = to_mmp_pdma_chan(tx->chan);
+	struct mmp_pdma_desc_sw *desc = tx_to_mmp_pdma_desc(tx);
+	struct mmp_pdma_desc_sw *child;
+	unsigned long flags;
+	dma_cookie_t cookie = -EBUSY;
+
+	spin_lock_irqsave(&chan->desc_lock, flags);
+
+	list_for_each_entry(child, &desc->tx_list, node) {
+		cookie = dma_cookie_assign(&child->async_tx);
+	}
+
+	append_pending_queue(chan, desc);
+
+	spin_unlock_irqrestore(&chan->desc_lock, flags);
+
+	return cookie;
+}
+
+static struct mmp_pdma_desc_sw *
+mmp_pdma_alloc_descriptor(struct mmp_pdma_chan *chan)
+{
+	struct mmp_pdma_desc_sw *desc;
+	dma_addr_t pdesc;
+
+	desc = dma_pool_alloc(chan->desc_pool, GFP_ATOMIC, &pdesc);
+	if (!desc) {
+		dev_err(chan->dev, "out of memory for link descriptor\n");
+		return NULL;
+	}
+
+	memset(desc, 0, sizeof(*desc));
+	INIT_LIST_HEAD(&desc->tx_list);
+	dma_async_tx_descriptor_init(&desc->async_tx, &chan->chan);
+	/* each desc has its own tx_submit */
+	desc->async_tx.tx_submit = mmp_pdma_tx_submit;
+	desc->async_tx.phys = pdesc;
+
+	return desc;
+}
+
+/**
+ * mmp_pdma_alloc_chan_resources - Allocate resources for DMA channel.
+ *
+ * This function will create a dma pool for descriptor allocation.
+ * Request irq only when channel is requested
+ * Return - The number of allocated descriptors.
+ */
+
+static int mmp_pdma_alloc_chan_resources(struct dma_chan *dchan)
+{
+	struct mmp_pdma_chan *chan = to_mmp_pdma_chan(dchan);
+
+	if (chan->desc_pool)
+		return 1;
+
+	chan->desc_pool =
+		dma_pool_create(dev_name(&dchan->dev->device), chan->dev,
+				  sizeof(struct mmp_pdma_desc_sw),
+				  __alignof__(struct mmp_pdma_desc_sw), 0);
+	if (!chan->desc_pool) {
+		dev_err(chan->dev, "unable to allocate descriptor pool\n");
+		return -ENOMEM;
+	}
+	if (chan->phy) {
+		chan->phy->vchan = NULL;
+		chan->phy = NULL;
+	}
+	chan->idle = true;
+	chan->dev_addr = 0;
+	return 1;
+}
+
+static void mmp_pdma_free_desc_list(struct mmp_pdma_chan *chan,
+				  struct list_head *list)
+{
+	struct mmp_pdma_desc_sw *desc, *_desc;
+
+	list_for_each_entry_safe(desc, _desc, list, node) {
+		list_del(&desc->node);
+		dma_pool_free(chan->desc_pool, desc, desc->async_tx.phys);
+	}
+}
+
+static void mmp_pdma_free_chan_resources(struct dma_chan *dchan)
+{
+	struct mmp_pdma_chan *chan = to_mmp_pdma_chan(dchan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&chan->desc_lock, flags);
+	mmp_pdma_free_desc_list(chan, &chan->chain_pending);
+	mmp_pdma_free_desc_list(chan, &chan->chain_running);
+	spin_unlock_irqrestore(&chan->desc_lock, flags);
+
+	dma_pool_destroy(chan->desc_pool);
+	chan->desc_pool = NULL;
+	chan->idle = true;
+	chan->dev_addr = 0;
+	if (chan->phy) {
+		chan->phy->vchan = NULL;
+		chan->phy = NULL;
+	}
+}
+
+static struct dma_async_tx_descriptor *
+mmp_pdma_prep_memcpy(struct dma_chan *dchan,
+	dma_addr_t dma_dst, dma_addr_t dma_src,
+	size_t len, unsigned long flags)
+{
+	struct mmp_pdma_chan *chan;
+	struct mmp_pdma_desc_sw *first = NULL, *prev = NULL, *new;
+	size_t copy = 0;
+
+	if (!dchan)
+		return NULL;
+
+	if (!len)
+		return NULL;
+
+	chan = to_mmp_pdma_chan(dchan);
+
+	do {
+		/* Allocate the link descriptor from DMA pool */
+		new = mmp_pdma_alloc_descriptor(chan);
+		if (!new) {
+			dev_err(chan->dev, "no memory for desc\n");
+			goto fail;
+		}
+
+		copy = min_t(size_t, len, PDMA_MAX_DESC_BYTES);
+
+		new->desc.dcmd = chan->dcmd | (DCMD_LENGTH & copy);
+		new->desc.dsadr = dma_src;
+		new->desc.dtadr = dma_dst;
+
+		if (!first)
+			first = new;
+		else
+			prev->desc.ddadr = new->async_tx.phys;
+
+		new->async_tx.cookie = 0;
+		async_tx_ack(&new->async_tx);
+
+		prev = new;
+		len -= copy;
+
+		if (chan->dir == DMA_MEM_TO_DEV) {
+			dma_src += copy;
+		} else if (chan->dir == DMA_DEV_TO_MEM) {
+			dma_dst += copy;
+		} else if (chan->dir == DMA_MEM_TO_MEM) {
+			dma_src += copy;
+			dma_dst += copy;
+		}
+
+		/* Insert the link descriptor to the LD ring */
+		list_add_tail(&new->node, &first->tx_list);
+	} while (len);
+
+	first->async_tx.flags = flags; /* client is in control of this ack */
+	first->async_tx.cookie = -EBUSY;
+
+	/* last desc and fire IRQ */
+	new->desc.ddadr = DDADR_STOP;
+	new->desc.dcmd |= DCMD_ENDIRQEN;
+
+	return &first->async_tx;
+
+fail:
+	if (first)
+		mmp_pdma_free_desc_list(chan, &first->tx_list);
+	return NULL;
+}
+
+static struct dma_async_tx_descriptor *
+mmp_pdma_prep_slave_sg(struct dma_chan *dchan, struct scatterlist *sgl,
+			 unsigned int sg_len, enum dma_transfer_direction dir,
+			 unsigned long flags, void *context)
+{
+	struct mmp_pdma_chan *chan = to_mmp_pdma_chan(dchan);
+	struct mmp_pdma_desc_sw *first = NULL, *prev = NULL, *new = NULL;
+	size_t len, avail;
+	struct scatterlist *sg;
+	dma_addr_t addr;
+	int i;
+
+	if ((sgl == NULL) || (sg_len == 0))
+		return NULL;
+
+	for_each_sg(sgl, sg, sg_len, i) {
+		addr = sg_dma_address(sg);
+		avail = sg_dma_len(sg);
+
+		do {
+			len = min_t(size_t, avail, PDMA_MAX_DESC_BYTES);
+
+			/* allocate and populate the descriptor */
+			new = mmp_pdma_alloc_descriptor(chan);
+			if (!new) {
+				dev_err(chan->dev, "no memory for desc\n");
+				goto fail;
+			}
+
+			new->desc.dcmd = chan->dcmd | (DCMD_LENGTH & len);
+			if (dir == DMA_MEM_TO_DEV) {
+				new->desc.dsadr = addr;
+				new->desc.dtadr = chan->dev_addr;
+			} else {
+				new->desc.dsadr = chan->dev_addr;
+				new->desc.dtadr = addr;
+			}
+
+			if (!first)
+				first = new;
+			else
+				prev->desc.ddadr = new->async_tx.phys;
+
+			new->async_tx.cookie = 0;
+			async_tx_ack(&new->async_tx);
+			prev = new;
+
+			/* Insert the link descriptor to the LD ring */
+			list_add_tail(&new->node, &first->tx_list);
+
+			/* update metadata */
+			addr += len;
+			avail -= len;
+		} while (avail);
+	}
+
+	first->async_tx.cookie = -EBUSY;
+	first->async_tx.flags = flags;
+
+	/* last desc and fire IRQ */
+	new->desc.ddadr = DDADR_STOP;
+	new->desc.dcmd |= DCMD_ENDIRQEN;
+
+	return &first->async_tx;
+
+fail:
+	if (first)
+		mmp_pdma_free_desc_list(chan, &first->tx_list);
+	return NULL;
+}
+
+static int mmp_pdma_control(struct dma_chan *dchan, enum dma_ctrl_cmd cmd,
+		unsigned long arg)
+{
+	struct mmp_pdma_chan *chan = to_mmp_pdma_chan(dchan);
+	struct dma_slave_config *cfg = (void *)arg;
+	unsigned long flags;
+	int ret = 0;
+	u32 maxburst = 0, addr = 0;
+	enum dma_slave_buswidth width = DMA_SLAVE_BUSWIDTH_UNDEFINED;
+
+	if (!dchan)
+		return -EINVAL;
+
+	switch (cmd) {
+	case DMA_TERMINATE_ALL:
+		disable_chan(chan->phy);
+		if (chan->phy) {
+			chan->phy->vchan = NULL;
+			chan->phy = NULL;
+		}
+		spin_lock_irqsave(&chan->desc_lock, flags);
+		mmp_pdma_free_desc_list(chan, &chan->chain_pending);
+		mmp_pdma_free_desc_list(chan, &chan->chain_running);
+		spin_unlock_irqrestore(&chan->desc_lock, flags);
+		chan->idle = true;
+		break;
+	case DMA_SLAVE_CONFIG:
+		if (cfg->direction == DMA_DEV_TO_MEM) {
+			chan->dcmd = DCMD_INCTRGADDR | DCMD_FLOWSRC;
+			maxburst = cfg->src_maxburst;
+			width = cfg->src_addr_width;
+			addr = cfg->src_addr;
+		} else if (cfg->direction == DMA_MEM_TO_DEV) {
+			chan->dcmd = DCMD_INCSRCADDR | DCMD_FLOWTRG;
+			maxburst = cfg->dst_maxburst;
+			width = cfg->dst_addr_width;
+			addr = cfg->dst_addr;
+		} else if (cfg->direction == DMA_MEM_TO_MEM) {
+			chan->dcmd = DCMD_INCTRGADDR | DCMD_INCSRCADDR;
+			width = DMA_SLAVE_BUSWIDTH_UNDEFINED;
+			maxburst = 32;
+		}
+
+		if (width == DMA_SLAVE_BUSWIDTH_1_BYTE)
+			chan->dcmd |= DCMD_WIDTH1;
+		else if (width == DMA_SLAVE_BUSWIDTH_2_BYTES)
+			chan->dcmd |= DCMD_WIDTH2;
+		else if (width == DMA_SLAVE_BUSWIDTH_4_BYTES)
+			chan->dcmd |= DCMD_WIDTH4;
+
+		if (maxburst == 8)
+			chan->dcmd |= DCMD_BURST8;
+		else if (maxburst == 16)
+			chan->dcmd |= DCMD_BURST16;
+		else if (maxburst == 32)
+			chan->dcmd |= DCMD_BURST32;
+
+		chan->dir = cfg->direction;
+		chan->drcmr = cfg->slave_id;
+		chan->dev_addr = addr;
+		break;
+	default:
+		return -ENOSYS;
+	}
+
+	return ret;
+}
+
+static enum dma_status mmp_pdma_tx_status(struct dma_chan *dchan,
+			dma_cookie_t cookie, struct dma_tx_state *txstate)
+{
+	struct mmp_pdma_chan *chan = to_mmp_pdma_chan(dchan);
+	enum dma_status ret;
+	unsigned long flags;
+
+	spin_lock_irqsave(&chan->desc_lock, flags);
+	ret = dma_cookie_status(dchan, cookie, txstate);
+	spin_unlock_irqrestore(&chan->desc_lock, flags);
+
+	return ret;
+}
+
+/**
+ * mmp_pdma_issue_pending - Issue the DMA start command
+ * pending list ==> running list
+ */
+static void mmp_pdma_issue_pending(struct dma_chan *dchan)
+{
+	struct mmp_pdma_chan *chan = to_mmp_pdma_chan(dchan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&chan->desc_lock, flags);
+	start_pending_queue(chan);
+	spin_unlock_irqrestore(&chan->desc_lock, flags);
+}
+
+/*
+ * dma_do_tasklet
+ * Do call back
+ * Start pending list
+ */
+static void dma_do_tasklet(unsigned long data)
+{
+	struct mmp_pdma_chan *chan = (struct mmp_pdma_chan *)data;
+	struct mmp_pdma_desc_sw *desc, *_desc;
+	LIST_HEAD(chain_cleanup);
+	unsigned long flags;
+
+	/* submit pending list; callback for each desc; free desc */
+
+	spin_lock_irqsave(&chan->desc_lock, flags);
+
+	/* update the cookie if we have some descriptors to cleanup */
+	if (!list_empty(&chan->chain_running)) {
+		dma_cookie_t cookie;
+
+		desc = to_mmp_pdma_desc(chan->chain_running.prev);
+		cookie = desc->async_tx.cookie;
+		dma_cookie_complete(&desc->async_tx);
+
+		dev_dbg(chan->dev, "completed_cookie=%d\n", cookie);
+	}
+
+	/*
+	 * move the descriptors to a temporary list so we can drop the lock
+	 * during the entire cleanup operation
+	 */
+	list_splice_tail_init(&chan->chain_running, &chain_cleanup);
+
+	/* the hardware is now idle and ready for more */
+	chan->idle = true;
+
+	/* Start any pending transactions automatically */
+	start_pending_queue(chan);
+	spin_unlock_irqrestore(&chan->desc_lock, flags);
+
+	/* Run the callback for each descriptor, in order */
+	list_for_each_entry_safe(desc, _desc, &chain_cleanup, node) {
+		struct dma_async_tx_descriptor *txd = &desc->async_tx;
+
+		/* Remove from the list of transactions */
+		list_del(&desc->node);
+		/* Run the link descriptor callback function */
+		if (txd->callback)
+			txd->callback(txd->callback_param);
+
+		dma_pool_free(chan->desc_pool, desc, txd->phys);
+	}
+}
+
+static int __devexit mmp_pdma_remove(struct platform_device *op)
+{
+	struct mmp_pdma_device *pdev = platform_get_drvdata(op);
+
+	dma_async_device_unregister(&pdev->device);
+	return 0;
+}
+
+static int __devinit mmp_pdma_chan_init(struct mmp_pdma_device *pdev,
+							int idx, int irq)
+{
+	struct mmp_pdma_phy *phy  = &pdev->phy[idx];
+	struct mmp_pdma_chan *chan;
+	int ret;
+
+	chan = devm_kzalloc(pdev->dev,
+			sizeof(struct mmp_pdma_chan), GFP_KERNEL);
+	if (chan == NULL)
+		return -ENOMEM;
+
+	phy->idx = idx;
+	phy->base = pdev->base;
+
+	if (irq) {
+		ret = devm_request_irq(pdev->dev, irq,
			mmp_pdma_chan_handler, IRQF_DISABLED, "pdma", phy);
+		if (ret) {
+			dev_err(pdev->dev, "failed to request channel irq\n");
+			return ret;
+		}
+	}
+
+	spin_lock_init(&chan->desc_lock);
+	chan->dev = pdev->dev;
+	chan->chan.device = &pdev->device;
+	tasklet_init(&chan->tasklet, dma_do_tasklet, (unsigned long)chan);
+	INIT_LIST_HEAD(&chan->chain_pending);
+	INIT_LIST_HEAD(&chan->chain_running);
+
+	/* register virt channel to dma engine */
+	list_add_tail(&chan->chan.device_node,
+			&pdev->device.channels);
+
+	return 0;
+}
+
+static struct of_device_id mmp_pdma_dt_ids[] = {
+	{ .compatible = "marvell,pdma-1.0", },
+	{}
+};
+MODULE_DEVICE_TABLE(of, mmp_pdma_dt_ids);
+
+static int __devinit mmp_pdma_probe(struct platform_device *op)
+{
+	struct mmp_pdma_device *pdev;
+	const struct of_device_id *of_id;
+	struct mmp_dma_platdata *pdata = dev_get_platdata(&op->dev);
+	struct resource *iores;
+	int i, ret, irq = 0;
+	int dma_channels = 0, irq_num = 0;
+
+	pdev = devm_kzalloc(&op->dev, sizeof(*pdev), GFP_KERNEL);
+	if (!pdev)
+		return -ENOMEM;
+	pdev->dev = &op->dev;
+
+	iores = platform_get_resource(op, IORESOURCE_MEM, 0);
+	if (!iores)
+		return -EINVAL;
+
+	pdev->base = devm_request_and_ioremap(pdev->dev, iores);
+	if (!pdev->base)
+		return -EADDRNOTAVAIL;
+
+	of_id = of_match_device(mmp_pdma_dt_ids, pdev->dev);
+	if (of_id)
+		of_property_read_u32(pdev->dev->of_node,
+				"#dma-channels", &dma_channels);
+	else if (pdata && pdata->dma_channels)
+		dma_channels = pdata->dma_channels;
+	else
+		dma_channels = 32;	/* default 32 channel */
+	pdev->dma_channels = dma_channels;
+
+	for (i = 0; i < dma_channels; i++) {
+		if (platform_get_irq(op, i) > 0)
+			irq_num++;
+	}
+
+	pdev->phy = devm_kzalloc(pdev->dev,
+		dma_channels * sizeof(struct mmp_pdma_phy), GFP_KERNEL);
+	if (pdev->phy == NULL)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&pdev->device.channels);
+
+	if (irq_num != dma_channels) {
+		/* all chan share one irq, demux inside */
+		irq = platform_get_irq(op, 0);
+		ret = devm_request_irq(pdev->dev, irq,
+			mmp_pdma_int_handler, IRQF_DISABLED, "pdma", pdev);
+		if (ret)
+			return ret;
+	}
+
+	for (i = 0; i < dma_channels; i++) {
+		irq = (irq_num != dma_channels) ? 0 : platform_get_irq(op, i);
+		ret = mmp_pdma_chan_init(pdev, i, irq);
+		if (ret)
+			return ret;
+	}
+
+	dma_cap_set(DMA_SLAVE, pdev->device.cap_mask);
+	dma_cap_set(DMA_MEMCPY, pdev->device.cap_mask);
+	pdev->device.dev = &op->dev;
+	pdev->device.device_alloc_chan_resources = mmp_pdma_alloc_chan_resources;
+	pdev->device.device_free_chan_resources = mmp_pdma_free_chan_resources;
+	pdev->device.device_tx_status = mmp_pdma_tx_status;
+	pdev->device.device_prep_dma_memcpy = mmp_pdma_prep_memcpy;
+	pdev->device.device_prep_slave_sg = mmp_pdma_prep_slave_sg;
+	pdev->device.device_issue_pending = mmp_pdma_issue_pending;
+	pdev->device.device_control = mmp_pdma_control;
+	pdev->device.copy_align = PDMA_ALIGNMENT;
+
+	if (pdev->dev->coherent_dma_mask)
+		dma_set_mask(pdev->dev, pdev->dev->coherent_dma_mask);
+	else
+		dma_set_mask(pdev->dev, DMA_BIT_MASK(64));
+
+	ret = dma_async_device_register(&pdev->device);
+	if (ret) {
+		dev_err(pdev->device.dev, "unable to register\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static const struct platform_device_id mmp_pdma_id_table[] = {
+	{ "mmp-pdma", },
+	{ },
+};
+
+static struct platform_driver mmp_pdma_driver = {
+	.driver		= {
+		.name	= "mmp-pdma",
+		.owner  = THIS_MODULE,
+		.of_match_table = mmp_pdma_dt_ids,
+	},
+	.id_table	= mmp_pdma_id_table,
+	.probe		= mmp_pdma_probe,
+	.remove		= __devexit_p(mmp_pdma_remove),
+};
+
+module_platform_driver(mmp_pdma_driver);
+
+MODULE_DESCRIPTION("MARVELL MMP Peripheral DMA Driver");
+MODULE_AUTHOR("Marvell International Ltd.");
+MODULE_LICENSE("GPL v2");
diff --git a/include/linux/platform_data/mmp_dma.h b/include/linux/platform_data/mmp_dma.h
new file mode 100644
index 0000000..2a330ec
--- /dev/null
+++ b/include/linux/platform_data/mmp_dma.h
@@ -0,0 +1,19 @@
+/*
+ *  MMP Platform DMA Management
+ *
+ *  Copyright (c) 2011 Marvell Semiconductors Inc.
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License version 2 as
+ *  published by the Free Software Foundation.
+ *
+ */
+
+#ifndef MMP_DMA_H
+#define MMP_DMA_H
+
+struct mmp_dma_platdata {
+	int dma_channels;
+};
+
+#endif /* MMP_DMA_H */
-- 
1.7.1

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v3 2/4] dmaengine: mmp_tdma: add dt support
  2012-08-14  4:11 [PATCH v3 0/4] dmaengine: add support mmp-pdma Zhangfei Gao
  2012-08-14  4:11 ` [PATCH v3 1/4] dmaengine: mmp-pdma support Zhangfei Gao
@ 2012-08-14  4:11 ` Zhangfei Gao
  2012-08-14  9:34   ` Arnd Bergmann
  2012-08-14  4:11 ` [PATCH v3 3/4] dmatest: add dmaengine_slave_config for DMA_MEMCPY Zhangfei Gao
  2012-08-14  4:11 ` [PATCH v3 4/4] mtd: pxa3xx-nand: replace pxa_request_dma with dmaengine Zhangfei Gao
  3 siblings, 1 reply; 14+ messages in thread
From: Zhangfei Gao @ 2012-08-14  4:11 UTC (permalink / raw)
  To: linux-arm-kernel

Signed-off-by: Zhangfei Gao <zhangfei.gao@marvell.com>
---
 Documentation/devicetree/bindings/dma/mmp-dma.txt |   30 ++++++++++++
 drivers/dma/mmp_tdma.c                            |   51 +++++++++++++--------
 2 files changed, 61 insertions(+), 20 deletions(-)

diff --git a/Documentation/devicetree/bindings/dma/mmp-dma.txt b/Documentation/devicetree/bindings/dma/mmp-dma.txt
index cbf40d6..36254c4 100644
--- a/Documentation/devicetree/bindings/dma/mmp-dma.txt
+++ b/Documentation/devicetree/bindings/dma/mmp-dma.txt
@@ -43,3 +43,33 @@ pdma: dma-controller@d4000000 {
 	      #dma-channels = <16>;
       };
 
+
+Marvell Two Channel DMA Controller used specifically for audio
+Used platforms: pxa688, pxa910
+
+Required properties:
+- compatible: Should be "marvell,adma-1.0" or "marvell,pxa910-squ"
+- reg: Should contain DMA registers location and length.
+- interrupts: Should either contain all of the per-channel DMA interrupts
+		or one irq for the whole dma device
+
+"marvell,adma-1.0" used on pxa688
+"marvell,pxa910-squ" used on pxa910
+
+Examples:
+
+/* each channel has specific irq */
+adma0: dma-controller@d42a0800 {
+	      compatible = "marvell,adma-1.0";
+	      reg = <0xd42a0800 0x100>;
+	      interrupts = <18 19>;
+	      interrupt-parent = <&intcmux32>;
+      };
+
+/* One irq for all channels */
+squ: dma-controller@d42a0800 {
+	      compatible = "marvell,pxa910-squ";
+	      reg = <0xd42a0800 0x100>;
+	      interrupts = <46>;
+      };
+
diff --git a/drivers/dma/mmp_tdma.c b/drivers/dma/mmp_tdma.c
index 8a15cf2..b93d73c 100644
--- a/drivers/dma/mmp_tdma.c
+++ b/drivers/dma/mmp_tdma.c
@@ -20,6 +20,7 @@
 #include <linux/device.h>
 #include <mach/regs-icu.h>
 #include <mach/sram.h>
+#include <linux/of_device.h>
 
 #include "dmaengine.h"
 
@@ -127,7 +128,6 @@ struct mmp_tdma_device {
 	void __iomem			*base;
 	struct dma_device		device;
 	struct mmp_tdma_chan		*tdmac[TDMA_CHANNEL_NUM];
-	int				irq;
 };
 
 #define to_mmp_tdma_chan(dchan) container_of(dchan, struct mmp_tdma_chan, chan)
@@ -492,7 +492,7 @@ static int __devinit mmp_tdma_chan_init(struct mmp_tdma_device *tdev,
 		return -ENOMEM;
 	}
 	if (irq)
-		tdmac->irq = irq + idx;
+		tdmac->irq = irq;
 	tdmac->dev	   = tdev->dev;
 	tdmac->chan.device = &tdev->device;
 	tdmac->idx	   = idx;
@@ -505,34 +505,43 @@ static int __devinit mmp_tdma_chan_init(struct mmp_tdma_device *tdev,
 	/* add the channel to tdma_chan list */
 	list_add_tail(&tdmac->chan.device_node,
 			&tdev->device.channels);
-
 	return 0;
 }
 
+static struct of_device_id mmp_tdma_dt_ids[] = {
+	{ .compatible = "marvell,adma-1.0", .data = (void *)MMP_AUD_TDMA},
+	{ .compatible = "marvell,pxa910-squ", .data = (void *)PXA910_SQU},
+	{}
+};
+MODULE_DEVICE_TABLE(of, mmp_tdma_dt_ids);
+
 static int __devinit mmp_tdma_probe(struct platform_device *pdev)
 {
-	const struct platform_device_id *id = platform_get_device_id(pdev);
-	enum mmp_tdma_type type = id->driver_data;
+	enum mmp_tdma_type type;
+	const struct of_device_id *of_id;
 	struct mmp_tdma_device *tdev;
 	struct resource *iores;
 	int i, ret;
-	int irq = 0;
+	int irq = 0, irq_num = 0;
 	int chan_num = TDMA_CHANNEL_NUM;
 
+	of_id = of_match_device(mmp_tdma_dt_ids, &pdev->dev);
+	if (of_id)
+		type = (enum mmp_tdma_type) of_id->data;
+	else
+		type = platform_get_device_id(pdev)->driver_data;
+
 	/* always have couple channels */
 	tdev = devm_kzalloc(&pdev->dev, sizeof(*tdev), GFP_KERNEL);
 	if (!tdev)
 		return -ENOMEM;
 
 	tdev->dev = &pdev->dev;
-	iores = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
-	if (!iores)
-		return -EINVAL;
 
-	if (resource_size(iores) != chan_num)
-		tdev->irq = iores->start;
-	else
-		irq = iores->start;
+	for (i = 0; i < chan_num; i++) {
+		if (platform_get_irq(pdev, i) > 0)
+			irq_num++;
+	}
 
 	iores = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	if (!iores)
@@ -542,25 +551,26 @@ static int __devinit mmp_tdma_probe(struct platform_device *pdev)
 	if (!tdev->base)
 		return -EADDRNOTAVAIL;
 
-	if (tdev->irq) {
-		ret = devm_request_irq(&pdev->dev, tdev->irq,
+	INIT_LIST_HEAD(&tdev->device.channels);
+
+	if (irq_num != chan_num) {
+		irq = platform_get_irq(pdev, 0);
+		ret = devm_request_irq(&pdev->dev, irq,
 			mmp_tdma_int_handler, IRQF_DISABLED, "tdma", tdev);
 		if (ret)
 			return ret;
 	}
 
-	dma_cap_set(DMA_SLAVE, tdev->device.cap_mask);
-	dma_cap_set(DMA_CYCLIC, tdev->device.cap_mask);
-
-	INIT_LIST_HEAD(&tdev->device.channels);
-
 	/* initialize channel parameters */
 	for (i = 0; i < chan_num; i++) {
+		irq = (irq_num != chan_num) ? 0 : platform_get_irq(pdev, i);
 		ret = mmp_tdma_chan_init(tdev, i, irq, type);
 		if (ret)
 			return ret;
 	}
 
+	dma_cap_set(DMA_SLAVE, tdev->device.cap_mask);
+	dma_cap_set(DMA_CYCLIC, tdev->device.cap_mask);
 	tdev->device.dev = &pdev->dev;
 	tdev->device.device_alloc_chan_resources =
 					mmp_tdma_alloc_chan_resources;
@@ -595,6 +605,7 @@ static struct platform_driver mmp_tdma_driver = {
 	.driver		= {
 		.name	= "mmp-tdma",
 		.owner  = THIS_MODULE,
+		.of_match_table = mmp_tdma_dt_ids,
 	},
 	.id_table	= mmp_tdma_id_table,
 	.probe		= mmp_tdma_probe,
-- 
1.7.1

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v3 3/4] dmatest: add dmaengine_slave_config for DMA_MEMCPY
  2012-08-14  4:11 [PATCH v3 0/4] dmaengine: add support mmp-pdma Zhangfei Gao
  2012-08-14  4:11 ` [PATCH v3 1/4] dmaengine: mmp-pdma support Zhangfei Gao
  2012-08-14  4:11 ` [PATCH v3 2/4] dmaengine: mmp_tdma: add dt support Zhangfei Gao
@ 2012-08-14  4:11 ` Zhangfei Gao
  2012-08-14  8:20   ` Russell King - ARM Linux
  2012-08-14  4:11 ` [PATCH v3 4/4] mtd: pxa3xx-nand: replace pxa_request_dma with dmaengine Zhangfei Gao
  3 siblings, 1 reply; 14+ messages in thread
From: Zhangfei Gao @ 2012-08-14  4:11 UTC (permalink / raw)
  To: linux-arm-kernel

Set the direction to DMA_MEM_TO_MEM.
The DMA driver may require such configuration info.

Signed-off-by: Zhangfei Gao <zhangfei.gao@marvell.com>
---
 drivers/dma/dmatest.c |    4 ++++
 1 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/drivers/dma/dmatest.c b/drivers/dma/dmatest.c
index 24225f0..73bab68 100644
--- a/drivers/dma/dmatest.c
+++ b/drivers/dma/dmatest.c
@@ -261,6 +261,9 @@ static int dmatest_func(void *data)
 	int			src_cnt;
 	int			dst_cnt;
 	int			i;
+	struct dma_slave_config conf = {
+		.direction = DMA_MEM_TO_MEM,
+	};
 
 	thread_name = current->comm;
 	set_freezable();
@@ -361,6 +364,7 @@ static int dmatest_func(void *data)
 						     DMA_BIDIRECTIONAL);
 		}
 
+		dmaengine_slave_config(chan, &conf);
 
 		if (thread->type == DMA_MEMCPY)
 			tx = dev->device_prep_dma_memcpy(chan,
-- 
1.7.1

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v3 4/4] mtd: pxa3xx-nand: replace pxa_request_dma with dmaengine
  2012-08-14  4:11 [PATCH v3 0/4] dmaengine: add support mmp-pdma Zhangfei Gao
                   ` (2 preceding siblings ...)
  2012-08-14  4:11 ` [PATCH v3 3/4] dmatest: add dmaengine_slave_config for DMA_MEMCPY Zhangfei Gao
@ 2012-08-14  4:11 ` Zhangfei Gao
  3 siblings, 0 replies; 14+ messages in thread
From: Zhangfei Gao @ 2012-08-14  4:11 UTC (permalink / raw)
  To: linux-arm-kernel

Signed-off-by: Zhangfei Gao <zhangfei.gao@marvell.com>
---
 drivers/mtd/nand/pxa3xx_nand.c |  113 ++++++++++++++++++++++------------------
 1 files changed, 63 insertions(+), 50 deletions(-)

diff --git a/drivers/mtd/nand/pxa3xx_nand.c b/drivers/mtd/nand/pxa3xx_nand.c
index c5ea313..2768826 100644
--- a/drivers/mtd/nand/pxa3xx_nand.c
+++ b/drivers/mtd/nand/pxa3xx_nand.c
@@ -22,8 +22,8 @@
 #include <linux/io.h>
 #include <linux/irq.h>
 #include <linux/slab.h>
+#include <linux/dmaengine.h>
 
-#include <mach/dma.h>
 #include <plat/pxa3xx_nand.h>
 
 #define	CHIP_DELAY_TIMEOUT	(2 * HZ/10)
@@ -162,8 +162,7 @@ struct pxa3xx_nand_info {
 	unsigned char		*data_buff;
 	unsigned char		*oob_buff;
 	dma_addr_t 		data_buff_phys;
-	int 			data_dma_ch;
-	struct pxa_dma_desc	*data_desc;
+	struct dma_chan		*data_dma_ch;
 	dma_addr_t 		data_desc_addr;
 
 	struct pxa3xx_nand_host *host[NUM_CHIP_SELECT];
@@ -332,14 +331,6 @@ static void pxa3xx_nand_stop(struct pxa3xx_nand_info *info)
 	nand_writel(info, NDSR, NDSR_MASK);
 }
 
-static void enable_int(struct pxa3xx_nand_info *info, uint32_t int_mask)
-{
-	uint32_t ndcr;
-
-	ndcr = nand_readl(info, NDCR);
-	nand_writel(info, NDCR, ndcr & ~int_mask);
-}
-
 static void disable_int(struct pxa3xx_nand_info *info, uint32_t int_mask)
 {
 	uint32_t ndcr;
@@ -372,24 +363,38 @@ static void handle_data_pio(struct pxa3xx_nand_info *info)
 	}
 }
 
+static void dma_complete_func(void *data)
+{
+	struct pxa3xx_nand_info *info = data;
+
+	info->state = STATE_DMA_DONE;
+}
+
 static void start_data_dma(struct pxa3xx_nand_info *info)
 {
-	struct pxa_dma_desc *desc = info->data_desc;
+	struct dma_device *dma_dev;
+	struct dma_async_tx_descriptor *tx = NULL;
+	dma_addr_t dma_src_addr, dma_dst_addr;
+	dma_cookie_t cookie;
 	int dma_len = ALIGN(info->data_size + info->oob_size, 32);
+	struct dma_slave_config conf = { 0 };
 
-	desc->ddadr = DDADR_STOP;
-	desc->dcmd = DCMD_ENDIRQEN | DCMD_WIDTH4 | DCMD_BURST32 | dma_len;
+	dma_dev = info->data_dma_ch->device;
 
 	switch (info->state) {
 	case STATE_DMA_WRITING:
-		desc->dsadr = info->data_buff_phys;
-		desc->dtadr = info->mmio_phys + NDDB;
-		desc->dcmd |= DCMD_INCSRCADDR | DCMD_FLOWTRG;
+		dma_src_addr = info->data_buff_phys;
+		dma_dst_addr = info->mmio_phys + NDDB;
+		conf.direction = DMA_MEM_TO_DEV;
+		conf.dst_maxburst = 32;
+		conf.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
 		break;
 	case STATE_DMA_READING:
-		desc->dtadr = info->data_buff_phys;
-		desc->dsadr = info->mmio_phys + NDDB;
-		desc->dcmd |= DCMD_INCTRGADDR | DCMD_FLOWSRC;
+		dma_src_addr = info->mmio_phys + NDDB;
+		dma_dst_addr = info->data_buff_phys;
+		conf.direction = DMA_DEV_TO_MEM;
+		conf.src_maxburst = 32;
+		conf.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
 		break;
 	default:
 		dev_err(&info->pdev->dev, "%s: invalid state %d\n", __func__,
@@ -397,26 +402,27 @@ static void start_data_dma(struct pxa3xx_nand_info *info)
 		BUG();
 	}
 
-	DRCMR(info->drcmr_dat) = DRCMR_MAPVLD | info->data_dma_ch;
-	DDADR(info->data_dma_ch) = info->data_desc_addr;
-	DCSR(info->data_dma_ch) |= DCSR_RUN;
-}
-
-static void pxa3xx_nand_data_dma_irq(int channel, void *data)
-{
-	struct pxa3xx_nand_info *info = data;
-	uint32_t dcsr;
+	conf.slave_id = info->drcmr_dat;
+	dmaengine_slave_config(info->data_dma_ch, &conf);
+	tx = dma_dev->device_prep_dma_memcpy(info->data_dma_ch, dma_dst_addr,
+					     dma_src_addr, dma_len, 0);
+	if (!tx) {
+		dev_err(&info->pdev->dev, "Failed to prepare DMA memcpy\n");
+		return;
+	}
 
-	dcsr = DCSR(channel);
-	DCSR(channel) = dcsr;
+	tx->callback = dma_complete_func;
+	tx->callback_param = info;
 
-	if (dcsr & DCSR_BUSERR) {
-		info->retcode = ERR_DMABUSERR;
+	cookie = tx->tx_submit(tx);
+	if (dma_submit_error(cookie)) {
+		dev_err(&info->pdev->dev, "Failed to do DMA tx_submit\n");
+		return;
 	}
 
-	info->state = STATE_DMA_DONE;
-	enable_int(info, NDCR_INT_MASK);
-	nand_writel(info, NDSR, NDSR_WRDREQ | NDSR_RDDREQ);
+	dma_async_issue_pending(info->data_dma_ch);
 }
 
 static irqreturn_t pxa3xx_nand_irq(int irq, void *devid)
@@ -434,6 +440,7 @@ static irqreturn_t pxa3xx_nand_irq(int irq, void *devid)
 	}
 
 	status = nand_readl(info, NDSR);
+	nand_writel(info, NDSR, status);
 
 	if (status & NDSR_DBERR)
 		info->retcode = ERR_DBERR;
@@ -442,7 +449,6 @@ static irqreturn_t pxa3xx_nand_irq(int irq, void *devid)
 	if (status & (NDSR_RDDREQ | NDSR_WRDREQ)) {
 		/* whether use dma to transfer data */
 		if (info->use_dma) {
-			disable_int(info, NDCR_INT_MASK);
 			info->state = (status & NDSR_RDDREQ) ?
 				      STATE_DMA_READING : STATE_DMA_WRITING;
 			start_data_dma(info);
@@ -757,6 +763,9 @@ static void pxa3xx_nand_read_buf(struct mtd_info *mtd, uint8_t *buf, int len)
 	struct pxa3xx_nand_info *info = host->info_data;
 	int real_len = min_t(size_t, len, info->buf_count - info->buf_start);
 
+	if (len > mtd->oobsize)
+		info->use_dma = use_dma;
+
 	memcpy(buf, info->data_buff + info->buf_start, real_len);
 	info->buf_start += real_len;
 }
@@ -768,6 +777,9 @@ static void pxa3xx_nand_write_buf(struct mtd_info *mtd,
 	struct pxa3xx_nand_info *info = host->info_data;
 	int real_len = min_t(size_t, len, info->buf_count - info->buf_start);
 
+	if (len > mtd->oobsize)
+		info->use_dma = use_dma;
+
 	memcpy(info->data_buff + info->buf_start, buf, real_len);
 	info->buf_start += real_len;
 }
@@ -886,7 +898,7 @@ static int pxa3xx_nand_detect_config(struct pxa3xx_nand_info *info)
 static int pxa3xx_nand_init_buff(struct pxa3xx_nand_info *info)
 {
 	struct platform_device *pdev = info->pdev;
-	int data_desc_offset = MAX_BUFF_SIZE - sizeof(struct pxa_dma_desc);
+	dma_cap_mask_t mask;
 
 	if (use_dma == 0) {
 		info->data_buff = kmalloc(MAX_BUFF_SIZE, GFP_KERNEL);
@@ -902,19 +914,20 @@ static int pxa3xx_nand_init_buff(struct pxa3xx_nand_info *info)
 		return -ENOMEM;
 	}
 
-	info->data_desc = (void *)info->data_buff + data_desc_offset;
-	info->data_desc_addr = info->data_buff_phys + data_desc_offset;
 
-	info->data_dma_ch = pxa_request_dma("nand-data", DMA_PRIO_LOW,
-				pxa3xx_nand_data_dma_irq, info);
-	if (info->data_dma_ch < 0) {
-		dev_err(&pdev->dev, "failed to request data dma\n");
-		dma_free_coherent(&pdev->dev, MAX_BUFF_SIZE,
-				info->data_buff, info->data_buff_phys);
-		return info->data_dma_ch;
+	dma_cap_zero(mask);
+	dma_cap_set(DMA_MEMCPY, mask);
+	info->data_dma_ch = dma_request_channel(mask, NULL, NULL);
+	if (!info->data_dma_ch) {
+		dev_err(&pdev->dev, "Failed to request DMA channel\n");
+		goto dma_request_fail;
 	}
-
 	return 0;
+
+dma_request_fail:
+	dma_free_coherent(&pdev->dev, MAX_BUFF_SIZE,
+			info->data_buff, info->data_buff_phys);
+	return -EAGAIN;
 }
 
 static int pxa3xx_nand_sensing(struct pxa3xx_nand_info *info)
@@ -1149,7 +1162,7 @@ static int alloc_nand_resource(struct platform_device *pdev)
 fail_free_buf:
 	free_irq(irq, info);
 	if (use_dma) {
-		pxa_free_dma(info->data_dma_ch);
+		dma_release_channel(info->data_dma_ch);
 		dma_free_coherent(&pdev->dev, MAX_BUFF_SIZE,
 			info->data_buff, info->data_buff_phys);
 	} else
@@ -1183,7 +1196,7 @@ static int pxa3xx_nand_remove(struct platform_device *pdev)
 	if (irq >= 0)
 		free_irq(irq, info);
 	if (use_dma) {
-		pxa_free_dma(info->data_dma_ch);
+		dma_release_channel(info->data_dma_ch);
 		dma_free_writecombine(&pdev->dev, MAX_BUFF_SIZE,
 				info->data_buff, info->data_buff_phys);
 	} else
-- 
1.7.1

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v3 3/4] dmatest: add dmaengine_slave_config for DMA_MEMCPY
  2012-08-14  4:11 ` [PATCH v3 3/4] dmatest: add dmaengine_slave_config for DMA_MEMCPY Zhangfei Gao
@ 2012-08-14  8:20   ` Russell King - ARM Linux
  2012-08-15  7:29     ` zhangfei gao
  0 siblings, 1 reply; 14+ messages in thread
From: Russell King - ARM Linux @ 2012-08-14  8:20 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Aug 14, 2012 at 12:11:58PM +0800, Zhangfei Gao wrote:
> Set the direction to DMA_MEM_TO_MEM.
> The DMA driver may require such configuration info.

No, this is wrong.  By default, any channel being asked to do memcpy
should deal with the channel configuration itself and not require it
to be set - otherwise this breaks the async_tx API.

So consider that dma-test found a bug in your driver which needs fixing.
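
(For reference, one way to fix this is to give the channel mem-to-mem
defaults when no slave config has been supplied, mirroring the
DMA_MEM_TO_MEM branch of mmp_pdma_control() in patch 1 -- a sketch, not
the posted code; the helper name is illustrative:)

	/*
	 * Sketch only: call at the top of mmp_pdma_prep_memcpy() so that
	 * memcpy works without a prior dmaengine_slave_config().
	 */
	static void mmp_pdma_set_memcpy_defaults(struct mmp_pdma_chan *chan)
	{
		if (chan->dcmd)
			return;	/* already configured via DMA_SLAVE_CONFIG */

		chan->dir = DMA_MEM_TO_MEM;
		/* increment both addresses, 32-byte burst, as in
		 * mmp_pdma_control()'s DMA_MEM_TO_MEM branch */
		chan->dcmd = DCMD_INCSRCADDR | DCMD_INCTRGADDR | DCMD_BURST32;
	}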

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v3 1/4] dmaengine: mmp-pdma support
  2012-08-14  4:11 ` [PATCH v3 1/4] dmaengine: mmp-pdma support Zhangfei Gao
@ 2012-08-14  8:30   ` Russell King - ARM Linux
  2012-08-15  7:44     ` zhangfei gao
  2012-08-14  9:33   ` Arnd Bergmann
  1 sibling, 1 reply; 14+ messages in thread
From: Russell King - ARM Linux @ 2012-08-14  8:30 UTC (permalink / raw)
  To: linux-arm-kernel

Please consider looking at the virtual dma channel support, which I've
recently added.  This removes the burden of managing the descriptor
lists from the driver, deals with the workqueues for completing
descriptors, etc.

All you need to do is to build the descriptors for the DMA operation,
manage the scheduling of virtual channels onto physical channels,
calculate the position of DMA, and starting off and handling the DMA
hardware.

"virtual dma channel support" is probably something of a misnomer though;
it can also be used for situations where there's a 1:1 binding between
virtual and physical dma channels (iow, no virtual channels at all.)
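
(For readers new to virt-dma, the core pattern looks roughly like the
sketch below, based on drivers/dma/virt-dma.h; everything except the
vchan_* helpers is an illustrative name:)

	#include "virt-dma.h"

	struct foo_desc {
		struct virt_dma_desc	vd;	/* embedded; container_of()
						 * recovers foo_desc */
		/* hw descriptor chain, lengths, ... */
	};

	struct foo_chan {
		struct virt_dma_chan	vc;
		/* physical channel binding, register base, ... */
	};

	/* prep: wrap the descriptor; virt-dma handles submit and cookies */
	static struct dma_async_tx_descriptor *foo_prep(struct foo_chan *c,
			struct foo_desc *d, unsigned long flags)
	{
		return vchan_tx_prep(&c->vc, &d->vd, flags);
	}

	/* irq path: mark the in-flight descriptor complete; virt-dma's
	 * tasklet then runs the callback and frees the descriptor via
	 * the channel's desc_free hook */
	static void foo_chan_irq(struct foo_chan *c, struct foo_desc *d)
	{
		unsigned long flags;

		spin_lock_irqsave(&c->vc.lock, flags);
		vchan_cookie_complete(&d->vd);
		spin_unlock_irqrestore(&c->vc.lock, flags);
	}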

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v3 1/4] dmaengine: mmp-pdma support
  2012-08-14  4:11 ` [PATCH v3 1/4] dmaengine: mmp-pdma support Zhangfei Gao
  2012-08-14  8:30   ` Russell King - ARM Linux
@ 2012-08-14  9:33   ` Arnd Bergmann
  1 sibling, 0 replies; 14+ messages in thread
From: Arnd Bergmann @ 2012-08-14  9:33 UTC (permalink / raw)
  To: linux-arm-kernel

On Tuesday 14 August 2012, Zhangfei Gao wrote:
> 1. virtual channel vs. physical channel
> Virtual channels are managed by the dmaengine framework.
> Physical channels own the hardware resources, such as the irq.
> A physical channel is allocated dynamically in descending priority order
> and freed as soon as its irq is handled, so the highest-priority
> physical channel available is always the one allocated.
> 
> Issue pending list -> alloc highest-priority physical channel available -> dma done -> free physical channel
> 
> 2. lists: running list & pending list
> submit: desc list -> pending list
> issue_pending_list: if (IDLE) pending list -> running list; free pending list (RUN)
> irq: free running list (IDLE)
>      check pending list -> pending list -> running list; free pending list (RUN)
> 
> 3. irq:
> Each list generates one irq, which triggers the callback.
> One list may contain several desc chains; in that case, make sure only the last desc generates the irq.
> 
> 4. async
> submit adds a desc chain to the pending list and may be called multiple times.
> If multiple desc chains are submitted, only the last desc generates the irq -> callback.
> If IDLE, issue_pending_list starts the pending list, turning it into the running list.
> If RUN, the irq handler starts the pending list.
> 
> 5. test
> 5.1 pxa3xx_nand on pxa910
> 5.2 insmod dmatest.ko (threads_per_chan=y)
> By default drivers/dma/dmatest.c tests every channel, exercising memcpy with one thread per channel
> 
> Signed-off-by: Zhangfei Gao <zhangfei.gao@marvell.com>

Acked-by: Arnd Bergmann <arnd@arndb.de>

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v3 2/4] dmaengine: mmp_tdma: add dt support
  2012-08-14  4:11 ` [PATCH v3 2/4] dmaengine: mmp_tdma: add dt support Zhangfei Gao
@ 2012-08-14  9:34   ` Arnd Bergmann
  2012-08-15  7:47     ` zhangfei gao
  0 siblings, 1 reply; 14+ messages in thread
From: Arnd Bergmann @ 2012-08-14  9:34 UTC (permalink / raw)
  To: linux-arm-kernel

On Tuesday 14 August 2012, Zhangfei Gao wrote:
> Signed-off-by: Zhangfei Gao <zhangfei.gao@marvell.com>
> ---
>  Documentation/devicetree/bindings/dma/mmp-dma.txt |   30 ++++++++++++
>  drivers/dma/mmp_tdma.c                            |   51 +++++++++++++--------
>  2 files changed, 61 insertions(+), 20 deletions(-)

Acked-by: Arnd Bergmann <arnd@arndb.de>

but please add a changelog text for this patch and every other one you do.

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v3 3/4] dmatest: add dmaengine_slave_config for DMA_MEMCPY
  2012-08-14  8:20   ` Russell King - ARM Linux
@ 2012-08-15  7:29     ` zhangfei gao
  0 siblings, 0 replies; 14+ messages in thread
From: zhangfei gao @ 2012-08-15  7:29 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Aug 14, 2012 at 4:20 PM, Russell King - ARM Linux
<linux@arm.linux.org.uk> wrote:
> On Tue, Aug 14, 2012 at 12:11:58PM +0800, Zhangfei Gao wrote:
>> Set the direction to DMA_MEM_TO_MEM.
>> The DMA driver may require such configuration info.
>
> No, this is wrong.  By default, any channel being asked to do memcpy
> should deal with the channel configuration itself and not require it
> to be set - otherwise this breaks the async_tx API.
>
> So consider that dma-test found a bug in your driver which needs fixing.

Thanks Russell for the info.

I really did not know that DMA_MEM_TO_MEM should be handled as the default.
From include/linux/dmaengine.h I could only find "DMA_ASYNC_TX capability
is automatically set as dma devices are registered".

Will change the driver to treat DMA_MEM_TO_MEM as the default setting.

Thanks

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v3 1/4] dmaengine: mmp-pdma support
  2012-08-14  8:30   ` Russell King - ARM Linux
@ 2012-08-15  7:44     ` zhangfei gao
  2012-08-15  8:04       ` Russell King - ARM Linux
  0 siblings, 1 reply; 14+ messages in thread
From: zhangfei gao @ 2012-08-15  7:44 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Aug 14, 2012 at 4:30 PM, Russell King - ARM Linux
<linux@arm.linux.org.uk> wrote:
> Please consider looking at the virtual dma channel support, which I've
> recently added.  This removes the burden of managing the descriptor
> lists from the driver, deals with the workqueues for completing
> descriptors, etc.
>
> All you need to do is to build the descriptors for the DMA operation,
> manage the scheduling of virtual channels onto physical channels,
> calculate the position of DMA, and starting off and handling the DMA
> hardware.
>
> "virtual dma channel support" is probably something of a misnomer though;
> it can also be used for situations where there's a 1:1 binding between
> virtual and physical dma channels (iow, no virtual channels at all.)
>

Thanks Russell.

I had noticed virt-dma.c before.
The reason for not using it is that I thought it would be simpler
and easier to debug to manage the virtual channels via two lists:
the pending and running lists.

Still, I have one question.
vchan_tx_submit sends a single vd node to the submitted list:
list_add_tail(&vd->node, &vc->desc_submitted);

While what we want is to send the whole desc chain, including many nodes,
where the first desc carries the callback.
chain 1: first -> desc -> desc -> desc
chain 2: first -> desc -> desc -> desc
Once a dma chain is done, the irq fires and the callback on the first
desc is called.

Does that mean vchan_tx_submit does not cover this situation?

Thanks

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v3 2/4] dmaengine: mmp_tdma: add dt support
  2012-08-14  9:34   ` Arnd Bergmann
@ 2012-08-15  7:47     ` zhangfei gao
  0 siblings, 0 replies; 14+ messages in thread
From: zhangfei gao @ 2012-08-15  7:47 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Aug 14, 2012 at 5:34 PM, Arnd Bergmann <arnd@arndb.de> wrote:
> On Tuesday 14 August 2012, Zhangfei Gao wrote:
>> Signed-off-by: Zhangfei Gao <zhangfei.gao@marvell.com>
>> ---
>>  Documentation/devicetree/bindings/dma/mmp-dma.txt |   30 ++++++++++++
>>  drivers/dma/mmp_tdma.c                            |   51 +++++++++++++--------
>>  2 files changed, 61 insertions(+), 20 deletions(-)
>
> Acked-by: Arnd Bergmann <arnd@arndb.de>
>
> but please add a changelog text for this patch and every other one you do.

Thanks Arnd, will take care of that later.

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v3 1/4] dmaengine: mmp-pdma support
  2012-08-15  7:44     ` zhangfei gao
@ 2012-08-15  8:04       ` Russell King - ARM Linux
  2012-08-15  9:09         ` zhangfei gao
  0 siblings, 1 reply; 14+ messages in thread
From: Russell King - ARM Linux @ 2012-08-15  8:04 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Aug 15, 2012 at 03:44:35PM +0800, zhangfei gao wrote:
> Still, I have one question.
> vchan_tx_submit sends a single vd node to the submitted list:
> list_add_tail(&vd->node, &vc->desc_submitted);
> 
> While what we want is to send the whole desc chain, including many nodes,
> where the first desc carries the callback.
> chain 1: first -> desc -> desc -> desc
> chain 2: first -> desc -> desc -> desc
> Once a dma chain is done, the irq fires and the callback on the first
> desc is called.

I don't understand why you want a DMA engine descriptor to represent
one single DMA hardware scatterlist entry.  This makes it much more
difficult when reporting status, especially the DMA residue (which
your existing driver does not support).

Why not hang several DMA hardware scatterlist entries off the same
DMA engine descriptor?
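
(i.e., something along the lines of this sketch, where one dmaengine
descriptor owns the whole hardware chain; the struct and field names
are illustrative:)

	struct mmp_pdma_desc_sw_alt {
		struct dma_async_tx_descriptor async_tx; /* one cookie */
		struct mmp_pdma_desc_hw *hw;	/* hw descriptors, linked
						 * together via ddadr */
		dma_addr_t hw_phys;		/* dma address of hw[0] */
		int nr_hw;			/* entries in the chain */
	};

With such a layout, tx_status can compute the residue for the one
in-flight descriptor, e.g. by comparing the channel's current address
registers against the endpoints recorded in the hw chain.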

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v3 1/4] dmaengine: mmp-pdma support
  2012-08-15  8:04       ` Russell King - ARM Linux
@ 2012-08-15  9:09         ` zhangfei gao
  0 siblings, 0 replies; 14+ messages in thread
From: zhangfei gao @ 2012-08-15  9:09 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Aug 15, 2012 at 4:04 PM, Russell King - ARM Linux
<linux@arm.linux.org.uk> wrote:
> On Wed, Aug 15, 2012 at 03:44:35PM +0800, zhangfei gao wrote:
>> Still, I have one question.
>> vchan_tx_submit sends a single vd node to the submitted list:
>> list_add_tail(&vd->node, &vc->desc_submitted);
>>
>> While what we want is to send the whole desc chain, including many nodes,
>> where the first desc carries the callback.
>> chain 1: first -> desc -> desc -> desc
>> chain 2: first -> desc -> desc -> desc
>> Once a dma chain is done, the irq fires and the callback on the first
>> desc is called.
>
> I don't understand why you want a DMA engine descriptor to represent
> one single DMA hardware scatterlist entry.  This makes it much more
> difficult when reporting status, especially the DMA residue (which
> your existing driver does not support).
>
> Why not hang several DMA hardware scatterlist entries off the same
> DMA engine descriptor?

The cookie can be used to compute tx_status.
For example, if one descriptor chain has 10 descriptors, with cookie++
for each, tx_submit will return the final cookie=9.
dma_async_is_tx_complete will then use this cookie to check the status.
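
(From the client side, that check is just the following -- a minimal
sketch, with chan and tx obtained from dma_request_channel() and a
device_prep_* call as usual:)

	dma_cookie_t cookie, last, used;
	enum dma_status status;

	cookie = tx->tx_submit(tx);
	dma_async_issue_pending(chan);
	status = dma_async_is_tx_complete(chan, cookie, &last, &used);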

If a single descriptor describes a big chunk of data, with only one cookie,
then the driver has to read the dma registers directly to get the status.

I think the residue may only be meaningful for cyclic dma, i.e. for audio.
For peripheral dma, where the irq comes after one descriptor chain, do we
have to report it?
If it is needed, how about getting that info in the irq handler?

Thanks

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread

Thread overview: 14+ messages
2012-08-14  4:11 [PATCH v3 0/4] dmaengine: add support mmp-pdma Zhangfei Gao
2012-08-14  4:11 ` [PATCH v3 1/4] dmaengine: mmp-pdma support Zhangfei Gao
2012-08-14  8:30   ` Russell King - ARM Linux
2012-08-15  7:44     ` zhangfei gao
2012-08-15  8:04       ` Russell King - ARM Linux
2012-08-15  9:09         ` zhangfei gao
2012-08-14  9:33   ` Arnd Bergmann
2012-08-14  4:11 ` [PATCH v3 2/4] dmaengine: mmp_tdma: add dt support Zhangfei Gao
2012-08-14  9:34   ` Arnd Bergmann
2012-08-15  7:47     ` zhangfei gao
2012-08-14  4:11 ` [PATCH v3 3/4] dmatest: add dmaengine_slave_config for DMA_MEMCPY Zhangfei Gao
2012-08-14  8:20   ` Russell King - ARM Linux
2012-08-15  7:29     ` zhangfei gao
2012-08-14  4:11 ` [PATCH v3 4/4] mtd: pxa3xx-nand: replace pxa_request_dma with dmaengine Zhangfei Gao
