* [PATCH 0/1] Add Qualcomm MSM ADM DMAEngine driver
@ 2011-12-22 13:08 Ravi Kumar V
From: Ravi Kumar V @ 2011-12-22 13:08 UTC
To: Vinod Koul, Dan Williams
Cc: David Brown, Daniel Walker, Bryan Huntsman, Trilok Soni,
Ravi Kumar V, linux-kernel, linux-arm-msm, linux-arm-kernel
Qualcomm's Application Datamover (ADM) supports simple, scatter-gather
and box mode DMA transfers.
This driver implements the scatter-gather and box mode transfers.
Our ADM hardware needs the following fields for a transfer:
-------------------
Scatter-gather mode
-------------------
32-bit command configuration
32-bit source address
32-bit destination address
16-bit length
---------
Box Mode
---------
32-bit command configuration
32-bit source address
32-bit destination address
16-bit source row length
16-bit destination row length
16-bit source row count
16-bit destination row count
16-bit source row offset
16-bit destination row offset
As of now there is no way to pass private data such as the "command"
configuration through the standard DMAEngine API.
We are using the "device_prep_dma_sg" API to do the SG and box mode
transfers, but since the current API doesn't provide a way to pass the
"command configuration" or the "source and destination row counts", we
are using the "length" and "offset" fields of the existing scatterlist
to carry them. Let us know if there is a better way to do this, or
whether a new API needs to be created.
Specifically, we reuse the upper bits of the same length field for the
command configuration, the 16 MSBs of the scatterlist offset field for
the row count, and the remaining 16 LSBs for the row offset, as the
sketch below illustrates.
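The sketch below (illustration only, not part of the patch) shows how
a client packs these fields using the helpers this series adds to
mach/dma.h; the buffer address, length, row count and offset values
are made-up examples:

	struct scatterlist sg;

	sg_init_table(&sg, 1);
	msm_dma_address(&sg) = buf_phys;	/* bus address from dma_map_single() */

	msm_dma_len(&sg, 512);			/* length[15:0]: transfer/row length */
	msm_dma_mode(&sg, MSM_BOX_MODE_CMD);	/* length[17:16]: box mode command */
	msm_dma_row_count(&sg, 4);		/* offset[31:16]: number of rows */
	msm_dma_offset(&sg, 512);		/* offset[15:0]: row offset */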
Ravi Kumar V (1):
msm: DMAEngine: Add DMAEngine driver based on old MSM DMA APIs
arch/arm/mach-msm/include/mach/dma.h | 53 +++
drivers/dma/Kconfig | 12 +
drivers/dma/Makefile | 1 +
drivers/dma/msm-dma.c | 719 ++++++++++++++++++++++++++++++++++
4 files changed, 785 insertions(+), 0 deletions(-)
create mode 100644 drivers/dma/msm-dma.c
--
Sent by a consultant of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.
* [PATCH 1/1] msm: DMAEngine: Add DMAEngine driver based on old MSM DMA APIs
@ 2011-12-22 13:08 ` Ravi Kumar V
From: Ravi Kumar V @ 2011-12-22 13:08 UTC
To: Vinod Koul, Dan Williams
Cc: David Brown, Daniel Walker, Bryan Huntsman, Trilok Soni,
Ravi Kumar V, linux-kernel, linux-arm-msm, linux-arm-kernel
Add a DMAEngine-based driver which uses the old MSM DMA APIs
internally. The benefit of this approach is that not all drivers
have to be converted to the DMAEngine APIs simultaneously; both
interfaces can stay enabled in the kernel, and client drivers
using the old MSM APIs directly can now be converted to
DMAEngine one by one.
Change-Id: I3647ed7b8c73b3078dfa8877a3560db3cb0a2373
Signed-off-by: Ravi Kumar V <kumarrav@codeaurora.org>
---
arch/arm/mach-msm/include/mach/dma.h | 53 +++
drivers/dma/Kconfig | 12 +
drivers/dma/Makefile | 1 +
drivers/dma/msm-dma.c | 719 ++++++++++++++++++++++++++++++++++
4 files changed, 785 insertions(+), 0 deletions(-)
create mode 100644 drivers/dma/msm-dma.c
diff --git a/arch/arm/mach-msm/include/mach/dma.h b/arch/arm/mach-msm/include/mach/dma.h
index 05583f5..c3850e5 100644
--- a/arch/arm/mach-msm/include/mach/dma.h
+++ b/arch/arm/mach-msm/include/mach/dma.h
@@ -18,6 +18,59 @@
#include <linux/list.h>
#include <mach/msm_iomap.h>
+#ifdef CONFIG_NEED_SG_DMA_LENGTH
+#define msm_dma_len(sg, len) (((sg)->dma_length) = \
+ (((sg)->dma_length) & 0xFFFF0000) | \
+ (len & 0xFFFF))
+#define msm_dma_mode(sg, mode) (((sg)->dma_length) = \
+ (((sg)->dma_length) & 0xFFFCFFFF) | \
+ (mode << 16))
+#define msm_dma_crci(sg, crci) (((sg)->dma_length) = \
+ (((sg)->dma_length) & 0x0003FFFF) | \
+ (crci << 15))
+#define msm_dma_endian(sg, en) (((sg)->dma_length) = \
+ (((sg)->dma_length) & 0x03FFFFFF) | \
+ (en << 26))
+#else
+#define msm_dma_len(sg, len) (((sg)->length) = \
+ (((sg)->length) & 0xFFFF0000) | \
+ (len & 0xFFFF))
+#define msm_dma_mode(sg, mode) (((sg)->length) = \
+ (((sg)->length) & 0xFFFCFFFF) | \
+ (mode << 16))
+#define msm_dma_crci(sg, crci) (((sg)->length) = \
+ (((sg)->length) & 0x0003FFFF) | \
+ (crci << 15))
+#define msm_dma_endian(sg, en) (((sg)->length) = \
+ (((sg)->length) & 0x03FFFFFF) | \
+ (en << 26))
+#endif
+#define msm_dma_offset(sg, val) (((sg)->offset) = \
+ (((sg)->offset) & 0xFFFF0000) | \
+ (val & 0xFFFF))
+#define msm_dma_row_count(sg, cnt) (((sg)->offset) = \
+ (((sg)->offset) & 0x0000FFFF) | \
+ (cnt << 16))
+#define msm_dma_address(sg) ((sg)->dma_address)
+
+#define MSM_DMA_LEN_MASK 0xFFFF
+#define MSM_DMA_CMD_MODE_MASK 0x00030000
+#define MSM_DMA_CMD_MODE_SHIFT 16
+#define MSM_DMA_CMD_MASK 0xFFFC0000
+#define MSM_DMA_CMD_SHIFT 15
+#define MSM_DMA_ENDIAN_MASK 0xFC000000
+#define MSM_DMA_ENDIAN_SHIFT 23
+#define MSM_OFFSET_MASK 0xFFFF
+#define MSM_ROW_CNT_MASK 0xFFFF0000
+#define MSM_ROW_CNT_SHIFT 16
+#define MSM_SG_MODE_CMD 0x00000002
+#define MSM_BOX_MODE_CMD 0x00000003
+
+#define FORCED_FLUSH 0
+#define GRACEFUL_FLUSH 1
+
+#define MSM_CMD_CRCI_SHIFT 3
+
struct msm_dmov_errdata {
uint32_t flush[6];
};
diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index ab8f469..3e69f42 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -245,6 +245,18 @@ config EP93XX_DMA
help
Enable support for the Cirrus Logic EP93xx M2P/M2M DMA controller.
+config MSM_DMA
+ tristate "Qualcomm MSM DMA support"
+ depends on ARCH_MSM
+ select DMA_ENGINE
+ help
+ Support the Qualcomm Application Datamover (ADM) DMA engine. This
+ engine is integrated into Qualcomm MSM chips.
+
+ Say Y here if you have such a chipset.
+
+ If unsure, say N.
+
config DMA_ENGINE
bool
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index 30cf3b1..56593f0 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -19,6 +19,7 @@ obj-$(CONFIG_COH901318) += coh901318.o coh901318_lli.o
obj-$(CONFIG_AMCC_PPC440SPE_ADMA) += ppc4xx/
obj-$(CONFIG_IMX_SDMA) += imx-sdma.o
obj-$(CONFIG_IMX_DMA) += imx-dma.o
+obj-$(CONFIG_MSM_DMA) += msm-dma.o
obj-$(CONFIG_MXS_DMA) += mxs-dma.o
obj-$(CONFIG_TIMB_DMA) += timb_dma.o
obj-$(CONFIG_STE_DMA40) += ste_dma40.o ste_dma40_ll.o
diff --git a/drivers/dma/msm-dma.c b/drivers/dma/msm-dma.c
new file mode 100644
index 0000000..f9fb2ab
--- /dev/null
+++ b/drivers/dma/msm-dma.c
@@ -0,0 +1,719 @@
+/* Copyright (c) 2011, Code Aurora Forum. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/clk.h>
+#include <linux/err.h>
+#include <linux/io.h>
+#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/spinlock.h>
+#include <linux/dmapool.h>
+#include <linux/dmaengine.h>
+#include <linux/dma-mapping.h>
+
+#include <mach/dma.h>
+
+#define SD3_CHAN_START 0
+#define MSM_DMOV_CRCI_COUNT 16
+#define MSM_DMA_MAX_CHANS_PER_DEVICE 16
+#define MAX_TRANSFER_LENGTH 65535
+#define NO_ERR_CHAN_STATUS 0x80000002
+
+#define SINGLE_MODE_CMD 0x00000000
+#define SG_MODE_CMD 0x00000001
+#define BOX_MODE_CMD 0x00000003
+#define BOX_OFFSET_MASK 0xffff0000
+#define BOX_SRC_LEN_SHIFT 16
+#define BOX_SRC_ROW_SHIFT 16
+#define BOX_SRC_OFFSET_SHIFT 16
+
+#define sg_dma_offset(sg) ((sg)->offset)
+#define DMOV_REG(name, base) (name + base)
+#define to_msm_chan(chan) container_of(chan, struct msm_dma_chan, channel)
+#define DMOV_ID_TO_ADM(id) ((id) / MSM_DMA_MAX_CHANS_PER_DEVICE)
+
+struct msm_dma_chan {
+ int chan_id;
+ dma_cookie_t completed_cookie;
+ dma_cookie_t error_cookie;
+ spinlock_t lock;
+ struct list_head active_list;
+ struct list_head pending_list;
+ struct dma_chan channel;
+ struct dma_pool *desc_pool;
+ struct device *dev;
+ int max_len;
+ int err;
+ struct tasklet_struct tasklet;
+};
+
+struct msm_dma_device {
+ void __iomem *base;
+ struct device *dev;
+ struct dma_device common;
+ struct msm_dma_chan *chan[MSM_DMA_MAX_CHANS_PER_DEVICE];
+};
+
+struct msm_dma_desc_hw {
+ unsigned int cmd_list_ptr;
+} __aligned(8);
+
+/* Single Item Mode */
+struct adm_cmd_t {
+ unsigned int cmd_cntrl;
+ unsigned int src;
+ unsigned int dst;
+ unsigned int len;
+};
+
+struct adm_box_cmd_t {
+ uint32_t cmd_cntrl;
+ uint32_t src_row_addr;
+ uint32_t dst_row_addr;
+ uint32_t src_dst_len;
+ uint32_t num_rows;
+ uint32_t row_offset;
+};
+
+struct msm_dma_desc_sw {
+ struct msm_dma_desc_hw hw;
+ struct adm_cmd_t *vaddr_cmd;
+ struct adm_box_cmd_t *vaddr_box_cmd;
+ size_t coherent_size;
+ dma_addr_t paddr_cmd_list;
+ struct list_head node;
+ struct msm_dmov_cmd dmov_cmd;
+ struct dma_async_tx_descriptor async_tx;
+} __aligned(8);
+
+static int msm_dma_alloc_chan_resources(struct dma_chan *dchan)
+{
+ struct msm_dma_chan *chan = to_msm_chan(dchan);
+
+ /* Has this channel already been allocated? */
+ if (chan->desc_pool)
+ return 1;
+
+ /*
+ * The descriptor must be aligned to 8 bytes
+ * to meet the ADM specification requirement.
+ */
+ chan->desc_pool = dma_pool_create("msm_dma_desc_pool",
+ chan->dev,
+ sizeof(struct msm_dma_desc_sw),
+ __alignof__(struct msm_dma_desc_sw), 0);
+ if (!chan->desc_pool) {
+ dev_err(chan->dev, "unable to allocate channel %d "
+ "descriptor pool\n", chan->chan_id);
+ return -ENOMEM;
+ }
+
+ chan->completed_cookie = 1;
+ dchan->cookie = 1;
+
+ /* there is at least one descriptor free to be allocated */
+ return 1;
+}
+
+static void msm_dma_free_desc_list(struct msm_dma_chan *chan,
+ struct list_head *list)
+{
+ struct msm_dma_desc_sw *desc, *_desc;
+
+ list_for_each_entry_safe(desc, _desc, list, node) {
+ list_del(&desc->node);
+ dma_pool_free(chan->desc_pool, desc, desc->async_tx.phys);
+ }
+}
+
+static void msm_dma_free_chan_resources(struct dma_chan *dchan)
+{
+ struct msm_dma_chan *chan = to_msm_chan(dchan);
+ unsigned long flags;
+
+ dev_dbg(chan->dev, "Free all channel resources.\n");
+ spin_lock_irqsave(&chan->lock, flags);
+ msm_dma_free_desc_list(chan, &chan->active_list);
+ msm_dma_free_desc_list(chan, &chan->pending_list);
+ spin_unlock_irqrestore(&chan->lock, flags);
+
+ dma_pool_destroy(chan->desc_pool);
+ chan->desc_pool = NULL;
+}
+
+static enum dma_status msm_dma_desc_status(struct msm_dma_chan *chan,
+ struct msm_dma_desc_sw *desc)
+{
+ return dma_async_is_complete(desc->async_tx.cookie,
+ chan->completed_cookie,
+ chan->channel.cookie);
+}
+
+static void msm_chan_desc_cleanup(struct msm_dma_chan *chan)
+{
+ struct msm_dma_desc_sw *desc, *_desc;
+ unsigned long flags;
+
+ dev_dbg(chan->dev, "Cleaning completed descriptor of channel %d\n",
+ chan->chan_id);
+ spin_lock_irqsave(&chan->lock, flags);
+
+ list_for_each_entry_safe(desc, _desc, &chan->active_list, node) {
+ dma_async_tx_callback callback;
+ void *callback_param;
+
+ if (msm_dma_desc_status(chan, desc) == DMA_IN_PROGRESS)
+ break;
+
+ /* Remove from the list of running transactions */
+ list_del(&desc->node);
+
+ /* Run the link descriptor callback function */
+ callback = desc->async_tx.callback;
+ callback_param = desc->async_tx.callback_param;
+ if (callback) {
+ spin_unlock_irqrestore(&chan->lock, flags);
+ callback(callback_param);
+ spin_lock_irqsave(&chan->lock, flags);
+ }
+
+ /* Run any dependencies, then free the descriptor */
+ dma_run_dependencies(&desc->async_tx);
+ spin_unlock_irqrestore(&chan->lock, flags);
+
+ if (desc->vaddr_cmd) {
+ dma_free_coherent(chan->dev, desc->coherent_size,
+ (void *)desc->vaddr_cmd,
+ desc->paddr_cmd_list);
+ } else {
+ dma_free_coherent(chan->dev, desc->coherent_size,
+ (void *)desc->vaddr_box_cmd,
+ desc->paddr_cmd_list);
+ }
+ spin_lock_irqsave(&chan->lock, flags);
+ dma_pool_free(chan->desc_pool, desc, desc->async_tx.phys);
+ }
+
+ spin_unlock_irqrestore(&chan->lock, flags);
+}
+
+static void dma_do_tasklet(unsigned long data)
+{
+ struct msm_dma_chan *chan = (struct msm_dma_chan *)data;
+ msm_chan_desc_cleanup(chan);
+}
+
+static void
+msm_dma_complete_func(struct msm_dmov_cmd *cmd,
+ unsigned int result,
+ struct msm_dmov_errdata *err)
+{
+ unsigned long flags;
+ struct msm_dma_desc_sw *desch = container_of(cmd,
+ struct msm_dma_desc_sw, dmov_cmd);
+ struct msm_dma_chan *chan = to_msm_chan(desch->async_tx.chan);
+
+ spin_lock_irqsave(&chan->lock, flags);
+
+ if ((result != NO_ERR_CHAN_STATUS) && err)
+ chan->error_cookie = desch->async_tx.cookie;
+ chan->completed_cookie = desch->async_tx.cookie;
+
+ tasklet_schedule(&chan->tasklet);
+
+ spin_unlock_irqrestore(&chan->lock, flags);
+}
+
+/*
+ * Passes transfer descriptors to DMA hardware.
+ */
+static void msm_dma_issue_pending(struct dma_chan *dchan)
+{
+ struct msm_dma_chan *chan = to_msm_chan(dchan);
+ struct msm_dma_desc_sw *desch;
+ unsigned long flags;
+
+ if (chan->err)
+ return;
+
+ spin_lock_irqsave(&chan->lock, flags);
+
+ if (list_empty(&chan->pending_list))
+ goto out_unlock;
+
+ desch = list_first_entry(&chan->pending_list, struct msm_dma_desc_sw,
+ node);
+ list_del(&desch->node);
+ desch->dmov_cmd.complete_func = msm_dma_complete_func;
+ desch->dmov_cmd.execute_func = NULL;
+ desch->dmov_cmd.cmdptr = DMOV_CMD_ADDR(desch->async_tx.phys);
+ list_add_tail(&desch->node, &chan->active_list);
+ mb();
+ msm_dmov_enqueue_cmd(chan->chan_id, &desch->dmov_cmd);
+out_unlock:
+ spin_unlock_irqrestore(&chan->lock, flags);
+}
+
+/*
+ * Assigns a cookie to each transfer descriptor passed.
+ * Returns the assigned cookie on success,
+ * or an error value on failure.
+ */
+static dma_cookie_t msm_dma_tx_submit(struct dma_async_tx_descriptor *tx)
+{
+ struct msm_dma_chan *chan = to_msm_chan(tx->chan);
+ struct msm_dma_desc_sw *desc = container_of(tx,
+ struct msm_dma_desc_sw, async_tx);
+ unsigned long flags;
+ dma_cookie_t cookie = -EBUSY;
+
+ if (chan->err)
+ return cookie;
+
+ spin_lock_irqsave(&chan->lock, flags);
+
+ /*
+ * assign cookies to all of the software descriptors
+ * that make up this transaction
+ */
+ cookie = chan->channel.cookie;
+ cookie++;
+ if (cookie < 0)
+ cookie = DMA_MIN_COOKIE;
+
+ desc->async_tx.cookie = cookie;
+ chan->channel.cookie = cookie;
+
+ /* put this transaction onto the tail of the pending queue */
+ list_add_tail(&desc->node, &chan->pending_list);
+
+ spin_unlock_irqrestore(&chan->lock, flags);
+
+ return cookie;
+}
+
+/*
+ * Returns the DMA transfer status of a particular cookie
+ */
+static enum dma_status msm_tx_status(struct dma_chan *dchan,
+ dma_cookie_t cookie,
+ struct dma_tx_state *txstate)
+{
+ struct msm_dma_chan *chan = to_msm_chan(dchan);
+ dma_cookie_t last_used;
+ dma_cookie_t last_complete;
+ enum dma_status status;
+
+ last_used = dchan->cookie;
+ last_complete = chan->completed_cookie;
+
+ dma_set_tx_state(txstate, last_complete, last_used, 0);
+
+ status = dma_async_is_complete(cookie, last_complete, last_used);
+
+ if (status != DMA_IN_PROGRESS)
+ if (chan->error_cookie == cookie)
+ status = DMA_ERROR;
+
+ return status;
+}
+
+static struct msm_dma_desc_sw *msm_dma_alloc_descriptor(
+ struct msm_dma_chan *chan,
+ int cmd_cnt,
+ int mask)
+{
+ struct msm_dma_desc_sw *desc;
+ dma_addr_t pdesc_addr;
+ dma_addr_t paddr_cmd_list;
+ void *err = NULL;
+
+ desc = dma_pool_alloc(chan->desc_pool, GFP_ATOMIC, &pdesc_addr);
+ if (!desc) {
+ dev_dbg(chan->dev, "out of memory for desc\n");
+ return ERR_CAST(desc);
+ }
+
+ memset(desc, 0, sizeof(*desc));
+ desc->async_tx.phys = pdesc_addr;
+
+ if (mask == BOX_MODE_CMD) {
+ desc->coherent_size = sizeof(struct adm_box_cmd_t);
+ desc->vaddr_box_cmd = dma_alloc_coherent(chan->dev,
+ sizeof(struct adm_box_cmd_t),
+ &paddr_cmd_list, GFP_ATOMIC);
+ if (!desc->vaddr_box_cmd) {
+ dev_dbg(chan->dev, "out of memory for desc\n");
+ err = desc->vaddr_box_cmd;
+ goto fail;
+ }
+ } else {
+ desc->coherent_size = sizeof(struct adm_cmd_t)*cmd_cnt;
+
+ desc->vaddr_cmd = dma_alloc_coherent(chan->dev,
+ sizeof(struct adm_cmd_t)*cmd_cnt,
+ &paddr_cmd_list, GFP_ATOMIC);
+
+ if (!desc->vaddr_cmd) {
+ dev_dbg(chan->dev, "out of memory for desc\n");
+ err = desc->vaddr_cmd;
+ goto fail;
+ }
+ }
+
+ dma_async_tx_descriptor_init(&desc->async_tx, &chan->channel);
+ desc->async_tx.tx_submit = msm_dma_tx_submit;
+ desc->paddr_cmd_list = paddr_cmd_list;
+ desc->hw.cmd_list_ptr = (paddr_cmd_list >> 3) | CMD_PTR_LP;
+ return desc;
+fail:
+ dma_pool_free(chan->desc_pool, desc, desc->async_tx.phys);
+ return ERR_CAST(err);
+}
+
+/*
+ * Prepares the transfer descriptors from the scatterlists.
+ * Returns the address of a dma_async_tx_descriptor on success,
+ * or an ERR_PTR-encoded error value on failure.
+ */
+static struct dma_async_tx_descriptor *msm_dma_prep_sg(
+ struct dma_chan *dchan,
+ struct scatterlist *dst_sg, unsigned int dst_nents,
+ struct scatterlist *src_sg, unsigned int src_nents,
+ unsigned long flags)
+{
+ struct msm_dma_chan *chan;
+ struct msm_dma_desc_sw *new;
+ int cmd_cnt = 0;
+ int first = 0;
+ int i;
+ struct adm_cmd_t *cmdlist_vaddr;
+ struct adm_box_cmd_t *box_cmd_vaddr;
+ struct scatterlist *sg;
+
+ if (!dchan)
+ return ERR_PTR(-EINVAL);
+
+ if (dst_nents == 0 || src_nents == 0)
+ return ERR_PTR(-EINVAL);
+ if (!dst_sg || !src_sg)
+ return ERR_PTR(-EINVAL);
+
+ if (dst_nents != src_nents)
+ return ERR_PTR(-EINVAL);
+
+ chan = to_msm_chan(dchan);
+
+ cmd_cnt = src_nents;
+
+ if (((sg_dma_len(src_sg) & MSM_DMA_CMD_MODE_MASK) >>
+ MSM_DMA_CMD_MODE_SHIFT) & MSM_BOX_MODE_CMD) {
+
+ new = msm_dma_alloc_descriptor(chan, cmd_cnt, BOX_MODE_CMD);
+ if (!new) {
+ dev_err(chan->dev,
+ "No free memory for link descriptor\n");
+ return ERR_PTR(-ENOMEM);
+ }
+ box_cmd_vaddr = new->vaddr_box_cmd;
+
+ for_each_sg(src_sg, sg, src_nents, i) {
+
+ box_cmd_vaddr->src_row_addr =
+ sg_dma_address(sg);
+ box_cmd_vaddr->src_dst_len =
+ (sg_dma_len(sg) & MSM_DMA_LEN_MASK) <<
+ BOX_SRC_LEN_SHIFT;
+ box_cmd_vaddr->cmd_cntrl =
+ ((sg_dma_len(sg) & MSM_DMA_CMD_MASK) >>
+ MSM_DMA_CMD_SHIFT) | BOX_MODE_CMD;
+
+ box_cmd_vaddr->num_rows = ((sg_dma_offset(sg) &
+ MSM_ROW_CNT_MASK) >> MSM_ROW_CNT_SHIFT) <<
+ BOX_SRC_ROW_SHIFT;
+
+ box_cmd_vaddr->row_offset = (sg_dma_offset(sg) &
+ MSM_OFFSET_MASK) << BOX_SRC_OFFSET_SHIFT;
+
+ if (first == 0) {
+ if (cmd_cnt == 1)
+ box_cmd_vaddr->cmd_cntrl |=
+ CMD_LC | CMD_OCB | CMD_OCU;
+ else
+ box_cmd_vaddr->cmd_cntrl |=
+ CMD_OCB;
+ first = 1;
+ }
+ box_cmd_vaddr++;
+ }
+
+ if (cmd_cnt > 1) {
+ box_cmd_vaddr--;
+ box_cmd_vaddr->cmd_cntrl |= CMD_LC | CMD_OCU;
+ }
+
+ box_cmd_vaddr = new->vaddr_box_cmd;
+
+ for_each_sg(dst_sg, sg, src_nents, i) {
+
+ box_cmd_vaddr->dst_row_addr = sg_dma_address(sg);
+ box_cmd_vaddr->src_dst_len |=
+ (sg_dma_len(sg) & MSM_DMA_LEN_MASK);
+ box_cmd_vaddr->num_rows |=
+ ((sg_dma_offset(sg) &
+ MSM_ROW_CNT_MASK) >> MSM_ROW_CNT_SHIFT);
+ box_cmd_vaddr->row_offset |=
+ (sg_dma_offset(sg) &
+ MSM_OFFSET_MASK);
+ box_cmd_vaddr++;
+ }
+#ifdef DEBUG
+ i = 0;
+ box_cmd_vaddr = new->vaddr_box_cmd;
+ do {
+ dev_dbg(chan->dev, "cmd %d src 0x%x dst 0x%x len 0x%x "
+ "cntrl 0x%x row_offset 0x%x num_rows 0x%x\n",
+ i, box_cmd_vaddr->src_row_addr,
+ box_cmd_vaddr->dst_row_addr,
+ box_cmd_vaddr->src_dst_len,
+ box_cmd_vaddr->cmd_cntrl,
+ box_cmd_vaddr->row_offset,
+ box_cmd_vaddr->num_rows);
+ box_cmd_vaddr++;
+ i++;
+ } while (!((box_cmd_vaddr-1)->cmd_cntrl & CMD_LC));
+#endif
+ } else {
+
+ new = msm_dma_alloc_descriptor(chan, cmd_cnt, DMA_SG);
+
+ if (!new) {
+ dev_err(chan->dev,
+ "No free memory for link descriptor\n");
+ return ERR_PTR(-ENOMEM);
+ }
+
+ cmdlist_vaddr = new->vaddr_cmd;
+
+ for_each_sg(src_sg, sg, src_nents, i) {
+
+ cmdlist_vaddr->src = sg_dma_address(sg);
+ cmdlist_vaddr->len = sg_dma_len(sg) & MSM_DMA_LEN_MASK;
+ cmdlist_vaddr->cmd_cntrl =
+ ((sg_dma_len(sg) & MSM_DMA_CMD_MASK) >>
+ MSM_DMA_CMD_SHIFT);
+ if (first == 0) {
+ if (cmd_cnt == 1)
+ cmdlist_vaddr->cmd_cntrl |=
+ CMD_LC | CMD_OCB | CMD_OCU;
+ else
+ cmdlist_vaddr->cmd_cntrl |=
+ CMD_OCB;
+ first = 1;
+ }
+ cmdlist_vaddr++;
+ }
+ if (cmd_cnt > 1) {
+ cmdlist_vaddr--;
+ cmdlist_vaddr->cmd_cntrl |= CMD_LC | CMD_OCU;
+ }
+
+ cmdlist_vaddr = new->vaddr_cmd;
+
+ for_each_sg(dst_sg, sg, src_nents, i) {
+
+ cmdlist_vaddr->dst = sg_dma_address(sg);
+ cmdlist_vaddr++;
+ }
+
+#ifdef DEBUG
+ cmdlist_vaddr = new->vaddr_cmd;
+ i = 0;
+ do {
+ dev_dbg(chan->dev, "cmd %d src 0x%x dst 0x%x len 0x%x "
+ "cntrl 0x%x\n",
+ i, cmdlist_vaddr->src, cmdlist_vaddr->dst,
+ cmdlist_vaddr->len, cmdlist_vaddr->cmd_cntrl);
+ cmdlist_vaddr++;
+ } while (!((cmdlist_vaddr-1)->cmd_cntrl & CMD_LC));
+#endif
+ }
+
+ new->async_tx.flags = flags;
+ new->async_tx.cookie = -EBUSY;
+
+ return &new->async_tx;
+}
+
+/*
+ * Controls the hardware channel, e.g. stopping or flushing transfers.
+ */
+static int msm_dma_chan_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
+ unsigned long arg)
+{
+ int cmd_type = (int) arg;
+
+ if (cmd == DMA_TERMINATE_ALL) {
+ switch (cmd_type) {
+ case GRACEFUL_FLUSH:
+ msm_dmov_stop_cmd(chan->chan_id, NULL, 1);
+ break;
+ case FORCED_FLUSH:
+ /*
+ * We treat the default as a forced flush,
+ * so we fall through.
+ */
+ default:
+ msm_dmov_stop_cmd(chan->chan_id, NULL, 0);
+ break;
+ }
+ }
+ return 0;
+}
+
+static void msm_dma_chan_remove(struct msm_dma_chan *chan)
+{
+ tasklet_kill(&chan->tasklet);
+ list_del(&chan->channel.device_node);
+ kfree(chan);
+}
+
+static __devinit int msm_dma_chan_probe(struct msm_dma_device *qdev,
+ int channel)
+{
+ struct msm_dma_chan *chan;
+
+ chan = kzalloc(sizeof(*chan), GFP_KERNEL);
+ if (!chan) {
+ dev_err(qdev->dev, "no free memory for DMA channels!\n");
+ return -ENOMEM;
+ }
+
+ spin_lock_init(&chan->lock);
+ INIT_LIST_HEAD(&chan->pending_list);
+ INIT_LIST_HEAD(&chan->active_list);
+
+ chan->chan_id = channel;
+ chan->completed_cookie = 0;
+ chan->channel.cookie = 0;
+ chan->max_len = MAX_TRANSFER_LENGTH;
+ chan->err = 0;
+ qdev->chan[channel] = chan;
+ chan->channel.device = &qdev->common;
+ chan->dev = qdev->dev;
+
+ tasklet_init(&chan->tasklet, dma_do_tasklet, (unsigned long)chan);
+
+ list_add_tail(&chan->channel.device_node, &qdev->common.channels);
+ qdev->common.chancnt++;
+
+ return 0;
+}
+
+static int __devinit msm_dma_probe(struct platform_device *pdev)
+{
+ struct msm_dma_device *qdev;
+ int i;
+ int ret = 0;
+
+ qdev = kzalloc(sizeof(*qdev), GFP_KERNEL);
+ if (!qdev) {
+ dev_err(&pdev->dev, "Not enough memory for device\n");
+ return -ENOMEM;
+ }
+
+ qdev->dev = &pdev->dev;
+ INIT_LIST_HEAD(&qdev->common.channels);
+ qdev->common.device_alloc_chan_resources =
+ msm_dma_alloc_chan_resources;
+ qdev->common.device_free_chan_resources =
+ msm_dma_free_chan_resources;
+ dma_cap_set(DMA_MEMCPY | DMA_SG, qdev->common.cap_mask);
+
+ qdev->common.device_prep_dma_sg = msm_dma_prep_sg;
+ qdev->common.device_issue_pending = msm_dma_issue_pending;
+ qdev->common.dev = &pdev->dev;
+ qdev->common.device_tx_status = msm_tx_status;
+ qdev->common.device_control = msm_dma_chan_control;
+
+ for (i = SD3_CHAN_START; i < MSM_DMA_MAX_CHANS_PER_DEVICE; i++) {
+ ret = msm_dma_chan_probe(qdev, i);
+ if (ret) {
+ dev_err(&pdev->dev, "channel %d probe failed\n", i);
+ goto chan_free;
+ }
+ }
+
+ dev_info(&pdev->dev, "registering dma device\n");
+
+ ret = dma_async_device_register(&qdev->common);
+
+ if (ret) {
+ dev_err(&pdev->dev, "Registering Dma device failed\n");
+ goto chan_free;
+ }
+
+ dev_set_drvdata(&pdev->dev, qdev);
+ return 0;
+chan_free:
+ for (i = SD3_CHAN_START; i < MSM_DMA_MAX_CHANS_PER_DEVICE; i++) {
+ if (qdev->chan[i])
+ msm_dma_chan_remove(qdev->chan[i]);
+ }
+ kfree(qdev);
+ return ret;
+}
+
+static int __devexit msm_dma_remove(struct platform_device *pdev)
+{
+ struct msm_dma_device *qdev = platform_get_drvdata(pdev);
+ int i;
+
+ dma_async_device_unregister(&qdev->common);
+
+ for (i = SD3_CHAN_START; i < MSM_DMA_MAX_CHANS_PER_DEVICE; i++) {
+ if (qdev->chan[i])
+ msm_dma_chan_remove(qdev->chan[i]);
+ }
+
+ dev_set_drvdata(&pdev->dev, NULL);
+ kfree(qdev);
+
+ return 0;
+}
+
+static struct platform_driver msm_dma_driver = {
+ .remove = __devexit_p(msm_dma_remove),
+ .driver = {
+ .name = "msm_dma",
+ .owner = THIS_MODULE,
+ },
+};
+
+static __init int msm_dma_init(void)
+{
+ return platform_driver_probe(&msm_dma_driver, msm_dma_probe);
+}
+subsys_initcall(msm_dma_init);
+
+static void __exit msm_dma_exit(void)
+{
+ platform_driver_unregister(&msm_dma_driver);
+}
+module_exit(msm_dma_exit);
+
+MODULE_DESCRIPTION("Qualcomm DMA driver");
+MODULE_LICENSE("GPL v2");
--
Sent by a consultant of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.
* Re: [PATCH 1/1] msm: DMAEngine: Add DMAEngine driver based on old MSM DMA APIs
@ 2011-12-22 19:12 ` David Brown
From: David Brown @ 2011-12-22 19:12 UTC
To: Ravi Kumar V
Cc: Vinod Koul, Dan Williams, Daniel Walker, Bryan Huntsman,
Trilok Soni, linux-kernel, linux-arm-msm, linux-arm-kernel
On Thu, Dec 22, 2011 at 06:38:32PM +0530, Ravi Kumar V wrote:
> --- a/arch/arm/mach-msm/include/mach/dma.h
> +++ b/arch/arm/mach-msm/include/mach/dma.h
> @@ -18,6 +18,59 @@
> #include <linux/list.h>
> #include <mach/msm_iomap.h>
>
> +#ifdef CONFIG_NEED_SG_DMA_LENGTH
> +#define msm_dma_len(sg, len) (((sg)->dma_length) = \
> + (((sg)->dma_length) & 0xFFFF0000) | \
> + (len & 0xFFFF))
> +#define msm_dma_mode(sg, mode) (((sg)->dma_length) = \
> + (((sg)->dma_length) & 0xFFFCFFFF) | \
> + (mode << 16))
> +#define msm_dma_crci(sg, crci) (((sg)->dma_length) = \
> + (((sg)->dma_length) & 0x0003FFFF) | \
> + (crci << 15))
> +#define msm_dma_endian(sg, en) (((sg)->dma_length) = \
> + (((sg)->dma_length) & 0x03FFFFFF) | \
> + (en << 26))
> +#else
> +#define msm_dma_len(sg, len) (((sg)->length) = \
> + (((sg)->length) & 0xFFFF0000) | \
> + (len & 0xFFFF))
> +#define msm_dma_mode(sg, mode) (((sg)->length) = \
> + (((sg)->length) & 0xFFFCFFFF) | \
> + (mode << 16))
> +#define msm_dma_crci(sg, crci) (((sg)->length) = \
> + (((sg)->length) & 0x0003FFFF) | \
> + (crci << 15))
> +#define msm_dma_endian(sg, en) (((sg)->length) = \
> + (((sg)->length) & 0x03FFFFFF) | \
> + (en << 26))
> +#endif
> +#define msm_dma_offset(sg, val) (((sg)->offset) = \
> + (((sg)->offset) & 0xFFFF0000) | \
> + (val & 0xFFFF))
> +#define msm_dma_row_count(sg, cnt) (((sg)->offset) = \
> + (((sg)->offset) & 0x0000FFFF) | \
> + (cnt << 16))
> +#define msm_dma_address(sg) ((sg)->dma_address)
It looks like you are encoding some extra data in the
dma_length/length and offset fields of the scatterlist. At a
minimum, this needs some explanation, but I'm not sure this is quite
the right approach.
The problem is that a driver using DMAEngine would need to know about
this extra information to really be able to use the MSM DMA.
Some descriptions of the bits, which are still defined in
arch/arm/mach-msm/include/mach/dma.h:
- Mode: one of SINGLE, SG, IND_SG, or BOX. Existing drivers seem
to only make use of SINGLE (by implication, being zero), and BOX
mode. This driver looks like it handles these two cases as well,
building the proper types of descriptors.
- CRCI: an MSM-specific term for the flow-control channels on the
DMA controller. These are small integers describing which CRCI
channel is connected to which device.
- Endian: Some bits directing the controller to perform various
types of endian conversions as it transfers the data.
The drivers currently using the dmov DMA interface seem to be
drivers/mmc/host/msm_sdcc.c and drivers/tty/serial/msm_serial_hs.c.
Both make use of BOX mode and require a CRCI channel.
The question is how we would extend DMAEngine to support these
concepts. Supporting BOX mode and being able to describe the
flow-control channel are probably the most significant.
A good starting point would be to figure out what other DMA hardware
supports and might need out of DMAEngine.
It's certainly a possibility to put the driver in as is, but the
clients would have to know the MSM-specific features to be able to
use them.
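For illustration, a box-mode client would end up doing roughly the
following (a sketch only, not taken from a real driver; the CRCI
number and row geometry are placeholders, and the msm_dma_* helpers
are the ones this patch adds to mach/dma.h):

	struct scatterlist src, dst;
	struct dma_async_tx_descriptor *txd;

	sg_init_table(&src, 1);
	sg_init_table(&dst, 1);
	msm_dma_address(&src) = src_phys;	/* from dma_map_single() */
	msm_dma_address(&dst) = dst_phys;

	/* MSM-specific knowledge packed into generic sg fields. */
	msm_dma_len(&src, 512);			/* row length */
	msm_dma_mode(&src, MSM_BOX_MODE_CMD);	/* box mode */
	msm_dma_crci(&src, 6);			/* placeholder CRCI channel */
	msm_dma_row_count(&src, 4);		/* number of rows */
	msm_dma_offset(&src, 512);		/* row offset */

	msm_dma_len(&dst, 512);
	msm_dma_row_count(&dst, 4);
	msm_dma_offset(&dst, 512);

	txd = chan->device->device_prep_dma_sg(chan, &dst, 1, &src, 1, 0);

Every msm_dma_*() call there is information a generic DMAEngine
client should not have to know about.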
David
--
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.