linux-sh.vger.kernel.org archive mirror
* [PATCH 0/5] sh: fix the shdma driver, improve dmatest, support sh7724
@ 2009-12-04 18:44 Guennadi Liakhovetski
  2009-12-04 18:44 ` [PATCH 1/5] sh: DMA driver has to specify its alignment requirements Guennadi Liakhovetski
                   ` (5 more replies)
  0 siblings, 6 replies; 19+ messages in thread
From: Guennadi Liakhovetski @ 2009-12-04 18:44 UTC (permalink / raw)
  To: linux-kernel; +Cc: Dan Williams, linux-sh

Hi

Not all of these patches are connected: only the ones affecting the same
file have to be applied in a specific order; the rest are independent and
can be applied separately.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH 1/5] sh: DMA driver has to specify its alignment requirements
  2009-12-04 18:44 [PATCH 0/5] sh: fix the shdma driver, improve dmatest, support sh7724 Guennadi Liakhovetski
@ 2009-12-04 18:44 ` Guennadi Liakhovetski
  2009-12-04 18:44 ` [PATCH 2/5] dmaengine: fix dmatest to verify minimum transfer length Guennadi Liakhovetski
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 19+ messages in thread
From: Guennadi Liakhovetski @ 2009-12-04 18:44 UTC (permalink / raw)
  To: linux-kernel; +Cc: Dan Williams, linux-sh

The SH DMA driver uses 32-byte transfers by default; in this mode, buffers
and sizes have to be 32-byte aligned. Specifying this requirement also fixes
Oopses with dmatest.
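
For illustration (a standalone sketch, not part of the patch): dmaengine's
copy_align is a log2 exponent, so the copy_align = 5 set below means source,
destination and length must all be multiples of 1 << 5 = 32 bytes:

	/* Standalone model of the alignment rule; the values are made up */
	#include <stdio.h>

	int main(void)
	{
		unsigned int copy_align = 5;			/* log2, as in the patch */
		unsigned long mask = (1UL << copy_align) - 1;	/* 0x1f */
		unsigned long src = 0x1000, dst = 0x2020, len = 100;

		if ((src | dst | len) & mask)
			printf("rejected: len = 100 is not a multiple of 32\n");
		return 0;
	}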

Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
---
 drivers/dma/shdma.c |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/drivers/dma/shdma.c b/drivers/dma/shdma.c
index b3b065c..f5fae12 100644
--- a/drivers/dma/shdma.c
+++ b/drivers/dma/shdma.c
@@ -677,6 +677,8 @@ static int __init sh_dmae_probe(struct platform_device *pdev)
 	shdev->common.device_is_tx_complete = sh_dmae_is_complete;
 	shdev->common.device_issue_pending = sh_dmae_memcpy_issue_pending;
 	shdev->common.dev = &pdev->dev;
+	/* Default transfer size of 32 bytes requires 32-byte alignment */
+	shdev->common.copy_align = 5;
 
 #if defined(CONFIG_CPU_SH4)
 	/* Non Mix IRQ mode SH7722/SH7730 etc... */
-- 
1.6.2.4


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 2/5] dmaengine: fix dmatest to verify minimum transfer length
  2009-12-04 18:44 [PATCH 0/5] sh: fix the shdma driver, improve dmatest, support sh7724 Guennadi Liakhovetski
  2009-12-04 18:44 ` [PATCH 1/5] sh: DMA driver has to specify its alignment requirements Guennadi Liakhovetski
@ 2009-12-04 18:44 ` Guennadi Liakhovetski
  2009-12-04 18:44 ` [PATCH 3/5] sh: fix DMA driver's descriptor chaining and cookie Guennadi Liakhovetski
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 19+ messages in thread
From: Guennadi Liakhovetski @ 2009-12-04 18:44 UTC (permalink / raw)
  To: linux-kernel; +Cc: Dan Williams, linux-sh

Transfers and the test buffer have to be at least (1 << align) bytes long.
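
As a concrete example of the rounding added below (a standalone sketch,
assuming align = 5, i.e. the 32-byte alignment of shdma): without the clamp,
any random length below 1 << align would round down to a zero-byte transfer.

	#include <stdio.h>

	int main(void)
	{
		unsigned int align = 5;	/* alignment as a log2 exponent */
		unsigned int len = 17;	/* as from dmatest_random() % test_buf_size + 1 */

		len = (len >> align) << align;	/* rounds 17 down to 0 */
		if (!len)
			len = 1 << align;	/* clamp to the 32-byte minimum */
		printf("len = %u\n", len);	/* prints "len = 32" */
		return 0;
	}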

Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
---
 drivers/dma/dmatest.c |   16 ++++++++++++----
 1 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/dma/dmatest.c b/drivers/dma/dmatest.c
index a32a4cf..8b90516 100644
--- a/drivers/dma/dmatest.c
+++ b/drivers/dma/dmatest.c
@@ -298,10 +298,6 @@ static int dmatest_func(void *data)
 
 		total_tests++;
 
-		len = dmatest_random() % test_buf_size + 1;
-		src_off = dmatest_random() % (test_buf_size - len + 1);
-		dst_off = dmatest_random() % (test_buf_size - len + 1);
-
 		/* honor alignment restrictions */
 		if (thread->type == DMA_MEMCPY)
 			align = dev->copy_align;
@@ -310,7 +306,19 @@ static int dmatest_func(void *data)
 		else if (thread->type == DMA_PQ)
 			align = dev->pq_align;
 
+		if (1 << align > test_buf_size) {
+			pr_err("%u-byte buffer too small for %d-byte alignment\n",
+			       test_buf_size, 1 << align);
+			break;
+		}
+
+		len = dmatest_random() % test_buf_size + 1;
 		len = (len >> align) << align;
+		if (!len)
+			len = 1 << align;
+		src_off = dmatest_random() % (test_buf_size - len + 1);
+		dst_off = dmatest_random() % (test_buf_size - len + 1);
+
 		src_off = (src_off >> align) << align;
 		dst_off = (dst_off >> align) << align;
 
-- 
1.6.2.4


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 3/5] sh: fix DMA driver's descriptor chaining and cookie
  2009-12-04 18:44 [PATCH 0/5] sh: fix the shdma driver, improve dmatest, support sh7724 Guennadi Liakhovetski
  2009-12-04 18:44 ` [PATCH 1/5] sh: DMA driver has to specify its alignment requirements Guennadi Liakhovetski
  2009-12-04 18:44 ` [PATCH 2/5] dmaengine: fix dmatest to verify minimum transfer length Guennadi Liakhovetski
@ 2009-12-04 18:44 ` Guennadi Liakhovetski
  2009-12-08  0:58   ` Dan Williams
  2009-12-04 18:45 ` [PATCH 4/5] sh: stylistic improvements for the DMA driver Guennadi Liakhovetski
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 19+ messages in thread
From: Guennadi Liakhovetski @ 2009-12-04 18:44 UTC (permalink / raw)
  To: linux-kernel; +Cc: Dan Williams, linux-sh

The SH DMA driver wrongly assigns negative cookies to transfer descriptors;
its chaining of partial descriptors is also broken. The latter problem is
usually invisible, because the maximum transfer size per chunk is 16M, but if
you artificially set this limit lower, the driver fails. Since cookies are
also used in chunk management, both problems are fixed in one patch.
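
To illustrate the cookie scheme used below (a standalone model, not driver
code): all chunks of a chained transfer sit on the shared ld_queue, so the
end of one chain is found by walking forward until the next chain head,
i.e. a descriptor with cookie > 0 or cookie == -EBUSY:

	#include <stdio.h>
	#include <errno.h>

	struct desc { int cookie; };

	int main(void)
	{
		struct desc q[] = {
			{ -EBUSY }, { -EINVAL }, { -ENOSPC },	/* chain A before tx_submit() */
			{ -EBUSY },				/* head of chain B stops the walk */
		};
		int i;

		for (i = 1; i < 4; i++) {
			if (q[i].cookie > 0 || q[i].cookie == -EBUSY)
				break;				/* reached the next chain */
			printf("chunk %d belongs to chain A\n", i);
		}
		return 0;
	}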

Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
---
 drivers/dma/shdma.c |  246 ++++++++++++++++++++++++++++++++-------------------
 drivers/dma/shdma.h |    1 -
 2 files changed, 153 insertions(+), 94 deletions(-)

diff --git a/drivers/dma/shdma.c b/drivers/dma/shdma.c
index f5fae12..aac8f78 100644
--- a/drivers/dma/shdma.c
+++ b/drivers/dma/shdma.c
@@ -30,9 +30,12 @@
 #include "shdma.h"
 
 /* DMA descriptor control */
-#define DESC_LAST	(-1)
-#define DESC_COMP	(1)
-#define DESC_NCOMP	(0)
+enum sh_dmae_desc_status {
+	DESC_PREPARED,
+	DESC_SUBMITTED,
+	DESC_COMPLETED,	/* completed, have to call callback */
+	DESC_WAITING,	/* callback called, waiting for ack / re-submit */
+};
 
 #define NR_DESCS_PER_CHANNEL 32
 /*
@@ -45,6 +48,8 @@
  */
 #define RS_DEFAULT  (RS_DUAL)
 
+static void sh_dmae_chan_ld_cleanup(struct sh_dmae_chan *sh_chan, bool all);
+
 #define SH_DMAC_CHAN_BASE(id) (dma_base_addr[id])
 static void sh_dmae_writel(struct sh_dmae_chan *sh_dc, u32 data, u32 reg)
 {
@@ -185,7 +190,7 @@ static int dmae_set_dmars(struct sh_dmae_chan *sh_chan, u16 val)
 
 static dma_cookie_t sh_dmae_tx_submit(struct dma_async_tx_descriptor *tx)
 {
-	struct sh_desc *desc = tx_to_sh_desc(tx);
+	struct sh_desc *desc = tx_to_sh_desc(tx), *chunk;
 	struct sh_dmae_chan *sh_chan = to_sh_chan(tx->chan);
 	dma_cookie_t cookie;
 
@@ -196,45 +201,40 @@ static dma_cookie_t sh_dmae_tx_submit(struct dma_async_tx_descriptor *tx)
 	if (cookie < 0)
 		cookie = 1;
 
-	/* If desc only in the case of 1 */
-	if (desc->async_tx.cookie != -EBUSY)
-		desc->async_tx.cookie = cookie;
-	sh_chan->common.cookie = desc->async_tx.cookie;
-
-	list_splice_init(&desc->tx_list, sh_chan->ld_queue.prev);
+	tx->cookie = cookie;
+	dev_dbg(sh_chan->dev, "submit %p #%d start %u\n",
+		tx, tx->cookie, desc->hw.sar);
+	sh_chan->common.cookie = tx->cookie;
+
+	/* Mark all chunks of this descriptor as submitted */
+	list_for_each_entry(chunk, desc->node.prev, node) {
+		/*
+		 * All chunks are on the global ld_queue, so, we have to find
+		 * the end of the chain ourselves
+		 */
+		if (chunk != desc && (chunk->async_tx.cookie > 0 ||
+				      chunk->async_tx.cookie == -EBUSY))
+			break;
+		chunk->mark = DESC_SUBMITTED;
+	}
 
 	spin_unlock_bh(&sh_chan->desc_lock);
 
 	return cookie;
 }
 
+/* Called with desc_lock held */
 static struct sh_desc *sh_dmae_get_desc(struct sh_dmae_chan *sh_chan)
 {
-	struct sh_desc *desc, *_desc, *ret = NULL;
-
-	spin_lock_bh(&sh_chan->desc_lock);
-	list_for_each_entry_safe(desc, _desc, &sh_chan->ld_free, node) {
-		if (async_tx_test_ack(&desc->async_tx)) {
-			list_del(&desc->node);
-			ret = desc;
-			break;
-		}
-	}
-	spin_unlock_bh(&sh_chan->desc_lock);
-
-	return ret;
-}
+	struct sh_desc *desc;
 
-static void sh_dmae_put_desc(struct sh_dmae_chan *sh_chan, struct sh_desc *desc)
-{
-	if (desc) {
-		spin_lock_bh(&sh_chan->desc_lock);
+	if (list_empty(&sh_chan->ld_free))
+		return NULL;
 
-		list_splice_init(&desc->tx_list, &sh_chan->ld_free);
-		list_add(&desc->node, &sh_chan->ld_free);
+	desc = list_entry(sh_chan->ld_free.next, struct sh_desc, node);
+	list_del(&desc->node);
 
-		spin_unlock_bh(&sh_chan->desc_lock);
-	}
+	return desc;
 }
 
 static int sh_dmae_alloc_chan_resources(struct dma_chan *chan)
@@ -254,10 +254,9 @@ static int sh_dmae_alloc_chan_resources(struct dma_chan *chan)
 					&sh_chan->common);
 		desc->async_tx.tx_submit = sh_dmae_tx_submit;
 		desc->async_tx.flags = DMA_CTRL_ACK;
-		INIT_LIST_HEAD(&desc->tx_list);
-		sh_dmae_put_desc(sh_chan, desc);
 
 		spin_lock_bh(&sh_chan->desc_lock);
+		list_add(&desc->node, &sh_chan->ld_free);
 		sh_chan->descs_allocated++;
 	}
 	spin_unlock_bh(&sh_chan->desc_lock);
@@ -274,7 +273,10 @@ static void sh_dmae_free_chan_resources(struct dma_chan *chan)
 	struct sh_desc *desc, *_desc;
 	LIST_HEAD(list);
 
-	BUG_ON(!list_empty(&sh_chan->ld_queue));
+	/* Prepared and not submitted descriptors can still be on the queue */
+	if (!list_empty(&sh_chan->ld_queue))
+		sh_dmae_chan_ld_cleanup(sh_chan, true);
+
 	spin_lock_bh(&sh_chan->desc_lock);
 
 	list_splice_init(&sh_chan->ld_free, &list);
@@ -293,6 +295,7 @@ static struct dma_async_tx_descriptor *sh_dmae_prep_memcpy(
 	struct sh_dmae_chan *sh_chan;
 	struct sh_desc *first = NULL, *prev = NULL, *new;
 	size_t copy_size;
+	LIST_HEAD(tx_list);
 
 	if (!chan)
 		return NULL;
@@ -302,43 +305,70 @@ static struct dma_async_tx_descriptor *sh_dmae_prep_memcpy(
 
 	sh_chan = to_sh_chan(chan);
 
+	/* Have to lock the whole loop to protect against concurrent release */
+	spin_lock_bh(&sh_chan->desc_lock);
+
+	/*
+	 * Chaining:
+	 * first descriptor is what user is dealing with in all API calls, its
+	 *	cookie is at first set to -EBUSY, at tx-submit to a positive
+	 *	number
+	 * if more than one chunk is needed further chunks have cookie = -EINVAL
+	 * the last chunk, if not equal to the first, has cookie = -ENOSPC
+	 * all chunks are linked onto the tx_list head with their .node heads
+	 *	only during this function, then they are immediately spliced
+	 *	onto the queue
+	 */
 	do {
 		/* Allocate the link descriptor from DMA pool */
 		new = sh_dmae_get_desc(sh_chan);
 		if (!new) {
 			dev_err(sh_chan->dev,
 					"No free memory for link descriptor\n");
-			goto err_get_desc;
+			list_splice(&tx_list, &sh_chan->ld_free);
+			spin_unlock_bh(&sh_chan->desc_lock);
+			return NULL;
 		}
 
-		copy_size = min(len, (size_t)SH_DMA_TCR_MAX);
+		copy_size = min(len, (size_t)SH_DMA_TCR_MAX + 1);
 
 		new->hw.sar = dma_src;
 		new->hw.dar = dma_dest;
 		new->hw.tcr = copy_size;
-		if (!first)
+		if (!first) {
+			/* First desc */
+			new->async_tx.cookie = -EBUSY;
 			first = new;
+		} else {
+			/* Other desc - invisible to the user */
+			new->async_tx.cookie = -EINVAL;
+		}
+
+		dev_dbg(sh_chan->dev,
+			"chaining %u of %u with %p, start %u, cookie %d\n",
+			copy_size, len, &new->async_tx, dma_src,
+			new->async_tx.cookie);
 
-		new->mark = DESC_NCOMP;
-		async_tx_ack(&new->async_tx);
+		new->mark = DESC_PREPARED;
+		new->async_tx.flags = flags;
 
 		prev = new;
 		len -= copy_size;
 		dma_src += copy_size;
 		dma_dest += copy_size;
 		/* Insert the link descriptor to the LD ring */
-		list_add_tail(&new->node, &first->tx_list);
+		list_add_tail(&new->node, &tx_list);
 	} while (len);
 
-	new->async_tx.flags = flags; /* client is in control of this ack */
-	new->async_tx.cookie = -EBUSY; /* Last desc */
+	if (new != first)
+		new->async_tx.cookie = -ENOSPC;
 
-	return &first->async_tx;
+	/* Put them immediately on the queue, so, they don't get lost */
+	list_splice_tail(&tx_list, sh_chan->ld_queue.prev);
 
-err_get_desc:
-	sh_dmae_put_desc(sh_chan, first);
-	return NULL;
+	spin_unlock_bh(&sh_chan->desc_lock);
 
+	return &first->async_tx;
 }
 
 /*
@@ -346,64 +376,87 @@ err_get_desc:
  *
  * This function clean up the ld_queue of DMA channel.
  */
-static void sh_dmae_chan_ld_cleanup(struct sh_dmae_chan *sh_chan)
+static void sh_dmae_chan_ld_cleanup(struct sh_dmae_chan *sh_chan, bool all)
 {
 	struct sh_desc *desc, *_desc;
 
 	spin_lock_bh(&sh_chan->desc_lock);
 	list_for_each_entry_safe(desc, _desc, &sh_chan->ld_queue, node) {
-		dma_async_tx_callback callback;
-		void *callback_param;
+		struct dma_async_tx_descriptor *tx = &desc->async_tx;
 
-		/* non send data */
-		if (desc->mark == DESC_NCOMP)
+		/* unsent data */
+		if (desc->mark == DESC_SUBMITTED && !all)
 			break;
 
-		/* send data sesc */
-		callback = desc->async_tx.callback;
-		callback_param = desc->async_tx.callback_param;
+		if (desc->mark == DESC_PREPARED && !all)
+			continue;
+
+		/* Call callback on the "exposed" descriptor (cookie > 0) */
+		if (tx->cookie > 0 && (desc->mark == DESC_COMPLETED ||
+				       desc->mark == DESC_WAITING)) {
+			struct sh_desc *chunk;
+			bool head_acked = async_tx_test_ack(tx);
+			/* sent data desc */
+			dma_async_tx_callback callback = tx->callback;
+
+			/* Run the link descriptor callback function */
+			if (callback && desc->mark == DESC_COMPLETED) {
+				spin_unlock_bh(&sh_chan->desc_lock);
+				dev_dbg(sh_chan->dev,
+					"descriptor %p callback\n", tx);
+				callback(tx->callback_param);
+				spin_lock_bh(&sh_chan->desc_lock);
+			}
 
-		/* Remove from ld_queue list */
-		list_splice_init(&desc->tx_list, &sh_chan->ld_free);
+			list_for_each_entry(chunk, desc->node.prev, node) {
+				/*
+				 * All chunks are on the global ld_queue, so, we
+				 * have to find the end of the chain ourselves
+				 */
+				if (chunk != desc &&
+				    (chunk->async_tx.cookie > 0 ||
+				     chunk->async_tx.cookie == -EBUSY))
+					break;
+
+				if (head_acked)
+					async_tx_ack(&chunk->async_tx);
+				else
+					chunk->mark = DESC_WAITING;
+			}
+		}
 
-		dev_dbg(sh_chan->dev, "link descriptor %p will be recycle.\n",
-				desc);
+		dev_dbg(sh_chan->dev, "descriptor %p #%d completed.\n",
+			tx, tx->cookie);
 
-		list_move(&desc->node, &sh_chan->ld_free);
-		/* Run the link descriptor callback function */
-		if (callback) {
-			spin_unlock_bh(&sh_chan->desc_lock);
-			dev_dbg(sh_chan->dev, "link descriptor %p callback\n",
-					desc);
-			callback(callback_param);
-			spin_lock_bh(&sh_chan->desc_lock);
-		}
+		if (((desc->mark == DESC_COMPLETED ||
+		      desc->mark == DESC_WAITING) &&
+		     async_tx_test_ack(&desc->async_tx)) || all)
+			/* Remove from ld_queue list */
+			list_move(&desc->node, &sh_chan->ld_free);
 	}
 	spin_unlock_bh(&sh_chan->desc_lock);
 }
 
 static void sh_chan_xfer_ld_queue(struct sh_dmae_chan *sh_chan)
 {
-	struct list_head *ld_node;
 	struct sh_dmae_regs hw;
+	struct sh_desc *sd;
 
 	/* DMA work check */
 	if (dmae_is_idle(sh_chan))
 		return;
 
+	spin_lock_bh(&sh_chan->desc_lock);
 	/* Find the first un-transfer desciptor */
-	for (ld_node = sh_chan->ld_queue.next;
-		(ld_node != &sh_chan->ld_queue)
-			&& (to_sh_desc(ld_node)->mark == DESC_COMP);
-		ld_node = ld_node->next)
-		cpu_relax();
-
-	if (ld_node != &sh_chan->ld_queue) {
-		/* Get the ld start address from ld_queue */
-		hw = to_sh_desc(ld_node)->hw;
-		dmae_set_reg(sh_chan, hw);
-		dmae_start(sh_chan);
-	}
+	list_for_each_entry(sd, &sh_chan->ld_queue, node)
+		if (sd->mark == DESC_SUBMITTED) {
+			/* Get the ld start address from ld_queue */
+			hw = sd->hw;
+			dmae_set_reg(sh_chan, hw);
+			dmae_start(sh_chan);
+			break;
+		}
+	spin_unlock_bh(&sh_chan->desc_lock);
 }
 
 static void sh_dmae_memcpy_issue_pending(struct dma_chan *chan)
@@ -421,12 +474,11 @@ static enum dma_status sh_dmae_is_complete(struct dma_chan *chan,
 	dma_cookie_t last_used;
 	dma_cookie_t last_complete;
 
-	sh_dmae_chan_ld_cleanup(sh_chan);
+	sh_dmae_chan_ld_cleanup(sh_chan, false);
 
 	last_used = chan->cookie;
 	last_complete = sh_chan->completed_cookie;
-	if (last_complete == -EBUSY)
-		last_complete = last_used;
+	BUG_ON(last_complete < 0);
 
 	if (done)
 		*done = last_complete;
@@ -481,11 +533,13 @@ static irqreturn_t sh_dmae_err(int irq, void *data)
 		err = sh_dmae_rst(0);
 		if (err)
 			return err;
+#ifdef SH_DMAC_BASE1
 		if (shdev->pdata.mode & SHDMA_DMAOR1) {
 			err = sh_dmae_rst(1);
 			if (err)
 				return err;
 		}
+#endif
 		disable_irq(irq);
 		return IRQ_HANDLED;
 	}
@@ -499,30 +553,36 @@ static void dmae_do_tasklet(unsigned long data)
 	u32 sar_buf = sh_dmae_readl(sh_chan, SAR);
 	list_for_each_entry_safe(desc, _desc,
 					&sh_chan->ld_queue, node) {
-		if ((desc->hw.sar + desc->hw.tcr) == sar_buf) {
+		if ((desc->hw.sar + desc->hw.tcr) == sar_buf &&
+		    desc->mark == DESC_SUBMITTED) {
 			cur_desc = desc;
 			break;
 		}
 	}
 
 	if (cur_desc) {
+		dev_dbg(sh_chan->dev, "done %p #%d start %u\n",
+			&cur_desc->async_tx, cur_desc->async_tx.cookie,
+			cur_desc->hw.sar);
+
 		switch (cur_desc->async_tx.cookie) {
-		case 0: /* other desc data */
+		case -EINVAL:
+		case -ENOSPC:
+			/* other desc data */
 			break;
-		case -EBUSY: /* last desc */
-			sh_chan->completed_cookie =
-				cur_desc->async_tx.cookie;
+		case -EBUSY: /* Cannot be */
+			BUG_ON(1);
 			break;
-		default: /* first desc ( 0 < )*/
+		default: /* first desc: cookie > 0 */
 			sh_chan->completed_cookie -				cur_desc->async_tx.cookie - 1;
+			sh_chan->completed_cookie =
+				cur_desc->async_tx.cookie;
 			break;
 		}
-		cur_desc->mark = DESC_COMP;
+		cur_desc->mark = DESC_COMPLETED;
 	}
 	/* Next desc */
 	sh_chan_xfer_ld_queue(sh_chan);
-	sh_dmae_chan_ld_cleanup(sh_chan);
+	sh_dmae_chan_ld_cleanup(sh_chan, false);
 }
 
 static unsigned int get_dmae_irq(unsigned int id)
diff --git a/drivers/dma/shdma.h b/drivers/dma/shdma.h
index 2b4bc15..c93555c 100644
--- a/drivers/dma/shdma.h
+++ b/drivers/dma/shdma.h
@@ -26,7 +26,6 @@ struct sh_dmae_regs {
 };
 
 struct sh_desc {
-	struct list_head tx_list;
 	struct sh_dmae_regs hw;
 	struct list_head node;
 	struct dma_async_tx_descriptor async_tx;
-- 
1.6.2.4


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 4/5] sh: stylistic improvements for the DMA driver
  2009-12-04 18:44 [PATCH 0/5] sh: fix the shdma driver, improve dmatest, support sh7724 Guennadi Liakhovetski
                   ` (2 preceding siblings ...)
  2009-12-04 18:44 ` [PATCH 3/5] sh: fix DMA driver's descriptor chaining and cookie Guennadi Liakhovetski
@ 2009-12-04 18:45 ` Guennadi Liakhovetski
  2009-12-08  0:54   ` Dan Williams
  2009-12-04 18:45 ` [PATCH 5/5] sh: dmaengine support for sh7724 Guennadi Liakhovetski
  2009-12-08  1:09 ` [PATCH 0/5] sh: fix the shdma driver, improve dmatest, support Dan Williams
  5 siblings, 1 reply; 19+ messages in thread
From: Guennadi Liakhovetski @ 2009-12-04 18:45 UTC (permalink / raw)
  To: linux-kernel; +Cc: Dan Williams, linux-sh

Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
---
 drivers/dma/shdma.c |   34 +++++++++++++++++-----------------
 drivers/dma/shdma.h |   14 +++++++-------
 2 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/drivers/dma/shdma.c b/drivers/dma/shdma.c
index aac8f78..0d21468 100644
--- a/drivers/dma/shdma.c
+++ b/drivers/dma/shdma.c
@@ -85,17 +85,17 @@ static int sh_dmae_rst(int id)
 	unsigned short dmaor;
 
 	sh_dmae_ctl_stop(id);
-	dmaor = (dmaor_read_reg(id)|DMAOR_INIT);
+	dmaor = dmaor_read_reg(id) | DMAOR_INIT;
 
 	dmaor_write_reg(id, dmaor);
-	if ((dmaor_read_reg(id) & (DMAOR_AE | DMAOR_NMIF))) {
+	if (dmaor_read_reg(id) & (DMAOR_AE | DMAOR_NMIF)) {
 		pr_warning(KERN_ERR "dma-sh: Can't initialize DMAOR.\n");
 		return -EINVAL;
 	}
 	return 0;
 }
 
-static int dmae_is_idle(struct sh_dmae_chan *sh_chan)
+static int dmae_is_busy(struct sh_dmae_chan *sh_chan)
 {
 	u32 chcr = sh_dmae_readl(sh_chan, CHCR);
 	if (chcr & CHCR_DE) {
@@ -115,15 +115,14 @@ static void dmae_set_reg(struct sh_dmae_chan *sh_chan, struct sh_dmae_regs hw)
 {
 	sh_dmae_writel(sh_chan, hw.sar, SAR);
 	sh_dmae_writel(sh_chan, hw.dar, DAR);
-	sh_dmae_writel(sh_chan,
-		(hw.tcr >> calc_xmit_shift(sh_chan)), TCR);
+	sh_dmae_writel(sh_chan, hw.tcr >> calc_xmit_shift(sh_chan), TCR);
 }
 
 static void dmae_start(struct sh_dmae_chan *sh_chan)
 {
 	u32 chcr = sh_dmae_readl(sh_chan, CHCR);
 
-	chcr |= (CHCR_DE|CHCR_IE);
+	chcr |= CHCR_DE | CHCR_IE;
 	sh_dmae_writel(sh_chan, chcr, CHCR);
 }
 
@@ -137,7 +136,7 @@ static void dmae_halt(struct sh_dmae_chan *sh_chan)
 
 static int dmae_set_chcr(struct sh_dmae_chan *sh_chan, u32 val)
 {
-	int ret = dmae_is_idle(sh_chan);
+	int ret = dmae_is_busy(sh_chan);
 	/* When DMA was working, can not set data to CHCR */
 	if (ret)
 		return ret;
@@ -154,7 +153,7 @@ static int dmae_set_dmars(struct sh_dmae_chan *sh_chan, u16 val)
 {
 	u32 addr;
 	int shift = 0;
-	int ret = dmae_is_idle(sh_chan);
+	int ret = dmae_is_busy(sh_chan);
 	if (ret)
 		return ret;
 
@@ -324,7 +323,7 @@ static struct dma_async_tx_descriptor *sh_dmae_prep_memcpy(
 		new = sh_dmae_get_desc(sh_chan);
 		if (!new) {
 			dev_err(sh_chan->dev,
-					"No free memory for link descriptor\n");
+				"No free memory for link descriptor\n");
 			list_splice(&tx_list, &sh_chan->ld_free);
 			spin_unlock_bh(&sh_chan->desc_lock);
 			return NULL;
@@ -443,7 +442,7 @@ static void sh_chan_xfer_ld_queue(struct sh_dmae_chan *sh_chan)
 	struct sh_desc *sd;
 
 	/* DMA work check */
-	if (dmae_is_idle(sh_chan))
+	if (dmae_is_busy(sh_chan))
 		return;
 
 	spin_lock_bh(&sh_chan->desc_lock);
@@ -551,8 +550,9 @@ static void dmae_do_tasklet(unsigned long data)
 	struct sh_dmae_chan *sh_chan = (struct sh_dmae_chan *)data;
 	struct sh_desc *desc, *_desc, *cur_desc = NULL;
 	u32 sar_buf = sh_dmae_readl(sh_chan, SAR);
+
 	list_for_each_entry_safe(desc, _desc,
-					&sh_chan->ld_queue, node) {
+				 &sh_chan->ld_queue, node) {
 		if ((desc->hw.sar + desc->hw.tcr) == sar_buf &&
 		    desc->mark == DESC_SUBMITTED) {
 			cur_desc = desc;
@@ -603,8 +603,8 @@ static int __devinit sh_dmae_chan_probe(struct sh_dmae_device *shdev, int id)
 	/* alloc channel */
 	new_sh_chan = kzalloc(sizeof(struct sh_dmae_chan), GFP_KERNEL);
 	if (!new_sh_chan) {
-		dev_err(shdev->common.dev, "No free memory for allocating "
-				"dma channels!\n");
+		dev_err(shdev->common.dev,
+			"No free memory for allocating dma channels!\n");
 		return -ENOMEM;
 	}
 
@@ -646,8 +646,8 @@ static int __devinit sh_dmae_chan_probe(struct sh_dmae_device *shdev, int id)
 			"sh-dmae%d", new_sh_chan->id);
 
 	/* set up channel irq */
-	err = request_irq(irq, &sh_dmae_interrupt,
-		irqflags, new_sh_chan->dev_id, new_sh_chan);
+	err = request_irq(irq, &sh_dmae_interrupt, irqflags,
+			  new_sh_chan->dev_id, new_sh_chan);
 	if (err) {
 		dev_err(shdev->common.dev, "DMA channel %d request_irq error "
 			"with return %d\n", id, err);
@@ -751,8 +751,8 @@ static int __init sh_dmae_probe(struct platform_device *pdev)
 	}
 
 	for (ecnt = 0 ; ecnt < ARRAY_SIZE(eirq); ecnt++) {
-		err = request_irq(eirq[ecnt], sh_dmae_err,
-			irqflags, "DMAC Address Error", shdev);
+		err = request_irq(eirq[ecnt], sh_dmae_err, irqflags,
+				  "DMAC Address Error", shdev);
 		if (err) {
 			dev_err(&pdev->dev, "DMA device request_irq"
 				"error (irq %d) with return %d\n",
diff --git a/drivers/dma/shdma.h b/drivers/dma/shdma.h
index c93555c..f412bfa 100644
--- a/drivers/dma/shdma.h
+++ b/drivers/dma/shdma.h
@@ -34,15 +34,15 @@ struct sh_desc {
 
 struct sh_dmae_chan {
 	dma_cookie_t completed_cookie;	/* The maximum cookie completed */
-	spinlock_t desc_lock;			/* Descriptor operation lock */
-	struct list_head ld_queue;		/* Link descriptors queue */
-	struct list_head ld_free;		/* Link descriptors free */
-	struct dma_chan common;			/* DMA common channel */
-	struct device *dev;				/* Channel device */
+	spinlock_t desc_lock;		/* Descriptor operation lock */
+	struct list_head ld_queue;	/* Link descriptors queue */
+	struct list_head ld_free;	/* Link descriptors free */
+	struct dma_chan common;		/* DMA common channel */
+	struct device *dev;		/* Channel device */
 	struct tasklet_struct tasklet;	/* Tasklet */
-	int descs_allocated;			/* desc count */
+	int descs_allocated;		/* desc count */
 	int id;				/* Raw id of this channel */
-	char dev_id[16];	/* unique name per DMAC of channel */
+	char dev_id[16];		/* unique name per DMAC of channel */
 
 	/* Set chcr */
 	int (*set_chcr)(struct sh_dmae_chan *sh_chan, u32 regs);
-- 
1.6.2.4


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 5/5] sh: dmaengine support for sh7724
  2009-12-04 18:44 [PATCH 0/5] sh: fix the shdma driver, improve dmatest, support sh7724 Guennadi Liakhovetski
                   ` (3 preceding siblings ...)
  2009-12-04 18:45 ` [PATCH 4/5] sh: stylistic improvements for the DMA driver Guennadi Liakhovetski
@ 2009-12-04 18:45 ` Guennadi Liakhovetski
  2009-12-08  1:02   ` Dan Williams
  2009-12-08  1:09 ` [PATCH 0/5] sh: fix the shdma driver, improve dmatest, support Dan Williams
  5 siblings, 1 reply; 19+ messages in thread
From: Guennadi Liakhovetski @ 2009-12-04 18:45 UTC (permalink / raw)
  To: linux-kernel; +Cc: Dan Williams, linux-sh, Paul Mundt

Add a dmaengine platform device to sh7724, fix DMA channel interrupt numbers.

Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
---

With this patch dmatest works with channels 0-5, channels 6-11 still don't 
work for some reason... So, feel free to discard it for now, if a complete 
fix would be preferred.

 arch/sh/include/cpu-sh4/cpu/dma-sh4a.h |    8 ++++----
 arch/sh/kernel/cpu/sh4a/setup-sh7724.c |   14 ++++++++++++++
 2 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/arch/sh/include/cpu-sh4/cpu/dma-sh4a.h b/arch/sh/include/cpu-sh4/cpu/dma-sh4a.h
index f0886bc..c4ed660 100644
--- a/arch/sh/include/cpu-sh4/cpu/dma-sh4a.h
+++ b/arch/sh/include/cpu-sh4/cpu/dma-sh4a.h
@@ -19,10 +19,10 @@
 #elif defined(CONFIG_CPU_SUBTYPE_SH7723) || \
       defined(CONFIG_CPU_SUBTYPE_SH7724)
 #define DMTE0_IRQ	48	/* DMAC0A*/
-#define DMTE4_IRQ	40	/* DMAC0B */
-#define DMTE6_IRQ	42
-#define DMTE8_IRQ	76	/* DMAC1A */
-#define DMTE9_IRQ	77
+#define DMTE4_IRQ	76	/* DMAC0B */
+#define DMTE6_IRQ	40
+#define DMTE8_IRQ	42	/* DMAC1A */
+#define DMTE9_IRQ	43
 #define DMTE10_IRQ	72	/* DMAC1B */
 #define DMTE11_IRQ	73
 #define DMAE0_IRQ	78	/* DMA Error IRQ*/
diff --git a/arch/sh/kernel/cpu/sh4a/setup-sh7724.c b/arch/sh/kernel/cpu/sh4a/setup-sh7724.c
index 6dc4469..da55a4e 100644
--- a/arch/sh/kernel/cpu/sh4a/setup-sh7724.c
+++ b/arch/sh/kernel/cpu/sh4a/setup-sh7724.c
@@ -23,9 +23,22 @@
 #include <linux/notifier.h>
 #include <asm/suspend.h>
 #include <asm/clock.h>
+#include <asm/dma-sh.h>
 #include <asm/mmzone.h>
 #include <cpu/sh7724.h>
 
+static struct sh_dmae_pdata dma_platform_data = {
+	.mode = SHDMA_DMAOR1,
+};
+
+static struct platform_device dma_device = {
+	.name		= "sh-dma-engine",
+	.id		= -1,
+	.dev		= {
+		.platform_data	= &dma_platform_data,
+	},
+};
+
 /* Serial */
 static struct plat_sci_port sci_platform_data[] = {
 	{
@@ -533,6 +546,7 @@ static struct platform_device *sh7724_devices[] __initdata = {
 	&tmu3_device,
 	&tmu4_device,
 	&tmu5_device,
+	&dma_device,
 	&sci_device,
 	&rtc_device,
 	&iic0_device,
-- 
1.6.2.4


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH 4/5] sh: stylistic improvements for the DMA driver
  2009-12-04 18:45 ` [PATCH 4/5] sh: stylistic improvements for the DMA driver Guennadi Liakhovetski
@ 2009-12-08  0:54   ` Dan Williams
  2009-12-08  8:20     ` Guennadi Liakhovetski
  0 siblings, 1 reply; 19+ messages in thread
From: Dan Williams @ 2009-12-08  0:54 UTC (permalink / raw)
  To: Guennadi Liakhovetski; +Cc: linux-kernel, linux-sh, Nobuhiro Iwamatsu

On Fri, Dec 4, 2009 at 11:45 AM, Guennadi Liakhovetski
<g.liakhovetski@gmx.de> wrote:
> Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
> ---
>  drivers/dma/shdma.c |   34 +++++++++++++++++-----------------
>  drivers/dma/shdma.h |   14 +++++++-------
>  2 files changed, 24 insertions(+), 24 deletions(-)
>
[..]
> -static int dmae_is_idle(struct sh_dmae_chan *sh_chan)
> +static int dmae_is_busy(struct sh_dmae_chan *sh_chan)

This one isn't purely style. I agree with it, but I think it would be a
bit cleaner to return a bool rather than an error code.
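
Something like this, perhaps (untested sketch, assuming the existing
CHCR_DE/CHCR_TE test is kept):

	static bool dmae_is_busy(struct sh_dmae_chan *sh_chan)
	{
		u32 chcr = sh_dmae_readl(sh_chan, CHCR);

		/* DE set and TE not yet set: a transfer is still in flight */
		return (chcr & (CHCR_DE | CHCR_TE)) == CHCR_DE;
	}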

--
Dan

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 3/5] sh: fix DMA driver's descriptor chaining and cookie
  2009-12-04 18:44 ` [PATCH 3/5] sh: fix DMA driver's descriptor chaining and cookie Guennadi Liakhovetski
@ 2009-12-08  0:58   ` Dan Williams
  2009-12-12  2:11     ` Nobuhiro Iwamatsu
  0 siblings, 1 reply; 19+ messages in thread
From: Dan Williams @ 2009-12-08  0:58 UTC (permalink / raw)
  To: Guennadi Liakhovetski; +Cc: linux-kernel, linux-sh, Nobuhiro Iwamatsu

I would like an acked-by from Nobuhiro on this one.

[ full patch quoted below ]

Thanks,
Dan

On Fri, Dec 4, 2009 at 11:44 AM, Guennadi Liakhovetski
<g.liakhovetski@gmx.de> wrote:
> The SH DMA driver wrongly assigns negative cookies to transfer descriptors;
> its chaining of partial descriptors is also broken. The latter problem is
> usually invisible, because the maximum transfer size per chunk is 16M, but if
> you artificially set this limit lower, the driver fails. Since cookies are
> also used in chunk management, both problems are fixed in one patch.
>
> Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
> ---
>  drivers/dma/shdma.c |  246 ++++++++++++++++++++++++++++++++-------------------
>  drivers/dma/shdma.h |    1 -
>  2 files changed, 153 insertions(+), 94 deletions(-)
>
> diff --git a/drivers/dma/shdma.c b/drivers/dma/shdma.c
> index f5fae12..aac8f78 100644
> --- a/drivers/dma/shdma.c
> +++ b/drivers/dma/shdma.c
> @@ -30,9 +30,12 @@
>  #include "shdma.h"
>
>  /* DMA descriptor control */
> -#define DESC_LAST      (-1)
> -#define DESC_COMP      (1)
> -#define DESC_NCOMP     (0)
> +enum sh_dmae_desc_status {
> +       DESC_PREPARED,
> +       DESC_SUBMITTED,
> +       DESC_COMPLETED, /* completed, have to call callback */
> +       DESC_WAITING,   /* callback called, waiting for ack / re-submit */
> +};
>
>  #define NR_DESCS_PER_CHANNEL 32
>  /*
> @@ -45,6 +48,8 @@
>  */
>  #define RS_DEFAULT  (RS_DUAL)
>
> +static void sh_dmae_chan_ld_cleanup(struct sh_dmae_chan *sh_chan, bool all);
> +
>  #define SH_DMAC_CHAN_BASE(id) (dma_base_addr[id])
>  static void sh_dmae_writel(struct sh_dmae_chan *sh_dc, u32 data, u32 reg)
>  {
> @@ -185,7 +190,7 @@ static int dmae_set_dmars(struct sh_dmae_chan *sh_chan, u16 val)
>
>  static dma_cookie_t sh_dmae_tx_submit(struct dma_async_tx_descriptor *tx)
>  {
> -       struct sh_desc *desc = tx_to_sh_desc(tx);
> +       struct sh_desc *desc = tx_to_sh_desc(tx), *chunk;
>        struct sh_dmae_chan *sh_chan = to_sh_chan(tx->chan);
>        dma_cookie_t cookie;
>
> @@ -196,45 +201,40 @@ static dma_cookie_t sh_dmae_tx_submit(struct dma_async_tx_descriptor *tx)
>        if (cookie < 0)
>                cookie = 1;
>
> -       /* If desc only in the case of 1 */
> -       if (desc->async_tx.cookie != -EBUSY)
> -               desc->async_tx.cookie = cookie;
> -       sh_chan->common.cookie = desc->async_tx.cookie;
> -
> -       list_splice_init(&desc->tx_list, sh_chan->ld_queue.prev);
> +       tx->cookie = cookie;
> +       dev_dbg(sh_chan->dev, "submit %p #%d start %u\n",
> +               tx, tx->cookie, desc->hw.sar);
> +       sh_chan->common.cookie = tx->cookie;
> +
> +       /* Mark all chunks of this descriptor as submitted */
> +       list_for_each_entry(chunk, desc->node.prev, node) {
> +               /*
> +                * All chunks are on the global ld_queue, so, we have to find
> +                * the end of the chain ourselves
> +                */
> +               if (chunk != desc && (chunk->async_tx.cookie > 0 ||
> +                                     chunk->async_tx.cookie == -EBUSY))
> +                       break;
> +               chunk->mark = DESC_SUBMITTED;
> +       }
>
>        spin_unlock_bh(&sh_chan->desc_lock);
>
>        return cookie;
>  }
>
> +/* Called with desc_lock held */
>  static struct sh_desc *sh_dmae_get_desc(struct sh_dmae_chan *sh_chan)
>  {
> -       struct sh_desc *desc, *_desc, *ret = NULL;
> -
> -       spin_lock_bh(&sh_chan->desc_lock);
> -       list_for_each_entry_safe(desc, _desc, &sh_chan->ld_free, node) {
> -               if (async_tx_test_ack(&desc->async_tx)) {
> -                       list_del(&desc->node);
> -                       ret = desc;
> -                       break;
> -               }
> -       }
> -       spin_unlock_bh(&sh_chan->desc_lock);
> -
> -       return ret;
> -}
> +       struct sh_desc *desc;
>
> -static void sh_dmae_put_desc(struct sh_dmae_chan *sh_chan, struct sh_desc *desc)
> -{
> -       if (desc) {
> -               spin_lock_bh(&sh_chan->desc_lock);
> +       if (list_empty(&sh_chan->ld_free))
> +               return NULL;
>
> -               list_splice_init(&desc->tx_list, &sh_chan->ld_free);
> -               list_add(&desc->node, &sh_chan->ld_free);
> +       desc = list_entry(sh_chan->ld_free.next, struct sh_desc, node);
> +       list_del(&desc->node);
>
> -               spin_unlock_bh(&sh_chan->desc_lock);
> -       }
> +       return desc;
>  }
>
>  static int sh_dmae_alloc_chan_resources(struct dma_chan *chan)
> @@ -254,10 +254,9 @@ static int sh_dmae_alloc_chan_resources(struct dma_chan *chan)
>                                        &sh_chan->common);
>                desc->async_tx.tx_submit = sh_dmae_tx_submit;
>                desc->async_tx.flags = DMA_CTRL_ACK;
> -               INIT_LIST_HEAD(&desc->tx_list);
> -               sh_dmae_put_desc(sh_chan, desc);
>
>                spin_lock_bh(&sh_chan->desc_lock);
> +               list_add(&desc->node, &sh_chan->ld_free);
>                sh_chan->descs_allocated++;
>        }
>        spin_unlock_bh(&sh_chan->desc_lock);
> @@ -274,7 +273,10 @@ static void sh_dmae_free_chan_resources(struct dma_chan *chan)
>        struct sh_desc *desc, *_desc;
>        LIST_HEAD(list);
>
> -       BUG_ON(!list_empty(&sh_chan->ld_queue));
> +       /* Prepared and not submitted descriptors can still be on the queue */
> +       if (!list_empty(&sh_chan->ld_queue))
> +               sh_dmae_chan_ld_cleanup(sh_chan, true);
> +
>        spin_lock_bh(&sh_chan->desc_lock);
>
>        list_splice_init(&sh_chan->ld_free, &list);
> @@ -293,6 +295,7 @@ static struct dma_async_tx_descriptor *sh_dmae_prep_memcpy(
>        struct sh_dmae_chan *sh_chan;
>        struct sh_desc *first = NULL, *prev = NULL, *new;
>        size_t copy_size;
> +       LIST_HEAD(tx_list);
>
>        if (!chan)
>                return NULL;
> @@ -302,43 +305,70 @@ static struct dma_async_tx_descriptor *sh_dmae_prep_memcpy(
>
>        sh_chan = to_sh_chan(chan);
>
> +       /* Have to lock the whole loop to protect against concurrent release */
> +       spin_lock_bh(&sh_chan->desc_lock);
> +
> +       /*
> +        * Chaining:
> +        * first descriptor is what user is dealing with in all API calls, its
> +        *      cookie is at first set to -EBUSY, at tx-submit to a positive
> +        *      number
> +        * if more than one chunk is needed further chunks have cookie = -EINVAL
> +        * the last chunk, if not equal to the first, has cookie = -ENOSPC
> +        * all chunks are linked onto the tx_list head with their .node heads
> +        *      only during this function, then they are immediately spliced
> +        *      onto the queue
> +        */
>        do {
>                /* Allocate the link descriptor from DMA pool */
>                new = sh_dmae_get_desc(sh_chan);
>                if (!new) {
>                        dev_err(sh_chan->dev,
>                                        "No free memory for link descriptor\n");
> -                       goto err_get_desc;
> +                       list_splice(&tx_list, &sh_chan->ld_free);
> +                       spin_unlock_bh(&sh_chan->desc_lock);
> +                       return NULL;
>                }
>
> -               copy_size = min(len, (size_t)SH_DMA_TCR_MAX);
> +               copy_size = min(len, (size_t)SH_DMA_TCR_MAX + 1);
>
>                new->hw.sar = dma_src;
>                new->hw.dar = dma_dest;
>                new->hw.tcr = copy_size;
> -               if (!first)
> +               if (!first) {
> +                       /* First desc */
> +                       new->async_tx.cookie = -EBUSY;
>                        first = new;
> +               } else {
> +                       /* Other desc - invisible to the user */
> +                       new->async_tx.cookie = -EINVAL;
> +               }
> +
> +               dev_dbg(sh_chan->dev,
> +                       "chaining %u of %u with %p, start %u, cookie %d\n",
> +                       copy_size, len, &new->async_tx, dma_src,
> +                       new->async_tx.cookie);
>
> -               new->mark = DESC_NCOMP;
> -               async_tx_ack(&new->async_tx);
> +               new->mark = DESC_PREPARED;
> +               new->async_tx.flags = flags;
>
>                prev = new;
>                len -= copy_size;
>                dma_src += copy_size;
>                dma_dest += copy_size;
>                /* Insert the link descriptor to the LD ring */
> -               list_add_tail(&new->node, &first->tx_list);
> +               list_add_tail(&new->node, &tx_list);
>        } while (len);
>
> -       new->async_tx.flags = flags; /* client is in control of this ack */
> -       new->async_tx.cookie = -EBUSY; /* Last desc */
> +       if (new != first)
> +               new->async_tx.cookie = -ENOSPC;
>
> -       return &first->async_tx;
> +       /* Put them immediately on the queue, so, they don't get lost */
> +       list_splice_tail(&tx_list, sh_chan->ld_queue.prev);
>
> -err_get_desc:
> -       sh_dmae_put_desc(sh_chan, first);
> -       return NULL;
> +       spin_unlock_bh(&sh_chan->desc_lock);
>
> +       return &first->async_tx;
>  }
>
>  /*
> @@ -346,64 +376,87 @@ err_get_desc:
>  *
>  * This function clean up the ld_queue of DMA channel.
>  */
> -static void sh_dmae_chan_ld_cleanup(struct sh_dmae_chan *sh_chan)
> +static void sh_dmae_chan_ld_cleanup(struct sh_dmae_chan *sh_chan, bool all)
>  {
>        struct sh_desc *desc, *_desc;
>
>        spin_lock_bh(&sh_chan->desc_lock);
>        list_for_each_entry_safe(desc, _desc, &sh_chan->ld_queue, node) {
> -               dma_async_tx_callback callback;
> -               void *callback_param;
> +               struct dma_async_tx_descriptor *tx = &desc->async_tx;
>
> -               /* non send data */
> -               if (desc->mark == DESC_NCOMP)
> +               /* unsent data */
> +               if (desc->mark == DESC_SUBMITTED && !all)
>                        break;
>
> -               /* send data sesc */
> -               callback = desc->async_tx.callback;
> -               callback_param = desc->async_tx.callback_param;
> +               if (desc->mark == DESC_PREPARED && !all)
> +                       continue;
> +
> +               /* Call callback on the "exposed" descriptor (cookie > 0) */
> +               if (tx->cookie > 0 && (desc->mark == DESC_COMPLETED ||
> +                                      desc->mark == DESC_WAITING)) {
> +                       struct sh_desc *chunk;
> +                       bool head_acked = async_tx_test_ack(tx);
> +                       /* sent data desc */
> +                       dma_async_tx_callback callback = tx->callback;
> +
> +                       /* Run the link descriptor callback function */
> +                       if (callback && desc->mark == DESC_COMPLETED) {
> +                               spin_unlock_bh(&sh_chan->desc_lock);
> +                               dev_dbg(sh_chan->dev,
> +                                       "descriptor %p callback\n", tx);
> +                               callback(tx->callback_param);
> +                               spin_lock_bh(&sh_chan->desc_lock);
> +                       }
>
> -               /* Remove from ld_queue list */
> -               list_splice_init(&desc->tx_list, &sh_chan->ld_free);
> +                       list_for_each_entry(chunk, desc->node.prev, node) {
> +                               /*
> +                                * All chunks are on the global ld_queue, so, we
> +                                * have to find the end of the chain ourselves
> +                                */
> +                               if (chunk != desc &&
> +                                   (chunk->async_tx.cookie > 0 ||
> +                                    chunk->async_tx.cookie == -EBUSY))
> +                                       break;
> +
> +                               if (head_acked)
> +                                       async_tx_ack(&chunk->async_tx);
> +                               else
> +                                       chunk->mark = DESC_WAITING;
> +                       }
> +               }
>
> -               dev_dbg(sh_chan->dev, "link descriptor %p will be recycle.\n",
> -                               desc);
> +               dev_dbg(sh_chan->dev, "descriptor %p #%d completed.\n",
> +                       tx, tx->cookie);
>
> -               list_move(&desc->node, &sh_chan->ld_free);
> -               /* Run the link descriptor callback function */
> -               if (callback) {
> -                       spin_unlock_bh(&sh_chan->desc_lock);
> -                       dev_dbg(sh_chan->dev, "link descriptor %p callback\n",
> -                                       desc);
> -                       callback(callback_param);
> -                       spin_lock_bh(&sh_chan->desc_lock);
> -               }
> +               if (((desc->mark == DESC_COMPLETED ||
> +                     desc->mark == DESC_WAITING) &&
> +                    async_tx_test_ack(&desc->async_tx)) || all)
> +                       /* Remove from ld_queue list */
> +                       list_move(&desc->node, &sh_chan->ld_free);
>        }
>        spin_unlock_bh(&sh_chan->desc_lock);
>  }
>
>  static void sh_chan_xfer_ld_queue(struct sh_dmae_chan *sh_chan)
>  {
> -       struct list_head *ld_node;
>        struct sh_dmae_regs hw;
> +       struct sh_desc *sd;
>
>        /* DMA work check */
>        if (dmae_is_idle(sh_chan))
>                return;
>
> +       spin_lock_bh(&sh_chan->desc_lock);
>        /* Find the first un-transfer desciptor */
> -       for (ld_node = sh_chan->ld_queue.next;
> -               (ld_node != &sh_chan->ld_queue)
> -                       && (to_sh_desc(ld_node)->mark == DESC_COMP);
> -               ld_node = ld_node->next)
> -               cpu_relax();
> -
> -       if (ld_node != &sh_chan->ld_queue) {
> -               /* Get the ld start address from ld_queue */
> -               hw = to_sh_desc(ld_node)->hw;
> -               dmae_set_reg(sh_chan, hw);
> -               dmae_start(sh_chan);
> -       }
> +       list_for_each_entry(sd, &sh_chan->ld_queue, node)
> +               if (sd->mark == DESC_SUBMITTED) {
> +                       /* Get the ld start address from ld_queue */
> +                       hw = sd->hw;
> +                       dmae_set_reg(sh_chan, hw);
> +                       dmae_start(sh_chan);
> +                       break;
> +               }
> +       spin_unlock_bh(&sh_chan->desc_lock);
>  }
>
>  static void sh_dmae_memcpy_issue_pending(struct dma_chan *chan)
> @@ -421,12 +474,11 @@ static enum dma_status sh_dmae_is_complete(struct dma_chan *chan,
>        dma_cookie_t last_used;
>        dma_cookie_t last_complete;
>
> -       sh_dmae_chan_ld_cleanup(sh_chan);
> +       sh_dmae_chan_ld_cleanup(sh_chan, false);
>
>        last_used = chan->cookie;
>        last_complete = sh_chan->completed_cookie;
> -       if (last_complete == -EBUSY)
> -               last_complete = last_used;
> +       BUG_ON(last_complete < 0);
>
>        if (done)
>                *done = last_complete;
> @@ -481,11 +533,13 @@ static irqreturn_t sh_dmae_err(int irq, void *data)
>                err = sh_dmae_rst(0);
>                if (err)
>                        return err;
> +#ifdef SH_DMAC_BASE1
>                if (shdev->pdata.mode & SHDMA_DMAOR1) {
>                        err = sh_dmae_rst(1);
>                        if (err)
>                                return err;
>                }
> +#endif
>                disable_irq(irq);
>                return IRQ_HANDLED;
>        }
> @@ -499,30 +553,36 @@ static void dmae_do_tasklet(unsigned long data)
>        u32 sar_buf = sh_dmae_readl(sh_chan, SAR);
>        list_for_each_entry_safe(desc, _desc,
>                                        &sh_chan->ld_queue, node) {
> -               if ((desc->hw.sar + desc->hw.tcr) == sar_buf) {
> +               if ((desc->hw.sar + desc->hw.tcr) == sar_buf &&
> +                   desc->mark == DESC_SUBMITTED) {
>                        cur_desc = desc;
>                        break;
>                }
>        }
>
>        if (cur_desc) {
> +               dev_dbg(sh_chan->dev, "done %p #%d start %u\n",
> +                       &cur_desc->async_tx, cur_desc->async_tx.cookie,
> +                       cur_desc->hw.sar);
> +
>                switch (cur_desc->async_tx.cookie) {
> -               case 0: /* other desc data */
> +               case -EINVAL:
> +               case -ENOSPC:
> +                       /* other desc data */
>                        break;
> -               case -EBUSY: /* last desc */
> -                       sh_chan->completed_cookie =
> -                               cur_desc->async_tx.cookie;
> +               case -EBUSY: /* Cannot be */
> +                       BUG_ON(1);
>                        break;
> -               default: /* first desc ( 0 < )*/
> +               default: /* first desc: cookie > 0 */
>                        sh_chan->completed_cookie > -                               cur_desc->async_tx.cookie - 1;
> +                       sh_chan->completed_cookie =
> +                               cur_desc->async_tx.cookie;
>                        break;
>                }
> -               cur_desc->mark = DESC_COMP;
> +               cur_desc->mark = DESC_COMPLETED;
>        }
>        /* Next desc */
>        sh_chan_xfer_ld_queue(sh_chan);
> -       sh_dmae_chan_ld_cleanup(sh_chan);
> +       sh_dmae_chan_ld_cleanup(sh_chan, false);
>  }
>
>  static unsigned int get_dmae_irq(unsigned int id)
> diff --git a/drivers/dma/shdma.h b/drivers/dma/shdma.h
> index 2b4bc15..c93555c 100644
> --- a/drivers/dma/shdma.h
> +++ b/drivers/dma/shdma.h
> @@ -26,7 +26,6 @@ struct sh_dmae_regs {
>  };
>
>  struct sh_desc {
> -       struct list_head tx_list;
>        struct sh_dmae_regs hw;
>        struct list_head node;
>        struct dma_async_tx_descriptor async_tx;
> --
> 1.6.2.4
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 5/5] sh: dmaengine support for sh7724
  2009-12-04 18:45 ` [PATCH 5/5] sh: dmaengine support for sh7724 Guennadi Liakhovetski
@ 2009-12-08  1:02   ` Dan Williams
  2009-12-08  7:13     ` Paul Mundt
  0 siblings, 1 reply; 19+ messages in thread
From: Dan Williams @ 2009-12-08  1:02 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: linux-kernel, linux-sh, Paul Mundt, Nobuhiro Iwamatsu

On Fri, Dec 4, 2009 at 11:45 AM, Guennadi Liakhovetski
<g.liakhovetski@gmx.de> wrote:
> Add a dmaengine platform device to sh7724, fix DMA channel interrupt numbers.
>
> Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
> ---
>
> With this patch dmatest works with channels 0-5, channels 6-11 still don't
> work for some reason... So, feel free to discard it for now, if a complete
> fix would be preferred.

Does this patch limit operation to channels 0-5?  I leave this one for
Paul to disposition.

Thanks,
Dan

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 0/5] sh: fix the shdma driver, improve dmatest, support
  2009-12-04 18:44 [PATCH 0/5] sh: fix the shdma driver, improve dmatest, support sh7724 Guennadi Liakhovetski
                   ` (4 preceding siblings ...)
  2009-12-04 18:45 ` [PATCH 5/5] sh: dmaengine support for sh7724 Guennadi Liakhovetski
@ 2009-12-08  1:09 ` Dan Williams
  5 siblings, 0 replies; 19+ messages in thread
From: Dan Williams @ 2009-12-08  1:09 UTC (permalink / raw)
  To: Guennadi Liakhovetski; +Cc: linux-kernel, linux-sh, Nobuhiro Iwamatsu

On Fri, Dec 4, 2009 at 11:44 AM, Guennadi Liakhovetski
<g.liakhovetski@gmx.de> wrote:
> Hi
>
> Not all of these patches are connected: only the ones affecting the same
> file have to be applied in a specific order; the rest are independent and
> can be applied separately.

I'll take 1 and 2 straight away, 4 with the bool fixup and a changelog
explaining the dmae_is_idle() change, and 3 after an ack from Nobuhiro
(or when I get a chance to look at it a bit deeper).

Thanks,
Dan

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 5/5] sh: dmaengine support for sh7724
  2009-12-08  1:02   ` Dan Williams
@ 2009-12-08  7:13     ` Paul Mundt
  2009-12-17  3:14       ` Nobuhiro Iwamatsu
  0 siblings, 1 reply; 19+ messages in thread
From: Paul Mundt @ 2009-12-08  7:13 UTC (permalink / raw)
  To: Dan Williams
  Cc: Guennadi Liakhovetski, linux-kernel, linux-sh, Nobuhiro Iwamatsu

On Mon, Dec 07, 2009 at 06:02:24PM -0700, Dan Williams wrote:
> On Fri, Dec 4, 2009 at 11:45 AM, Guennadi Liakhovetski
> <g.liakhovetski@gmx.de> wrote:
> > Add a dmaengine platform device to sh7724, fix DMA channel interrupt numbers.
> >
> > Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
> > ---
> >
> > With this patch dmatest works with channels 0-5, channels 6-11 still don't
> > work for some reason... So, feel free to discard it for now, if a complete
> > fix would be preferred.
> 
> Does this patch limit operation to channels 0-5?  I leave this one for
> Paul to disposition.
> 
There are 6 channels per controller, so the fact that the second
controller isn't working could either be because the driver needs more work
to handle multiple controllers properly, or there could be a pinmux issue
that hasn't been resolved yet for DMAC1. Both of these cases require the
driver to be a bit smarter, but as Guennadi is presently working on these
things, doing them incrementally is ok with me. If this gets DMAC0
working at least where it wasn't before, it's certainly worth applying.
DMAC1 seems to have more problems than just wrong interrupt numbers, so
this isn't going to cause any regressions there either.

Acked-by: Paul Mundt <lethal@linux-sh.org>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 4/5] sh: stylistic improvements for the DMA driver
  2009-12-08  0:54   ` Dan Williams
@ 2009-12-08  8:20     ` Guennadi Liakhovetski
  0 siblings, 0 replies; 19+ messages in thread
From: Guennadi Liakhovetski @ 2009-12-08  8:20 UTC (permalink / raw)
  To: Dan Williams; +Cc: linux-kernel, linux-sh, Nobuhiro Iwamatsu

On Mon, 7 Dec 2009, Dan Williams wrote:

> On Fri, Dec 4, 2009 at 11:45 AM, Guennadi Liakhovetski
> <g.liakhovetski@gmx.de> wrote:
> > Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
> > ---
> >  drivers/dma/shdma.c |   34 +++++++++++++++++-----------------
> >  drivers/dma/shdma.h |   14 +++++++-------
> >  2 files changed, 24 insertions(+), 24 deletions(-)
> >
> [..]
> > -static int dmae_is_idle(struct sh_dmae_chan *sh_chan)
> > +static int dmae_is_busy(struct sh_dmae_chan *sh_chan)
> 
> This one isn't purely style. I agree with it, but I think it would be a
> bit cleaner to return a bool rather than an error code.

Yes, I also thought about it, but I wasn't sure whether the author had 
some special reason for choosing to return an error code. Besides, that 
return code is also used as the return value of dmae_set_chcr() and 
dmae_set_dmars(), so you'd have to change those two to something like

	if (dmae_is_busy(sh_chan))
		return -EBUSY;

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 3/5] sh: fix DMA driver's descriptor chaining and cookie
  2009-12-08  0:58   ` Dan Williams
@ 2009-12-12  2:11     ` Nobuhiro Iwamatsu
  0 siblings, 0 replies; 19+ messages in thread
From: Nobuhiro Iwamatsu @ 2009-12-12  2:11 UTC (permalink / raw)
  To: Dan Williams; +Cc: Guennadi Liakhovetski, linux-kernel, linux-sh

Hi, all.

I will test this patch next week. Please wait.

Best regards,
  Nobuhiro

2009/12/8 Dan Williams <dan.j.williams@intel.com>:
> I would like an acked-by from Nobuhiro  on this one.
>
> [ full patch quoted below ]
>
> Thanks,
> Dan
>
> On Fri, Dec 4, 2009 at 11:44 AM, Guennadi Liakhovetski
> <g.liakhovetski@gmx.de> wrote:
>> The SH DMA driver wrongly assigns negative cookies to transfer descriptors;
>> its chaining of partial descriptors is also broken. The latter problem is
>> usually invisible, because the maximum transfer size per chunk is 16M, but if
>> you artificially set this limit lower, the driver fails. Since cookies are
>> also used in chunk management, both problems are fixed in one patch.
>>
>> Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
>> ---
>>  drivers/dma/shdma.c |  246 ++++++++++++++++++++++++++++++++-------------------
>>  drivers/dma/shdma.h |    1 -
>>  2 files changed, 153 insertions(+), 94 deletions(-)
>>
>> diff --git a/drivers/dma/shdma.c b/drivers/dma/shdma.c
>> index f5fae12..aac8f78 100644
>> --- a/drivers/dma/shdma.c
>> +++ b/drivers/dma/shdma.c
>> @@ -30,9 +30,12 @@
>>  #include "shdma.h"
>>
>>  /* DMA descriptor control */
>> -#define DESC_LAST      (-1)
>> -#define DESC_COMP      (1)
>> -#define DESC_NCOMP     (0)
>> +enum sh_dmae_desc_status {
>> +       DESC_PREPARED,
>> +       DESC_SUBMITTED,
>> +       DESC_COMPLETED, /* completed, have to call callback */
>> +       DESC_WAITING,   /* callback called, waiting for ack / re-submit */
>> +};
>>
>>  #define NR_DESCS_PER_CHANNEL 32
>>  /*
>> @@ -45,6 +48,8 @@
>>  */
>>  #define RS_DEFAULT  (RS_DUAL)
>>
>> +static void sh_dmae_chan_ld_cleanup(struct sh_dmae_chan *sh_chan, bool all);
>> +
>>  #define SH_DMAC_CHAN_BASE(id) (dma_base_addr[id])
>>  static void sh_dmae_writel(struct sh_dmae_chan *sh_dc, u32 data, u32 reg)
>>  {
>> @@ -185,7 +190,7 @@ static int dmae_set_dmars(struct sh_dmae_chan *sh_chan, u16 val)
>>
>>  static dma_cookie_t sh_dmae_tx_submit(struct dma_async_tx_descriptor *tx)
>>  {
>> -       struct sh_desc *desc = tx_to_sh_desc(tx);
>> +       struct sh_desc *desc = tx_to_sh_desc(tx), *chunk;
>>        struct sh_dmae_chan *sh_chan = to_sh_chan(tx->chan);
>>        dma_cookie_t cookie;
>>
>> @@ -196,45 +201,40 @@ static dma_cookie_t sh_dmae_tx_submit(struct dma_async_tx_descriptor *tx)
>>        if (cookie < 0)
>>                cookie = 1;
>>
>> -       /* If desc only in the case of 1 */
>> -       if (desc->async_tx.cookie != -EBUSY)
>> -               desc->async_tx.cookie = cookie;
>> -       sh_chan->common.cookie = desc->async_tx.cookie;
>> -
>> -       list_splice_init(&desc->tx_list, sh_chan->ld_queue.prev);
>> +       tx->cookie = cookie;
>> +       dev_dbg(sh_chan->dev, "submit %p #%d start %u\n",
>> +               tx, tx->cookie, desc->hw.sar);
>> +       sh_chan->common.cookie = tx->cookie;
>> +
>> +       /* Mark all chunks of this descriptor as submitted */
>> +       list_for_each_entry(chunk, desc->node.prev, node) {
>> +               /*
>> +                * All chunks are on the global ld_queue, so, we have to find
>> +                * the end of the chain ourselves
>> +                */
>> +               if (chunk != desc && (chunk->async_tx.cookie > 0 ||
>> +                                     chunk->async_tx.cookie == -EBUSY))
>> +                       break;
>> +               chunk->mark = DESC_SUBMITTED;
>> +       }
>>
>>        spin_unlock_bh(&sh_chan->desc_lock);
>>
>>        return cookie;
>>  }
>>
>> +/* Called with desc_lock held */
>>  static struct sh_desc *sh_dmae_get_desc(struct sh_dmae_chan *sh_chan)
>>  {
>> -       struct sh_desc *desc, *_desc, *ret = NULL;
>> -
>> -       spin_lock_bh(&sh_chan->desc_lock);
>> -       list_for_each_entry_safe(desc, _desc, &sh_chan->ld_free, node) {
>> -               if (async_tx_test_ack(&desc->async_tx)) {
>> -                       list_del(&desc->node);
>> -                       ret = desc;
>> -                       break;
>> -               }
>> -       }
>> -       spin_unlock_bh(&sh_chan->desc_lock);
>> -
>> -       return ret;
>> -}
>> +       struct sh_desc *desc;
>>
>> -static void sh_dmae_put_desc(struct sh_dmae_chan *sh_chan, struct sh_desc *desc)
>> -{
>> -       if (desc) {
>> -               spin_lock_bh(&sh_chan->desc_lock);
>> +       if (list_empty(&sh_chan->ld_free))
>> +               return NULL;
>>
>> -               list_splice_init(&desc->tx_list, &sh_chan->ld_free);
>> -               list_add(&desc->node, &sh_chan->ld_free);
>> +       desc = list_entry(sh_chan->ld_free.next, struct sh_desc, node);
>> +       list_del(&desc->node);
>>
>> -               spin_unlock_bh(&sh_chan->desc_lock);
>> -       }
>> +       return desc;
>>  }
>>
>>  static int sh_dmae_alloc_chan_resources(struct dma_chan *chan)
>> @@ -254,10 +254,9 @@ static int sh_dmae_alloc_chan_resources(struct dma_chan *chan)
>>                                        &sh_chan->common);
>>                desc->async_tx.tx_submit = sh_dmae_tx_submit;
>>                desc->async_tx.flags = DMA_CTRL_ACK;
>> -               INIT_LIST_HEAD(&desc->tx_list);
>> -               sh_dmae_put_desc(sh_chan, desc);
>>
>>                spin_lock_bh(&sh_chan->desc_lock);
>> +               list_add(&desc->node, &sh_chan->ld_free);
>>                sh_chan->descs_allocated++;
>>        }
>>        spin_unlock_bh(&sh_chan->desc_lock);
>> @@ -274,7 +273,10 @@ static void sh_dmae_free_chan_resources(struct dma_chan *chan)
>>        struct sh_desc *desc, *_desc;
>>        LIST_HEAD(list);
>>
>> -       BUG_ON(!list_empty(&sh_chan->ld_queue));
>> +       /* Prepared and not submitted descriptors can still be on the queue */
>> +       if (!list_empty(&sh_chan->ld_queue))
>> +               sh_dmae_chan_ld_cleanup(sh_chan, true);
>> +
>>        spin_lock_bh(&sh_chan->desc_lock);
>>
>>        list_splice_init(&sh_chan->ld_free, &list);
>> @@ -293,6 +295,7 @@ static struct dma_async_tx_descriptor *sh_dmae_prep_memcpy(
>>        struct sh_dmae_chan *sh_chan;
>>        struct sh_desc *first = NULL, *prev = NULL, *new;
>>        size_t copy_size;
>> +       LIST_HEAD(tx_list);
>>
>>        if (!chan)
>>                return NULL;
>> @@ -302,43 +305,70 @@ static struct dma_async_tx_descriptor *sh_dmae_prep_memcpy(
>>
>>        sh_chan = to_sh_chan(chan);
>>
>> +       /* Have to lock the whole loop to protect against concurrent release */
>> +       spin_lock_bh(&sh_chan->desc_lock);
>> +
>> +       /*
>> +        * Chaining:
>> +        * first descriptor is what user is dealing with in all API calls, its
>> +        *      cookie is at first set to -EBUSY, at tx-submit to a positive
>> +        *      number
>> +        * if more than one chunk is needed further chunks have cookie = -EINVAL
>> +        * the last chunk, if not equal to the first, has cookie = -ENOSPC
>> +        * all chunks are linked onto the tx_list head with their .node heads
>> +        *      only during this function, then they are immediately spliced
>> +        *      onto the queue
>> +        */
>>        do {
>>                /* Allocate the link descriptor from DMA pool */
>>                new = sh_dmae_get_desc(sh_chan);
>>                if (!new) {
>>                        dev_err(sh_chan->dev,
>>                                        "No free memory for link descriptor\n");
>> -                       goto err_get_desc;
>> +                       list_splice(&tx_list, &sh_chan->ld_free);
>> +                       spin_unlock_bh(&sh_chan->desc_lock);
>> +                       return NULL;
>>                }
>>
>> -               copy_size = min(len, (size_t)SH_DMA_TCR_MAX);
>> +               copy_size = min(len, (size_t)SH_DMA_TCR_MAX + 1);
>>
>>                new->hw.sar = dma_src;
>>                new->hw.dar = dma_dest;
>>                new->hw.tcr = copy_size;
>> -               if (!first)
>> +               if (!first) {
>> +                       /* First desc */
>> +                       new->async_tx.cookie = -EBUSY;
>>                        first = new;
>> +               } else {
>> +                       /* Other desc - invisible to the user */
>> +                       new->async_tx.cookie = -EINVAL;
>> +               }
>> +
>> +               dev_dbg(sh_chan->dev,
>> +                       "chaining %u of %u with %p, start %u, cookie %d\n",
>> +                       copy_size, len, &new->async_tx, dma_src,
>> +                       new->async_tx.cookie);
>>
>> -               new->mark = DESC_NCOMP;
>> -               async_tx_ack(&new->async_tx);
>> +               new->mark = DESC_PREPARED;
>> +               new->async_tx.flags = flags;
>>
>>                prev = new;
>>                len -= copy_size;
>>                dma_src += copy_size;
>>                dma_dest += copy_size;
>>                /* Insert the link descriptor to the LD ring */
>> -               list_add_tail(&new->node, &first->tx_list);
>> +               list_add_tail(&new->node, &tx_list);
>>        } while (len);
>>
>> -       new->async_tx.flags = flags; /* client is in control of this ack */
>> -       new->async_tx.cookie = -EBUSY; /* Last desc */
>> +       if (new != first)
>> +               new->async_tx.cookie = -ENOSPC;
>>
>> -       return &first->async_tx;
>> +       /* Put them immediately on the queue, so, they don't get lost */
>> +       list_splice_tail(&tx_list, sh_chan->ld_queue.prev);
>>
>> -err_get_desc:
>> -       sh_dmae_put_desc(sh_chan, first);
>> -       return NULL;
>> +       spin_unlock_bh(&sh_chan->desc_lock);
>>
>> +       return &first->async_tx;
>>  }
>>
>>  /*
>> @@ -346,64 +376,87 @@ err_get_desc:
>>  *
>>  * This function clean up the ld_queue of DMA channel.
>>  */
>> -static void sh_dmae_chan_ld_cleanup(struct sh_dmae_chan *sh_chan)
>> +static void sh_dmae_chan_ld_cleanup(struct sh_dmae_chan *sh_chan, bool all)
>>  {
>>        struct sh_desc *desc, *_desc;
>>
>>        spin_lock_bh(&sh_chan->desc_lock);
>>        list_for_each_entry_safe(desc, _desc, &sh_chan->ld_queue, node) {
>> -               dma_async_tx_callback callback;
>> -               void *callback_param;
>> +               struct dma_async_tx_descriptor *tx = &desc->async_tx;
>>
>> -               /* non send data */
>> -               if (desc->mark == DESC_NCOMP)
>> +               /* unsent data */
>> +               if (desc->mark == DESC_SUBMITTED && !all)
>>                        break;
>>
>> -               /* send data sesc */
>> -               callback = desc->async_tx.callback;
>> -               callback_param = desc->async_tx.callback_param;
>> +               if (desc->mark == DESC_PREPARED && !all)
>> +                       continue;
>> +
>> +               /* Call callback on the "exposed" descriptor (cookie > 0) */
>> +               if (tx->cookie > 0 && (desc->mark == DESC_COMPLETED ||
>> +                                      desc->mark == DESC_WAITING)) {
>> +                       struct sh_desc *chunk;
>> +                       bool head_acked = async_tx_test_ack(tx);
>> +                       /* sent data desc */
>> +                       dma_async_tx_callback callback = tx->callback;
>> +
>> +                       /* Run the link descriptor callback function */
>> +                       if (callback && desc->mark == DESC_COMPLETED) {
>> +                               spin_unlock_bh(&sh_chan->desc_lock);
>> +                               dev_dbg(sh_chan->dev,
>> +                                       "descriptor %p callback\n", tx);
>> +                               callback(tx->callback_param);
>> +                               spin_lock_bh(&sh_chan->desc_lock);
>> +                       }
>>
>> -               /* Remove from ld_queue list */
>> -               list_splice_init(&desc->tx_list, &sh_chan->ld_free);
>> +                       list_for_each_entry(chunk, desc->node.prev, node) {
>> +                               /*
>> +                                * All chunks are on the global ld_queue, so, we
>> +                                * have to find the end of the chain ourselves
>> +                                */
>> +                               if (chunk != desc &&
>> +                                   (chunk->async_tx.cookie > 0 ||
>> +                                    chunk->async_tx.cookie == -EBUSY))
>> +                                       break;
>> +
>> +                               if (head_acked)
>> +                                       async_tx_ack(&chunk->async_tx);
>> +                               else
>> +                                       chunk->mark = DESC_WAITING;
>> +                       }
>> +               }
>>
>> -               dev_dbg(sh_chan->dev, "link descriptor %p will be recycle.\n",
>> -                               desc);
>> +               dev_dbg(sh_chan->dev, "descriptor %p #%d completed.\n",
>> +                       tx, tx->cookie);
>>
>> -               list_move(&desc->node, &sh_chan->ld_free);
>> -               /* Run the link descriptor callback function */
>> -               if (callback) {
>> -                       spin_unlock_bh(&sh_chan->desc_lock);
>> -                       dev_dbg(sh_chan->dev, "link descriptor %p callback\n",
>> -                                       desc);
>> -                       callback(callback_param);
>> -                       spin_lock_bh(&sh_chan->desc_lock);
>> -               }
>> +               if (((desc->mark == DESC_COMPLETED ||
>> +                     desc->mark == DESC_WAITING) &&
>> +                    async_tx_test_ack(&desc->async_tx)) || all)
>> +                       /* Remove from ld_queue list */
>> +                       list_move(&desc->node, &sh_chan->ld_free);
>>        }
>>        spin_unlock_bh(&sh_chan->desc_lock);
>>  }
>>
>>  static void sh_chan_xfer_ld_queue(struct sh_dmae_chan *sh_chan)
>>  {
>> -       struct list_head *ld_node;
>>        struct sh_dmae_regs hw;
>> +       struct sh_desc *sd;
>>
>>        /* DMA work check */
>>        if (dmae_is_idle(sh_chan))
>>                return;
>>
>> +       spin_lock_bh(&sh_chan->desc_lock);
>>        /* Find the first un-transfer desciptor */
>> -       for (ld_node = sh_chan->ld_queue.next;
>> -               (ld_node != &sh_chan->ld_queue)
>> -                       && (to_sh_desc(ld_node)->mark == DESC_COMP);
>> -               ld_node = ld_node->next)
>> -               cpu_relax();
>> -
>> -       if (ld_node != &sh_chan->ld_queue) {
>> -               /* Get the ld start address from ld_queue */
>> -               hw = to_sh_desc(ld_node)->hw;
>> -               dmae_set_reg(sh_chan, hw);
>> -               dmae_start(sh_chan);
>> -       }
>> +       list_for_each_entry(sd, &sh_chan->ld_queue, node)
>> +               if (sd->mark == DESC_SUBMITTED) {
>> +                       /* Get the ld start address from ld_queue */
>> +                       hw = sd->hw;
>> +                       dmae_set_reg(sh_chan, hw);
>> +                       dmae_start(sh_chan);
>> +                       break;
>> +               }
>> +       spin_unlock_bh(&sh_chan->desc_lock);
>>  }
>>
>>  static void sh_dmae_memcpy_issue_pending(struct dma_chan *chan)
>> @@ -421,12 +474,11 @@ static enum dma_status sh_dmae_is_complete(struct dma_chan *chan,
>>        dma_cookie_t last_used;
>>        dma_cookie_t last_complete;
>>
>> -       sh_dmae_chan_ld_cleanup(sh_chan);
>> +       sh_dmae_chan_ld_cleanup(sh_chan, false);
>>
>>        last_used = chan->cookie;
>>        last_complete = sh_chan->completed_cookie;
>> -       if (last_complete == -EBUSY)
>> -               last_complete = last_used;
>> +       BUG_ON(last_complete < 0);
>>
>>        if (done)
>>                *done = last_complete;
>> @@ -481,11 +533,13 @@ static irqreturn_t sh_dmae_err(int irq, void *data)
>>                err = sh_dmae_rst(0);
>>                if (err)
>>                        return err;
>> +#ifdef SH_DMAC_BASE1
>>                if (shdev->pdata.mode & SHDMA_DMAOR1) {
>>                        err = sh_dmae_rst(1);
>>                        if (err)
>>                                return err;
>>                }
>> +#endif
>>                disable_irq(irq);
>>                return IRQ_HANDLED;
>>        }
>> @@ -499,30 +553,36 @@ static void dmae_do_tasklet(unsigned long data)
>>        u32 sar_buf = sh_dmae_readl(sh_chan, SAR);
>>        list_for_each_entry_safe(desc, _desc,
>>                                        &sh_chan->ld_queue, node) {
>> -               if ((desc->hw.sar + desc->hw.tcr) == sar_buf) {
>> +               if ((desc->hw.sar + desc->hw.tcr) == sar_buf &&
>> +                   desc->mark == DESC_SUBMITTED) {
>>                        cur_desc = desc;
>>                        break;
>>                }
>>        }
>>
>>        if (cur_desc) {
>> +               dev_dbg(sh_chan->dev, "done %p #%d start %u\n",
>> +                       &cur_desc->async_tx, cur_desc->async_tx.cookie,
>> +                       cur_desc->hw.sar);
>> +
>>                switch (cur_desc->async_tx.cookie) {
>> -               case 0: /* other desc data */
>> +               case -EINVAL:
>> +               case -ENOSPC:
>> +                       /* other desc data */
>>                        break;
>> -               case -EBUSY: /* last desc */
>> -                       sh_chan->completed_cookie =
>> -                               cur_desc->async_tx.cookie;
>> +               case -EBUSY: /* Cannot be */
>> +                       BUG_ON(1);
>>                        break;
>> -               default: /* first desc ( 0 < )*/
>> +               default: /* first desc: cookie > 0 */
>>                        sh_chan->completed_cookie =
>> -                               cur_desc->async_tx.cookie - 1;
>> +                               cur_desc->async_tx.cookie;
>>                        break;
>>                }
>> -               cur_desc->mark = DESC_COMP;
>> +               cur_desc->mark = DESC_COMPLETED;
>>        }
>>        /* Next desc */
>>        sh_chan_xfer_ld_queue(sh_chan);
>> -       sh_dmae_chan_ld_cleanup(sh_chan);
>> +       sh_dmae_chan_ld_cleanup(sh_chan, false);
>>  }
>>
>>  static unsigned int get_dmae_irq(unsigned int id)
>> diff --git a/drivers/dma/shdma.h b/drivers/dma/shdma.h
>> index 2b4bc15..c93555c 100644
>> --- a/drivers/dma/shdma.h
>> +++ b/drivers/dma/shdma.h
>> @@ -26,7 +26,6 @@ struct sh_dmae_regs {
>>  };
>>
>>  struct sh_desc {
>> -       struct list_head tx_list;
>>        struct sh_dmae_regs hw;
>>        struct list_head node;
>>        struct dma_async_tx_descriptor async_tx;
>> --
>> 1.6.2.4
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>> Please read the FAQ at  http://www.tux.org/lkml/
>>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-sh" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>



-- 
Nobuhiro Iwamatsu
   iwamatsu at {nigauri.org / debian.org}
   GPG ID: 40AD1FA6
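
To make the cookie convention above easier to follow outside the diff, here
is a small stand-alone C sketch of the chunking loop in sh_dmae_prep_memcpy().
It assumes SH_DMA_TCR_MAX is 0x00FFFFFF (a 24-bit transfer count, hence the
16M chunk limit and the "+ 1" in the fixed min() above); the -EBUSY / -EINVAL
/ -ENOSPC marking follows the patch, while main() and the printouts are
purely illustrative.

#include <errno.h>
#include <stddef.h>
#include <stdio.h>

#define SH_DMA_TCR_MAX 0x00FFFFFF	/* assumed: 24-bit TCR, 16M - 1 */

int main(void)
{
	size_t len = 40u << 20;		/* a 40 MiB transfer -> 3 chunks */
	size_t copy_size;
	int first = 1;

	do {
		/* min(len, SH_DMA_TCR_MAX + 1), as in the fixed driver */
		copy_size = len < (size_t)SH_DMA_TCR_MAX + 1 ?
			    len : (size_t)SH_DMA_TCR_MAX + 1;
		len -= copy_size;

		if (first)	/* head chunk: the user-visible descriptor */
			printf("head chunk %zu bytes, cookie %d (-EBUSY)\n",
			       copy_size, -EBUSY);
		else if (len)	/* middle chunk: invisible to the user */
			printf("mid  chunk %zu bytes, cookie %d (-EINVAL)\n",
			       copy_size, -EINVAL);
		else		/* tail chunk, when not also the head */
			printf("tail chunk %zu bytes, cookie %d (-ENOSPC)\n",
			       copy_size, -ENOSPC);
		first = 0;
	} while (len);

	return 0;
}

For a 40 MiB transfer this prints one head chunk and one middle chunk of
16777216 bytes each, then a tail chunk of 8388608 bytes - exactly the chain
that sh_dmae_tx_submit() later walks to mark everything DESC_SUBMITTED.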

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 5/5] sh: dmaengine support for sh7724
  2009-12-08  7:13     ` Paul Mundt
@ 2009-12-17  3:14       ` Nobuhiro Iwamatsu
  2009-12-17  5:15         ` Paul Mundt
  2009-12-17  7:42         ` Guennadi Liakhovetski
  0 siblings, 2 replies; 19+ messages in thread
From: Nobuhiro Iwamatsu @ 2009-12-17  3:14 UTC (permalink / raw)
  To: Paul Mundt; +Cc: Dan Williams, Guennadi Liakhovetski, linux-kernel, linux-sh

Hi,

2009/12/8 Paul Mundt <lethal@linux-sh.org>:
> On Mon, Dec 07, 2009 at 06:02:24PM -0700, Dan Williams wrote:
>> On Fri, Dec 4, 2009 at 11:45 AM, Guennadi Liakhovetski
>> <g.liakhovetski@gmx.de> wrote:
>> > Add a dmaengine platform device to sh7724, fix DMA channel interrupt numbers.
>> >
>> > Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
>> > ---
>> >
>> > With this patch dmatest works with channels 0-5; channels 6-11 still don't
>> > work for some reason... So, feel free to discard it for now, if a complete
>> > fix would be preferred.
>>
>> Does this patch limit operation to channels 0-5?  I leave this one for
>> Paul to disposition.
>>
> There are 6 channels per controller, so the fact that the second
> controller isn't working could either be because the driver needs more work
> to handle multiple controllers properly, or there could be a pinmux issue
> that hasn't been resolved yet for DMAC1. Both of these cases require the
> driver to be a bit smarter, but as Guennadi is presently working on these
> things, doing them incrementally is ok with me. If this gets DMAC0
> working at least where it wasn't before, it's certainly worth applying.
> DMAC1 seems to have more problems than just wrong interrupt numbers, so
> this isn't going to cause any regressions there either.
>
> Acked-by: Paul Mundt <lethal@linux-sh.org>

This patch cannot be applied to Paul's git/HEAD either.
I have updated the patch.

Acked-by: Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com>

Best regards,
  Nobuhiro

-----

Add a dmaengine platform device to sh7724, fix DMA channel interrupt numbers.

Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Nobuhiro Iwamatsu <iwamatsu@nigauri.org>
---
 arch/sh/include/cpu-sh4/cpu/dma-sh4a.h |    8 ++++----
 arch/sh/kernel/cpu/sh4a/setup-sh7724.c |   15 +++++++++++++++
 2 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/arch/sh/include/cpu-sh4/cpu/dma-sh4a.h b/arch/sh/include/cpu-sh4/cpu/dma-sh4a.h
index f0886bc..c4ed660 100644
--- a/arch/sh/include/cpu-sh4/cpu/dma-sh4a.h
+++ b/arch/sh/include/cpu-sh4/cpu/dma-sh4a.h
@@ -19,10 +19,10 @@
 #elif defined(CONFIG_CPU_SUBTYPE_SH7723) || \
       defined(CONFIG_CPU_SUBTYPE_SH7724)
 #define DMTE0_IRQ	48	/* DMAC0A*/
-#define DMTE4_IRQ	40	/* DMAC0B */
-#define DMTE6_IRQ	42
-#define DMTE8_IRQ	76	/* DMAC1A */
-#define DMTE9_IRQ	77
+#define DMTE4_IRQ	76	/* DMAC0B */
+#define DMTE6_IRQ	40
+#define DMTE8_IRQ	42	/* DMAC1A */
+#define DMTE9_IRQ	43
 #define DMTE10_IRQ	72	/* DMAC1B */
 #define DMTE11_IRQ	73
 #define DMAE0_IRQ	78	/* DMA Error IRQ*/
diff --git a/arch/sh/kernel/cpu/sh4a/setup-sh7724.c b/arch/sh/kernel/cpu/sh4a/setup-sh7724.c
index a52f351..d32f96c 100644
--- a/arch/sh/kernel/cpu/sh4a/setup-sh7724.c
+++ b/arch/sh/kernel/cpu/sh4a/setup-sh7724.c
@@ -23,9 +23,23 @@
 #include <linux/notifier.h>
 #include <asm/suspend.h>
 #include <asm/clock.h>
+#include <asm/dma-sh.h>
 #include <asm/mmzone.h>
 #include <cpu/sh7724.h>

+/* DMA */
+static struct sh_dmae_pdata dma_platform_data = {
+	.mode = SHDMA_DMAOR1,
+};
+
+static struct platform_device dma_device = {
+	.name	= "sh-dma-engine",
+	.id		= -1,
+	.dev	= {
+		.platform_data	= &dma_platform_data,
+	},
+};
+
 /* Serial */
 static struct plat_sci_port scif0_platform_data = {
 	.mapbase        = 0xffe00000,
@@ -649,6 +663,7 @@ static struct platform_device *sh7724_devices[] __initdata = {
 	&tmu3_device,
 	&tmu4_device,
 	&tmu5_device,
+	&dma_device,
 	&rtc_device,
 	&iic0_device,
 	&iic1_device,
-- 
1.6.5.4
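
As a rough sketch of the consuming side, and under the assumption that the
struct and macro names match the patches in this thread: the driver reads
this platform data in its probe routine, and the SHDMA_DMAOR1 bit is what
makes it reset the second controller's DMAOR as well (compare the
shdev->pdata.mode check in the shdma.c error path quoted earlier). The probe
below is hypothetical and only shows the platform-data handling:

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/platform_device.h>
#include <asm/dma-sh.h>		/* struct sh_dmae_pdata, SHDMA_DMAOR1 */

/* Hypothetical probe; the platform bus would bind it to the
 * "sh-dma-engine" device registered above by matching names. */
static int __init example_dmae_probe(struct platform_device *pdev)
{
	struct sh_dmae_pdata *pdata = pdev->dev.platform_data;

	if (!pdata)
		return -ENODEV;

	/* sh7724 sets .mode = SHDMA_DMAOR1, so DMAC1 is handled too */
	if (pdata->mode & SHDMA_DMAOR1)
		pr_info("sh-dma-engine: second DMAOR (DMAC1) enabled\n");

	return 0;
}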

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH 5/5] sh: dmaengine support for sh7724
  2009-12-17  3:14       ` Nobuhiro Iwamatsu
@ 2009-12-17  5:15         ` Paul Mundt
  2009-12-17  7:42         ` Guennadi Liakhovetski
  1 sibling, 0 replies; 19+ messages in thread
From: Paul Mundt @ 2009-12-17  5:15 UTC (permalink / raw)
  To: Nobuhiro Iwamatsu
  Cc: Dan Williams, Guennadi Liakhovetski, linux-kernel, linux-sh

On Thu, Dec 17, 2009 at 12:14:14PM +0900, Nobuhiro Iwamatsu wrote:
> Hi,
> 
> 2009/12/8 Paul Mundt <lethal@linux-sh.org>:
> > On Mon, Dec 07, 2009 at 06:02:24PM -0700, Dan Williams wrote:
> >> On Fri, Dec 4, 2009 at 11:45 AM, Guennadi Liakhovetski
> >> <g.liakhovetski@gmx.de> wrote:
> >> > Add a dmaengine platform device to sh7724, fix DMA channel interrupt numbers.
> >> >
> >> > Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
> >> > ---
> >> >
> >> > With this patch dmatest works with channels 0-5; channels 6-11 still don't
> >> > work for some reason... So, feel free to discard it for now, if a complete
> >> > fix would be preferred.
> >>
> >> Does this patch limit operation to channels 0-5?  I leave this one for
> >> Paul to disposition.
> >>
> > There are 6 channels per controller, so the fact that the second
> > controller isn't working could either be because the driver needs more work
> > to handle multiple controllers properly, or there could be a pinmux issue
> > that hasn't been resolved yet for DMAC1. Both of these cases require the
> > driver to be a bit smarter, but as Guennadi is presently working on these
> > things, doing them incrementally is ok with me. If this gets DMAC0
> > working at least where it wasn't before, it's certainly worth applying.
> > DMAC1 seems to have more problems than just wrong interrupt numbers, so
> > this isn't going to cause any regressions there either.
> >
> > Acked-by: Paul Mundt <lethal@linux-sh.org>
> 
> This patch cannot be applied to Paul's git/HEAD either.
> I have updated the patch.
> 
> Acked-by: Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com>
> 

Ok, this doesn't really have any outstanding build dependencies, but
obviously isn't going to pass the test cases until the outstanding
dmaengine patches are merged.

In any event, fixing up the wrong IRQs is something we should be doing
sooner rather than later, so I'll queue this up as it is. Hopefully the
dmaengine bits will be sorted in time for -rc2 and then everything should
just magically work ;-)
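
For reference, shdma.c turns a channel id into its transfer-end IRQ through a
small lookup table (get_dmae_irq() appears in the driver diff earlier in this
thread). A sketch of that mapping is below; the table contents are an
assumption based on the 2.6.32-era driver rather than quoted from it, so
treat the exact grouping as illustrative:

#include <linux/kernel.h>
#include <asm/dma-sh.h>	/* assumed to pull in the DMTEn_IRQ defines */

/* Assumed layout: channels 0-3 take consecutive IRQs from DMTE0,
 * 4-5 from DMTE4, 6-7 from DMTE6, and 8-11 have individual defines. */
static const unsigned int example_dmte_irq_map[] = {
	DMTE0_IRQ, DMTE0_IRQ + 1, DMTE0_IRQ + 2, DMTE0_IRQ + 3,
	DMTE4_IRQ, DMTE4_IRQ + 1,
	DMTE6_IRQ, DMTE6_IRQ + 1,
	DMTE8_IRQ, DMTE9_IRQ, DMTE10_IRQ, DMTE11_IRQ,
};

static unsigned int example_get_dmae_irq(unsigned int id)
{
	/* Out-of-range channel ids map to 0, i.e. no IRQ */
	return id < ARRAY_SIZE(example_dmte_irq_map) ?
		example_dmte_irq_map[id] : 0;
}

Given Paul's six-channels-per-controller note, entries 6-11 of such a table
are what DMAC1 relies on, so wrong numbers there can stall the second
controller without affecting channels 0-5.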

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 5/5] sh: dmaengine support for sh7724
  2009-12-17  3:14       ` Nobuhiro Iwamatsu
  2009-12-17  5:15         ` Paul Mundt
@ 2009-12-17  7:42         ` Guennadi Liakhovetski
  2009-12-17  7:47           ` Paul Mundt
  1 sibling, 1 reply; 19+ messages in thread
From: Guennadi Liakhovetski @ 2009-12-17  7:42 UTC (permalink / raw)
  To: Nobuhiro Iwamatsu; +Cc: Paul Mundt, Dan Williams, linux-kernel, linux-sh

Hi again

On Thu, 17 Dec 2009, Nobuhiro Iwamatsu wrote:

> Hi,
> 
> 2009/12/8 Paul Mundt <lethal@linux-sh.org>:
> > On Mon, Dec 07, 2009 at 06:02:24PM -0700, Dan Williams wrote:
> >> On Fri, Dec 4, 2009 at 11:45 AM, Guennadi Liakhovetski
> >> <g.liakhovetski@gmx.de> wrote:
> >> > Add a dmaengine platform device to sh7724, fix DMA channel interrupt numbers.
> >> >
> >> > Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
> >> > ---
> >> >
> >> > With this patch dmatest works with channels 0-5; channels 6-11 still don't
> >> > work for some reason... So, feel free to discard it for now, if a complete
> >> > fix would be preferred.
> >>
> >> Does this patch limit operation to channels 0-5?  I leave this one for
> >> Paul to disposition.
> >>
> > There are 6 channels per controller, so the fact that the second
> > controller isn't working could either be because the driver needs more work
> > to handle multiple controllers properly, or there could be a pinmux issue
> > that hasn't been resolved yet for DMAC1. Both of these cases require the
> > driver to be a bit smarter, but as Guennadi is presently working on these
> > things, doing them incrementally is ok with me. If this gets DMAC0
> > working at least where it wasn't before, it's certainly worth applying.
> > DMAC1 seems to have more problems than just wrong interrupt numbers, so
> > this isn't going to cause any regressions there either.
> >
> > Acked-by: Paul Mundt <lethal@linux-sh.org>
> 
> This patch cannot be applied to Paul's git/HEAD either.
> I have updated the patch.
> 
> Acked-by: Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com>

The same comment applies here as to my other patch. Paul, I think you 
indicated that you've already merged my patch, or at least that you have 
no problem merging it. Please use my original version and not this one.

Thanks
Guennadi

> 
> Best regards,
>   Nobuhiro
> 
> -----
> 
> Add a dmaengine platform device to sh7724, fix DMA channel interrupt numbers.
> 
> Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
> Signed-off-by: Nobuhiro Iwamatsu <iwamatsu@nigauri.org>
> ---
>  arch/sh/include/cpu-sh4/cpu/dma-sh4a.h |    8 ++++----
>  arch/sh/kernel/cpu/sh4a/setup-sh7724.c |   15 +++++++++++++++
>  2 files changed, 19 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/sh/include/cpu-sh4/cpu/dma-sh4a.h b/arch/sh/include/cpu-sh4/cpu/dma-sh4a.h
> index f0886bc..c4ed660 100644
> --- a/arch/sh/include/cpu-sh4/cpu/dma-sh4a.h
> +++ b/arch/sh/include/cpu-sh4/cpu/dma-sh4a.h
> @@ -19,10 +19,10 @@
>  #elif defined(CONFIG_CPU_SUBTYPE_SH7723) || \
>        defined(CONFIG_CPU_SUBTYPE_SH7724)
>  #define DMTE0_IRQ	48	/* DMAC0A*/
> -#define DMTE4_IRQ	40	/* DMAC0B */
> -#define DMTE6_IRQ	42
> -#define DMTE8_IRQ	76	/* DMAC1A */
> -#define DMTE9_IRQ	77
> +#define DMTE4_IRQ	76	/* DMAC0B */
> +#define DMTE6_IRQ	40
> +#define DMTE8_IRQ	42	/* DMAC1A */
> +#define DMTE9_IRQ	43
>  #define DMTE10_IRQ	72	/* DMAC1B */
>  #define DMTE11_IRQ	73
>  #define DMAE0_IRQ	78	/* DMA Error IRQ*/
> diff --git a/arch/sh/kernel/cpu/sh4a/setup-sh7724.c b/arch/sh/kernel/cpu/sh4a/setup-sh7724.c
> index a52f351..d32f96c 100644
> --- a/arch/sh/kernel/cpu/sh4a/setup-sh7724.c
> +++ b/arch/sh/kernel/cpu/sh4a/setup-sh7724.c
> @@ -23,9 +23,23 @@
>  #include <linux/notifier.h>
>  #include <asm/suspend.h>
>  #include <asm/clock.h>
> +#include <asm/dma-sh.h>
>  #include <asm/mmzone.h>
>  #include <cpu/sh7724.h>
> 
> +/* DMA */
> +static struct sh_dmae_pdata dma_platform_data = {
> +	.mode = SHDMA_DMAOR1,
> +};
> +
> +static struct platform_device dma_device = {
> +	.name	= "sh-dma-engine",
> +	.id		= -1,
> +	.dev	= {
> +		.platform_data	= &dma_platform_data,
> +	},
> +};
> +
>  /* Serial */
>  static struct plat_sci_port scif0_platform_data = {
>  	.mapbase        = 0xffe00000,
> @@ -649,6 +663,7 @@ static struct platform_device *sh7724_devices[] __initdata = {
>  	&tmu3_device,
>  	&tmu4_device,
>  	&tmu5_device,
> +	&dma_device,
>  	&rtc_device,
>  	&iic0_device,
>  	&iic1_device,
> -- 
> 1.6.5.4
> 

---
Guennadi Liakhovetski

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 5/5] sh: dmaengine support for sh7724
  2009-12-17  7:42         ` Guennadi Liakhovetski
@ 2009-12-17  7:47           ` Paul Mundt
  2009-12-17  8:02             ` Guennadi Liakhovetski
  0 siblings, 1 reply; 19+ messages in thread
From: Paul Mundt @ 2009-12-17  7:47 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Nobuhiro Iwamatsu, Dan Williams, linux-kernel, linux-sh

On Thu, Dec 17, 2009 at 08:42:39AM +0100, Guennadi Liakhovetski wrote:
> On Thu, 17 Dec 2009, Nobuhiro Iwamatsu wrote:
> > This patch cannot be applied to Paul's git/HEAD either.
> > I have updated the patch.
> > 
> > Acked-by: Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com>
> 
> The same comment applies here as to my other patch. Paul, I think you 
> indicated that you've already merged my patch, or at least that you have 
> no problem merging it. Please use my original version and not this one.
> 
I applied it by hand, so your authorship was retained, and I fixed up the
whitespace damage by hand (it looks like it was just line wrapping from
the mailer, which I deal with often enough that I no longer particularly
care for trivial patches). For the non-trivial patches, reposting those
without whitespace damage would be preferable.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 5/5] sh: dmaengine support for sh7724
  2009-12-17  7:47           ` Paul Mundt
@ 2009-12-17  8:02             ` Guennadi Liakhovetski
  2009-12-17  8:09               ` Paul Mundt
  0 siblings, 1 reply; 19+ messages in thread
From: Guennadi Liakhovetski @ 2009-12-17  8:02 UTC (permalink / raw)
  To: Paul Mundt; +Cc: Nobuhiro Iwamatsu, Dan Williams, linux-kernel, linux-sh

Hi Paul

On Thu, 17 Dec 2009, Paul Mundt wrote:

> On Thu, Dec 17, 2009 at 08:42:39AM +0100, Guennadi Liakhovetski wrote:
> > On Thu, 17 Dec 2009, Nobuhiro Iwamatsu wrote:
> > > This patch cannot be applied to Paul's git/HEAD either.
> > > I have updated the patch.
> > > 
> > > Acked-by: Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com>
> > 
> > Same comment applies as to the other my patch. Paul, I think, you 
> > indicated, that you've already merged my patch, or at least, that you have 
> > no problem merging it. Please, do use my original version and not this 
> > one.
> > 
> I applied it by hand, so your authorship was retained, and I fixed up the
> whitespace damage by hand (it looks like it was just line wrapping from
> the mailer, which I deal with often enough that I no longer particularly
> care for trivial patches). For the non-trivial patches, reposting those
> without whitespace damage would be preferable.

Hm, strange, are you sure _my_ original version was white-space damaged? I 
am quite sure my mailing chain behaves, and I just double-checked my patch 
as I got it back from the list - it looks ok to me. It would be interesting 
to know what specific white-space issues you found there; you could just 
mail me those off-list, if you like.

Thanks
Guennadi
---
Guennadi Liakhovetski

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 5/5] sh: dmaengine support for sh7724
  2009-12-17  8:02             ` Guennadi Liakhovetski
@ 2009-12-17  8:09               ` Paul Mundt
  0 siblings, 0 replies; 19+ messages in thread
From: Paul Mundt @ 2009-12-17  8:09 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Nobuhiro Iwamatsu, Dan Williams, linux-kernel, linux-sh

On Thu, Dec 17, 2009 at 09:02:11AM +0100, Guennadi Liakhovetski wrote:
> On Thu, 17 Dec 2009, Paul Mundt wrote:
> > On Thu, Dec 17, 2009 at 08:42:39AM +0100, Guennadi Liakhovetski wrote:
> > > The same comment applies here as to my other patch. Paul, I think you 
> > > indicated that you've already merged my patch, or at least that you have 
> > > no problem merging it. Please use my original version and not this one.
> > > 
> > I applied it by hand, so your authorship was retained, and I fixed up the
> > whitespace damage by hand (it looks like it was just line wrapping from
> > the mailer, which I deal with often enough that I no longer particularly
> > care for trivial patches). For the non-trivial patches, reposting those
> > without whitespace damage would be preferable.
> 
> Hm, strange, are you sure _my_ original version was white-space damaged? I 
> am quite sure my mailing chain behaves, and I just double-checked my patch 
> as I got it back from the list - it looks ok to me. It would be interesting 
> to know what specific white-space issues you found there; you could just 
> mail me those off-list, if you like.
> 
No, yours was fine, it just didn't apply. Iwamatsu-san's was updated, but
whitespace damaged. So neither one was directly usable. ;-)

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2009-12-17  8:09 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-12-04 18:44 [PATCH 0/5] sh: fix the shdma driver, improve dmatest, support sh7724 Guennadi Liakhovetski
2009-12-04 18:44 ` [PATCH 1/5] sh: DMA driver has to specify its alignment requirements Guennadi Liakhovetski
2009-12-04 18:44 ` [PATCH 2/5] dmaengine: fix dmatest to verify minimum transfer length Guennadi Liakhovetski
2009-12-04 18:44 ` [PATCH 3/5] sh: fix DMA driver's descriptor chaining and cookie Guennadi Liakhovetski
2009-12-08  0:58   ` Dan Williams
2009-12-12  2:11     ` Nobuhiro Iwamatsu
2009-12-04 18:45 ` [PATCH 4/5] sh: stylistic improvements for the DMA driver Guennadi Liakhovetski
2009-12-08  0:54   ` Dan Williams
2009-12-08  8:20     ` Guennadi Liakhovetski
2009-12-04 18:45 ` [PATCH 5/5] sh: dmaengine support for sh7724 Guennadi Liakhovetski
2009-12-08  1:02   ` Dan Williams
2009-12-08  7:13     ` Paul Mundt
2009-12-17  3:14       ` Nobuhiro Iwamatsu
2009-12-17  5:15         ` Paul Mundt
2009-12-17  7:42         ` Guennadi Liakhovetski
2009-12-17  7:47           ` Paul Mundt
2009-12-17  8:02             ` Guennadi Liakhovetski
2009-12-17  8:09               ` Paul Mundt
2009-12-08  1:09 ` [PATCH 0/5] sh: fix the shdma driver, improve dmatest, support sh7724 Dan Williams

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).