netdev.vger.kernel.org archive mirror
* [PATCH 1/4] net: ethernet: davinci cpdma: Enable interrupt while waiting for teardown complete
@ 2013-01-18 10:06 Sebastian Andrzej Siewior
  2013-01-18 10:06 ` [PATCH 2/4] net: ethernet: davinci_cpdma: fix descriptor handling Sebastian Andrzej Siewior
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Sebastian Andrzej Siewior @ 2013-01-18 10:06 UTC (permalink / raw)
  To: netdev
  Cc: David S. Miller, Thomas Gleixner, Rakesh Ranjan, Bruno Bittner,
	Holger Dengler, Jan Altenberg, Sebastian Andrzej Siewior

From: Thomas Gleixner <tglx@linutronix.de>

A teardown might take some time. If another CPU tries to queue
something in the meantime, it will spin on the channel lock and wait.
Dropping the lock while we wait for the teardown to complete lets that
CPU continue: it will notice that the channel is in the teardown state
and will not do anything.
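
This is safe because the submit path takes the same channel lock and
checks the channel state before queueing anything; a channel that is
being torn down is rejected. A minimal sketch of that guard (the
function name and the reduced argument list are illustrative, not the
exact driver code):

static int chan_submit_sketch(struct cpdma_chan *chan, void *token,
                              void *data, int len)
{
        unsigned long flags;
        int ret = 0;

        spin_lock_irqsave(&chan->lock, flags);

        /* cpdma_chan_stop() changes the state while holding the same lock */
        if (chan->state == CPDMA_STATE_TEARDOWN) {
                ret = -EINVAL;
                goto unlock_ret;
        }

        /* ... allocate a descriptor and chain it to the channel ... */

unlock_ret:
        spin_unlock_irqrestore(&chan->lock, flags);
        return ret;
}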

Cc: Rakesh Ranjan <rakesh.ranjan@vnl.in>
Cc: Bruno Bittner <Bruno.Bittner@sick.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[dengler: patch description]
Signed-off-by: Holger Dengler <dengler@linutronix.de>
[jan: forward ported]
Signed-off-by: Jan Altenberg <jan@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/net/ethernet/ti/davinci_cpdma.c |    4 ++++
 1 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c b/drivers/net/ethernet/ti/davinci_cpdma.c
index 4995673..dd5f2db 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -836,6 +836,8 @@ int cpdma_chan_stop(struct cpdma_chan *chan)
 	/* trigger teardown */
 	dma_reg_write(ctlr, chan->td, chan_linear(chan));
 
+	spin_unlock_irqrestore(&chan->lock, flags);
+
 	/* wait for teardown complete */
 	timeout = jiffies + HZ/10;	/* 100 msec */
 	while (time_before(jiffies, timeout)) {
@@ -845,6 +847,8 @@ int cpdma_chan_stop(struct cpdma_chan *chan)
 		cpu_relax();
 	}
 	WARN_ON(!time_before(jiffies, timeout));
+
+	spin_lock_irqsave(&chan->lock, flags);
 	chan_write(chan, cp, CPDMA_TEARDOWN_VALUE);
 
 	/* handle completed packets */
-- 
1.7.6.5


* [PATCH 2/4] net: ethernet: davinci_cpdma: fix descriptor handling
  2013-01-18 10:06 [PATCH 1/4] net: ethernet: davinci cpdma: Enable interrupt while waiting for teardown complete Sebastian Andrzej Siewior
@ 2013-01-18 10:06 ` Sebastian Andrzej Siewior
  2013-01-18 10:06 ` [PATCH 3/4] net: ethernet: ti cpsw: Split up DMA descriptor pool Sebastian Andrzej Siewior
  2013-01-18 10:06 ` [PATCH 4/4] net: ethernet: ti cpsw: separate interrupt handler for TX, RX, and MISC Sebastian Andrzej Siewior
  2 siblings, 0 replies; 5+ messages in thread
From: Sebastian Andrzej Siewior @ 2013-01-18 10:06 UTC (permalink / raw)
  To: netdev
  Cc: David S. Miller, Thomas Gleixner, Rakesh Ranjan, Bruno Bittner,
	stable, Holger Dengler, Jan Altenberg, Sebastian Andrzej Siewior

From: Thomas Gleixner <tglx@linutronix.de>

According to the datasheet, the mis-queued buffer condition has to be
handled during descriptor processing. This is implemented correctly
when descriptors are queued for TX, but during TX cleanup of already
transmitted descriptors the condition is not handled correctly. That
leads to a stall of the TX queue, which fills up and never gets
processed.
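
For reference, the mis-queue handling on the enqueue side looks roughly
like this (a simplified sketch with an illustrative helper name, not
the exact driver code); the hunk below applies the same condition to
the cleanup side:

static void chan_restart_if_misqueued(struct cpdma_chan *chan,
                                      struct cpdma_desc __iomem *prev,
                                      dma_addr_t desc_dma)
{
        u32 mode = desc_read(prev, hw_mode);

        /*
         * The hardware hit end-of-queue on the previous tail before the
         * new descriptor was chained in: rewrite the head descriptor
         * pointer (HDP) to restart the channel from the new descriptor.
         */
        if ((mode & CPDMA_DESC_EOQ) && !(mode & CPDMA_DESC_OWNER)) {
                chan_write(chan, hdp, desc_dma);
                chan->stats.misqueued++;
        }
}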

Cc: Rakesh Ranjan <rakesh.ranjan@vnl.in>
Cc: Bruno Bittner <Bruno.Bittner@sick.com>
Cc: stable@vger.kernel.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[dengler: patch description]
Signed-off-by: Holger Dengler <dengler@linutronix.de>
[jan: forward ported]
Signed-off-by: Jan Altenberg <jan@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/net/ethernet/ti/davinci_cpdma.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c b/drivers/net/ethernet/ti/davinci_cpdma.c
index dd5f2db..709c437 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -756,7 +756,8 @@ static int __cpdma_chan_process(struct cpdma_chan *chan)
 	chan->count--;
 	chan->stats.good_dequeue++;
 
-	if (status & CPDMA_DESC_EOQ) {
+	if ((status & CPDMA_DESC_EOQ) && (chan->head) &&
+			(!(status & CPDMA_DESC_TD_COMPLETE))) {
 		chan->stats.requeue++;
 		chan_write(chan, hdp, desc_phys(pool, chan->head));
 	}
-- 
1.7.6.5


* [PATCH 3/4] net: ethernet: ti cpsw: Split up DMA descriptor pool
  2013-01-18 10:06 [PATCH 1/4] net: ethernet: davinci cpdma: Enable interrupt while waiting for teardown complete Sebastian Andrzej Siewior
  2013-01-18 10:06 ` [PATCH 2/4] net: ethernet: davinci_cpdma: fix descriptor handling Sebastian Andrzej Siewior
@ 2013-01-18 10:06 ` Sebastian Andrzej Siewior
  2013-01-18 17:14   ` Mugunthan V N
  2013-01-18 10:06 ` [PATCH 4/4] net: ethernet: ti cpsw: separate interrupt handler for TX, RX, and MISC Sebastian Andrzej Siewior
  2 siblings, 1 reply; 5+ messages in thread
From: Sebastian Andrzej Siewior @ 2013-01-18 10:06 UTC (permalink / raw)
  To: netdev
  Cc: David S. Miller, Thomas Gleixner, Rakesh Ranjan, Bruno Bittner,
	Holger Dengler, Jan Altenberg, Sebastian Andrzej Siewior

From: Thomas Gleixner <tglx@linutronix.de>

Split the buffer pool into an RX and a TX block so that neither of the
channels can influence the other. Otherwise it is possible to fill up
the whole pool by sending a lot of large packets on a slow half-duplex
link, leaving no free descriptors for RX.
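
The idea is simply to confine RX allocations to the lower half of the
descriptor bitmap and TX allocations to the upper half; a minimal
sketch (the patch below additionally keeps a rotating TX start index):

static int pool_find_free_area(struct cpdma_desc_pool *pool, int num_desc,
                               bool is_rx)
{
        int half = pool->num_desc / 2;

        if (is_rx)
                /* RX only ever searches the lower half of the bitmap */
                return bitmap_find_next_zero_area(pool->bitmap, half, 0,
                                                  num_desc, 0);

        /* TX is restricted to the upper half */
        return bitmap_find_next_zero_area(pool->bitmap, pool->num_desc,
                                          half, num_desc, 0);
}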

Cc: Rakesh Ranjan <rakesh.ranjan@vnl.in>
Cc: Bruno Bittner <Bruno.Bittner@sick.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[dengler: patch description]
Signed-off-by: Holger Dengler <dengler@linutronix.de>
[jan: forward ported]
Signed-off-by: Jan Altenberg <jan@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/net/ethernet/ti/davinci_cpdma.c |   35 +++++++++++++++++++++++++++---
 1 files changed, 31 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c b/drivers/net/ethernet/ti/davinci_cpdma.c
index 709c437..70325cd 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -217,16 +217,41 @@ desc_from_phys(struct cpdma_desc_pool *pool, dma_addr_t dma)
 }
 
 static struct cpdma_desc __iomem *
-cpdma_desc_alloc(struct cpdma_desc_pool *pool, int num_desc)
+cpdma_desc_alloc(struct cpdma_desc_pool *pool, int num_desc, bool is_rx)
 {
 	unsigned long flags;
 	int index;
 	struct cpdma_desc __iomem *desc = NULL;
+	static int last_index = 4096;
 
 	spin_lock_irqsave(&pool->lock, flags);
 
-	index = bitmap_find_next_zero_area(pool->bitmap, pool->num_desc, 0,
-					   num_desc, 0);
+	/*
+	 * The pool is split into two areas rx and tx. So we make sure
+	 * that we can't run out of pool buffers for RX when TX has
+	 * tons of stuff queued.
+	 */
+	if (is_rx) {
+		index = bitmap_find_next_zero_area(pool->bitmap,
+				pool->num_desc/2, 0, num_desc, 0);
+	 } else {
+		if (last_index >= pool->num_desc)
+			last_index = pool->num_desc / 2;
+
+		index = bitmap_find_next_zero_area(pool->bitmap,
+				pool->num_desc, last_index, num_desc, 0);
+
+		if (!(index < pool->num_desc)) {
+			index = bitmap_find_next_zero_area(pool->bitmap,
+				pool->num_desc, pool->num_desc/2, num_desc, 0);
+		}
+
+		if (index < pool->num_desc)
+			last_index = index + 1;
+		else
+			last_index = pool->num_desc / 2;
+	}
+
 	if (index < pool->num_desc) {
 		bitmap_set(pool->bitmap, index, num_desc);
 		desc = pool->iomap + pool->desc_size * index;
@@ -660,6 +685,7 @@ int cpdma_chan_submit(struct cpdma_chan *chan, void *token, void *data,
 	unsigned long			flags;
 	u32				mode;
 	int				ret = 0;
+	bool				is_rx;
 
 	spin_lock_irqsave(&chan->lock, flags);
 
@@ -668,7 +694,8 @@ int cpdma_chan_submit(struct cpdma_chan *chan, void *token, void *data,
 		goto unlock_ret;
 	}
 
-	desc = cpdma_desc_alloc(ctlr->pool, 1);
+	is_rx = (chan->rxfree != 0);
+	desc = cpdma_desc_alloc(ctlr->pool, 1, is_rx);
 	if (!desc) {
 		chan->stats.desc_alloc_fail++;
 		ret = -ENOMEM;
-- 
1.7.6.5


* [PATCH 4/4] net: ethernet: ti cpsw: separate interrupt handler for TX, RX, and MISC
  2013-01-18 10:06 [PATCH 1/4] net: ethernet: davinci cpdma: Enable interrupt while waiting for teardown complete Sebastian Andrzej Siewior
  2013-01-18 10:06 ` [PATCH 2/4] net: ethernet: davinci_cpdma: fix descriptor handling Sebastian Andrzej Siewior
  2013-01-18 10:06 ` [PATCH 3/4] net: ethernet: ti cpsw: Split up DMA descriptor pool Sebastian Andrzej Siewior
@ 2013-01-18 10:06 ` Sebastian Andrzej Siewior
  2 siblings, 0 replies; 5+ messages in thread
From: Sebastian Andrzej Siewior @ 2013-01-18 10:06 UTC (permalink / raw)
  To: netdev
  Cc: David S. Miller, Thomas Gleixner, Mugunthan V N, Rakesh Ranjan,
	Bruno Bittner, stable, Holger Dengler, Jan Altenberg,
	Sebastian Andrzej Siewior

From: Thomas Gleixner <tglx@linutronix.de>

The driver uses the same interrupt handler function for all possible
device interrupts.

If a host error (misc) interrupt is raised, the handler merely switches
the driver into NAPI scheduling mode, which is completely pointless
since neither RX nor TX completes. It also fails to give any
information about the reason for the host error interrupt.

The solution is to provide separate interrupt handlers for the RX, TX
and error (misc) interrupts.

This at least allows us to print out the host error reason, even
though there is no error handling mechanism for it yet. It already
allowed us to pinpoint the main problem, because we knew what kind of
error the DMA engine ran into.

The patch also cleans up the complete TX queue in the poll handler. It
was observed that the EOI ack for the RX queue sometimes also acked TX
packets which had not yet been processed, which led to a TX stall.
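
Conceptually, the EOI split acknowledges each interrupt source with its
own vector value in CPDMA_MACEOIVECTOR instead of one shared EOI
(sketch; the enum and the wrapper are illustrative, the vector values
match the helpers added below):

enum cpdma_eoi_vector {
        CPDMA_EOI_RX_THRESH     = 0,
        CPDMA_EOI_RX            = 1,
        CPDMA_EOI_TX            = 2,
        CPDMA_EOI_MISC          = 3,
};

static void cpdma_write_eoi(struct cpdma_ctlr *ctlr, enum cpdma_eoi_vector vec)
{
        /* acking e.g. RX no longer implicitly acks pending TX work */
        dma_reg_write(ctlr, CPDMA_MACEOIVECTOR, vec);
}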

Cc: Mugunthan V N <mugunthanvnm@ti.com>
Cc: Rakesh Ranjan <rakesh.ranjan@vnl.in>
Cc: Bruno Bittner <Bruno.Bittner@sick.com>
Cc: stable@vger.kernel.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[dengler: patch description]
Signed-off-by: Holger Dengler <dengler@linutronix.de>
[jan: forward ported]
Signed-off-by: Jan Altenberg <jan@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/net/ethernet/ti/cpsw.c          |  249 ++++++++++++++++++++++++++-----
 drivers/net/ethernet/ti/davinci_cpdma.c |   31 +++-
 drivers/net/ethernet/ti/davinci_cpdma.h |    5 +-
 3 files changed, 236 insertions(+), 49 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 40aff68..ee4533d 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -118,6 +118,13 @@ do {								\
 #define TX_PRIORITY_MAPPING	0x33221100
 #define CPDMA_TX_PRIORITY_MAP	0x76543210
 
+enum {
+	CPSW_IRQ_TYPE_RX_THRESH,
+	CPSW_IRQ_TYPE_RX,
+	CPSW_IRQ_TYPE_TX,
+	CPSW_IRQ_TYPE_MISC,
+};
+
 #define cpsw_enable_irq(priv)	\
 	do {			\
 		u32 i;		\
@@ -312,9 +319,12 @@ struct cpsw_priv {
 	struct cpdma_ctlr		*dma;
 	struct cpdma_chan		*txch, *rxch;
 	struct cpsw_ale			*ale;
+	bool				rx_irqs_disabled;
+	bool				tx_irqs_disabled;
+	bool				misc_irqs_disabled;
 	/* snapshot of IRQ numbers */
-	u32 irqs_table[4];
-	u32 num_irqs;
+	u32				irqs_table[4];
+	u32				num_irqs;
 	struct cpts cpts;
 };
 
@@ -350,21 +360,93 @@ static void cpsw_ndo_set_rx_mode(struct net_device *ndev)
 	}
 }
 
-static void cpsw_intr_enable(struct cpsw_priv *priv)
+static void cpsw_intr_rx_enable(struct cpsw_priv *priv)
+{
+	__raw_writel(0x01, &priv->wr_regs->rx_en);
+	cpdma_chan_int_ctrl(priv->rxch, true);
+	return;
+}
+
+static void cpsw_intr_rx_disable(struct cpsw_priv *priv)
+{
+	__raw_writel(0x00, &priv->wr_regs->rx_en);
+	cpdma_chan_int_ctrl(priv->rxch, false);
+	return;
+}
+
+static void cpsw_intr_tx_enable(struct cpsw_priv *priv)
+{
+	__raw_writel(0x01, &priv->wr_regs->tx_en);
+	cpdma_chan_int_ctrl(priv->txch, true);
+	return;
+}
+
+static void cpsw_intr_tx_disable(struct cpsw_priv *priv)
 {
-	__raw_writel(0xFF, &priv->wr_regs->tx_en);
-	__raw_writel(0xFF, &priv->wr_regs->rx_en);
+	__raw_writel(0x00, &priv->wr_regs->tx_en);
+	cpdma_chan_int_ctrl(priv->txch, false);
+	return;
+}
 
+static void cpsw_intr_misc_enable(struct cpsw_priv *priv)
+{
+	__raw_writel(0x04, &priv->wr_regs->misc_en);
 	cpdma_ctlr_int_ctrl(priv->dma, true);
 	return;
 }
 
+static void cpsw_intr_misc_disable(struct cpsw_priv *priv)
+{
+	__raw_writel(0x00, &priv->wr_regs->misc_en);
+	cpdma_ctlr_int_ctrl(priv->dma, false);
+}
+
+static void cpsw_intr_enable(struct cpsw_priv *priv)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&priv->lock, flags);
+	if (priv->rx_irqs_disabled) {
+		enable_irq(priv->irqs_table[CPSW_IRQ_TYPE_RX]);
+		priv->rx_irqs_disabled = false;
+	}
+	if (priv->tx_irqs_disabled) {
+		enable_irq(priv->irqs_table[CPSW_IRQ_TYPE_TX]);
+		priv->tx_irqs_disabled = false;
+	}
+	if (priv->misc_irqs_disabled) {
+		enable_irq(priv->irqs_table[CPSW_IRQ_TYPE_MISC]);
+		priv->misc_irqs_disabled = false;
+	}
+	cpsw_intr_rx_enable(priv);
+	cpsw_intr_tx_enable(priv);
+	cpsw_intr_misc_enable(priv);
+
+	spin_unlock_irqrestore(&priv->lock, flags);
+ }
+
 static void cpsw_intr_disable(struct cpsw_priv *priv)
 {
-	__raw_writel(0, &priv->wr_regs->tx_en);
-	__raw_writel(0, &priv->wr_regs->rx_en);
+	unsigned long flags;
+	spin_lock_irqsave(&priv->lock, flags);
+	if (!priv->rx_irqs_disabled) {
+		disable_irq_nosync(priv->irqs_table[CPSW_IRQ_TYPE_RX]);
+		priv->rx_irqs_disabled = true;
+	}
+	if (!priv->tx_irqs_disabled) {
+		disable_irq_nosync(priv->irqs_table[CPSW_IRQ_TYPE_TX]);
+		priv->tx_irqs_disabled = true;
+	}
+	if (!priv->misc_irqs_disabled) {
+		disable_irq_nosync(priv->irqs_table[CPSW_IRQ_TYPE_MISC]);
+		priv->misc_irqs_disabled = true;
+	}
 
-	cpdma_ctlr_int_ctrl(priv->dma, false);
+	cpsw_intr_rx_disable(priv);
+	cpsw_intr_tx_disable(priv);
+	cpsw_intr_misc_disable(priv);
+
+	spin_unlock_irqrestore(&priv->lock, flags);
 	return;
 }
 
@@ -422,15 +504,57 @@ void cpsw_rx_handler(void *token, int len, int status)
 	WARN_ON(ret < 0);
 }
 
-static irqreturn_t cpsw_interrupt(int irq, void *dev_id)
+static irqreturn_t cpsw_rx_interrupt(int irq, void *dev_id)
 {
 	struct cpsw_priv *priv = dev_id;
+	unsigned long flags;
 
-	if (likely(netif_running(priv->ndev))) {
-		cpsw_intr_disable(priv);
-		cpsw_disable_irq(priv);
+	spin_lock_irqsave(&priv->lock, flags);
+	disable_irq_nosync(irq);
+	priv->rx_irqs_disabled = true;
+	cpsw_intr_rx_disable(priv);
+	spin_unlock_irqrestore(&priv->lock, flags);
+
+	if (netif_running(priv->ndev))
 		napi_schedule(&priv->napi);
+
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t cpsw_tx_interrupt(int irq, void *dev_id)
+{
+	struct cpsw_priv *priv = dev_id;
+	unsigned long flags;
+
+	spin_lock_irqsave(&priv->lock, flags);
+	disable_irq_nosync(irq);
+	priv->tx_irqs_disabled = true;
+	cpsw_intr_tx_disable(priv);
+	spin_unlock_irqrestore(&priv->lock, flags);
+
+	if (netif_running(priv->ndev))
+		napi_schedule(&priv->napi);
+
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t cpsw_misc_interrupt(int irq, void *dev_id)
+{
+	struct cpsw_priv *priv = dev_id;
+	unsigned long flags;
+
+	if (!netif_running(priv->ndev)) {
+		spin_lock_irqsave(&priv->lock, flags);
+		disable_irq_nosync(irq);
+		priv->misc_irqs_disabled = true;
+		cpsw_intr_misc_disable(priv);
+		spin_unlock_irqrestore(&priv->lock, flags);
+		return IRQ_HANDLED;
 	}
+
+	printk(KERN_ERR "Host error: %x\n", cpdma_get_host_state(priv->dma));
+	cpdma_ctlr_misc_eoi(priv->dma);
+
 	return IRQ_HANDLED;
 }
 
@@ -445,20 +569,39 @@ static inline int cpsw_get_slave_port(struct cpsw_priv *priv, u32 slave_num)
 static int cpsw_poll(struct napi_struct *napi, int budget)
 {
 	struct cpsw_priv	*priv = napi_to_priv(napi);
-	int			num_tx, num_rx;
+	int			num_rx = 0;
+	int			txcnt = 0;
+	int			tx;
+	unsigned long		flags;
 
-	num_tx = cpdma_chan_process(priv->txch, 128);
-	num_rx = cpdma_chan_process(priv->rxch, budget);
-
-	if (num_rx || num_tx)
-		cpsw_dbg(priv, intr, "poll %d rx, %d tx pkts\n",
-			 num_rx, num_tx);
+	/* cleanup the complete TX queue */
+	do {
+		tx = cpdma_chan_process(priv->txch, 128);
+		if (tx > 0)
+			txcnt += tx;
+	} while (tx == 128);
+	if (txcnt) {
+		spin_lock_irqsave(&priv->lock, flags);
+		if (priv->tx_irqs_disabled == true) {
+			cpsw_intr_tx_enable(priv);
+			cpdma_ctlr_tx_eoi(priv->dma);
+			enable_irq(priv->irqs_table[CPSW_IRQ_TYPE_TX]);
+			priv->tx_irqs_disabled = false;
+		}
+		spin_unlock_irqrestore(&priv->lock, flags);
+	}
 
+	num_rx = cpdma_chan_process(priv->rxch, budget);
 	if (num_rx < budget) {
-		napi_complete(napi);
-		cpsw_intr_enable(priv);
-		cpdma_ctlr_eoi(priv->dma);
-		cpsw_enable_irq(priv);
+		spin_lock_irqsave(&priv->lock, flags);
+		if (priv->rx_irqs_disabled == true) {
+			napi_complete(napi);
+			cpsw_intr_rx_enable(priv);
+			cpdma_ctlr_rx_eoi(priv->dma);
+			enable_irq(priv->irqs_table[CPSW_IRQ_TYPE_RX]);
+			priv->rx_irqs_disabled = false;
+		}
+		spin_unlock_irqrestore(&priv->lock, flags);
 	}
 
 	return num_rx;
@@ -679,7 +822,8 @@ static int cpsw_ndo_open(struct net_device *ndev)
 	cpdma_ctlr_start(priv->dma);
 	cpsw_intr_enable(priv);
 	napi_enable(&priv->napi);
-	cpdma_ctlr_eoi(priv->dma);
+	cpdma_ctlr_rx_eoi(priv->dma);
+	cpdma_ctlr_tx_eoi(priv->dma);
 
 	return 0;
 }
@@ -702,7 +846,6 @@ static int cpsw_ndo_stop(struct net_device *ndev)
 	napi_disable(&priv->napi);
 	netif_carrier_off(priv->ndev);
 	cpsw_intr_disable(priv);
-	cpdma_ctlr_int_ctrl(priv->dma, false);
 	cpdma_ctlr_stop(priv->dma);
 	cpsw_ale_stop(priv->ale);
 	for_each_slave(priv, cpsw_slave_stop, priv);
@@ -896,12 +1039,11 @@ static void cpsw_ndo_tx_timeout(struct net_device *ndev)
 	cpsw_err(priv, tx_err, "transmit timeout, restarting dma\n");
 	priv->stats.tx_errors++;
 	cpsw_intr_disable(priv);
-	cpdma_ctlr_int_ctrl(priv->dma, false);
 	cpdma_chan_stop(priv->txch);
 	cpdma_chan_start(priv->txch);
-	cpdma_ctlr_int_ctrl(priv->dma, true);
 	cpsw_intr_enable(priv);
-	cpdma_ctlr_eoi(priv->dma);
+	cpdma_ctlr_rx_eoi(priv->dma);
+	cpdma_ctlr_tx_eoi(priv->dma);
 }
 
 static struct net_device_stats *cpsw_ndo_get_stats(struct net_device *ndev)
@@ -916,11 +1058,11 @@ static void cpsw_ndo_poll_controller(struct net_device *ndev)
 	struct cpsw_priv *priv = netdev_priv(ndev);
 
 	cpsw_intr_disable(priv);
-	cpdma_ctlr_int_ctrl(priv->dma, false);
-	cpsw_interrupt(ndev->irq, priv);
-	cpdma_ctlr_int_ctrl(priv->dma, true);
+	cpsw_rx_interrupt(priv->irqs_table[CPSW_IRQ_TYPE_RX], priv);
+	cpsw_tx_interrupt(priv->irqs_table[CPSW_IRQ_TYPE_TX], priv);
 	cpsw_intr_enable(priv);
-	cpdma_ctlr_eoi(priv->dma);
+	cpdma_ctlr_rx_eoi(priv->dma);
+	cpdma_ctlr_tx_eoi(priv->dma);
 }
 #endif
 
@@ -1333,17 +1475,37 @@ static int cpsw_probe(struct platform_device *pdev)
 		goto clean_ale_ret;
 	}
 
+	priv->num_irqs = 0;
 	while ((res = platform_get_resource(priv->pdev, IORESOURCE_IRQ, k))) {
-		for (i = res->start; i <= res->end; i++) {
-			if (request_irq(i, cpsw_interrupt, IRQF_DISABLED,
+		i = res->start;
+		switch (k) {
+		case CPSW_IRQ_TYPE_RX_THRESH:
+			i = 0;
+			break;
+		case CPSW_IRQ_TYPE_RX:
+			if (request_irq(i, cpsw_rx_interrupt, 0,
 					dev_name(&pdev->dev), priv)) {
 				dev_err(priv->dev, "error attaching irq\n");
-				goto clean_ale_ret;
+				goto clean_irq_ret;
 			}
-			priv->irqs_table[k] = i;
-			priv->num_irqs = k;
+			break;
+		case CPSW_IRQ_TYPE_TX:
+			if (request_irq(i, cpsw_tx_interrupt, 0,
+					dev_name(&pdev->dev), priv)) {
+				dev_err(priv->dev, "error attaching irq\n");
+				goto clean_irq_ret;
+			}
+			break;
+		case CPSW_IRQ_TYPE_MISC:
+			if (request_irq(i, cpsw_misc_interrupt, 0,
+					dev_name(&pdev->dev), priv)) {
+				dev_err(priv->dev, "error attaching irq\n");
+				goto clean_irq_ret;
+			}
+			break;
 		}
-		k++;
+		priv->irqs_table[k++] = i;
+		priv->num_irqs = k;
 	}
 
 	ndev->flags |= IFF_ALLMULTI;	/* see cpsw_ndo_change_rx_flags() */
@@ -1371,7 +1533,10 @@ static int cpsw_probe(struct platform_device *pdev)
 	return 0;
 
 clean_irq_ret:
-	free_irq(ndev->irq, priv);
+	for (i = 0; i < priv->num_irqs; i++) {
+		if (priv->irqs_table[i] > 0)
+			free_irq(priv->irqs_table[i], priv);
+	}
 clean_ale_ret:
 	cpsw_ale_destroy(priv->ale);
 clean_dma_ret:
@@ -1402,12 +1567,16 @@ static int cpsw_remove(struct platform_device *pdev)
 {
 	struct net_device *ndev = platform_get_drvdata(pdev);
 	struct cpsw_priv *priv = netdev_priv(ndev);
+	int i;
 
 	pr_info("removing device");
 	platform_set_drvdata(pdev, NULL);
 
 	cpts_unregister(&priv->cpts);
-	free_irq(ndev->irq, priv);
+	for (i = 0; i < priv->num_irqs; i++) {
+		if (priv->irqs_table[i] > 0)
+			free_irq(priv->irqs_table[i], priv);
+	}
 	cpsw_ale_destroy(priv->ale);
 	cpdma_chan_destroy(priv->txch);
 	cpdma_chan_destroy(priv->rxch);
diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c b/drivers/net/ethernet/ti/davinci_cpdma.c
index 70325cd..8c7c589 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -479,7 +479,7 @@ EXPORT_SYMBOL_GPL(cpdma_ctlr_destroy);
 int cpdma_ctlr_int_ctrl(struct cpdma_ctlr *ctlr, bool enable)
 {
 	unsigned long flags;
-	int i, reg;
+	int reg;
 
 	spin_lock_irqsave(&ctlr->lock, flags);
 	if (ctlr->state != CPDMA_STATE_ACTIVE) {
@@ -489,20 +489,35 @@ int cpdma_ctlr_int_ctrl(struct cpdma_ctlr *ctlr, bool enable)
 
 	reg = enable ? CPDMA_DMAINTMASKSET : CPDMA_DMAINTMASKCLEAR;
 	dma_reg_write(ctlr, reg, CPDMA_DMAINT_HOSTERR);
-
-	for (i = 0; i < ARRAY_SIZE(ctlr->channels); i++) {
-		if (ctlr->channels[i])
-			cpdma_chan_int_ctrl(ctlr->channels[i], enable);
-	}
-
 	spin_unlock_irqrestore(&ctlr->lock, flags);
 	return 0;
 }
+EXPORT_SYMBOL_GPL(cpdma_ctlr_int_ctrl);
+
+u32 cpdma_get_host_state(struct cpdma_ctlr *ctlr)
+{
+	return dma_reg_read(ctlr, CPDMA_DMASTATUS);
+}
+EXPORT_SYMBOL_GPL(cpdma_get_host_state);
+
+void cpdma_ctlr_rx_eoi(struct cpdma_ctlr *ctlr)
+{
+	dma_reg_write(ctlr, CPDMA_MACEOIVECTOR, 1);
+}
+EXPORT_SYMBOL_GPL(cpdma_ctlr_rx_eoi);
+
+void cpdma_ctlr_tx_eoi(struct cpdma_ctlr *ctlr)
+{
+	dma_reg_write(ctlr, CPDMA_MACEOIVECTOR, 2);
+}
+EXPORT_SYMBOL_GPL(cpdma_ctlr_tx_eoi);
 
-void cpdma_ctlr_eoi(struct cpdma_ctlr *ctlr)
+void cpdma_ctlr_misc_eoi(struct cpdma_ctlr *ctlr)
 {
 	dma_reg_write(ctlr, CPDMA_MACEOIVECTOR, 0);
+	dma_reg_write(ctlr, CPDMA_MACEOIVECTOR, 3);
 }
+EXPORT_SYMBOL_GPL(cpdma_ctlr_misc_eoi);
 
 struct cpdma_chan *cpdma_chan_create(struct cpdma_ctlr *ctlr, int chan_num,
 				     cpdma_handler_fn handler)
diff --git a/drivers/net/ethernet/ti/davinci_cpdma.h b/drivers/net/ethernet/ti/davinci_cpdma.h
index afa19a0..083a0cc 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.h
+++ b/drivers/net/ethernet/ti/davinci_cpdma.h
@@ -86,8 +86,11 @@ int cpdma_chan_submit(struct cpdma_chan *chan, void *token, void *data,
 int cpdma_chan_process(struct cpdma_chan *chan, int quota);
 
 int cpdma_ctlr_int_ctrl(struct cpdma_ctlr *ctlr, bool enable);
-void cpdma_ctlr_eoi(struct cpdma_ctlr *ctlr);
+void cpdma_ctlr_rx_eoi(struct cpdma_ctlr *ctlr);
+void cpdma_ctlr_tx_eoi(struct cpdma_ctlr *ctlr);
+void cpdma_ctlr_misc_eoi(struct cpdma_ctlr *ctlr);
 int cpdma_chan_int_ctrl(struct cpdma_chan *chan, bool enable);
+u32 cpdma_get_host_state(struct cpdma_ctlr *ctlr);
 
 enum cpdma_control {
 	CPDMA_CMD_IDLE,			/* write-only */
-- 
1.7.6.5


* Re: [PATCH 3/4] net: ethernet: ti cpsw: Split up DMA descriptor pool
  2013-01-18 10:06 ` [PATCH 3/4] net: ethernet: ti cpsw: Split up DMA descriptor pool Sebastian Andrzej Siewior
@ 2013-01-18 17:14   ` Mugunthan V N
  0 siblings, 0 replies; 5+ messages in thread
From: Mugunthan V N @ 2013-01-18 17:14 UTC (permalink / raw)
  To: netdev

Sebastian Andrzej Siewior <sebastian <at> breakpoint.cc> writes:

> 
> From: Thomas Gleixner <tglx <at> linutronix.de>
> 
> Split the buffer pool into a RX and a TX block so neither of the
> channels can influence the other. It is possible to fillup the pool by
> sending a lot of large packets on a slow half-duplex link.
> 
> Cc: Rakesh Ranjan <rakesh.ranjan <at> vnl.in>
> Cc: Bruno Bittner <Bruno.Bittner <at> sick.com>
> Signed-off-by: Thomas Gleixner <tglx <at> linutronix.de>
> [dengler: patch description]
> Signed-off-by: Holger Dengler <dengler <at> linutronix.de>
> [jan: forward ported]
> Signed-off-by: Jan Altenberg <jan <at> linutronix.de>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy <at> linutronix.de>
> ---
>  drivers/net/ethernet/ti/davinci_cpdma.c |   35 +++++++++++++++++++++++++++---
>  1 files changed, 31 insertions(+), 4 deletions(-)
> 

I have posted the same patch in the following patch series and it is
under review.
http://patchwork.ozlabs.org/patch/213322/

Regards
Mugunthan V N

