* [PATCH v4 00/10] serial: sh-sci: Add DT DMA support
@ 2015-09-18 11:08 Geert Uytterhoeven
2015-09-18 11:08 ` [PATCH v4 01/10] serial: sh-sci: Shuffle functions around Geert Uytterhoeven
` (9 more replies)
0 siblings, 10 replies; 11+ messages in thread
From: Geert Uytterhoeven @ 2015-09-18 11:08 UTC (permalink / raw)
To: Greg Kroah-Hartman, Jiri Slaby
Cc: Muhammad Hamza Farooq, Magnus Damm, Yoshihiro Shimoda,
Laurent Pinchart, Nobuhiro Iwamatsu, Yoshihiro Kaneko,
Kazuya Mizuguchi, Koji Matsuoka, Wolfram Sang,
Guennadi Liakhovetski, linux-serial, linux-sh, Geert Uytterhoeven
Hi Greg, Jiri,
This patch series adds DT DMA support to the Renesas "SCI" serial driver
on R-Car Gen2 SoCs.
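For reference, "DT DMA support" here means the port's DMA channels are
described via the generic devicetree DMA bindings rather than platform
data. A hypothetical fragment for one SCIF node (the 0x29/0x2a request
IDs and node labels are made up for illustration, not taken from a real
SoC dtsi):

	&scif0 {
		dmas = <&dmac0 0x29>, <&dmac0 0x2a>;
		dma-names = "tx", "rx";
		status = "okay";
	};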
It depends on "[PATCH v3 00/33] serial: sh-sci: Miscellaneous and DMA
Improvements" [1], which contains preparatory patches already considered
stable and safe.
Changes compared to "[PATCH/RFC v3 0/4] serial: sh-sci: Add DT DMA
support" [2]:
- Dropped RFC status,
- Dropped "serial: sh-sci: Stop TX and RX on shutdown", as it caused
serial console output to stop suddenly during halt/reboot on e.g.
armadillo and kzm9g,
- Added several new fixes, mostly from Muhammad Hamza Farooq and
Aleksandar Mitev (thanks a lot!),
- Added a first patch to shuffle functions around, to avoid churn in
patches making functional changes.
This was tested on r8a7791/koelsch (using SCIF, SCIFA, SCIFB, and
HSCIF), and received some testing on r8a7795/salvator-x (using SCIF and
HSCIF).
While it doesn't matter much at low speeds, reliability at high speeds
(> 1 Mbps) is improved by applying the following improvements to the
rcar-dmac driver:
- "[PATCH] dmaengine: rcar-dmac: Fix residue reporting for pending
descriptors" [3],
- Muhammad Hamza Farooq's series "[PATCH 0/6] rcar DMA engine patches"
[4].
Note that SCI DMA support was originally written for SH-based systems,
and was never fully wired up in platform code in mainline, so the risk
for regressions is fairly small.
For testing convenience, I've pushed this and related series to topic
branches in
git://git.kernel.org/pub/scm/linux/kernel/git/geert/renesas-drivers.git:
- topic/scif-misc-v3 [1]
- topic/scif-dma-v4
- topic/rcar-dmac-residue-v1 [3]
- topic/rcar-dmac-hamza-v1 [4]
Thanks!
References:
[1] http://www.spinics.net/lists/linux-serial/msg18682.html
[2] http://www.spinics.net/lists/linux-sh/msg44347.html
[3] http://www.spinics.net/lists/dmaengine/msg05711.html
[4] http://www.spinics.net/lists/dmaengine/msg06103.html
Aleksandar Mitev (1):
serial: sh-sci: Remove timer on shutdown of port
Geert Uytterhoeven (5):
serial: sh-sci: Shuffle functions around
serial: sh-sci: Get rid of the workqueue to handle receive DMA
requests
serial: sh-sci: Submit RX DMA from RX interrupt on (H)SCIF
serial: sh-sci: Stop calling sci_start_rx() from sci_request_dma()
serial: sh-sci: Add DT support to DMA setup
Muhammad Hamza Farooq (4):
serial: sh-sci: Redirect port interrupts to CPU _only_ when DMA stops
serial: sh-sci: Call dma_async_issue_pending when transaction
completes
serial: sh-sci: Do not terminate DMA engine when race condition occurs
serial: sh-sci: Pause DMA engine and get DMA status again
drivers/tty/serial/sh-sci.c | 1364 ++++++++++++++++++++++---------------------
1 file changed, 698 insertions(+), 666 deletions(-)
--
1.9.1
Gr{oetje,eeting}s,
Geert
--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org
In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds
* [PATCH v4 01/10] serial: sh-sci: Shuffle functions around
2015-09-18 11:08 [PATCH v4 00/10] serial: sh-sci: Add DT DMA support Geert Uytterhoeven
@ 2015-09-18 11:08 ` Geert Uytterhoeven
2015-09-18 11:08 ` [PATCH v4 02/10] serial: sh-sci: Get rid of the workqueue to handle receive DMA requests Geert Uytterhoeven
` (8 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Geert Uytterhoeven @ 2015-09-18 11:08 UTC (permalink / raw)
To: Greg Kroah-Hartman, Jiri Slaby
Cc: Muhammad Hamza Farooq, Magnus Damm, Yoshihiro Shimoda,
Laurent Pinchart, Nobuhiro Iwamatsu, Yoshihiro Kaneko,
Kazuya Mizuguchi, Koji Matsuoka, Wolfram Sang,
Guennadi Liakhovetski, linux-serial, linux-sh, Geert Uytterhoeven
This allows us to:
- Remove forward declarations of static functions,
- Coalesce two sections protected by #ifdef CONFIG_SERIAL_SH_SCI_DMA,
- Avoid shuffling functions around in the near future,
- Avoid adding forward declarations in the near future.
No functional changes.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
---
v4:
- New.
---
drivers/tty/serial/sh-sci.c | 1305 +++++++++++++++++++++----------------------
1 file changed, 649 insertions(+), 656 deletions(-)
diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
index d8b73e791a554823..7d8b2644e06d4b8c 100644
--- a/drivers/tty/serial/sh-sci.c
+++ b/drivers/tty/serial/sh-sci.c
@@ -123,11 +123,6 @@ struct sci_port {
struct notifier_block freq_transition;
};
-/* Function prototypes */
-static void sci_start_tx(struct uart_port *port);
-static void sci_stop_tx(struct uart_port *port);
-static void sci_start_rx(struct uart_port *port);
-
#define SCI_NPORTS CONFIG_SERIAL_SH_SCI_NR_UARTS
static struct sci_port sci_ports[SCI_NPORTS];
@@ -489,6 +484,89 @@ static void sci_port_disable(struct sci_port *sci_port)
pm_runtime_put_sync(sci_port->port.dev);
}
+static inline unsigned long port_rx_irq_mask(struct uart_port *port)
+{
+ /*
+ * Not all ports (such as SCIFA) will support REIE. Rather than
+ * special-casing the port type, we check the port initialization
+ * IRQ enable mask to see whether the IRQ is desired at all. If
+ * it's unset, it's logically inferred that there's no point in
+ * testing for it.
+ */
+ return SCSCR_RIE | (to_sci_port(port)->cfg->scscr & SCSCR_REIE);
+}
+
+static void sci_start_tx(struct uart_port *port)
+{
+ struct sci_port *s = to_sci_port(port);
+ unsigned short ctrl;
+
+#ifdef CONFIG_SERIAL_SH_SCI_DMA
+ if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) {
+ u16 new, scr = serial_port_in(port, SCSCR);
+ if (s->chan_tx)
+ new = scr | SCSCR_TDRQE;
+ else
+ new = scr & ~SCSCR_TDRQE;
+ if (new != scr)
+ serial_port_out(port, SCSCR, new);
+ }
+
+ if (s->chan_tx && !uart_circ_empty(&s->port.state->xmit) &&
+ dma_submit_error(s->cookie_tx)) {
+ s->cookie_tx = 0;
+ schedule_work(&s->work_tx);
+ }
+#endif
+
+ if (!s->chan_tx || port->type == PORT_SCIFA || port->type == PORT_SCIFB) {
+ /* Set TIE (Transmit Interrupt Enable) bit in SCSCR */
+ ctrl = serial_port_in(port, SCSCR);
+ serial_port_out(port, SCSCR, ctrl | SCSCR_TIE);
+ }
+}
+
+static void sci_stop_tx(struct uart_port *port)
+{
+ unsigned short ctrl;
+
+ /* Clear TIE (Transmit Interrupt Enable) bit in SCSCR */
+ ctrl = serial_port_in(port, SCSCR);
+
+ if (port->type == PORT_SCIFA || port->type == PORT_SCIFB)
+ ctrl &= ~SCSCR_TDRQE;
+
+ ctrl &= ~SCSCR_TIE;
+
+ serial_port_out(port, SCSCR, ctrl);
+}
+
+static void sci_start_rx(struct uart_port *port)
+{
+ unsigned short ctrl;
+
+ ctrl = serial_port_in(port, SCSCR) | port_rx_irq_mask(port);
+
+ if (port->type == PORT_SCIFA || port->type == PORT_SCIFB)
+ ctrl &= ~SCSCR_RDRQE;
+
+ serial_port_out(port, SCSCR, ctrl);
+}
+
+static void sci_stop_rx(struct uart_port *port)
+{
+ unsigned short ctrl;
+
+ ctrl = serial_port_in(port, SCSCR);
+
+ if (port->type == PORT_SCIFA || port->type == PORT_SCIFB)
+ ctrl &= ~SCSCR_RDRQE;
+
+ ctrl &= ~port_rx_irq_mask(port);
+
+ serial_port_out(port, SCSCR, ctrl);
+}
+
static void sci_clear_SCxSR(struct uart_port *port, unsigned int mask)
{
if (port->type == PORT_SCI) {
@@ -940,694 +1018,743 @@ static int sci_handle_breaks(struct uart_port *port)
return copied;
}
-static irqreturn_t sci_rx_interrupt(int irq, void *ptr)
-{
#ifdef CONFIG_SERIAL_SH_SCI_DMA
- struct uart_port *port = ptr;
- struct sci_port *s = to_sci_port(port);
+static void sci_dma_tx_complete(void *arg)
+{
+ struct sci_port *s = arg;
+ struct uart_port *port = &s->port;
+ struct circ_buf *xmit = &port->state->xmit;
+ unsigned long flags;
- if (s->chan_rx) {
- u16 scr = serial_port_in(port, SCSCR);
- u16 ssr = serial_port_in(port, SCxSR);
+ dev_dbg(port->dev, "%s(%d)\n", __func__, port->line);
- /* Disable future Rx interrupts */
- if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) {
- disable_irq_nosync(irq);
- scr |= SCSCR_RDRQE;
- } else {
- scr &= ~SCSCR_RIE;
- }
- serial_port_out(port, SCSCR, scr);
- /* Clear current interrupt */
- serial_port_out(port, SCxSR,
- ssr & ~(SCIF_DR | SCxSR_RDxF(port)));
- dev_dbg(port->dev, "Rx IRQ %lu: setup t-out in %u jiffies\n",
- jiffies, s->rx_timeout);
- mod_timer(&s->rx_timer, jiffies + s->rx_timeout);
+ spin_lock_irqsave(&port->lock, flags);
- return IRQ_HANDLED;
- }
-#endif
+ xmit->tail += s->tx_dma_len;
+ xmit->tail &= UART_XMIT_SIZE - 1;
- /* I think sci_receive_chars has to be called irrespective
- * of whether the I_IXOFF is set, otherwise, how is the interrupt
- * to be disabled?
- */
- sci_receive_chars(ptr);
+ port->icount.tx += s->tx_dma_len;
- return IRQ_HANDLED;
-}
+ if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ uart_write_wakeup(port);
-static irqreturn_t sci_tx_interrupt(int irq, void *ptr)
-{
- struct uart_port *port = ptr;
- unsigned long flags;
+ if (!uart_circ_empty(xmit)) {
+ s->cookie_tx = 0;
+ schedule_work(&s->work_tx);
+ } else {
+ s->cookie_tx = -EINVAL;
+ if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) {
+ u16 ctrl = serial_port_in(port, SCSCR);
+ serial_port_out(port, SCSCR, ctrl & ~SCSCR_TIE);
+ }
+ }
- spin_lock_irqsave(&port->lock, flags);
- sci_transmit_chars(port);
spin_unlock_irqrestore(&port->lock, flags);
-
- return IRQ_HANDLED;
}
-static irqreturn_t sci_er_interrupt(int irq, void *ptr)
+/* Locking: called with port lock held */
+static int sci_dma_rx_push(struct sci_port *s, void *buf, size_t count)
{
- struct uart_port *port = ptr;
- struct sci_port *s = to_sci_port(port);
+ struct uart_port *port = &s->port;
+ struct tty_port *tport = &port->state->port;
+ int copied;
- /* Handle errors */
- if (port->type == PORT_SCI) {
- if (sci_handle_errors(port)) {
- /* discard character in rx buffer */
- serial_port_in(port, SCxSR);
- sci_clear_SCxSR(port, SCxSR_RDxF_CLEAR(port));
- }
- } else {
- sci_handle_fifo_overrun(port);
- if (!s->chan_rx)
- sci_receive_chars(ptr);
+ copied = tty_insert_flip_string(tport, buf, count);
+ if (copied < count) {
+ dev_warn(port->dev, "Rx overrun: dropping %zu bytes\n",
+ count - copied);
+ port->icount.buf_overrun++;
}
- sci_clear_SCxSR(port, SCxSR_ERROR_CLEAR(port));
-
- /* Kick the transmission */
- if (!s->chan_tx)
- sci_tx_interrupt(irq, ptr);
+ port->icount.rx += copied;
- return IRQ_HANDLED;
+ return copied;
}
-static irqreturn_t sci_br_interrupt(int irq, void *ptr)
+static int sci_dma_rx_find_active(struct sci_port *s)
{
- struct uart_port *port = ptr;
+ unsigned int i;
- /* Handle BREAKs */
- sci_handle_breaks(port);
- sci_clear_SCxSR(port, SCxSR_BREAK_CLEAR(port));
+ for (i = 0; i < ARRAY_SIZE(s->cookie_rx); i++)
+ if (s->active_rx == s->cookie_rx[i])
+ return i;
- return IRQ_HANDLED;
+ dev_err(s->port.dev, "%s: Rx cookie %d not found!\n", __func__,
+ s->active_rx);
+ return -1;
}
-static inline unsigned long port_rx_irq_mask(struct uart_port *port)
+static void sci_rx_dma_release(struct sci_port *s, bool enable_pio)
{
- /*
- * Not all ports (such as SCIFA) will support REIE. Rather than
- * special-casing the port type, we check the port initialization
- * IRQ enable mask to see whether the IRQ is desired at all. If
- * it's unset, it's logically inferred that there's no point in
- * testing for it.
- */
- return SCSCR_RIE | (to_sci_port(port)->cfg->scscr & SCSCR_REIE);
+ struct dma_chan *chan = s->chan_rx;
+ struct uart_port *port = &s->port;
+ unsigned long flags;
+
+ spin_lock_irqsave(&port->lock, flags);
+ s->chan_rx = NULL;
+ s->cookie_rx[0] = s->cookie_rx[1] = -EINVAL;
+ spin_unlock_irqrestore(&port->lock, flags);
+ dmaengine_terminate_all(chan);
+ dma_free_coherent(chan->device->dev, s->buf_len_rx * 2, s->rx_buf[0],
+ sg_dma_address(&s->sg_rx[0]));
+ dma_release_channel(chan);
+ if (enable_pio)
+ sci_start_rx(port);
}
-static irqreturn_t sci_mpxed_interrupt(int irq, void *ptr)
+static void sci_dma_rx_complete(void *arg)
{
- unsigned short ssr_status, scr_status, err_enabled, orer_status = 0;
- struct uart_port *port = ptr;
- struct sci_port *s = to_sci_port(port);
- irqreturn_t ret = IRQ_NONE;
+ struct sci_port *s = arg;
+ struct uart_port *port = &s->port;
+ unsigned long flags;
+ int active, count = 0;
- ssr_status = serial_port_in(port, SCxSR);
- scr_status = serial_port_in(port, SCSCR);
- if (s->overrun_reg == SCxSR)
- orer_status = ssr_status;
- else {
- if (sci_getreg(port, s->overrun_reg)->size)
- orer_status = serial_port_in(port, s->overrun_reg);
- }
+ dev_dbg(port->dev, "%s(%d) active cookie %d\n", __func__, port->line,
+ s->active_rx);
- err_enabled = scr_status & port_rx_irq_mask(port);
+ spin_lock_irqsave(&port->lock, flags);
- /* Tx Interrupt */
- if ((ssr_status & SCxSR_TDxE(port)) && (scr_status & SCSCR_TIE) &&
- !s->chan_tx)
- ret = sci_tx_interrupt(irq, ptr);
+ active = sci_dma_rx_find_active(s);
+ if (active >= 0)
+ count = sci_dma_rx_push(s, s->rx_buf[active], s->buf_len_rx);
- /*
- * Rx Interrupt: if we're using DMA, the DMA controller clears RDF /
- * DR flags
- */
- if (((ssr_status & SCxSR_RDxF(port)) || s->chan_rx) &&
- (scr_status & SCSCR_RIE))
- ret = sci_rx_interrupt(irq, ptr);
+ mod_timer(&s->rx_timer, jiffies + s->rx_timeout);
- /* Error Interrupt */
- if ((ssr_status & SCxSR_ERRORS(port)) && err_enabled)
- ret = sci_er_interrupt(irq, ptr);
-
- /* Break Interrupt */
- if ((ssr_status & SCxSR_BRK(port)) && err_enabled)
- ret = sci_br_interrupt(irq, ptr);
+ spin_unlock_irqrestore(&port->lock, flags);
- /* Overrun Interrupt */
- if (orer_status & s->overrun_mask) {
- sci_handle_fifo_overrun(port);
- ret = IRQ_HANDLED;
- }
+ if (count)
+ tty_flip_buffer_push(&port->state->port);
- return ret;
+ schedule_work(&s->work_rx);
}
-/*
- * Here we define a transition notifier so that we can update all of our
- * ports' baud rate when the peripheral clock changes.
- */
-static int sci_notifier(struct notifier_block *self,
- unsigned long phase, void *p)
+static void sci_tx_dma_release(struct sci_port *s, bool enable_pio)
{
- struct sci_port *sci_port;
+ struct dma_chan *chan = s->chan_tx;
+ struct uart_port *port = &s->port;
unsigned long flags;
- sci_port = container_of(self, struct sci_port, freq_transition);
+ spin_lock_irqsave(&port->lock, flags);
+ s->chan_tx = NULL;
+ s->cookie_tx = -EINVAL;
+ spin_unlock_irqrestore(&port->lock, flags);
+ dmaengine_terminate_all(chan);
+ dma_unmap_single(chan->device->dev, s->tx_dma_addr, UART_XMIT_SIZE,
+ DMA_TO_DEVICE);
+ dma_release_channel(chan);
+ if (enable_pio)
+ sci_start_tx(port);
+}
- if (phase == CPUFREQ_POSTCHANGE) {
- struct uart_port *port = &sci_port->port;
+static void sci_submit_rx(struct sci_port *s)
+{
+ struct dma_chan *chan = s->chan_rx;
+ int i;
- spin_lock_irqsave(&port->lock, flags);
- port->uartclk = clk_get_rate(sci_port->iclk);
- spin_unlock_irqrestore(&port->lock, flags);
- }
+ for (i = 0; i < 2; i++) {
+ struct scatterlist *sg = &s->sg_rx[i];
+ struct dma_async_tx_descriptor *desc;
- return NOTIFY_OK;
-}
+ desc = dmaengine_prep_slave_sg(chan,
+ sg, 1, DMA_DEV_TO_MEM,
+ DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+ if (!desc)
+ goto fail;
-static const struct sci_irq_desc {
- const char *desc;
- irq_handler_t handler;
-} sci_irq_desc[] = {
- /*
- * Split out handlers, the default case.
- */
- [SCIx_ERI_IRQ] = {
- .desc = "rx err",
- .handler = sci_er_interrupt,
- },
+ desc->callback = sci_dma_rx_complete;
+ desc->callback_param = s;
+ s->cookie_rx[i] = dmaengine_submit(desc);
+ if (dma_submit_error(s->cookie_rx[i]))
+ goto fail;
- [SCIx_RXI_IRQ] = {
- .desc = "rx full",
- .handler = sci_rx_interrupt,
- },
+ dev_dbg(s->port.dev, "%s(): cookie %d to #%d\n", __func__,
+ s->cookie_rx[i], i);
+ }
- [SCIx_TXI_IRQ] = {
- .desc = "tx empty",
- .handler = sci_tx_interrupt,
- },
+ s->active_rx = s->cookie_rx[0];
- [SCIx_BRI_IRQ] = {
- .desc = "break",
- .handler = sci_br_interrupt,
- },
+ dma_async_issue_pending(chan);
+ return;
- /*
- * Special muxed handler.
- */
- [SCIx_MUX_IRQ] = {
- .desc = "mux",
- .handler = sci_mpxed_interrupt,
- },
-};
+fail:
+ if (i)
+ dmaengine_terminate_all(chan);
+ for (i = 0; i < 2; i++)
+ s->cookie_rx[i] = -EINVAL;
+ s->active_rx = -EINVAL;
+ dev_warn(s->port.dev, "Failed to re-start Rx DMA, using PIO\n");
+ sci_rx_dma_release(s, true);
+}
-static int sci_request_irq(struct sci_port *port)
+static void work_fn_rx(struct work_struct *work)
{
- struct uart_port *up = &port->port;
- int i, j, ret = 0;
+ struct sci_port *s = container_of(work, struct sci_port, work_rx);
+ struct uart_port *port = &s->port;
+ struct dma_async_tx_descriptor *desc;
+ struct dma_tx_state state;
+ enum dma_status status;
+ unsigned long flags;
+ int new;
- for (i = j = 0; i < SCIx_NR_IRQS; i++, j++) {
- const struct sci_irq_desc *desc;
- int irq;
+ spin_lock_irqsave(&port->lock, flags);
+ new = sci_dma_rx_find_active(s);
+ if (new < 0) {
+ spin_unlock_irqrestore(&port->lock, flags);
+ return;
+ }
- if (SCIx_IRQ_IS_MUXED(port)) {
- i = SCIx_MUX_IRQ;
- irq = up->irq;
- } else {
- irq = port->irqs[i];
+ status = dmaengine_tx_status(s->chan_rx, s->active_rx, &state);
+ if (status != DMA_COMPLETE) {
+ /* Handle incomplete DMA receive */
+ struct dma_chan *chan = s->chan_rx;
+ unsigned int read;
+ int count;
- /*
- * Certain port types won't support all of the
- * available interrupt sources.
- */
- if (unlikely(irq < 0))
- continue;
+ dmaengine_terminate_all(chan);
+ read = sg_dma_len(&s->sg_rx[new]) - state.residue;
+ dev_dbg(port->dev, "Read %u bytes with cookie %d\n", read,
+ s->active_rx);
+
+ if (read) {
+ count = sci_dma_rx_push(s, s->rx_buf[new], read);
+ if (count)
+ tty_flip_buffer_push(&port->state->port);
}
- desc = sci_irq_desc + i;
- port->irqstr[j] = kasprintf(GFP_KERNEL, "%s:%s",
- dev_name(up->dev), desc->desc);
- if (!port->irqstr[j])
- goto out_nomem;
+ spin_unlock_irqrestore(&port->lock, flags);
- ret = request_irq(irq, desc->handler, up->irqflags,
- port->irqstr[j], port);
- if (unlikely(ret)) {
- dev_err(up->dev, "Can't allocate %s IRQ\n", desc->desc);
- goto out_noirq;
- }
+ sci_submit_rx(s);
+ return;
}
- return 0;
+ desc = dmaengine_prep_slave_sg(s->chan_rx, &s->sg_rx[new], 1,
+ DMA_DEV_TO_MEM,
+ DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+ if (!desc)
+ goto fail;
-out_noirq:
- while (--i >= 0)
- free_irq(port->irqs[i], port);
+ desc->callback = sci_dma_rx_complete;
+ desc->callback_param = s;
+ s->cookie_rx[new] = dmaengine_submit(desc);
+ if (dma_submit_error(s->cookie_rx[new]))
+ goto fail;
-out_nomem:
- while (--j >= 0)
- kfree(port->irqstr[j]);
+ s->active_rx = s->cookie_rx[!new];
- return ret;
+ dev_dbg(port->dev, "%s: cookie %d #%d, new active cookie %d\n",
+ __func__, s->cookie_rx[new], new, s->active_rx);
+ spin_unlock_irqrestore(&port->lock, flags);
+ return;
+
+fail:
+ spin_unlock_irqrestore(&port->lock, flags);
+ dev_warn(port->dev, "Failed submitting Rx DMA descriptor\n");
+ sci_rx_dma_release(s, true);
}
-static void sci_free_irq(struct sci_port *port)
+static void work_fn_tx(struct work_struct *work)
{
- int i;
+ struct sci_port *s = container_of(work, struct sci_port, work_tx);
+ struct dma_async_tx_descriptor *desc;
+ struct dma_chan *chan = s->chan_tx;
+ struct uart_port *port = &s->port;
+ struct circ_buf *xmit = &port->state->xmit;
+ dma_addr_t buf;
/*
- * Intentionally in reverse order so we iterate over the muxed
- * IRQ first.
+ * DMA is idle now.
+ * Port xmit buffer is already mapped, and it is one page... Just adjust
+ * offsets and lengths. Since it is a circular buffer, we have to
+ * transmit till the end, and then the rest. Take the port lock to get a
+ * consistent xmit buffer state.
*/
- for (i = 0; i < SCIx_NR_IRQS; i++) {
- int irq = port->irqs[i];
+ spin_lock_irq(&port->lock);
+ buf = s->tx_dma_addr + (xmit->tail & (UART_XMIT_SIZE - 1));
+ s->tx_dma_len = min_t(unsigned int,
+ CIRC_CNT(xmit->head, xmit->tail, UART_XMIT_SIZE),
+ CIRC_CNT_TO_END(xmit->head, xmit->tail, UART_XMIT_SIZE));
+ spin_unlock_irq(&port->lock);
- /*
- * Certain port types won't support all of the available
- * interrupt sources.
- */
- if (unlikely(irq < 0))
- continue;
+ desc = dmaengine_prep_slave_single(chan, buf, s->tx_dma_len,
+ DMA_MEM_TO_DEV,
+ DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+ if (!desc) {
+ dev_warn(port->dev, "Failed preparing Tx DMA descriptor\n");
+ /* switch to PIO */
+ sci_tx_dma_release(s, true);
+ return;
+ }
- free_irq(port->irqs[i], port);
- kfree(port->irqstr[i]);
+ dma_sync_single_for_device(chan->device->dev, buf, s->tx_dma_len,
+ DMA_TO_DEVICE);
- if (SCIx_IRQ_IS_MUXED(port)) {
- /* If there's only one IRQ, we're done. */
- return;
- }
+ spin_lock_irq(&port->lock);
+ desc->callback = sci_dma_tx_complete;
+ desc->callback_param = s;
+ spin_unlock_irq(&port->lock);
+ s->cookie_tx = dmaengine_submit(desc);
+ if (dma_submit_error(s->cookie_tx)) {
+ dev_warn(port->dev, "Failed submitting Tx DMA descriptor\n");
+ /* switch to PIO */
+ sci_tx_dma_release(s, true);
+ return;
}
-}
-static unsigned int sci_tx_empty(struct uart_port *port)
-{
- unsigned short status = serial_port_in(port, SCxSR);
- unsigned short in_tx_fifo = sci_txfill(port);
+ dev_dbg(port->dev, "%s: %p: %d...%d, cookie %d\n",
+ __func__, xmit->buf, xmit->tail, xmit->head, s->cookie_tx);
- return (status & SCxSR_TEND(port)) && !in_tx_fifo ? TIOCSER_TEMT : 0;
+ dma_async_issue_pending(chan);
}
-/*
- * Modem control is a bit of a mixed bag for SCI(F) ports. Generally
- * CTS/RTS is supported in hardware by at least one port and controlled
- * via SCSPTR (SCxPCR for SCIFA/B parts), or external pins (presently
- * handled via the ->init_pins() op, which is a bit of a one-way street,
- * lacking any ability to defer pin control -- this will later be
- * converted over to the GPIO framework).
- *
- * Other modes (such as loopback) are supported generically on certain
- * port types, but not others. For these it's sufficient to test for the
- * existence of the support register and simply ignore the port type.
- */
-static void sci_set_mctrl(struct uart_port *port, unsigned int mctrl)
+static bool filter(struct dma_chan *chan, void *slave)
{
- if (mctrl & TIOCM_LOOP) {
- const struct plat_sci_reg *reg;
+ struct sh_dmae_slave *param = slave;
- /*
- * Standard loopback mode for SCFCR ports.
- */
- reg = sci_getreg(port, SCFCR);
- if (reg->size)
- serial_port_out(port, SCFCR,
- serial_port_in(port, SCFCR) |
- SCFCR_LOOP);
- }
+ dev_dbg(chan->device->dev, "%s: slave ID %d\n",
+ __func__, param->shdma_slave.slave_id);
+
+ chan->private = &param->shdma_slave;
+ return true;
}
-static unsigned int sci_get_mctrl(struct uart_port *port)
+static void rx_timer_fn(unsigned long arg)
{
- /*
- * CTS/RTS is handled in hardware when supported, while nothing
- * else is wired up. Keep it simple and simply assert DSR/CAR.
- */
- return TIOCM_DSR | TIOCM_CAR;
+ struct sci_port *s = (struct sci_port *)arg;
+ struct uart_port *port = &s->port;
+ u16 scr = serial_port_in(port, SCSCR);
+
+ if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) {
+ scr &= ~SCSCR_RDRQE;
+ enable_irq(s->irqs[SCIx_RXI_IRQ]);
+ }
+ serial_port_out(port, SCSCR, scr | SCSCR_RIE);
+ dev_dbg(port->dev, "DMA Rx timed out\n");
+ schedule_work(&s->work_rx);
}
-#ifdef CONFIG_SERIAL_SH_SCI_DMA
-static void sci_dma_tx_complete(void *arg)
+static void sci_request_dma(struct uart_port *port)
{
- struct sci_port *s = arg;
- struct uart_port *port = &s->port;
- struct circ_buf *xmit = &port->state->xmit;
- unsigned long flags;
+ struct sci_port *s = to_sci_port(port);
+ struct sh_dmae_slave *param;
+ struct dma_chan *chan;
+ dma_cap_mask_t mask;
- dev_dbg(port->dev, "%s(%d)\n", __func__, port->line);
+ dev_dbg(port->dev, "%s: port %d\n", __func__, port->line);
- spin_lock_irqsave(&port->lock, flags);
+ if (s->cfg->dma_slave_tx <= 0 || s->cfg->dma_slave_rx <= 0)
+ return;
- xmit->tail += s->tx_dma_len;
- xmit->tail &= UART_XMIT_SIZE - 1;
+ dma_cap_zero(mask);
+ dma_cap_set(DMA_SLAVE, mask);
- port->icount.tx += s->tx_dma_len;
+ param = &s->param_tx;
- if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
- uart_write_wakeup(port);
+ /* Slave ID, e.g., SHDMA_SLAVE_SCIF0_TX */
+ param->shdma_slave.slave_id = s->cfg->dma_slave_tx;
- if (!uart_circ_empty(xmit)) {
- s->cookie_tx = 0;
- schedule_work(&s->work_tx);
- } else {
- s->cookie_tx = -EINVAL;
- if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) {
- u16 ctrl = serial_port_in(port, SCSCR);
- serial_port_out(port, SCSCR, ctrl & ~SCSCR_TIE);
+ s->cookie_tx = -EINVAL;
+ chan = dma_request_channel(mask, filter, param);
+ dev_dbg(port->dev, "%s: TX: got channel %p\n", __func__, chan);
+ if (chan) {
+ s->chan_tx = chan;
+ /* UART circular tx buffer is an aligned page. */
+ s->tx_dma_addr = dma_map_single(chan->device->dev,
+ port->state->xmit.buf,
+ UART_XMIT_SIZE,
+ DMA_TO_DEVICE);
+ if (dma_mapping_error(chan->device->dev, s->tx_dma_addr)) {
+ dev_warn(port->dev, "Failed mapping Tx DMA descriptor\n");
+ dma_release_channel(chan);
+ s->chan_tx = NULL;
+ } else {
+ dev_dbg(port->dev, "%s: mapped %lu@%p to %pad\n",
+ __func__, UART_XMIT_SIZE,
+ port->state->xmit.buf, &s->tx_dma_addr);
}
+
+ INIT_WORK(&s->work_tx, work_fn_tx);
}
- spin_unlock_irqrestore(&port->lock, flags);
-}
+ param = &s->param_rx;
-/* Locking: called with port lock held */
-static int sci_dma_rx_push(struct sci_port *s, void *buf, size_t count)
-{
- struct uart_port *port = &s->port;
- struct tty_port *tport = &port->state->port;
- int copied;
+ /* Slave ID, e.g., SHDMA_SLAVE_SCIF0_RX */
+ param->shdma_slave.slave_id = s->cfg->dma_slave_rx;
- copied = tty_insert_flip_string(tport, buf, count);
- if (copied < count) {
- dev_warn(port->dev, "Rx overrun: dropping %zu bytes\n",
- count - copied);
- port->icount.buf_overrun++;
- }
+ chan = dma_request_channel(mask, filter, param);
+ dev_dbg(port->dev, "%s: RX: got channel %p\n", __func__, chan);
+ if (chan) {
+ unsigned int i;
+ dma_addr_t dma;
+ void *buf;
- port->icount.rx += copied;
+ s->chan_rx = chan;
- return copied;
-}
+ s->buf_len_rx = 2 * max_t(size_t, 16, port->fifosize);
+ buf = dma_alloc_coherent(chan->device->dev, s->buf_len_rx * 2,
+ &dma, GFP_KERNEL);
+ if (!buf) {
+ dev_warn(port->dev,
+ "Failed to allocate Rx dma buffer, using PIO\n");
+ dma_release_channel(chan);
+ s->chan_rx = NULL;
+ sci_start_rx(port);
+ return;
+ }
-static int sci_dma_rx_find_active(struct sci_port *s)
-{
- unsigned int i;
+ for (i = 0; i < 2; i++) {
+ struct scatterlist *sg = &s->sg_rx[i];
- for (i = 0; i < ARRAY_SIZE(s->cookie_rx); i++)
- if (s->active_rx == s->cookie_rx[i])
- return i;
+ sg_init_table(sg, 1);
+ s->rx_buf[i] = buf;
+ sg_dma_address(sg) = dma;
+ sg->length = s->buf_len_rx;
- dev_err(s->port.dev, "%s: Rx cookie %d not found!\n", __func__,
- s->active_rx);
- return -1;
+ buf += s->buf_len_rx;
+ dma += s->buf_len_rx;
+ }
+
+ INIT_WORK(&s->work_rx, work_fn_rx);
+ setup_timer(&s->rx_timer, rx_timer_fn, (unsigned long)s);
+
+ sci_submit_rx(s);
+ }
}
-static void sci_dma_rx_complete(void *arg)
+static void sci_free_dma(struct uart_port *port)
{
- struct sci_port *s = arg;
- struct uart_port *port = &s->port;
- unsigned long flags;
- int active, count = 0;
+ struct sci_port *s = to_sci_port(port);
- dev_dbg(port->dev, "%s(%d) active cookie %d\n", __func__, port->line,
- s->active_rx);
+ if (s->chan_tx)
+ sci_tx_dma_release(s, false);
+ if (s->chan_rx)
+ sci_rx_dma_release(s, false);
+}
+#else
+static inline void sci_request_dma(struct uart_port *port)
+{
+}
- spin_lock_irqsave(&port->lock, flags);
+static inline void sci_free_dma(struct uart_port *port)
+{
+}
+#endif
- active = sci_dma_rx_find_active(s);
- if (active >= 0)
- count = sci_dma_rx_push(s, s->rx_buf[active], s->buf_len_rx);
+static irqreturn_t sci_rx_interrupt(int irq, void *ptr)
+{
+#ifdef CONFIG_SERIAL_SH_SCI_DMA
+ struct uart_port *port = ptr;
+ struct sci_port *s = to_sci_port(port);
- mod_timer(&s->rx_timer, jiffies + s->rx_timeout);
+ if (s->chan_rx) {
+ u16 scr = serial_port_in(port, SCSCR);
+ u16 ssr = serial_port_in(port, SCxSR);
- spin_unlock_irqrestore(&port->lock, flags);
+ /* Disable future Rx interrupts */
+ if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) {
+ disable_irq_nosync(irq);
+ scr |= SCSCR_RDRQE;
+ } else {
+ scr &= ~SCSCR_RIE;
+ }
+ serial_port_out(port, SCSCR, scr);
+ /* Clear current interrupt */
+ serial_port_out(port, SCxSR,
+ ssr & ~(SCIF_DR | SCxSR_RDxF(port)));
+ dev_dbg(port->dev, "Rx IRQ %lu: setup t-out in %u jiffies\n",
+ jiffies, s->rx_timeout);
+ mod_timer(&s->rx_timer, jiffies + s->rx_timeout);
- if (count)
- tty_flip_buffer_push(&port->state->port);
+ return IRQ_HANDLED;
+ }
+#endif
- schedule_work(&s->work_rx);
+ /* I think sci_receive_chars has to be called irrespective
+ * of whether the I_IXOFF is set, otherwise, how is the interrupt
+ * to be disabled?
+ */
+ sci_receive_chars(ptr);
+
+ return IRQ_HANDLED;
}
-static void sci_rx_dma_release(struct sci_port *s, bool enable_pio)
+static irqreturn_t sci_tx_interrupt(int irq, void *ptr)
{
- struct dma_chan *chan = s->chan_rx;
- struct uart_port *port = &s->port;
+ struct uart_port *port = ptr;
unsigned long flags;
spin_lock_irqsave(&port->lock, flags);
- s->chan_rx = NULL;
- s->cookie_rx[0] = s->cookie_rx[1] = -EINVAL;
+ sci_transmit_chars(port);
spin_unlock_irqrestore(&port->lock, flags);
- dmaengine_terminate_all(chan);
- dma_free_coherent(chan->device->dev, s->buf_len_rx * 2, s->rx_buf[0],
- sg_dma_address(&s->sg_rx[0]));
- dma_release_channel(chan);
- if (enable_pio)
- sci_start_rx(port);
+
+ return IRQ_HANDLED;
}
-static void sci_tx_dma_release(struct sci_port *s, bool enable_pio)
+static irqreturn_t sci_er_interrupt(int irq, void *ptr)
{
- struct dma_chan *chan = s->chan_tx;
- struct uart_port *port = &s->port;
- unsigned long flags;
+ struct uart_port *port = ptr;
+ struct sci_port *s = to_sci_port(port);
- spin_lock_irqsave(&port->lock, flags);
- s->chan_tx = NULL;
- s->cookie_tx = -EINVAL;
- spin_unlock_irqrestore(&port->lock, flags);
- dmaengine_terminate_all(chan);
- dma_unmap_single(chan->device->dev, s->tx_dma_addr, UART_XMIT_SIZE,
- DMA_TO_DEVICE);
- dma_release_channel(chan);
- if (enable_pio)
- sci_start_tx(port);
+ /* Handle errors */
+ if (port->type == PORT_SCI) {
+ if (sci_handle_errors(port)) {
+ /* discard character in rx buffer */
+ serial_port_in(port, SCxSR);
+ sci_clear_SCxSR(port, SCxSR_RDxF_CLEAR(port));
+ }
+ } else {
+ sci_handle_fifo_overrun(port);
+ if (!s->chan_rx)
+ sci_receive_chars(ptr);
+ }
+
+ sci_clear_SCxSR(port, SCxSR_ERROR_CLEAR(port));
+
+ /* Kick the transmission */
+ if (!s->chan_tx)
+ sci_tx_interrupt(irq, ptr);
+
+ return IRQ_HANDLED;
}
-static void sci_submit_rx(struct sci_port *s)
+static irqreturn_t sci_br_interrupt(int irq, void *ptr)
{
- struct dma_chan *chan = s->chan_rx;
- int i;
+ struct uart_port *port = ptr;
- for (i = 0; i < 2; i++) {
- struct scatterlist *sg = &s->sg_rx[i];
- struct dma_async_tx_descriptor *desc;
+ /* Handle BREAKs */
+ sci_handle_breaks(port);
+ sci_clear_SCxSR(port, SCxSR_BREAK_CLEAR(port));
- desc = dmaengine_prep_slave_sg(chan,
- sg, 1, DMA_DEV_TO_MEM,
- DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
- if (!desc)
- goto fail;
+ return IRQ_HANDLED;
+}
- desc->callback = sci_dma_rx_complete;
- desc->callback_param = s;
- s->cookie_rx[i] = dmaengine_submit(desc);
- if (dma_submit_error(s->cookie_rx[i]))
- goto fail;
+static irqreturn_t sci_mpxed_interrupt(int irq, void *ptr)
+{
+ unsigned short ssr_status, scr_status, err_enabled, orer_status = 0;
+ struct uart_port *port = ptr;
+ struct sci_port *s = to_sci_port(port);
+ irqreturn_t ret = IRQ_NONE;
- dev_dbg(s->port.dev, "%s(): cookie %d to #%d\n", __func__,
- s->cookie_rx[i], i);
+ ssr_status = serial_port_in(port, SCxSR);
+ scr_status = serial_port_in(port, SCSCR);
+ if (s->overrun_reg == SCxSR)
+ orer_status = ssr_status;
+ else {
+ if (sci_getreg(port, s->overrun_reg)->size)
+ orer_status = serial_port_in(port, s->overrun_reg);
}
- s->active_rx = s->cookie_rx[0];
+ err_enabled = scr_status & port_rx_irq_mask(port);
- dma_async_issue_pending(chan);
- return;
+ /* Tx Interrupt */
+ if ((ssr_status & SCxSR_TDxE(port)) && (scr_status & SCSCR_TIE) &&
+ !s->chan_tx)
+ ret = sci_tx_interrupt(irq, ptr);
-fail:
- if (i)
- dmaengine_terminate_all(chan);
- for (i = 0; i < 2; i++)
- s->cookie_rx[i] = -EINVAL;
- s->active_rx = -EINVAL;
- dev_warn(s->port.dev, "Failed to re-start Rx DMA, using PIO\n");
- sci_rx_dma_release(s, true);
-}
+ /*
+ * Rx Interrupt: if we're using DMA, the DMA controller clears RDF /
+ * DR flags
+ */
+ if (((ssr_status & SCxSR_RDxF(port)) || s->chan_rx) &&
+ (scr_status & SCSCR_RIE))
+ ret = sci_rx_interrupt(irq, ptr);
-static void work_fn_rx(struct work_struct *work)
-{
- struct sci_port *s = container_of(work, struct sci_port, work_rx);
- struct uart_port *port = &s->port;
- struct dma_async_tx_descriptor *desc;
- struct dma_tx_state state;
- enum dma_status status;
- unsigned long flags;
- int new;
+ /* Error Interrupt */
+ if ((ssr_status & SCxSR_ERRORS(port)) && err_enabled)
+ ret = sci_er_interrupt(irq, ptr);
- spin_lock_irqsave(&port->lock, flags);
- new = sci_dma_rx_find_active(s);
- if (new < 0) {
- spin_unlock_irqrestore(&port->lock, flags);
- return;
+ /* Break Interrupt */
+ if ((ssr_status & SCxSR_BRK(port)) && err_enabled)
+ ret = sci_br_interrupt(irq, ptr);
+
+ /* Overrun Interrupt */
+ if (orer_status & s->overrun_mask) {
+ sci_handle_fifo_overrun(port);
+ ret = IRQ_HANDLED;
}
- status = dmaengine_tx_status(s->chan_rx, s->active_rx, &state);
- if (status != DMA_COMPLETE) {
- /* Handle incomplete DMA receive */
- struct dma_chan *chan = s->chan_rx;
- unsigned int read;
- int count;
+ return ret;
+}
- dmaengine_terminate_all(chan);
- read = sg_dma_len(&s->sg_rx[new]) - state.residue;
- dev_dbg(port->dev, "Read %u bytes with cookie %d\n", read,
- s->active_rx);
+/*
+ * Here we define a transition notifier so that we can update all of our
+ * ports' baud rate when the peripheral clock changes.
+ */
+static int sci_notifier(struct notifier_block *self,
+ unsigned long phase, void *p)
+{
+ struct sci_port *sci_port;
+ unsigned long flags;
- if (read) {
- count = sci_dma_rx_push(s, s->rx_buf[new], read);
- if (count)
- tty_flip_buffer_push(&port->state->port);
- }
+ sci_port = container_of(self, struct sci_port, freq_transition);
- spin_unlock_irqrestore(&port->lock, flags);
+ if (phase == CPUFREQ_POSTCHANGE) {
+ struct uart_port *port = &sci_port->port;
- sci_submit_rx(s);
- return;
+ spin_lock_irqsave(&port->lock, flags);
+ port->uartclk = clk_get_rate(sci_port->iclk);
+ spin_unlock_irqrestore(&port->lock, flags);
}
- desc = dmaengine_prep_slave_sg(s->chan_rx, &s->sg_rx[new], 1,
- DMA_DEV_TO_MEM,
- DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
- if (!desc)
- goto fail;
-
- desc->callback = sci_dma_rx_complete;
- desc->callback_param = s;
- s->cookie_rx[new] = dmaengine_submit(desc);
- if (dma_submit_error(s->cookie_rx[new]))
- goto fail;
+ return NOTIFY_OK;
+}
- s->active_rx = s->cookie_rx[!new];
+static const struct sci_irq_desc {
+ const char *desc;
+ irq_handler_t handler;
+} sci_irq_desc[] = {
+ /*
+ * Split out handlers, the default case.
+ */
+ [SCIx_ERI_IRQ] = {
+ .desc = "rx err",
+ .handler = sci_er_interrupt,
+ },
- dev_dbg(port->dev, "%s: cookie %d #%d, new active cookie %d\n",
- __func__, s->cookie_rx[new], new, s->active_rx);
- spin_unlock_irqrestore(&port->lock, flags);
- return;
+ [SCIx_RXI_IRQ] = {
+ .desc = "rx full",
+ .handler = sci_rx_interrupt,
+ },
-fail:
- spin_unlock_irqrestore(&port->lock, flags);
- dev_warn(port->dev, "Failed submitting Rx DMA descriptor\n");
- sci_rx_dma_release(s, true);
-}
+ [SCIx_TXI_IRQ] = {
+ .desc = "tx empty",
+ .handler = sci_tx_interrupt,
+ },
-static void work_fn_tx(struct work_struct *work)
-{
- struct sci_port *s = container_of(work, struct sci_port, work_tx);
- struct dma_async_tx_descriptor *desc;
- struct dma_chan *chan = s->chan_tx;
- struct uart_port *port = &s->port;
- struct circ_buf *xmit = &port->state->xmit;
- dma_addr_t buf;
+ [SCIx_BRI_IRQ] = {
+ .desc = "break",
+ .handler = sci_br_interrupt,
+ },
/*
- * DMA is idle now.
- * Port xmit buffer is already mapped, and it is one page... Just adjust
- * offsets and lengths. Since it is a circular buffer, we have to
- * transmit till the end, and then the rest. Take the port lock to get a
- * consistent xmit buffer state.
+ * Special muxed handler.
*/
- spin_lock_irq(&port->lock);
- buf = s->tx_dma_addr + (xmit->tail & (UART_XMIT_SIZE - 1));
- s->tx_dma_len = min_t(unsigned int,
- CIRC_CNT(xmit->head, xmit->tail, UART_XMIT_SIZE),
- CIRC_CNT_TO_END(xmit->head, xmit->tail, UART_XMIT_SIZE));
- spin_unlock_irq(&port->lock);
+ [SCIx_MUX_IRQ] = {
+ .desc = "mux",
+ .handler = sci_mpxed_interrupt,
+ },
+};
- desc = dmaengine_prep_slave_single(chan, buf, s->tx_dma_len,
- DMA_MEM_TO_DEV,
- DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
- if (!desc) {
- dev_warn(port->dev, "Failed preparing Tx DMA descriptor\n");
- /* switch to PIO */
- sci_tx_dma_release(s, true);
- return;
- }
+static int sci_request_irq(struct sci_port *port)
+{
+ struct uart_port *up = &port->port;
+ int i, j, ret = 0;
- dma_sync_single_for_device(chan->device->dev, buf, s->tx_dma_len,
- DMA_TO_DEVICE);
+ for (i = j = 0; i < SCIx_NR_IRQS; i++, j++) {
+ const struct sci_irq_desc *desc;
+ int irq;
- spin_lock_irq(&port->lock);
- desc->callback = sci_dma_tx_complete;
- desc->callback_param = s;
- spin_unlock_irq(&port->lock);
- s->cookie_tx = dmaengine_submit(desc);
- if (dma_submit_error(s->cookie_tx)) {
- dev_warn(port->dev, "Failed submitting Tx DMA descriptor\n");
- /* switch to PIO */
- sci_tx_dma_release(s, true);
- return;
+ if (SCIx_IRQ_IS_MUXED(port)) {
+ i = SCIx_MUX_IRQ;
+ irq = up->irq;
+ } else {
+ irq = port->irqs[i];
+
+ /*
+ * Certain port types won't support all of the
+ * available interrupt sources.
+ */
+ if (unlikely(irq < 0))
+ continue;
+ }
+
+ desc = sci_irq_desc + i;
+ port->irqstr[j] = kasprintf(GFP_KERNEL, "%s:%s",
+ dev_name(up->dev), desc->desc);
+ if (!port->irqstr[j])
+ goto out_nomem;
+
+ ret = request_irq(irq, desc->handler, up->irqflags,
+ port->irqstr[j], port);
+ if (unlikely(ret)) {
+ dev_err(up->dev, "Can't allocate %s IRQ\n", desc->desc);
+ goto out_noirq;
+ }
}
- dev_dbg(port->dev, "%s: %p: %d...%d, cookie %d\n",
- __func__, xmit->buf, xmit->tail, xmit->head, s->cookie_tx);
-
- dma_async_issue_pending(chan);
-}
-#endif
-
-static void sci_start_tx(struct uart_port *port)
-{
- struct sci_port *s = to_sci_port(port);
- unsigned short ctrl;
+ return 0;
-#ifdef CONFIG_SERIAL_SH_SCI_DMA
- if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) {
- u16 new, scr = serial_port_in(port, SCSCR);
- if (s->chan_tx)
- new = scr | SCSCR_TDRQE;
- else
- new = scr & ~SCSCR_TDRQE;
- if (new != scr)
- serial_port_out(port, SCSCR, new);
- }
+out_noirq:
+ while (--i >= 0)
+ free_irq(port->irqs[i], port);
- if (s->chan_tx && !uart_circ_empty(&s->port.state->xmit) &&
- dma_submit_error(s->cookie_tx)) {
- s->cookie_tx = 0;
- schedule_work(&s->work_tx);
- }
-#endif
+out_nomem:
+ while (--j >= 0)
+ kfree(port->irqstr[j]);
- if (!s->chan_tx || port->type == PORT_SCIFA || port->type == PORT_SCIFB) {
- /* Set TIE (Transmit Interrupt Enable) bit in SCSCR */
- ctrl = serial_port_in(port, SCSCR);
- serial_port_out(port, SCSCR, ctrl | SCSCR_TIE);
- }
+ return ret;
}
-static void sci_stop_tx(struct uart_port *port)
+static void sci_free_irq(struct sci_port *port)
{
- unsigned short ctrl;
+ int i;
- /* Clear TIE (Transmit Interrupt Enable) bit in SCSCR */
- ctrl = serial_port_in(port, SCSCR);
+ /*
+ * Intentionally in reverse order so we iterate over the muxed
+ * IRQ first.
+ */
+ for (i = 0; i < SCIx_NR_IRQS; i++) {
+ int irq = port->irqs[i];
- if (port->type == PORT_SCIFA || port->type == PORT_SCIFB)
- ctrl &= ~SCSCR_TDRQE;
+ /*
+ * Certain port types won't support all of the available
+ * interrupt sources.
+ */
+ if (unlikely(irq < 0))
+ continue;
- ctrl &= ~SCSCR_TIE;
+ free_irq(port->irqs[i], port);
+ kfree(port->irqstr[i]);
- serial_port_out(port, SCSCR, ctrl);
+ if (SCIx_IRQ_IS_MUXED(port)) {
+ /* If there's only one IRQ, we're done. */
+ return;
+ }
+ }
}
-static void sci_start_rx(struct uart_port *port)
+static unsigned int sci_tx_empty(struct uart_port *port)
{
- unsigned short ctrl;
-
- ctrl = serial_port_in(port, SCSCR) | port_rx_irq_mask(port);
-
- if (port->type == PORT_SCIFA || port->type == PORT_SCIFB)
- ctrl &= ~SCSCR_RDRQE;
+ unsigned short status = serial_port_in(port, SCxSR);
+ unsigned short in_tx_fifo = sci_txfill(port);
- serial_port_out(port, SCSCR, ctrl);
+ return (status & SCxSR_TEND(port)) && !in_tx_fifo ? TIOCSER_TEMT : 0;
}
-static void sci_stop_rx(struct uart_port *port)
+/*
+ * Modem control is a bit of a mixed bag for SCI(F) ports. Generally
+ * CTS/RTS is supported in hardware by at least one port and controlled
+ * via SCSPTR (SCxPCR for SCIFA/B parts), or external pins (presently
+ * handled via the ->init_pins() op, which is a bit of a one-way street,
+ * lacking any ability to defer pin control -- this will later be
+ * converted over to the GPIO framework).
+ *
+ * Other modes (such as loopback) are supported generically on certain
+ * port types, but not others. For these it's sufficient to test for the
+ * existence of the support register and simply ignore the port type.
+ */
+static void sci_set_mctrl(struct uart_port *port, unsigned int mctrl)
{
- unsigned short ctrl;
-
- ctrl = serial_port_in(port, SCSCR);
-
- if (port->type == PORT_SCIFA || port->type == PORT_SCIFB)
- ctrl &= ~SCSCR_RDRQE;
+ if (mctrl & TIOCM_LOOP) {
+ const struct plat_sci_reg *reg;
- ctrl &= ~port_rx_irq_mask(port);
+ /*
+ * Standard loopback mode for SCFCR ports.
+ */
+ reg = sci_getreg(port, SCFCR);
+ if (reg->size)
+ serial_port_out(port, SCFCR,
+ serial_port_in(port, SCFCR) |
+ SCFCR_LOOP);
+ }
+}
- serial_port_out(port, SCSCR, ctrl);
+static unsigned int sci_get_mctrl(struct uart_port *port)
+{
+ /*
+ * CTS/RTS is handled in hardware when supported, while nothing
+ * else is wired up. Keep it simple and simply assert DSR/CAR.
+ */
+ return TIOCM_DSR | TIOCM_CAR;
}
static void sci_break_ctl(struct uart_port *port, int break_state)
@@ -1660,140 +1787,6 @@ static void sci_break_ctl(struct uart_port *port, int break_state)
serial_port_out(port, SCSCR, scscr);
}
-#ifdef CONFIG_SERIAL_SH_SCI_DMA
-static bool filter(struct dma_chan *chan, void *slave)
-{
- struct sh_dmae_slave *param = slave;
-
- dev_dbg(chan->device->dev, "%s: slave ID %d\n",
- __func__, param->shdma_slave.slave_id);
-
- chan->private = &param->shdma_slave;
- return true;
-}
-
-static void rx_timer_fn(unsigned long arg)
-{
- struct sci_port *s = (struct sci_port *)arg;
- struct uart_port *port = &s->port;
- u16 scr = serial_port_in(port, SCSCR);
-
- if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) {
- scr &= ~SCSCR_RDRQE;
- enable_irq(s->irqs[SCIx_RXI_IRQ]);
- }
- serial_port_out(port, SCSCR, scr | SCSCR_RIE);
- dev_dbg(port->dev, "DMA Rx timed out\n");
- schedule_work(&s->work_rx);
-}
-
-static void sci_request_dma(struct uart_port *port)
-{
- struct sci_port *s = to_sci_port(port);
- struct sh_dmae_slave *param;
- struct dma_chan *chan;
- dma_cap_mask_t mask;
-
- dev_dbg(port->dev, "%s: port %d\n", __func__, port->line);
-
- if (s->cfg->dma_slave_tx <= 0 || s->cfg->dma_slave_rx <= 0)
- return;
-
- dma_cap_zero(mask);
- dma_cap_set(DMA_SLAVE, mask);
-
- param = &s->param_tx;
-
- /* Slave ID, e.g., SHDMA_SLAVE_SCIF0_TX */
- param->shdma_slave.slave_id = s->cfg->dma_slave_tx;
-
- s->cookie_tx = -EINVAL;
- chan = dma_request_channel(mask, filter, param);
- dev_dbg(port->dev, "%s: TX: got channel %p\n", __func__, chan);
- if (chan) {
- s->chan_tx = chan;
- /* UART circular tx buffer is an aligned page. */
- s->tx_dma_addr = dma_map_single(chan->device->dev,
- port->state->xmit.buf,
- UART_XMIT_SIZE,
- DMA_TO_DEVICE);
- if (dma_mapping_error(chan->device->dev, s->tx_dma_addr)) {
- dev_warn(port->dev, "Failed mapping Tx DMA descriptor\n");
- dma_release_channel(chan);
- s->chan_tx = NULL;
- } else {
- dev_dbg(port->dev, "%s: mapped %lu@%p to %pad\n",
- __func__, UART_XMIT_SIZE,
- port->state->xmit.buf, &s->tx_dma_addr);
- }
-
- INIT_WORK(&s->work_tx, work_fn_tx);
- }
-
- param = &s->param_rx;
-
- /* Slave ID, e.g., SHDMA_SLAVE_SCIF0_RX */
- param->shdma_slave.slave_id = s->cfg->dma_slave_rx;
-
- chan = dma_request_channel(mask, filter, param);
- dev_dbg(port->dev, "%s: RX: got channel %p\n", __func__, chan);
- if (chan) {
- unsigned int i;
- dma_addr_t dma;
- void *buf;
-
- s->chan_rx = chan;
-
- s->buf_len_rx = 2 * max_t(size_t, 16, port->fifosize);
- buf = dma_alloc_coherent(chan->device->dev, s->buf_len_rx * 2,
- &dma, GFP_KERNEL);
- if (!buf) {
- dev_warn(port->dev,
- "Failed to allocate Rx dma buffer, using PIO\n");
- dma_release_channel(chan);
- s->chan_rx = NULL;
- sci_start_rx(port);
- return;
- }
-
- for (i = 0; i < 2; i++) {
- struct scatterlist *sg = &s->sg_rx[i];
-
- sg_init_table(sg, 1);
- s->rx_buf[i] = buf;
- sg_dma_address(sg) = dma;
- sg->length = s->buf_len_rx;
-
- buf += s->buf_len_rx;
- dma += s->buf_len_rx;
- }
-
- INIT_WORK(&s->work_rx, work_fn_rx);
- setup_timer(&s->rx_timer, rx_timer_fn, (unsigned long)s);
-
- sci_submit_rx(s);
- }
-}
-
-static void sci_free_dma(struct uart_port *port)
-{
- struct sci_port *s = to_sci_port(port);
-
- if (s->chan_tx)
- sci_tx_dma_release(s, false);
- if (s->chan_rx)
- sci_rx_dma_release(s, false);
-}
-#else
-static inline void sci_request_dma(struct uart_port *port)
-{
-}
-
-static inline void sci_free_dma(struct uart_port *port)
-{
-}
-#endif
-
static int sci_startup(struct uart_port *port)
{
struct sci_port *s = to_sci_port(port);
--
1.9.1
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH v4 02/10] serial: sh-sci: Get rid of the workqueue to handle receive DMA requests
2015-09-18 11:08 [PATCH v4 00/10] serial: sh-sci: Add DT DMA support Geert Uytterhoeven
2015-09-18 11:08 ` [PATCH v4 01/10] serial: sh-sci: Shuffle functions around Geert Uytterhoeven
@ 2015-09-18 11:08 ` Geert Uytterhoeven
2015-09-18 11:08 ` [PATCH v4 03/10] serial: sh-sci: Submit RX DMA from RX interrupt on (H)SCIF Geert Uytterhoeven
` (7 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Geert Uytterhoeven @ 2015-09-18 11:08 UTC (permalink / raw)
To: Greg Kroah-Hartman, Jiri Slaby
Cc: Muhammad Hamza Farooq, Magnus Damm, Yoshihiro Shimoda,
Laurent Pinchart, Nobuhiro Iwamatsu, Yoshihiro Kaneko,
Kazuya Mizuguchi, Koji Matsuoka, Wolfram Sang,
Guennadi Liakhovetski, linux-serial, linux-sh, Geert Uytterhoeven
The receive DMA workqueue function work_fn_rx() handles two things:
1. Reception of a full buffer on completion of a receive DMA request,
2. Reception of a partial buffer on receive DMA time-out.
The workqueue is kicked by both the receive DMA completion handler, and
by a timer to handle DMA time-out.
As there are always two receive DMA requests active, it's possible that
the receive DMA completion handler is called a second time before the
workqueue function runs.
As the time-out handler re-enables the receive interrupt, an interrupt
may come in before time-out has been fully handled.
Move part 1 into the receive DMA completion handler, and move part 2
into the receive DMA time-out handler, to fix these race conditions.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
---
v4:
- Dropped RFC status,
- Rebased on top of "[PATCH] serial: sh-sci: Shuffle functions
around", hence it's no longer needed to move sci_rx_dma_release()
up,
v3:
- New.
---
drivers/tty/serial/sh-sci.c | 135 ++++++++++++++++++++------------------------
1 file changed, 61 insertions(+), 74 deletions(-)
diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
index 7d8b2644e06d4b8c..eb2b369b1cf1be0b 100644
--- a/drivers/tty/serial/sh-sci.c
+++ b/drivers/tty/serial/sh-sci.c
@@ -115,7 +115,6 @@ struct sci_port {
struct sh_dmae_slave param_tx;
struct sh_dmae_slave param_rx;
struct work_struct work_tx;
- struct work_struct work_rx;
struct timer_list rx_timer;
unsigned int rx_timeout;
#endif
@@ -1106,6 +1105,7 @@ static void sci_dma_rx_complete(void *arg)
{
struct sci_port *s = arg;
struct uart_port *port = &s->port;
+ struct dma_async_tx_descriptor *desc;
unsigned long flags;
int active, count = 0;
@@ -1120,12 +1120,32 @@ static void sci_dma_rx_complete(void *arg)
mod_timer(&s->rx_timer, jiffies + s->rx_timeout);
- spin_unlock_irqrestore(&port->lock, flags);
-
if (count)
tty_flip_buffer_push(&port->state->port);
- schedule_work(&s->work_rx);
+ desc = dmaengine_prep_slave_sg(s->chan_rx, &s->sg_rx[active], 1,
+ DMA_DEV_TO_MEM,
+ DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+ if (!desc)
+ goto fail;
+
+ desc->callback = sci_dma_rx_complete;
+ desc->callback_param = s;
+ s->cookie_rx[active] = dmaengine_submit(desc);
+ if (dma_submit_error(s->cookie_rx[active]))
+ goto fail;
+
+ s->active_rx = s->cookie_rx[!active];
+
+ dev_dbg(port->dev, "%s: cookie %d #%d, new active cookie %d\n",
+ __func__, s->cookie_rx[active], active, s->active_rx);
+ spin_unlock_irqrestore(&port->lock, flags);
+ return;
+
+fail:
+ spin_unlock_irqrestore(&port->lock, flags);
+ dev_warn(port->dev, "Failed submitting Rx DMA descriptor\n");
+ sci_rx_dma_release(s, true);
}
static void sci_tx_dma_release(struct sci_port *s, bool enable_pio)
@@ -1186,72 +1206,6 @@ fail:
sci_rx_dma_release(s, true);
}
-static void work_fn_rx(struct work_struct *work)
-{
- struct sci_port *s = container_of(work, struct sci_port, work_rx);
- struct uart_port *port = &s->port;
- struct dma_async_tx_descriptor *desc;
- struct dma_tx_state state;
- enum dma_status status;
- unsigned long flags;
- int new;
-
- spin_lock_irqsave(&port->lock, flags);
- new = sci_dma_rx_find_active(s);
- if (new < 0) {
- spin_unlock_irqrestore(&port->lock, flags);
- return;
- }
-
- status = dmaengine_tx_status(s->chan_rx, s->active_rx, &state);
- if (status != DMA_COMPLETE) {
- /* Handle incomplete DMA receive */
- struct dma_chan *chan = s->chan_rx;
- unsigned int read;
- int count;
-
- dmaengine_terminate_all(chan);
- read = sg_dma_len(&s->sg_rx[new]) - state.residue;
- dev_dbg(port->dev, "Read %u bytes with cookie %d\n", read,
- s->active_rx);
-
- if (read) {
- count = sci_dma_rx_push(s, s->rx_buf[new], read);
- if (count)
- tty_flip_buffer_push(&port->state->port);
- }
-
- spin_unlock_irqrestore(&port->lock, flags);
-
- sci_submit_rx(s);
- return;
- }
-
- desc = dmaengine_prep_slave_sg(s->chan_rx, &s->sg_rx[new], 1,
- DMA_DEV_TO_MEM,
- DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
- if (!desc)
- goto fail;
-
- desc->callback = sci_dma_rx_complete;
- desc->callback_param = s;
- s->cookie_rx[new] = dmaengine_submit(desc);
- if (dma_submit_error(s->cookie_rx[new]))
- goto fail;
-
- s->active_rx = s->cookie_rx[!new];
-
- dev_dbg(port->dev, "%s: cookie %d #%d, new active cookie %d\n",
- __func__, s->cookie_rx[new], new, s->active_rx);
- spin_unlock_irqrestore(&port->lock, flags);
- return;
-
-fail:
- spin_unlock_irqrestore(&port->lock, flags);
- dev_warn(port->dev, "Failed submitting Rx DMA descriptor\n");
- sci_rx_dma_release(s, true);
-}
-
static void work_fn_tx(struct work_struct *work)
{
struct sci_port *s = container_of(work, struct sci_port, work_tx);
@@ -1321,15 +1275,49 @@ static void rx_timer_fn(unsigned long arg)
{
struct sci_port *s = (struct sci_port *)arg;
struct uart_port *port = &s->port;
- u16 scr = serial_port_in(port, SCSCR);
+ struct dma_tx_state state;
+ enum dma_status status;
+ unsigned long flags;
+ unsigned int read;
+ int active, count;
+ u16 scr;
+
+ spin_lock_irqsave(&port->lock, flags);
+ dev_dbg(port->dev, "DMA Rx timed out\n");
+ scr = serial_port_in(port, SCSCR);
if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) {
scr &= ~SCSCR_RDRQE;
enable_irq(s->irqs[SCIx_RXI_IRQ]);
}
serial_port_out(port, SCSCR, scr | SCSCR_RIE);
- dev_dbg(port->dev, "DMA Rx timed out\n");
- schedule_work(&s->work_rx);
+
+ active = sci_dma_rx_find_active(s);
+ if (active < 0) {
+ spin_unlock_irqrestore(&port->lock, flags);
+ return;
+ }
+
+ status = dmaengine_tx_status(s->chan_rx, s->active_rx, &state);
+ if (status == DMA_COMPLETE)
+ dev_dbg(port->dev, "Cookie %d #%d has already completed\n",
+ s->active_rx, active);
+
+ /* Handle incomplete DMA receive */
+ dmaengine_terminate_all(s->chan_rx);
+ read = sg_dma_len(&s->sg_rx[active]) - state.residue;
+ dev_dbg(port->dev, "Read %u bytes with cookie %d\n", read,
+ s->active_rx);
+
+ if (read) {
+ count = sci_dma_rx_push(s, s->rx_buf[active], read);
+ if (count)
+ tty_flip_buffer_push(&port->state->port);
+ }
+
+ spin_unlock_irqrestore(&port->lock, flags);
+
+ sci_submit_rx(s);
}
static void sci_request_dma(struct uart_port *port)
@@ -1413,7 +1401,6 @@ static void sci_request_dma(struct uart_port *port)
dma += s->buf_len_rx;
}
- INIT_WORK(&s->work_rx, work_fn_rx);
setup_timer(&s->rx_timer, rx_timer_fn, (unsigned long)s);
sci_submit_rx(s);
--
1.9.1
* [PATCH v4 03/10] serial: sh-sci: Submit RX DMA from RX interrupt on (H)SCIF
2015-09-18 11:08 [PATCH v4 00/10] serial: sh-sci: Add DT DMA support Geert Uytterhoeven
2015-09-18 11:08 ` [PATCH v4 01/10] serial: sh-sci: Shuffle functions around Geert Uytterhoeven
2015-09-18 11:08 ` [PATCH v4 02/10] serial: sh-sci: Get rid of the workqueue to handle receive DMA requests Geert Uytterhoeven
@ 2015-09-18 11:08 ` Geert Uytterhoeven
2015-09-18 11:08 ` [PATCH v4 04/10] serial: sh-sci: Stop calling sci_start_rx() from sci_request_dma() Geert Uytterhoeven
` (6 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Geert Uytterhoeven @ 2015-09-18 11:08 UTC (permalink / raw)
To: Greg Kroah-Hartman, Jiri Slaby
Cc: Muhammad Hamza Farooq, Magnus Damm, Yoshihiro Shimoda,
Laurent Pinchart, Nobuhiro Iwamatsu, Yoshihiro Kaneko,
Kazuya Mizuguchi, Koji Matsuoka, Wolfram Sang,
Guennadi Liakhovetski, linux-serial, linux-sh, Geert Uytterhoeven
For DMA receive requests, the driver is only notified by DMA completion
after the whole DMA request has been transferred. If less data is
received, it will stay stuck until more data arrives. The driver
handles this by setting up a timer handler from the receive interrupt,
after reception of the first character.
Unlike SCIFA and SCIFB, SCIF and HSCIF don't issue receive interrupts on
reception of individual characters if a receive DMA request is in
progress, so the timer is never set up.
To fix receive DMA on SCIF and HSCIF, submit the receive DMA request
from the receive interrupt handler instead.
In some sense this is similar to the SCIFA/SCIFB behavior, where the
RDRQE (Rx Data Transfer Request Enable) bit is also set from the receive
interrupt handler.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
---
v4:
- Dropped RFC status,
- Rebased on top of "[PATCH] serial: sh-sci: Shuffle functions
around", hence the forward declaration of sci_submit_rx() is no
longer needed, and the declared-but-never-defined compiler warning
if CONFIG_SERIAL_SH_SCI_DMA=n is gone.
v3:
- New, this replaces the one-byte DMA transfer from "[PATCH/RFC v2
28/29] serial: sh-sci: Add (H)SCIF DMA support".
---
drivers/tty/serial/sh-sci.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
index eb2b369b1cf1be0b..02aaf4d213d9c280 100644
--- a/drivers/tty/serial/sh-sci.c
+++ b/drivers/tty/serial/sh-sci.c
@@ -1317,7 +1317,8 @@ static void rx_timer_fn(unsigned long arg)
spin_unlock_irqrestore(&port->lock, flags);
- sci_submit_rx(s);
+ if (port->type == PORT_SCIFA || port->type == PORT_SCIFB)
+ sci_submit_rx(s);
}
static void sci_request_dma(struct uart_port *port)
@@ -1403,7 +1404,8 @@ static void sci_request_dma(struct uart_port *port)
setup_timer(&s->rx_timer, rx_timer_fn, (unsigned long)s);
- sci_submit_rx(s);
+ if (port->type == PORT_SCIFA || port->type == PORT_SCIFB)
+ sci_submit_rx(s);
}
}
@@ -1442,6 +1444,7 @@ static irqreturn_t sci_rx_interrupt(int irq, void *ptr)
scr |= SCSCR_RDRQE;
} else {
scr &= ~SCSCR_RIE;
+ sci_submit_rx(s);
}
serial_port_out(port, SCSCR, scr);
/* Clear current interrupt */
--
1.9.1
* [PATCH v4 04/10] serial: sh-sci: Stop calling sci_start_rx() from sci_request_dma()
2015-09-18 11:08 [PATCH v4 00/10] serial: sh-sci: Add DT DMA support Geert Uytterhoeven
` (2 preceding siblings ...)
2015-09-18 11:08 ` [PATCH v4 03/10] serial: sh-sci: Submit RX DMA from RX interrupt on (H)SCIF Geert Uytterhoeven
@ 2015-09-18 11:08 ` Geert Uytterhoeven
2015-09-18 11:08 ` [PATCH v4 05/10] serial: sh-sci: Remove timer on shutdown of port Geert Uytterhoeven
` (5 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Geert Uytterhoeven @ 2015-09-18 11:08 UTC (permalink / raw)
To: Greg Kroah-Hartman, Jiri Slaby
Cc: Muhammad Hamza Farooq, Magnus Damm, Yoshihiro Shimoda,
Laurent Pinchart, Nobuhiro Iwamatsu, Yoshihiro Kaneko,
Kazuya Mizuguchi, Koji Matsuoka, Wolfram Sang,
Guennadi Liakhovetski, linux-serial, linux-sh, Geert Uytterhoeven
There's no need to call sci_start_rx() from sci_request_dma() when DMA
setup fails, as sci_startup() will call sci_start_rx() anyway.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
---
v4:
- New.
---
drivers/tty/serial/sh-sci.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
index 02aaf4d213d9c280..ac9ce8f1ff799a48 100644
--- a/drivers/tty/serial/sh-sci.c
+++ b/drivers/tty/serial/sh-sci.c
@@ -1386,7 +1386,6 @@ static void sci_request_dma(struct uart_port *port)
"Failed to allocate Rx dma buffer, using PIO\n");
dma_release_channel(chan);
s->chan_rx = NULL;
- sci_start_rx(port);
return;
}
--
1.9.1
* [PATCH v4 05/10] serial: sh-sci: Remove timer on shutdown of port
2015-09-18 11:08 [PATCH v4 00/10] serial: sh-sci: Add DT DMA support Geert Uytterhoeven
` (3 preceding siblings ...)
2015-09-18 11:08 ` [PATCH v4 04/10] serial: sh-sci: Stop calling sci_start_rx() from sci_request_dma() Geert Uytterhoeven
@ 2015-09-18 11:08 ` Geert Uytterhoeven
2015-09-18 11:08 ` [PATCH v4 06/10] serial: sh-sci: Redirect port interrupts to CPU _only_ when DMA stops Geert Uytterhoeven
` (4 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Geert Uytterhoeven @ 2015-09-18 11:08 UTC (permalink / raw)
To: Greg Kroah-Hartman, Jiri Slaby
Cc: Muhammad Hamza Farooq, Magnus Damm, Yoshihiro Shimoda,
Laurent Pinchart, Nobuhiro Iwamatsu, Yoshihiro Kaneko,
Kazuya Mizuguchi, Koji Matsuoka, Wolfram Sang,
Guennadi Liakhovetski, linux-serial, linux-sh, Aleksandar Mitev,
Geert Uytterhoeven
From: Aleksandar Mitev <amitev@visteon.com>
This prevents the DMA timeout timer from triggering after the port has
been closed.
Signed-off-by: Aleksandar Mitev <amitev@visteon.com>
[geert: Move del_timer_sync() outside spinlock to avoid circular locking
dependency between rx_timer_fn() and del_timer_sync()]
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
---
v4:
- New.
---
drivers/tty/serial/sh-sci.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
index ac9ce8f1ff799a48..36a11110acf4cab5 100644
--- a/drivers/tty/serial/sh-sci.c
+++ b/drivers/tty/serial/sh-sci.c
@@ -1810,6 +1810,14 @@ static void sci_shutdown(struct uart_port *port)
sci_stop_tx(port);
spin_unlock_irqrestore(&port->lock, flags);
+#ifdef CONFIG_SERIAL_SH_SCI_DMA
+ if (s->chan_rx) {
+ dev_dbg(port->dev, "%s(%d) deleting rx_timer\n", __func__,
+ port->line);
+ del_timer_sync(&s->rx_timer);
+ }
+#endif
+
sci_free_dma(port);
sci_free_irq(s);
}
--
1.9.1
* [PATCH v4 06/10] serial: sh-sci: Redirect port interrupts to CPU _only_ when DMA stops
2015-09-18 11:08 [PATCH v4 00/10] serial: sh-sci: Add DT DMA support Geert Uytterhoeven
` (4 preceding siblings ...)
2015-09-18 11:08 ` [PATCH v4 05/10] serial: sh-sci: Remove timer on shutdown of port Geert Uytterhoeven
@ 2015-09-18 11:08 ` Geert Uytterhoeven
2015-09-18 11:08 ` [PATCH v4 07/10] serial: sh-sci: Call dma_async_issue_pending when transaction completes Geert Uytterhoeven
` (3 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Geert Uytterhoeven @ 2015-09-18 11:08 UTC (permalink / raw)
To: Greg Kroah-Hartman, Jiri Slaby
Cc: Muhammad Hamza Farooq, Magnus Damm, Yoshihiro Shimoda,
Laurent Pinchart, Nobuhiro Iwamatsu, Yoshihiro Kaneko,
Kazuya Mizuguchi, Koji Matsuoka, Wolfram Sang,
Guennadi Liakhovetski, linux-serial, linux-sh, Geert Uytterhoeven
From: Muhammad Hamza Farooq <mfarooq@visteon.com>
Since the DMA engine is not stopped every time rx_timer_fn() is called, the
interrupts have to be redirected back to the CPU only when an incomplete DMA
transaction is actually handled.
Signed-off-by: Muhammad Hamza Farooq <mfarooq@visteon.com>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
---
v4:
- New.
---
drivers/tty/serial/sh-sci.c | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
index 36a11110acf4cab5..5dcd8b382e9053f4 100644
--- a/drivers/tty/serial/sh-sci.c
+++ b/drivers/tty/serial/sh-sci.c
@@ -1285,12 +1285,6 @@ static void rx_timer_fn(unsigned long arg)
spin_lock_irqsave(&port->lock, flags);
dev_dbg(port->dev, "DMA Rx timed out\n");
- scr = serial_port_in(port, SCSCR);
- if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) {
- scr &= ~SCSCR_RDRQE;
- enable_irq(s->irqs[SCIx_RXI_IRQ]);
- }
- serial_port_out(port, SCSCR, scr | SCSCR_RIE);
active = sci_dma_rx_find_active(s);
if (active < 0) {
@@ -1315,10 +1309,18 @@ static void rx_timer_fn(unsigned long arg)
tty_flip_buffer_push(&port->state->port);
}
- spin_unlock_irqrestore(&port->lock, flags);
-
+ if (port->type == PORT_SCIFA || port->type == PORT_SCIFB)
sci_submit_rx(s);
+
+ /* Direct new serial port interrupts back to CPU */
+ scr = serial_port_in(port, SCSCR);
+ if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) {
+ scr &= ~SCSCR_RDRQE;
+ enable_irq(s->irqs[SCIx_RXI_IRQ]);
+ }
+ serial_port_out(port, SCSCR, scr | SCSCR_RIE);
+
+ spin_unlock_irqrestore(&port->lock, flags);
}
static void sci_request_dma(struct uart_port *port)
--
1.9.1
* [PATCH v4 07/10] serial: sh-sci: Call dma_async_issue_pending when transaction completes
2015-09-18 11:08 [PATCH v4 00/10] serial: sh-sci: Add DT DMA support Geert Uytterhoeven
` (5 preceding siblings ...)
2015-09-18 11:08 ` [PATCH v4 06/10] serial: sh-sci: Redirect port interrupts to CPU _only_ when DMA stops Geert Uytterhoeven
@ 2015-09-18 11:08 ` Geert Uytterhoeven
2015-09-18 11:08 ` [PATCH v4 08/10] serial: sh-sci: Do not terminate DMA engine when race condition occurs Geert Uytterhoeven
` (2 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Geert Uytterhoeven @ 2015-09-18 11:08 UTC (permalink / raw)
To: Greg Kroah-Hartman, Jiri Slaby
Cc: Muhammad Hamza Farooq, Magnus Damm, Yoshihiro Shimoda,
Laurent Pinchart, Nobuhiro Iwamatsu, Yoshihiro Kaneko,
Kazuya Mizuguchi, Koji Matsuoka, Wolfram Sang,
Guennadi Liakhovetski, linux-serial, linux-sh, Geert Uytterhoeven
From: Muhammad Hamza Farooq <mfarooq@visteon.com>
dmaengine_submit() will not start the DMA operation; it merely adds it to the
pending queue. If the queue is no longer running, it won't be restarted until
dma_async_issue_pending() is called.
Signed-off-by: Muhammad Hamza Farooq <mfarooq@visteon.com>
[geert: Add more description]
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
---
v4:
- New.
---
drivers/tty/serial/sh-sci.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
index 5dcd8b382e9053f4..84c15152e111b084 100644
--- a/drivers/tty/serial/sh-sci.c
+++ b/drivers/tty/serial/sh-sci.c
@@ -1104,6 +1104,7 @@ static void sci_rx_dma_release(struct sci_port *s, bool enable_pio)
static void sci_dma_rx_complete(void *arg)
{
struct sci_port *s = arg;
+ struct dma_chan *chan = s->chan_rx;
struct uart_port *port = &s->port;
struct dma_async_tx_descriptor *desc;
unsigned long flags;
@@ -1137,6 +1138,8 @@ static void sci_dma_rx_complete(void *arg)
s->active_rx = s->cookie_rx[!active];
+ dma_async_issue_pending(chan);
+
dev_dbg(port->dev, "%s: cookie %d #%d, new active cookie %d\n",
__func__, s->cookie_rx[active], active, s->active_rx);
spin_unlock_irqrestore(&port->lock, flags);
--
1.9.1
* [PATCH v4 08/10] serial: sh-sci: Do not terminate DMA engine when race condition occurs
2015-09-18 11:08 [PATCH v4 00/10] serial: sh-sci: Add DT DMA support Geert Uytterhoeven
` (6 preceding siblings ...)
2015-09-18 11:08 ` [PATCH v4 07/10] serial: sh-sci: Call dma_async_issue_pending when transaction completes Geert Uytterhoeven
@ 2015-09-18 11:08 ` Geert Uytterhoeven
2015-09-18 11:08 ` [PATCH v4 09/10] serial: sh-sci: Pause DMA engine and get DMA status again Geert Uytterhoeven
2015-09-18 11:08 ` [PATCH v4 10/10] serial: sh-sci: Add DT support to DMA setup Geert Uytterhoeven
9 siblings, 0 replies; 11+ messages in thread
From: Geert Uytterhoeven @ 2015-09-18 11:08 UTC (permalink / raw)
To: Greg Kroah-Hartman, Jiri Slaby
Cc: Muhammad Hamza Farooq, Magnus Damm, Yoshihiro Shimoda,
Laurent Pinchart, Nobuhiro Iwamatsu, Yoshihiro Kaneko,
Kazuya Mizuguchi, Koji Matsuoka, Wolfram Sang,
Guennadi Liakhovetski, linux-serial, linux-sh, Geert Uytterhoeven
From: Muhammad Hamza Farooq <mfarooq@visteon.com>
When DMA packet completion and timer expiry take place at the same time,
do not terminate the DMA engine followed by submission of new
descriptors, as the DMA transfer has not necessarily stopped here.
Signed-off-by: Muhammad Hamza Farooq <mfarooq@visteon.com>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
---
v4:
- New.
---
drivers/tty/serial/sh-sci.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
index 84c15152e111b084..9406fe227bc76c96 100644
--- a/drivers/tty/serial/sh-sci.c
+++ b/drivers/tty/serial/sh-sci.c
@@ -1296,9 +1296,14 @@ static void rx_timer_fn(unsigned long arg)
}
status = dmaengine_tx_status(s->chan_rx, s->active_rx, &state);
- if (status == DMA_COMPLETE)
+ if (status == DMA_COMPLETE) {
dev_dbg(port->dev, "Cookie %d #%d has already completed\n",
s->active_rx, active);
+ spin_unlock_irqrestore(&port->lock, flags);
+
+ /* Let packet complete handler take care of the packet */
+ return;
+ }
/* Handle incomplete DMA receive */
dmaengine_terminate_all(s->chan_rx);
--
1.9.1
* [PATCH v4 09/10] serial: sh-sci: Pause DMA engine and get DMA status again
2015-09-18 11:08 [PATCH v4 00/10] serial: sh-sci: Add DT DMA support Geert Uytterhoeven
` (7 preceding siblings ...)
2015-09-18 11:08 ` [PATCH v4 08/10] serial: sh-sci: Do not terminate DMA engine when race condition occurs Geert Uytterhoeven
@ 2015-09-18 11:08 ` Geert Uytterhoeven
2015-09-18 11:08 ` [PATCH v4 10/10] serial: sh-sci: Add DT support to DMA setup Geert Uytterhoeven
9 siblings, 0 replies; 11+ messages in thread
From: Geert Uytterhoeven @ 2015-09-18 11:08 UTC (permalink / raw)
To: Greg Kroah-Hartman, Jiri Slaby
Cc: Muhammad Hamza Farooq, Magnus Damm, Yoshihiro Shimoda,
Laurent Pinchart, Nobuhiro Iwamatsu, Yoshihiro Kaneko,
Kazuya Mizuguchi, Koji Matsuoka, Wolfram Sang,
Guennadi Liakhovetski, linux-serial, linux-sh, Geert Uytterhoeven
From: Muhammad Hamza Farooq <mfarooq@visteon.com>
Occasionally, a DMA transaction completes _after_ the DMA engine is
stopped. Verify that the transaction has not finished before forcing the
engine to stop and pushing the data.
Signed-off-by: Muhammad Hamza Farooq <mfarooq@visteon.com>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
---
v4:
- New.
---
drivers/tty/serial/sh-sci.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
index 9406fe227bc76c96..b1d1ce1986e6c064 100644
--- a/drivers/tty/serial/sh-sci.c
+++ b/drivers/tty/serial/sh-sci.c
@@ -1277,6 +1277,7 @@ static bool filter(struct dma_chan *chan, void *slave)
static void rx_timer_fn(unsigned long arg)
{
struct sci_port *s = (struct sci_port *)arg;
+ struct dma_chan *chan = s->chan_rx;
struct uart_port *port = &s->port;
struct dma_tx_state state;
enum dma_status status;
@@ -1305,6 +1306,21 @@ static void rx_timer_fn(unsigned long arg)
return;
}
+ dmaengine_pause(chan);
+
+ /*
+ * Sometimes the DMA transfer does not stop even after it has been
+ * paused, and data keeps coming in until the transaction is
+ * complete, so check for DMA_COMPLETE again.
+ * Let the packet completion handler take care of the packet.
+ */
+ status = dmaengine_tx_status(s->chan_rx, s->active_rx, &state);
+ if (status == DMA_COMPLETE) {
+ spin_unlock_irqrestore(&port->lock, flags);
+ dev_dbg(port->dev, "Transaction complete after DMA engine was stopped\n");
+ return;
+ }
+
/* Handle incomplete DMA receive */
dmaengine_terminate_all(s->chan_rx);
read = sg_dma_len(&s->sg_rx[active]) - state.residue;
--
1.9.1
* [PATCH v4 10/10] serial: sh-sci: Add DT support to DMA setup
2015-09-18 11:08 [PATCH v4 00/10] serial: sh-sci: Add DT DMA support Geert Uytterhoeven
` (8 preceding siblings ...)
2015-09-18 11:08 ` [PATCH v4 09/10] serial: sh-sci: Pause DMA engine and get DMA status again Geert Uytterhoeven
@ 2015-09-18 11:08 ` Geert Uytterhoeven
9 siblings, 0 replies; 11+ messages in thread
From: Geert Uytterhoeven @ 2015-09-18 11:08 UTC (permalink / raw)
To: Greg Kroah-Hartman, Jiri Slaby
Cc: Muhammad Hamza Farooq, Magnus Damm, Yoshihiro Shimoda,
Laurent Pinchart, Nobuhiro Iwamatsu, Yoshihiro Kaneko,
Kazuya Mizuguchi, Koji Matsuoka, Wolfram Sang,
Guennadi Liakhovetski, linux-serial, linux-sh, Geert Uytterhoeven
Add support for obtaining DMA channel information from the device tree.
This requires switching from the legacy sh_dmae_slave structures with
hardcoded channel numbers and the corresponding filter function to:
1. dma_request_slave_channel_compat(),
- On legacy platforms, dma_request_slave_channel_compat() uses
the passed DMA channel numbers that originate from platform
device data,
- On DT-based platforms, dma_request_slave_channel_compat() will
retrieve the information from DT.
2. and the generic dmaengine_slave_config() configuration method,
which requires filling in DMA register ports and slave bus widths.
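For reference, a serial port node wired up via DT would carry `dmas` and
`dma-names` properties along these lines. This is an illustrative sketch,
not copied from an actual .dtsi; the channel specifiers (0x29/0x2a) and
addresses are assumptions standing in for the SoC-specific values:

```dts
scif0: serial@e6e60000 {
	compatible = "renesas,scif-r8a7791";
	reg = <0 0xe6e60000 0 64>;
	/* clocks, interrupts, etc. omitted for brevity */
	dmas = <&dmac0 0x29>, <&dmac0 0x2a>;
	dma-names = "tx", "rx";
};
```

dma_request_slave_channel_compat() then matches the "tx"/"rx" names against
dma-names when an of_node is present, and falls back to the filter function
with the platform-data channel numbers otherwise.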
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
---
v4:
- Dropped RFC status,
- Moved to end of new series,
v3:
- Moved to end of series, to avoid enabling broken DMA,
v2:
- Add Acked-by.
---
drivers/tty/serial/sh-sci.c | 78 +++++++++++++++++++++++++++------------------
1 file changed, 47 insertions(+), 31 deletions(-)
diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
index b1d1ce1986e6c064..960e50a97558cff5 100644
--- a/drivers/tty/serial/sh-sci.c
+++ b/drivers/tty/serial/sh-sci.c
@@ -112,8 +112,6 @@ struct sci_port {
struct scatterlist sg_rx[2];
void *rx_buf[2];
size_t buf_len_rx;
- struct sh_dmae_slave param_tx;
- struct sh_dmae_slave param_rx;
struct work_struct work_tx;
struct timer_list rx_timer;
unsigned int rx_timeout;
@@ -1263,17 +1261,6 @@ static void work_fn_tx(struct work_struct *work)
dma_async_issue_pending(chan);
}
-static bool filter(struct dma_chan *chan, void *slave)
-{
- struct sh_dmae_slave *param = slave;
-
- dev_dbg(chan->device->dev, "%s: slave ID %d\n",
- __func__, param->shdma_slave.slave_id);
-
- chan->private = &param->shdma_slave;
- return true;
-}
-
static void rx_timer_fn(unsigned long arg)
{
struct sci_port *s = (struct sci_port *)arg;
@@ -1347,28 +1334,62 @@ static void rx_timer_fn(unsigned long arg)
spin_unlock_irqrestore(&port->lock, flags);
}
+static struct dma_chan *sci_request_dma_chan(struct uart_port *port,
+ enum dma_transfer_direction dir,
+ unsigned int id)
+{
+ dma_cap_mask_t mask;
+ struct dma_chan *chan;
+ struct dma_slave_config cfg;
+ int ret;
+
+ dma_cap_zero(mask);
+ dma_cap_set(DMA_SLAVE, mask);
+
+ chan = dma_request_slave_channel_compat(mask, shdma_chan_filter,
+ (void *)(unsigned long)id, port->dev,
+ dir == DMA_MEM_TO_DEV ? "tx" : "rx");
+ if (!chan) {
+ dev_warn(port->dev,
+ "dma_request_slave_channel_compat failed\n");
+ return NULL;
+ }
+
+ memset(&cfg, 0, sizeof(cfg));
+ cfg.direction = dir;
+ if (dir == DMA_MEM_TO_DEV) {
+ cfg.dst_addr = port->mapbase +
+ (sci_getreg(port, SCxTDR)->offset << port->regshift);
+ cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+ } else {
+ cfg.src_addr = port->mapbase +
+ (sci_getreg(port, SCxRDR)->offset << port->regshift);
+ cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+ }
+
+ ret = dmaengine_slave_config(chan, &cfg);
+ if (ret) {
+ dev_warn(port->dev, "dmaengine_slave_config failed %d\n", ret);
+ dma_release_channel(chan);
+ return NULL;
+ }
+
+ return chan;
+}
+
static void sci_request_dma(struct uart_port *port)
{
struct sci_port *s = to_sci_port(port);
- struct sh_dmae_slave *param;
struct dma_chan *chan;
- dma_cap_mask_t mask;
dev_dbg(port->dev, "%s: port %d\n", __func__, port->line);
- if (s->cfg->dma_slave_tx <= 0 || s->cfg->dma_slave_rx <= 0)
+ if (!port->dev->of_node &&
+ (s->cfg->dma_slave_tx <= 0 || s->cfg->dma_slave_rx <= 0))
return;
- dma_cap_zero(mask);
- dma_cap_set(DMA_SLAVE, mask);
-
- param = &s->param_tx;
-
- /* Slave ID, e.g., SHDMA_SLAVE_SCIF0_TX */
- param->shdma_slave.slave_id = s->cfg->dma_slave_tx;
-
s->cookie_tx = -EINVAL;
- chan = dma_request_channel(mask, filter, param);
+ chan = sci_request_dma_chan(port, DMA_MEM_TO_DEV, s->cfg->dma_slave_tx);
dev_dbg(port->dev, "%s: TX: got channel %p\n", __func__, chan);
if (chan) {
s->chan_tx = chan;
@@ -1390,12 +1411,7 @@ static void sci_request_dma(struct uart_port *port)
INIT_WORK(&s->work_tx, work_fn_tx);
}
- param = &s->param_rx;
-
- /* Slave ID, e.g., SHDMA_SLAVE_SCIF0_RX */
- param->shdma_slave.slave_id = s->cfg->dma_slave_rx;
-
- chan = dma_request_channel(mask, filter, param);
+ chan = sci_request_dma_chan(port, DMA_DEV_TO_MEM, s->cfg->dma_slave_rx);
dev_dbg(port->dev, "%s: RX: got channel %p\n", __func__, chan);
if (chan) {
unsigned int i;
--
1.9.1