* RE: [PATCH v4 5/6] dmaengine: Driver for the Synopsys DesignWare DMA controller [not found] <f12847240806270224h696e78a1v4a1aa6a87fb4a171@mail.gmail.com> @ 2008-07-04 15:33 ` Sosnowski, Maciej 2008-07-04 16:10 ` Haavard Skinnemoen 0 siblings, 1 reply; 3+ messages in thread From: Sosnowski, Maciej @ 2008-07-04 15:33 UTC (permalink / raw) To: haavard.skinnemoen Cc: Williams, Dan J, drzeus-list, lkml, linux-embedded, kernel, Nelson, Shannon, david-b > ---------- Original message ---------- > From: Haavard Skinnemoen <haavard.skinnemoen@atmel.com> > Date: Jun 26, 2008 3:23 PM > Subject: [PATCH v4 5/6] dmaengine: Driver for the Synopsys DesignWare > DMA controller > To: Dan Williams <dan.j.williams@intel.com>, Pierre Ossman > <drzeus-list@drzeus.cx> > Cc: linux-kernel@vger.kernel.org, linux-embedded@vger.kernel.org, > kernel@avr32linux.org, shannon.nelson@intel.com, David Brownell > <david-b@pacbell.net>, Haavard Skinnemoen > <haavard.skinnemoen@atmel.com> > > > This adds a driver for the Synopsys DesignWare DMA controller (aka > DMACA on AVR32 systems.) This DMA controller can be found integrated > on the AT32AP7000 chip and is primarily meant for peripheral DMA > transfer, but can also be used for memory-to-memory transfers. > > This patch is based on a driver from David Brownell which was based on > an older version of the DMA Engine framework. It also implements the > proposed extensions to the DMA Engine API for slave DMA operations. > > The dmatest client shows no problems, but there may still be room for > improvement performance-wise. DMA slave transfer performance is > definitely "good enough"; reading 100 MiB from an SD card running at ~20 > MHz yields ~7.2 MiB/s average transfer rate. > > Full documentation for this controller can be found in the Synopsys > DW AHB DMAC Databook: > > http://www.synopsys.com/designware/docs/iip/DW_ahb_dmac/latest/doc/dw_ahb_dmac_db.pdf > > The controller has lots of implementation options, so it's usually a > good idea to check the data sheet of the chip it's integrated on as > well. The AT32AP7000 data sheet can be found here: > > http://www.atmel.com/dyn/products/datasheets.asp?family_id=682 > > Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com> > > Changes since v3: > * Update to latest DMA engine and DMA slave APIs > * Embed the hw descriptor into the sw descriptor > * Clean up and update MODULE_DESCRIPTION, copyright date, etc. > > Changes since v2: > * Dequeue all pending transfers in terminate_all() > * Rename dw_dmac.h -> dw_dmac_regs.h > * Define and use controller-specific dma_slave data > * Fix up a few outdated comments > * Define hardware registers as structs (doesn't generate better > code, unfortunately, but it looks nicer.) > * Get number of channels from platform_data instead of hardcoding it > based on CONFIG_WHATEVER_CPU. > * Give slave clients exclusive access to the channel Couple of questions and comments from my side below. Apart from that the code looks fine to me. 
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com> > --- > arch/avr32/mach-at32ap/at32ap700x.c | 26 +- > drivers/dma/Kconfig | 9 + > drivers/dma/Makefile | 1 + > drivers/dma/dw_dmac.c | 1105 > ++++++++++++++++++++++++++++ drivers/dma/dw_dmac_regs.h | > 224 ++++++ include/asm-avr32/arch-at32ap/at32ap700x.h | 16 + > include/linux/dw_dmac.h | 62 ++ > 7 files changed, 1430 insertions(+), 13 deletions(-) > create mode 100644 drivers/dma/dw_dmac.c > create mode 100644 drivers/dma/dw_dmac_regs.h > create mode 100644 include/linux/dw_dmac.h > > diff --git a/arch/avr32/mach-at32ap/at32ap700x.c > b/arch/avr32/mach-at32ap/at32ap700x.c > index 0f24b4f..2b92047 100644 > --- a/arch/avr32/mach-at32ap/at32ap700x.c > +++ b/arch/avr32/mach-at32ap/at32ap700x.c > @@ -599,6 +599,17 @@ static void __init genclk_init_parent(struct clk *clk) > clk->parent = parent; > } > > +static struct dw_dma_platform_data dw_dmac0_data = { > + .nr_channels = 3, > +}; > + > +static struct resource dw_dmac0_resource[] = { > + PBMEM(0xff200000), > + IRQ(2), > +}; > +DEFINE_DEV_DATA(dw_dmac, 0); > +DEV_CLK(hclk, dw_dmac0, hsb, 10); > + > /* -------------------------------------------------------------------- > * System peripherals > * -------------------------------------------------------------------- */ > @@ -705,17 +716,6 @@ static struct clk pico_clk = { > .users = 1, > }; > > -static struct resource dmaca0_resource[] = { > - { > - .start = 0xff200000, > - .end = 0xff20ffff, > - .flags = IORESOURCE_MEM, > - }, > - IRQ(2), > -}; > -DEFINE_DEV(dmaca, 0); > -DEV_CLK(hclk, dmaca0, hsb, 10); > - > /* -------------------------------------------------------------------- > * HMATRIX > * -------------------------------------------------------------------- */ > @@ -828,7 +828,7 @@ void __init at32_add_system_devices(void) > platform_device_register(&at32_eic0_device); > platform_device_register(&smc0_device); > platform_device_register(&pdc_device); > - platform_device_register(&dmaca0_device); > + platform_device_register(&dw_dmac0_device); > > platform_device_register(&at32_tcb0_device); > platform_device_register(&at32_tcb1_device); > @@ -1891,7 +1891,7 @@ struct clk *at32_clock_list[] = { > &smc0_mck, > &pdc_hclk, > &pdc_pclk, > - &dmaca0_hclk, > + &dw_dmac0_hclk, > &pico_clk, > &pio0_mck, > &pio1_mck, > diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig > index 2ac09be..4fac4e3 100644 > --- a/drivers/dma/Kconfig > +++ b/drivers/dma/Kconfig > @@ -37,6 +37,15 @@ config INTEL_IOP_ADMA > help > Enable support for the Intel(R) IOP Series RAID engines. > > +config DW_DMAC > + tristate "Synopsys DesignWare AHB DMA support" > + depends on AVR32 > + select DMA_ENGINE > + default y if CPU_AT32AP7000 > + help > + Support the Synopsys DesignWare AHB DMA controller. This > + can be integrated in chips such as the Atmel AT32ap7000. 
> + > config FSL_DMA > bool "Freescale MPC85xx/MPC83xx DMA support" > depends on PPC > diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile > index 2ff6d7f..beebae4 100644 > --- a/drivers/dma/Makefile > +++ b/drivers/dma/Makefile > @@ -1,6 +1,7 @@ > obj-$(CONFIG_DMA_ENGINE) += dmaengine.o > obj-$(CONFIG_NET_DMA) += iovlock.o > obj-$(CONFIG_INTEL_IOATDMA) += ioatdma.o > +obj-$(CONFIG_DW_DMAC) += dw_dmac.o > ioatdma-objs := ioat.o ioat_dma.o ioat_dca.o > obj-$(CONFIG_INTEL_IOP_ADMA) += iop-adma.o > obj-$(CONFIG_FSL_DMA) += fsldma.o > diff --git a/drivers/dma/dw_dmac.c b/drivers/dma/dw_dmac.c > new file mode 100644 > index 0000000..e5389e1 > --- /dev/null > +++ b/drivers/dma/dw_dmac.c > @@ -0,0 +1,1105 @@ > +/* > + * Driver for the Synopsys DesignWare DMA Controller (aka DMACA on > + * AVR32 systems.) > + * > + * Copyright (C) 2007-2008 Atmel Corporation > + * > + * This program is free software; you can redistribute it and/or modify > + * it under the terms of the GNU General Public License version 2 as > + * published by the Free Software Foundation. > + */ > +#include <linux/clk.h> > +#include <linux/delay.h> > +#include <linux/dmaengine.h> > +#include <linux/dma-mapping.h> > +#include <linux/init.h> > +#include <linux/interrupt.h> > +#include <linux/io.h> > +#include <linux/mm.h> > +#include <linux/module.h> > +#include <linux/platform_device.h> > +#include <linux/slab.h> > + > +#include "dw_dmac_regs.h" > + > +/* > + * This supports the Synopsys "DesignWare AHB Central DMA Controller", > + * (DW_ahb_dmac) which is used with various AMBA 2.0 systems (not all > + * of which use ARM any more). See the "Databook" from Synopsys for > + * information beyond what licensees probably provide. > + * > + * The driver has currently been tested only with the Atmel AT32AP7000, > + * which does not support descriptor writeback. > + */ > + > +/* NOTE: DMS+SMS is system-specific. We should get this information > + * from the platform code somehow. > + */ > +#define DWC_DEFAULT_CTLLO (DWC_CTLL_DST_MSIZE(0) \ > + | DWC_CTLL_SRC_MSIZE(0) \ > + | DWC_CTLL_DMS(0) \ > + | DWC_CTLL_SMS(1) \ > + | DWC_CTLL_LLP_D_EN \ > + | DWC_CTLL_LLP_S_EN) > + > +/* > + * This is configuration-dependent and usually a funny size like 4095. > + * Let's round it down to the nearest power of two. > + * > + * Note that this is a transfer count, i.e. if we transfer 32-bit > + * words, we can do 8192 bytes per descriptor. > + * > + * This parameter is also system-specific. > + */ > +#define DWC_MAX_COUNT 2048U > + > +/* > + * Number of descriptors to allocate for each channel. This should be > + * made configurable somehow; preferably, the clients (at least the > + * ones using slave transfers) should be able to give us a hint. > + */ > +#define NR_DESCS_PER_CHANNEL 64 > + > +/*--------------------------------------------------------------------- -*/ > + > +/* > + * Because we're not relying on writeback from the controller (it may not > + * even be configured into the core!) we don't need to use dma_pool. These > + * descriptors -- and associated data -- are cacheable. We do need to make > + * sure their dcache entries are written back before handing them off to > + * the controller, though. 
> + */ > + > +static struct dw_desc *dwc_first_active(struct dw_dma_chan *dwc) > +{ > + return list_entry(dwc->active_list.next, struct dw_desc, desc_node); > +} > + > +static struct dw_desc *dwc_first_queued(struct dw_dma_chan *dwc) > +{ > + return list_entry(dwc->queue.next, struct dw_desc, desc_node); > +} > + > +static struct dw_desc *dwc_desc_get(struct dw_dma_chan *dwc) > +{ > + struct dw_desc *desc, *_desc; > + struct dw_desc *ret = NULL; > + unsigned int i = 0; > + > + spin_lock_bh(&dwc->lock); > + list_for_each_entry_safe(desc, _desc, &dwc->free_list, desc_node) { > + if (async_tx_test_ack(&desc->txd)) { > + list_del(&desc->desc_node); > + ret = desc; > + break; > + } > + dev_dbg(&dwc->chan.dev, "desc %p not ACKed\n", desc); > + i++; > + } > + spin_unlock_bh(&dwc->lock); > + > + dev_vdbg(&dwc->chan.dev, "scanned %u descriptors on freelist\n", i); > + > + return ret; > +} > + > +static void dwc_sync_desc_for_cpu(struct dw_dma_chan *dwc, struct > dw_desc *desc) > +{ > + struct dw_desc *child; > + > + list_for_each_entry(child, &desc->txd.tx_list, desc_node) > + dma_sync_single_for_cpu(dwc->chan.dev.parent, > + child->txd.phys, sizeof(child->lli), > + DMA_TO_DEVICE); > + dma_sync_single_for_cpu(dwc->chan.dev.parent, > + desc->txd.phys, sizeof(desc->lli), > + DMA_TO_DEVICE); > +} > + > +/* > + * Move a descriptor, including any children, to the free list. > + * `desc' must not be on any lists. > + */ > +static void dwc_desc_put(struct dw_dma_chan *dwc, struct dw_desc *desc) > +{ > + if (desc) { > + struct dw_desc *child; > + > + dwc_sync_desc_for_cpu(dwc, desc); > + > + spin_lock_bh(&dwc->lock); > + list_for_each_entry(child, &desc->txd.tx_list, desc_node) > + dev_vdbg(&dwc->chan.dev, > + "moving child desc %p to freelist\n", > + child); > + list_splice_init(&desc->txd.tx_list, &dwc->free_list); > + dev_vdbg(&dwc->chan.dev, "moving desc %p to freelist\n", > desc); + list_add(&desc->desc_node, &dwc->free_list); > + spin_unlock_bh(&dwc->lock); > + } > +} > + > +/* Called with dwc->lock held and bh disabled */ > +static dma_cookie_t > +dwc_assign_cookie(struct dw_dma_chan *dwc, struct dw_desc *desc) > +{ > + dma_cookie_t cookie = dwc->chan.cookie; > + > + if (++cookie < 0) > + cookie = 1; > + > + dwc->chan.cookie = cookie; > + desc->txd.cookie = cookie; > + > + return cookie; > +} > + > +/*--------------------------------------------------------------------- -*/ > + > +/* Called with dwc->lock held and bh disabled */ > +static void dwc_dostart(struct dw_dma_chan *dwc, struct dw_desc *first) > +{ > + struct dw_dma *dw = to_dw_dma(dwc->chan.device); > + > + /* ASSERT: channel is idle */ > + if (dma_readl(dw, CH_EN) & dwc->mask) { > + dev_err(&dwc->chan.dev, > + "BUG: Attempted to start non-idle channel\n"); > + dev_err(&dwc->chan.dev, > + " SAR: 0x%x DAR: 0x%x LLP: 0x%x CTL: 0x%x:%08x\n", > + channel_readl(dwc, SAR), > + channel_readl(dwc, DAR), > + channel_readl(dwc, LLP), > + channel_readl(dwc, CTL_HI), > + channel_readl(dwc, CTL_LO)); > + > + /* The tasklet will hopefully advance the queue... */ > + return; Should not at this point an error status be returned so that it can be handled accordingly by dwc_dostart() caller? 
> + } > + > + channel_writel(dwc, LLP, first->txd.phys); > + channel_writel(dwc, CTL_LO, > + DWC_CTLL_LLP_D_EN | DWC_CTLL_LLP_S_EN); > + channel_writel(dwc, CTL_HI, 0); > + channel_set_bit(dw, CH_EN, dwc->mask); > +} > + > +/*--------------------------------------------------------------------- -*/ > + > +static void > +dwc_descriptor_complete(struct dw_dma_chan *dwc, struct dw_desc *desc) > +{ > + dma_async_tx_callback callback; > + void *param; > + struct dma_async_tx_descriptor *txd = &desc->txd; > + > + dev_vdbg(&dwc->chan.dev, "descriptor %u complete\n", txd->cookie); > + > + dwc->completed = txd->cookie; > + callback = txd->callback; > + param = txd->callback_param; > + > + dwc_sync_desc_for_cpu(dwc, desc); > + list_splice_init(&txd->tx_list, &dwc->free_list); > + list_move(&desc->desc_node, &dwc->free_list); > + > + /* > + * The API requires that no submissions are done from a > + * callback, so we don't need to drop the lock here > + */ > + if (callback) > + callback(param); > +} > + > +static void dwc_complete_all(struct dw_dma *dw, struct dw_dma_chan *dwc) > +{ > + struct dw_desc *desc, *_desc; > + LIST_HEAD(list); > + > + if (dma_readl(dw, CH_EN) & dwc->mask) { > + dev_err(&dwc->chan.dev, > + "BUG: XFER bit set, but channel not idle!\n"); > + > + /* Try to continue after resetting the channel... */ > + channel_clear_bit(dw, CH_EN, dwc->mask); > + while (dma_readl(dw, CH_EN) & dwc->mask) > + cpu_relax(); > + } > + > + /* > + * Submit queued descriptors ASAP, i.e. before we go through > + * the completed ones. > + */ > + if (!list_empty(&dwc->queue)) > + dwc_dostart(dwc, dwc_first_queued(dwc)); > + list_splice_init(&dwc->active_list, &list); > + list_splice_init(&dwc->queue, &dwc->active_list); > + > + list_for_each_entry_safe(desc, _desc, &list, desc_node) > + dwc_descriptor_complete(dwc, desc); > +} > + > +static void dwc_scan_descriptors(struct dw_dma *dw, struct dw_dma_chan *dwc) > +{ > + dma_addr_t llp; > + struct dw_desc *desc, *_desc; > + struct dw_desc *child; > + u32 status_xfer; > + > + /* > + * Clear block interrupt flag before scanning so that we don't > + * miss any, and read LLP before RAW_XFER to ensure it is > + * valid if we decide to scan the list. > + */ > + dma_writel(dw, CLEAR.BLOCK, dwc->mask); > + llp = channel_readl(dwc, LLP); > + status_xfer = dma_readl(dw, RAW.XFER); > + > + if (status_xfer & dwc->mask) { > + /* Everything we've submitted is done */ > + dma_writel(dw, CLEAR.XFER, dwc->mask); > + dwc_complete_all(dw, dwc); > + return; > + } > + > + dev_vdbg(&dwc->chan.dev, "scan_descriptors: llp=0x%x\n", llp); > + > + list_for_each_entry_safe(desc, _desc, &dwc->active_list, desc_node) { > + if (desc->lli.llp == llp) > + /* This one is currently in progress */ > + return; > + > + list_for_each_entry(child, &desc->txd.tx_list, desc_node) > + if (child->lli.llp == llp) > + /* Currently in progress */ > + return; > + > + /* > + * No descriptors so far seem to be in progress, i.e. > + * this one must be done. > + */ > + dwc_descriptor_complete(dwc, desc); > + } > + > + dev_err(&dwc->chan.dev, > + "BUG: All descriptors done, but channel not idle!\n"); > + > + /* Try to continue after resetting the channel... 
*/ > + channel_clear_bit(dw, CH_EN, dwc->mask); > + while (dma_readl(dw, CH_EN) & dwc->mask) > + cpu_relax(); > + > + if (!list_empty(&dwc->queue)) { > + dwc_dostart(dwc, dwc_first_queued(dwc)); > + list_splice_init(&dwc->queue, &dwc->active_list); > + } > +} > + > +static void dwc_dump_lli(struct dw_dma_chan *dwc, struct dw_lli *lli) > +{ > + dev_printk(KERN_CRIT, &dwc->chan.dev, > + " desc: s0x%x d0x%x l0x%x c0x%x:%x\n", > + lli->sar, lli->dar, lli->llp, > + lli->ctlhi, lli->ctllo); > +} > + > +static void dwc_handle_error(struct dw_dma *dw, struct dw_dma_chan *dwc) > +{ > + struct dw_desc *bad_desc; > + struct dw_desc *child; > + > + dwc_scan_descriptors(dw, dwc); > + > + /* > + * The descriptor currently at the head of the active list is > + * borked. Since we don't have any way to report errors, we'll > + * just have to scream loudly and try to carry on. > + */ > + bad_desc = dwc_first_active(dwc); > + list_del_init(&bad_desc->desc_node); > + list_splice_init(&dwc->queue, dwc->active_list.prev); > + > + /* Clear the error flag and try to restart the controller */ > + dma_writel(dw, CLEAR.ERROR, dwc->mask); > + if (!list_empty(&dwc->active_list)) > + dwc_dostart(dwc, dwc_first_active(dwc)); > + > + /* > + * KERN_CRITICAL may seem harsh, but since this only happens > + * when someone submits a bad physical address in a > + * descriptor, we should consider ourselves lucky that the > + * controller flagged an error instead of scribbling over > + * random memory locations. > + */ > + dev_printk(KERN_CRIT, &dwc->chan.dev, > + "Bad descriptor submitted for DMA!\n"); > + dev_printk(KERN_CRIT, &dwc->chan.dev, > + " cookie: %d\n", bad_desc->txd.cookie); > + dwc_dump_lli(dwc, &bad_desc->lli); > + list_for_each_entry(child, &bad_desc->txd.tx_list, desc_node) > + dwc_dump_lli(dwc, &child->lli); > + > + /* Pretend the descriptor completed successfully */ > + dwc_descriptor_complete(dwc, bad_desc); > +} > + > +static void dw_dma_tasklet(unsigned long data) > +{ > + struct dw_dma *dw = (struct dw_dma *)data; > + struct dw_dma_chan *dwc; > + u32 status_block; > + u32 status_xfer; > + u32 status_err; > + int i; > + > + status_block = dma_readl(dw, RAW.BLOCK); > + status_xfer = dma_readl(dw, RAW.BLOCK); > + status_err = dma_readl(dw, RAW.ERROR); > + > + dev_vdbg(dw->dma.dev, "tasklet: status_block=%x status_err=%x\n", > + status_block, status_err); > + > + for (i = 0; i < dw->dma.chancnt; i++) { > + dwc = &dw->chan[i]; > + spin_lock(&dwc->lock); > + if (status_err & (1 << i)) > + dwc_handle_error(dw, dwc); > + else if ((status_block | status_xfer) & (1 << i)) > + dwc_scan_descriptors(dw, dwc); > + spin_unlock(&dwc->lock); > + } > + > + /* > + * Re-enable interrupts. Block Complete interrupts are only > + * enabled if the INT_EN bit in the descriptor is set. This > + * will trigger a scan before the whole list is done. > + */ > + channel_set_bit(dw, MASK.XFER, dw->all_chan_mask); > + channel_set_bit(dw, MASK.BLOCK, dw->all_chan_mask); > + channel_set_bit(dw, MASK.ERROR, dw->all_chan_mask); > +} > + > +static irqreturn_t dw_dma_interrupt(int irq, void *dev_id) > +{ > + struct dw_dma *dw = dev_id; > + u32 status; > + > + dev_vdbg(dw->dma.dev, "interrupt: status=0x%x\n", > + dma_readl(dw, STATUS_INT)); > + > + /* > + * Just disable the interrupts. We'll turn them back on in the > + * softirq handler. 
> + */ > + channel_clear_bit(dw, MASK.XFER, dw->all_chan_mask); > + channel_clear_bit(dw, MASK.BLOCK, dw->all_chan_mask); > + channel_clear_bit(dw, MASK.ERROR, dw->all_chan_mask); > + > + status = dma_readl(dw, STATUS_INT); > + if (status) { > + dev_err(dw->dma.dev, > + "BUG: Unexpected interrupts pending: 0x%x\n", > + status); > + > + /* Try to recover */ > + channel_clear_bit(dw, MASK.XFER, (1 << 8) - 1); > + channel_clear_bit(dw, MASK.BLOCK, (1 << 8) - 1); > + channel_clear_bit(dw, MASK.SRC_TRAN, (1 << 8) - 1); > + channel_clear_bit(dw, MASK.DST_TRAN, (1 << 8) - 1); > + channel_clear_bit(dw, MASK.ERROR, (1 << 8) - 1); > + } > + > + tasklet_schedule(&dw->tasklet); > + > + return IRQ_HANDLED; > +} > + > +/*--------------------------------------------------------------------- -*/ > + > +static dma_cookie_t dwc_tx_submit(struct dma_async_tx_descriptor *tx) > +{ > + struct dw_desc *desc = txd_to_dw_desc(tx); > + struct dw_dma_chan *dwc = to_dw_dma_chan(tx->chan); > + dma_cookie_t cookie; > + > + spin_lock_bh(&dwc->lock); > + cookie = dwc_assign_cookie(dwc, desc); > + > + /* > + * REVISIT: We should attempt to chain as many descriptors as > + * possible, perhaps even appending to those already submitted > + * for DMA. But this is hard to do in a race-free manner. > + */ > + if (list_empty(&dwc->active_list)) { > + dev_vdbg(&tx->chan->dev, "tx_submit: started %u\n", > + desc->txd.cookie); > + dwc_dostart(dwc, desc); > + list_add_tail(&desc->desc_node, &dwc->active_list); > + } else { > + dev_vdbg(&tx->chan->dev, "tx_submit: queued %u\n", > + desc->txd.cookie); > + > + list_add_tail(&desc->desc_node, &dwc->queue); > + } > + > + spin_unlock_bh(&dwc->lock); > + > + return cookie; > +} > + > +static struct dma_async_tx_descriptor * > +dwc_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src, > + size_t len, unsigned long flags) > +{ > + struct dw_dma_chan *dwc = to_dw_dma_chan(chan); > + struct dw_desc *desc; > + struct dw_desc *first; > + struct dw_desc *prev; > + size_t xfer_count; > + size_t offset; > + unsigned int src_width; > + unsigned int dst_width; > + u32 ctllo; > + > + dev_vdbg(&chan->dev, "prep_dma_memcpy d0x%x s0x%x l0x%zx f0x%lx\n", > + dest, src, len, flags); > + > + if (unlikely(!len)) { > + dev_dbg(&chan->dev, "prep_dma_memcpy: length is zero!\n"); > + return NULL; > + } > + > + /* > + * We can be a lot more clever here, but this should take care > + * of the most common optimization. > + */ > + if (!((src | dest | len) & 3)) > + src_width = dst_width = 2; > + else if (!((src | dest | len) & 1)) > + src_width = dst_width = 1; > + else > + src_width = dst_width = 0; > + > + ctllo = DWC_DEFAULT_CTLLO > + | DWC_CTLL_DST_WIDTH(dst_width) > + | DWC_CTLL_SRC_WIDTH(src_width) > + | DWC_CTLL_DST_INC > + | DWC_CTLL_SRC_INC > + | DWC_CTLL_FC_M2M; > + prev = first = NULL; > + > + for (offset = 0; offset < len; offset += xfer_count << src_width) { > + xfer_count = min_t(size_t, (len - offset) >> src_width, > + DWC_MAX_COUNT); Here it looks like the maximum xfer_count value can change - it depends on src_width, so it may be different for different transactions. Is that ok? 
> + > + desc = dwc_desc_get(dwc); > + if (!desc) > + goto err_desc_get; > + > + desc->lli.sar = src + offset; > + desc->lli.dar = dest + offset; > + desc->lli.ctllo = ctllo; > + desc->lli.ctlhi = xfer_count; > + > + if (!first) { > + first = desc; > + } else { > + prev->lli.llp = desc->txd.phys; > + dma_sync_single_for_device(chan->dev.parent, > + prev->txd.phys, sizeof(prev->lli), > + DMA_TO_DEVICE); > + list_add_tail(&desc->desc_node, > + &first->txd.tx_list); > + } > + prev = desc; > + } > + > + > + if (flags & DMA_PREP_INTERRUPT) > + /* Trigger interrupt after last block */ > + prev->lli.ctllo |= DWC_CTLL_INT_EN; > + > + prev->lli.llp = 0; > + dma_sync_single_for_device(chan->dev.parent, > + prev->txd.phys, sizeof(prev->lli), > + DMA_TO_DEVICE); > + > + first->txd.flags = flags; > + > + return &first->txd; > + > +err_desc_get: > + dwc_desc_put(dwc, first); > + return NULL; > +} > + > +static struct dma_async_tx_descriptor * > +dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl, > + unsigned int sg_len, enum dma_data_direction direction, > + unsigned long flags) > +{ > + struct dw_dma_chan *dwc = to_dw_dma_chan(chan); > + struct dw_dma_slave *dws = dwc->dws; > + struct dw_desc *prev; > + struct dw_desc *first; > + u32 ctllo; > + dma_addr_t reg; > + unsigned int reg_width; > + unsigned int mem_width; > + unsigned int i; > + struct scatterlist *sg; > + > + dev_vdbg(&chan->dev, "prep_dma_slave\n"); > + > + if (unlikely(!dws || !sg_len)) > + return NULL; > + > + reg_width = dws->slave.reg_width; > + prev = first = NULL; > + > + sg_len = dma_map_sg(chan->dev.parent, sgl, sg_len, direction); > + > + switch (direction) { > + case DMA_TO_DEVICE: > + ctllo = (DWC_DEFAULT_CTLLO > + | DWC_CTLL_DST_WIDTH(reg_width) > + | DWC_CTLL_DST_FIX > + | DWC_CTLL_SRC_INC > + | DWC_CTLL_FC_M2P); > + reg = dws->slave.tx_reg; > + for_each_sg(sgl, sg, sg_len, i) { > + struct dw_desc *desc; > + u32 len; > + u32 mem; > + > + desc = dwc_desc_get(dwc); > + if (!desc) { > + dev_err(&chan->dev, > + "not enough descriptors available\n"); > + goto err_desc_get; > + } > + > + mem = sg_phys(sg); > + len = sg_dma_len(sg); > + mem_width = 2; > + if (unlikely(mem & 3 || len & 3)) > + mem_width = 0; > + > + desc->lli.sar = mem; > + desc->lli.dar = reg; > + desc->lli.ctllo = ctllo | > DWC_CTLL_SRC_WIDTH(mem_width); + desc->lli.ctlhi = len > >> mem_width; + > + if (!first) { > + first = desc; > + } else { > + prev->lli.llp = desc->txd.phys; > + dma_sync_single_for_device(chan->dev.parent, > + prev->txd.phys, > + sizeof(prev->lli), > + DMA_TO_DEVICE); > + list_add_tail(&desc->desc_node, > + &first->txd.tx_list); > + } > + prev = desc; > + } > + break; > + case DMA_FROM_DEVICE: > + ctllo = (DWC_DEFAULT_CTLLO > + | DWC_CTLL_SRC_WIDTH(reg_width) > + | DWC_CTLL_DST_INC > + | DWC_CTLL_SRC_FIX > + | DWC_CTLL_FC_P2M); > + > + reg = dws->slave.rx_reg; > + for_each_sg(sgl, sg, sg_len, i) { > + struct dw_desc *desc; > + u32 len; > + u32 mem; > + > + desc = dwc_desc_get(dwc); > + if (!desc) { > + dev_err(&chan->dev, > + "not enough descriptors available\n"); > + goto err_desc_get; > + } > + > + mem = sg_phys(sg); > + len = sg_dma_len(sg); > + mem_width = 2; > + if (unlikely(mem & 3 || len & 3)) > + mem_width = 0; > + > + desc->lli.sar = reg; > + desc->lli.dar = mem; > + desc->lli.ctllo = ctllo | > DWC_CTLL_DST_WIDTH(mem_width); + desc->lli.ctlhi = len > >> reg_width; + > + if (!first) { > + first = desc; > + } else { > + prev->lli.llp = desc->txd.phys; > + dma_sync_single_for_device(chan->dev.parent, > + prev->txd.phys, > + 
sizeof(prev->lli), > + DMA_TO_DEVICE); > + list_add_tail(&desc->desc_node, > + &first->txd.tx_list); > + } > + prev = desc; > + } > + break; > + default: > + return NULL; > + } > + > + if (flags & DMA_PREP_INTERRUPT) > + /* Trigger interrupt after last block */ > + prev->lli.ctllo |= DWC_CTLL_INT_EN; > + > + prev->lli.llp = 0; > + dma_sync_single_for_device(chan->dev.parent, > + prev->txd.phys, sizeof(prev->lli), > + DMA_TO_DEVICE); > + > + return &first->txd; > + > +err_desc_get: > + dwc_desc_put(dwc, first); > + return NULL; > +} > + > +static void dwc_terminate_all(struct dma_chan *chan) > +{ > + struct dw_dma_chan *dwc = to_dw_dma_chan(chan); > + struct dw_dma *dw = to_dw_dma(chan->device); > + struct dw_desc *desc, *_desc; > + LIST_HEAD(list); > + > + /* > + * This is only called when something went wrong elsewhere, so > + * we don't really care about the data. Just disable the > + * channel. We still have to poll the channel enable bit due > + * to AHB/HSB limitations. > + */ > + spin_lock_bh(&dwc->lock); > + > + channel_clear_bit(dw, CH_EN, dwc->mask); > + > + while (dma_readl(dw, CH_EN) & dwc->mask) > + cpu_relax(); > + > + /* active_list entries will end up before queued entries */ > + list_splice_init(&dwc->queue, &list); > + list_splice_init(&dwc->active_list, &list); > + > + spin_unlock_bh(&dwc->lock); > + > + /* Flush all pending and queued descriptors */ > + list_for_each_entry_safe(desc, _desc, &list, desc_node) > + dwc_descriptor_complete(dwc, desc); > +} > + > +static enum dma_status > +dwc_is_tx_complete(struct dma_chan *chan, > + dma_cookie_t cookie, > + dma_cookie_t *done, dma_cookie_t *used) > +{ > + struct dw_dma_chan *dwc = to_dw_dma_chan(chan); > + dma_cookie_t last_used; > + dma_cookie_t last_complete; > + int ret; > + > + last_complete = dwc->completed; > + last_used = chan->cookie; > + > + ret = dma_async_is_complete(cookie, last_complete, last_used); > + if (ret != DMA_SUCCESS) { > + dwc_scan_descriptors(to_dw_dma(chan->device), dwc); > + > + last_complete = dwc->completed; > + last_used = chan->cookie; > + > + ret = dma_async_is_complete(cookie, last_complete, last_used); > + } > + > + if (done) > + *done = last_complete; > + if (used) > + *used = last_used; > + > + return ret; > +} > + > +static void dwc_issue_pending(struct dma_chan *chan) > +{ > + struct dw_dma_chan *dwc = to_dw_dma_chan(chan); > + > + spin_lock_bh(&dwc->lock); > + if (!list_empty(&dwc->queue)) > + dwc_scan_descriptors(to_dw_dma(chan->device), dwc); > + spin_unlock_bh(&dwc->lock); > +} > + > +static int dwc_alloc_chan_resources(struct dma_chan *chan, > + struct dma_client *client) > +{ > + struct dw_dma_chan *dwc = to_dw_dma_chan(chan); > + struct dw_dma *dw = to_dw_dma(chan->device); > + struct dw_desc *desc; > + struct dma_slave *slave; > + struct dw_dma_slave *dws; > + int i; > + u32 cfghi; > + u32 cfglo; > + > + dev_vdbg(&chan->dev, "alloc_chan_resources\n"); > + > + /* Channels doing slave DMA can only handle one client. */ > + if (dwc->dws || client->slave) { > + if (dma_chan_is_in_use(chan)) > + return -EBUSY; > + } > + > + /* ASSERT: channel is idle */ > + if (dma_readl(dw, CH_EN) & dwc->mask) { > + dev_dbg(&chan->dev, "DMA channel not idle?\n"); > + return -EIO; > + } > + > + dwc->completed = chan->cookie = 1; > + > + cfghi = DWC_CFGH_FIFO_MODE; > + cfglo = 0; > + > + slave = client->slave; > + if (slave) { > + /* > + * We need controller-specific data to set up slave > + * transfers. 
> + */ > + BUG_ON(!slave->dma_dev || slave->dma_dev != dw->dma.dev); > + > + dws = container_of(slave, struct dw_dma_slave, slave); > + > + dwc->dws = dws; > + cfghi = dws->cfg_hi; > + cfglo = dws->cfg_lo; > + } else { > + dwc->dws = NULL; > + } > + > + channel_writel(dwc, CFG_LO, cfglo); > + channel_writel(dwc, CFG_HI, cfghi); > + > + /* > + * NOTE: some controllers may have additional features that we > + * need to initialize here, like "scatter-gather" (which > + * doesn't mean what you think it means), and status writeback. > + */ > + > + spin_lock_bh(&dwc->lock); > + i = dwc->descs_allocated; > + while (dwc->descs_allocated < NR_DESCS_PER_CHANNEL) { > + spin_unlock_bh(&dwc->lock); > + > + desc = kzalloc(sizeof(struct dw_desc), GFP_KERNEL); > + if (!desc) { > + dev_info(&chan->dev, > + "only allocated %d descriptors\n", i); > + spin_lock_bh(&dwc->lock); > + break; > + } > + > + dma_async_tx_descriptor_init(&desc->txd, chan); > + desc->txd.tx_submit = dwc_tx_submit; > + desc->txd.flags = DMA_CTRL_ACK; > + INIT_LIST_HEAD(&desc->txd.tx_list); > + desc->txd.phys = dma_map_single(chan->dev.parent, &desc->lli, > + sizeof(desc->lli), DMA_TO_DEVICE); > + dwc_desc_put(dwc, desc); > + > + spin_lock_bh(&dwc->lock); > + i = ++dwc->descs_allocated; > + } > + > + /* Enable interrupts */ > + channel_set_bit(dw, MASK.XFER, dwc->mask); > + channel_set_bit(dw, MASK.BLOCK, dwc->mask); > + channel_set_bit(dw, MASK.ERROR, dwc->mask); > + > + spin_unlock_bh(&dwc->lock); > + > + dev_dbg(&chan->dev, > + "alloc_chan_resources allocated %d descriptors\n", i); > + > + return i; > +} > + > +static void dwc_free_chan_resources(struct dma_chan *chan) > +{ > + struct dw_dma_chan *dwc = to_dw_dma_chan(chan); > + struct dw_dma *dw = to_dw_dma(chan->device); > + struct dw_desc *desc, *_desc; > + LIST_HEAD(list); > + > + dev_dbg(&chan->dev, "free_chan_resources (descs allocated=%u)\n", > + dwc->descs_allocated); > + > + /* ASSERT: channel is idle */ > + BUG_ON(!list_empty(&dwc->active_list)); > + BUG_ON(!list_empty(&dwc->queue)); > + BUG_ON(dma_readl(to_dw_dma(chan->device), CH_EN) & dwc->mask); > + > + spin_lock_bh(&dwc->lock); > + list_splice_init(&dwc->free_list, &list); > + dwc->descs_allocated = 0; > + dwc->dws = NULL; > + > + /* Disable interrupts */ > + channel_clear_bit(dw, MASK.XFER, dwc->mask); > + channel_clear_bit(dw, MASK.BLOCK, dwc->mask); > + channel_clear_bit(dw, MASK.ERROR, dwc->mask); > + > + spin_unlock_bh(&dwc->lock); > + > + list_for_each_entry_safe(desc, _desc, &list, desc_node) { > + dev_vdbg(&chan->dev, " freeing descriptor %p\n", desc); > + dma_unmap_single(chan->dev.parent, desc->txd.phys, > + sizeof(desc->lli), DMA_TO_DEVICE); > + kfree(desc); > + } > + > + dev_vdbg(&chan->dev, "free_chan_resources done\n"); > +} > + > +/*--------------------------------------------------------------------- -*/ > + > +static void dw_dma_off(struct dw_dma *dw) > +{ > + dma_writel(dw, CFG, 0); > + > + channel_clear_bit(dw, MASK.XFER, dw->all_chan_mask); > + channel_clear_bit(dw, MASK.BLOCK, dw->all_chan_mask); > + channel_clear_bit(dw, MASK.SRC_TRAN, dw->all_chan_mask); > + channel_clear_bit(dw, MASK.DST_TRAN, dw->all_chan_mask); > + channel_clear_bit(dw, MASK.ERROR, dw->all_chan_mask); > + > + while (dma_readl(dw, CFG) & DW_CFG_DMA_EN) > + cpu_relax(); > +} > + > +static int __init dw_probe(struct platform_device *pdev) > +{ > + struct dw_dma_platform_data *pdata; > + struct resource *io; > + struct dw_dma *dw; > + size_t size; > + int irq; > + int err; > + int i; > + > + pdata = pdev->dev.platform_data; > + if 
(!pdata || pdata->nr_channels > DW_DMA_MAX_NR_CHANNELS) > + return -EINVAL; > + > + io = platform_get_resource(pdev, IORESOURCE_MEM, 0); > + if (!io) > + return -EINVAL; > + > + irq = platform_get_irq(pdev, 0); > + if (irq < 0) > + return irq; > + > + size = sizeof(struct dw_dma); > + size += pdata->nr_channels * sizeof(struct dw_dma_chan); > + dw = kzalloc(size, GFP_KERNEL); > + if (!dw) > + return -ENOMEM; > + > + if (!request_mem_region(io->start, DW_REGLEN, > pdev->dev.driver->name)) { + err = -EBUSY; > + goto err_kfree; > + } > + > + memset(dw, 0, sizeof *dw); > + > + dw->regs = ioremap(io->start, DW_REGLEN); > + if (!dw->regs) { > + err = -ENOMEM; > + goto err_release_r; > + } > + > + dw->clk = clk_get(&pdev->dev, "hclk"); > + if (IS_ERR(dw->clk)) { > + err = PTR_ERR(dw->clk); > + goto err_clk; > + } > + clk_enable(dw->clk); > + > + /* force dma off, just in case */ > + dw_dma_off(dw); > + > + err = request_irq(irq, dw_dma_interrupt, 0, "dw_dmac", dw); > + if (err) > + goto err_irq; > + > + platform_set_drvdata(pdev, dw); > + > + tasklet_init(&dw->tasklet, dw_dma_tasklet, (unsigned long)dw); > + > + dw->all_chan_mask = (1 << pdata->nr_channels) - 1; > + > + INIT_LIST_HEAD(&dw->dma.channels); > + for (i = 0; i < pdata->nr_channels; i++, dw->dma.chancnt++) { > + struct dw_dma_chan *dwc = &dw->chan[i]; > + > + dwc->chan.device = &dw->dma; > + dwc->chan.cookie = dwc->completed = 1; > + dwc->chan.chan_id = i; > + list_add_tail(&dwc->chan.device_node, &dw->dma.channels); > + > + dwc->ch_regs = &__dw_regs(dw)->CHAN[i]; > + spin_lock_init(&dwc->lock); > + dwc->mask = 1 << i; > + > + INIT_LIST_HEAD(&dwc->active_list); > + INIT_LIST_HEAD(&dwc->queue); > + INIT_LIST_HEAD(&dwc->free_list); > + > + channel_clear_bit(dw, CH_EN, dwc->mask); > + } > + > + /* Clear/disable all interrupts on all channels. */ > + dma_writel(dw, CLEAR.XFER, dw->all_chan_mask); > + dma_writel(dw, CLEAR.BLOCK, dw->all_chan_mask); > + dma_writel(dw, CLEAR.SRC_TRAN, dw->all_chan_mask); > + dma_writel(dw, CLEAR.DST_TRAN, dw->all_chan_mask); > + dma_writel(dw, CLEAR.ERROR, dw->all_chan_mask); > + > + channel_clear_bit(dw, MASK.XFER, dw->all_chan_mask); > + channel_clear_bit(dw, MASK.BLOCK, dw->all_chan_mask); > + channel_clear_bit(dw, MASK.SRC_TRAN, dw->all_chan_mask); > + channel_clear_bit(dw, MASK.DST_TRAN, dw->all_chan_mask); > + channel_clear_bit(dw, MASK.ERROR, dw->all_chan_mask); > + > + dma_cap_set(DMA_MEMCPY, dw->dma.cap_mask); > + dma_cap_set(DMA_SLAVE, dw->dma.cap_mask); > + dw->dma.dev = &pdev->dev; > + dw->dma.device_alloc_chan_resources = dwc_alloc_chan_resources; > + dw->dma.device_free_chan_resources = dwc_free_chan_resources; > + > + dw->dma.device_prep_dma_memcpy = dwc_prep_dma_memcpy; > + > + dw->dma.device_prep_slave_sg = dwc_prep_slave_sg; > + dw->dma.device_terminate_all = dwc_terminate_all; > + > + dw->dma.device_is_tx_complete = dwc_is_tx_complete; > + dw->dma.device_issue_pending = dwc_issue_pending; > + > + dma_writel(dw, CFG, DW_CFG_DMA_EN); > + > + printk(KERN_INFO "%s: DesignWare DMA Controller, %d channels\n", > + pdev->dev.bus_id, dw->dma.chancnt); > + > + dma_async_device_register(&dw->dma); > + > + return 0; > + > +err_irq: > + clk_disable(dw->clk); > + clk_put(dw->clk); > +err_clk: > + iounmap(dw->regs); > + dw->regs = NULL; > +err_release_r: > + release_resource(io); > +err_kfree: > + kfree(dw); > + return err; > +} This driver does not perform any self-test during initialization. What about adding some initial HW checking? 
> + > +static int __exit dw_remove(struct platform_device *pdev) > +{ > + struct dw_dma *dw = platform_get_drvdata(pdev); > + struct dw_dma_chan *dwc, *_dwc; > + struct resource *io; > + > + dw_dma_off(dw); > + dma_async_device_unregister(&dw->dma); > + > + free_irq(platform_get_irq(pdev, 0), dw); > + tasklet_kill(&dw->tasklet); > + > + list_for_each_entry_safe(dwc, _dwc, &dw->dma.channels, > + chan.device_node) { > + list_del(&dwc->chan.device_node); > + channel_clear_bit(dw, CH_EN, dwc->mask); > + } > + > + clk_disable(dw->clk); > + clk_put(dw->clk); > + > + iounmap(dw->regs); > + dw->regs = NULL; > + > + io = platform_get_resource(pdev, IORESOURCE_MEM, 0); > + release_mem_region(io->start, DW_REGLEN); > + > + kfree(dw); > + > + return 0; > +} > + > +static void dw_shutdown(struct platform_device *pdev) > +{ > + struct dw_dma *dw = platform_get_drvdata(pdev); > + > + dw_dma_off(platform_get_drvdata(pdev)); > + clk_disable(dw->clk); > +} > + > +static int dw_suspend_late(struct platform_device *pdev, pm_message_t mesg) > +{ > + struct dw_dma *dw = platform_get_drvdata(pdev); > + > + dw_dma_off(platform_get_drvdata(pdev)); > + clk_disable(dw->clk); > + return 0; > +} > + > +static int dw_resume_early(struct platform_device *pdev) > +{ > + struct dw_dma *dw = platform_get_drvdata(pdev); > + > + clk_enable(dw->clk); > + dma_writel(dw, CFG, DW_CFG_DMA_EN); > + return 0; > + > +} > + > +static struct platform_driver dw_driver = { > + .remove = __exit_p(dw_remove), > + .shutdown = dw_shutdown, > + .suspend_late = dw_suspend_late, > + .resume_early = dw_resume_early, > + .driver = { > + .name = "dw_dmac", > + }, > +}; > + > +static int __init dw_init(void) > +{ > + return platform_driver_probe(&dw_driver, dw_probe); > +} > +module_init(dw_init); > + > +static void __exit dw_exit(void) > +{ > + platform_driver_unregister(&dw_driver); > +} > +module_exit(dw_exit); > + > +MODULE_LICENSE("GPL v2"); > +MODULE_DESCRIPTION("Synopsys DesignWare DMA Controller driver"); > +MODULE_AUTHOR("Haavard Skinnemoen <haavard.skinnemoen@atmel.com>"); > diff --git a/drivers/dma/dw_dmac_regs.h b/drivers/dma/dw_dmac_regs.h > new file mode 100644 > index 0000000..119e65b > --- /dev/null > +++ b/drivers/dma/dw_dmac_regs.h > @@ -0,0 +1,224 @@ > +/* > + * Driver for the Synopsys DesignWare AHB DMA Controller > + * > + * Copyright (C) 2005-2007 Atmel Corporation > + * > + * This program is free software; you can redistribute it and/or modify > + * it under the terms of the GNU General Public License version 2 as > + * published by the Free Software Foundation. > + */ > + > +#include <linux/dw_dmac.h> > + > +#define DW_DMA_MAX_NR_CHANNELS 8 > + > +/* > + * Redefine this macro to handle differences between 32- and 64-bit > + * addressing, big vs. little endian, etc. > + */ > +#define DW_REG(name) u32 name; u32 __pad_##name > + > +/* Hardware register definitions. 
*/ > +struct dw_dma_chan_regs { > + DW_REG(SAR); /* Source Address Register */ > + DW_REG(DAR); /* Destination Address Register */ > + DW_REG(LLP); /* Linked List Pointer */ > + u32 CTL_LO; /* Control Register Low */ > + u32 CTL_HI; /* Control Register High */ > + DW_REG(SSTAT); > + DW_REG(DSTAT); > + DW_REG(SSTATAR); > + DW_REG(DSTATAR); > + u32 CFG_LO; /* Configuration Register Low */ > + u32 CFG_HI; /* Configuration Register High */ > + DW_REG(SGR); > + DW_REG(DSR); > +}; > + > +struct dw_dma_irq_regs { > + DW_REG(XFER); > + DW_REG(BLOCK); > + DW_REG(SRC_TRAN); > + DW_REG(DST_TRAN); > + DW_REG(ERROR); > +}; > + > +struct dw_dma_regs { > + /* per-channel registers */ > + struct dw_dma_chan_regs CHAN[DW_DMA_MAX_NR_CHANNELS]; > + > + /* irq handling */ > + struct dw_dma_irq_regs RAW; /* r */ > + struct dw_dma_irq_regs STATUS; /* r (raw & mask) */ > + struct dw_dma_irq_regs MASK; /* rw (set = irq enabled) */ > + struct dw_dma_irq_regs CLEAR; /* w (ack, affects "raw") */ > + > + DW_REG(STATUS_INT); /* r */ > + > + /* software handshaking */ > + DW_REG(REQ_SRC); > + DW_REG(REQ_DST); > + DW_REG(SGL_REQ_SRC); > + DW_REG(SGL_REQ_DST); > + DW_REG(LAST_SRC); > + DW_REG(LAST_DST); > + > + /* miscellaneous */ > + DW_REG(CFG); > + DW_REG(CH_EN); > + DW_REG(ID); > + DW_REG(TEST); > + > + /* optional encoded params, 0x3c8..0x3 */ > +}; > + > +/* Bitfields in CTL_LO */ > +#define DWC_CTLL_INT_EN (1 << 0) /* irqs enabled? */ > +#define DWC_CTLL_DST_WIDTH(n) ((n)<<1) /* bytes per element */ > +#define DWC_CTLL_SRC_WIDTH(n) ((n)<<4) > +#define DWC_CTLL_DST_INC (0<<7) /* DAR update/not */ > +#define DWC_CTLL_DST_DEC (1<<7) > +#define DWC_CTLL_DST_FIX (2<<7) > +#define DWC_CTLL_SRC_INC (0<<7) /* SAR update/not */ > +#define DWC_CTLL_SRC_DEC (1<<9) > +#define DWC_CTLL_SRC_FIX (2<<9) > +#define DWC_CTLL_DST_MSIZE(n) ((n)<<11) /* burst, #elements */ > +#define DWC_CTLL_SRC_MSIZE(n) ((n)<<14) > +#define DWC_CTLL_S_GATH_EN (1 << 17) /* src gather, !FIX */ > +#define DWC_CTLL_D_SCAT_EN (1 << 18) /* dst scatter, !FIX */ > +#define DWC_CTLL_FC_M2M (0 << 20) /* mem-to-mem */ > +#define DWC_CTLL_FC_M2P (1 << 20) /* mem-to-periph */ > +#define DWC_CTLL_FC_P2M (2 << 20) /* periph-to-mem */ > +#define DWC_CTLL_FC_P2P (3 << 20) /* periph-to-periph */ > +/* plus 4 transfer types for peripheral-as-flow-controller */ > +#define DWC_CTLL_DMS(n) ((n)<<23) /* dst master select > */ +#define DWC_CTLL_SMS(n) ((n)<<25) /* src master > select */ +#define DWC_CTLL_LLP_D_EN (1 << 27) /* dest block chain > */ +#define DWC_CTLL_LLP_S_EN (1 << 28) /* src block chain */ > + > +/* Bitfields in CTL_HI */ > +#define DWC_CTLH_DONE 0x00001000 > +#define DWC_CTLH_BLOCK_TS_MASK 0x00000fff > + > +/* Bitfields in CFG_LO. Platform-configurable bits are in <linux/dw_dmac.h> > */ +#define DWC_CFGL_CH_SUSP (1 << 8) /* pause xfer */ > +#define DWC_CFGL_FIFO_EMPTY (1 << 9) /* pause xfer */ > +#define DWC_CFGL_HS_DST (1 << 10) /* handshake w/dst */ > +#define DWC_CFGL_HS_SRC (1 << 11) /* handshake w/src */ > +#define DWC_CFGL_MAX_BURST(x) ((x) << 20) > +#define DWC_CFGL_RELOAD_SAR (1 << 30) > +#define DWC_CFGL_RELOAD_DAR (1 << 31) > + > +/* Bitfields in CFG_HI. 
Platform-configurable bits are in <linux/dw_dmac.h> > */ +#define DWC_CFGH_DS_UPD_EN (1 << 5) > +#define DWC_CFGH_SS_UPD_EN (1 << 6) > + > +/* Bitfields in SGR */ > +#define DWC_SGR_SGI(x) ((x) << 0) > +#define DWC_SGR_SGC(x) ((x) << 20) > + > +/* Bitfields in DSR */ > +#define DWC_DSR_DSI(x) ((x) << 0) > +#define DWC_DSR_DSC(x) ((x) << 20) > + > +/* Bitfields in CFG */ > +#define DW_CFG_DMA_EN (1 << 0) > + > +#define DW_REGLEN 0x400 > + > +struct dw_dma_chan { > + struct dma_chan chan; > + void __iomem *ch_regs; > + u8 mask; > + > + spinlock_t lock; > + > + /* these other elements are all protected by lock */ > + dma_cookie_t completed; > + struct list_head active_list; > + struct list_head queue; > + struct list_head free_list; > + > + struct dw_dma_slave *dws; > + > + unsigned int descs_allocated; > +}; > + > +static inline struct dw_dma_chan_regs __iomem * > +__dwc_regs(struct dw_dma_chan *dwc) > +{ > + return dwc->ch_regs; > +} > + > +#define channel_readl(dwc, name) \ > + __raw_readl(&(__dwc_regs(dwc)->name)) > +#define channel_writel(dwc, name, val) \ > + __raw_writel((val), &(__dwc_regs(dwc)->name)) > + > +static inline struct dw_dma_chan *to_dw_dma_chan(struct dma_chan *chan) > +{ > + return container_of(chan, struct dw_dma_chan, chan); > +} > + > + > +struct dw_dma { > + struct dma_device dma; > + void __iomem *regs; > + struct tasklet_struct tasklet; > + struct clk *clk; > + > + u8 all_chan_mask; > + > + struct dw_dma_chan chan[0]; > +}; > + > +static inline struct dw_dma_regs __iomem *__dw_regs(struct dw_dma *dw) > +{ > + return dw->regs; > +} > + > +#define dma_readl(dw, name) \ > + __raw_readl(&(__dw_regs(dw)->name)) > +#define dma_writel(dw, name, val) \ > + __raw_writel((val), &(__dw_regs(dw)->name)) > + > +#define channel_set_bit(dw, reg, mask) \ > + dma_writel(dw, reg, ((mask) << 8) | (mask)) > +#define channel_clear_bit(dw, reg, mask) \ > + dma_writel(dw, reg, ((mask) << 8) | 0) > + > +static inline struct dw_dma *to_dw_dma(struct dma_device *ddev) > +{ > + return container_of(ddev, struct dw_dma, dma); > +} > + > +/* LLI == Linked List Item; a.k.a. DMA block descriptor */ > +struct dw_lli { > + /* values that are not changed by hardware */ > + dma_addr_t sar; > + dma_addr_t dar; > + dma_addr_t llp; /* chain to next lli */ > + u32 ctllo; > + /* values that may get written back: */ > + u32 ctlhi; > + /* sstat and dstat can snapshot peripheral register state. > + * silicon config may discard either or both... 
> + */ > + u32 sstat; > + u32 dstat; > +}; > + > +struct dw_desc { > + /* FIRST values the hardware uses */ > + struct dw_lli lli; > + > + /* THEN values for driver housekeeping */ > + struct list_head desc_node; > + struct dma_async_tx_descriptor txd; > +}; > + > +static inline struct dw_desc * > +txd_to_dw_desc(struct dma_async_tx_descriptor *txd) > +{ > + return container_of(txd, struct dw_desc, txd); > +} > diff --git a/include/asm-avr32/arch-at32ap/at32ap700x.h > b/include/asm-avr32/arch-at32ap/at32ap700x.h > index 31e48b0..d18a305 100644 > --- a/include/asm-avr32/arch-at32ap/at32ap700x.h > +++ b/include/asm-avr32/arch-at32ap/at32ap700x.h > @@ -30,4 +30,20 @@ > #define GPIO_PIN_PD(N) (GPIO_PIOD_BASE + (N)) > #define GPIO_PIN_PE(N) (GPIO_PIOE_BASE + (N)) > > + > +/* > + * DMAC peripheral hardware handshaking interfaces, used with dw_dmac > + */ > +#define DMAC_MCI_RX 0 > +#define DMAC_MCI_TX 1 > +#define DMAC_DAC_TX 2 > +#define DMAC_AC97_A_RX 3 > +#define DMAC_AC97_A_TX 4 > +#define DMAC_AC97_B_RX 5 > +#define DMAC_AC97_B_TX 6 > +#define DMAC_DMAREQ_0 7 > +#define DMAC_DMAREQ_1 8 > +#define DMAC_DMAREQ_2 9 > +#define DMAC_DMAREQ_3 10 > + > #endif /* __ASM_ARCH_AT32AP700X_H__ */ > diff --git a/include/linux/dw_dmac.h b/include/linux/dw_dmac.h > new file mode 100644 > index 0000000..04d217b > --- /dev/null > +++ b/include/linux/dw_dmac.h > @@ -0,0 +1,62 @@ > +/* > + * Driver for the Synopsys DesignWare DMA Controller (aka DMACA on > + * AVR32 systems.) > + * > + * Copyright (C) 2007 Atmel Corporation > + * > + * This program is free software; you can redistribute it and/or modify > + * it under the terms of the GNU General Public License version 2 as > + * published by the Free Software Foundation. > + */ > +#ifndef DW_DMAC_H > +#define DW_DMAC_H > + > +#include <linux/dmaengine.h> > + > +/** > + * struct dw_dma_platform_data - Controller configuration parameters > + * @nr_channels: Number of channels supported by hardware (max 8) > + */ > +struct dw_dma_platform_data { > + unsigned int nr_channels; > +}; > + > +/** > + * struct dw_dma_slave - Controller-specific information about a slave > + * @slave: Generic information about the slave > + * @ctl_lo: Platform-specific initializer for the CTL_LO register > + * @cfg_hi: Platform-specific initializer for the CFG_HI register > + * @cfg_lo: Platform-specific initializer for the CFG_LO register > + */ > +struct dw_dma_slave { > + struct dma_slave slave; > + u32 cfg_hi; > + u32 cfg_lo; > +}; > + > +/* Platform-configurable bits in CFG_HI */ > +#define DWC_CFGH_FCMODE (1 << 0) > +#define DWC_CFGH_FIFO_MODE (1 << 1) > +#define DWC_CFGH_PROTCTL(x) ((x) << 2) > +#define DWC_CFGH_SRC_PER(x) ((x) << 7) > +#define DWC_CFGH_DST_PER(x) ((x) << 11) > + > +/* Platform-configurable bits in CFG_LO */ > +#define DWC_CFGL_PRIO(x) ((x) << 5) /* priority */ > +#define DWC_CFGL_LOCK_CH_XFER (0 << 12) /* scope of LOCK_CH */ > +#define DWC_CFGL_LOCK_CH_BLOCK (1 << 12) > +#define DWC_CFGL_LOCK_CH_XACT (2 << 12) > +#define DWC_CFGL_LOCK_BUS_XFER (0 << 14) /* scope of LOCK_BUS */ > +#define DWC_CFGL_LOCK_BUS_BLOCK (1 << 14) > +#define DWC_CFGL_LOCK_BUS_XACT (2 << 14) > +#define DWC_CFGL_LOCK_CH (1 << 15) /* channel lockout */ > +#define DWC_CFGL_LOCK_BUS (1 << 16) /* busmaster lockout */ > +#define DWC_CFGL_HS_DST_POL (1 << 18) /* dst handshake active low */ > +#define DWC_CFGL_HS_SRC_POL (1 << 19) /* src handshake active low */ > + > +static inline struct dw_dma_slave *to_dw_dma_slave(struct dma_slave *slave) > +{ > + return container_of(slave, struct 
dw_dma_slave, slave); > +} > + > +#endif /* DW_DMAC_H */ > -- > 1.5.5.4 Regards, Maciej ^ permalink raw reply [flat|nested] 3+ messages in thread
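To make the slave-configuration side of this patch more concrete, here is a minimal board-file sketch. It is only an illustration pieced together from the structures quoted above: the dma_slave members (dma_dev, tx_reg, rx_reg, reg_width) are named after the way dwc_alloc_chan_resources() and dwc_prep_slave_sg() use them (their definition lives in patch 3/6, not quoted here), and the peripheral base address, register offsets and include path are hypothetical examples rather than values from the real atmel-mci patch.

#include <linux/platform_device.h>
#include <linux/dw_dmac.h>
#include <asm/arch/at32ap700x.h>	/* assumed path for the DMAC_* handshake IDs */

#define EXAMPLE_MCI_BASE	0xfff02400	/* hypothetical peripheral base address */

extern struct platform_device dw_dmac0_device;	/* registered in at32ap700x.c above */

static struct dw_dma_slave example_mci_dma_slave = {
	.slave = {
		.dma_dev	= &dw_dmac0_device.dev,	/* must match dw->dma.dev */
		.tx_reg		= EXAMPLE_MCI_BASE + 0x34,	/* peripheral transmit data register */
		.rx_reg		= EXAMPLE_MCI_BASE + 0x30,	/* peripheral receive data register */
		.reg_width	= 2,			/* 32-bit register, same width encoding the driver uses */
	},
	/* Route the hardware handshake lines to the MMC interface */
	.cfg_hi		= DWC_CFGH_SRC_PER(DMAC_MCI_RX) | DWC_CFGH_DST_PER(DMAC_MCI_TX),
	.cfg_lo		= DWC_CFGL_PRIO(0),
};

A client that hands this structure to the framework (through the dma_slave pointer introduced in patches 1/6 and 3/6) ends up with a channel whose CFG_LO/CFG_HI registers are programmed from cfg_lo/cfg_hi in dwc_alloc_chan_resources().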
* Re: [PATCH v4 5/6] dmaengine: Driver for the Synopsys DesignWare DMA controller 2008-07-04 15:33 ` [PATCH v4 5/6] dmaengine: Driver for the Synopsys DesignWare DMA controller Sosnowski, Maciej @ 2008-07-04 16:10 ` Haavard Skinnemoen 0 siblings, 0 replies; 3+ messages in thread From: Haavard Skinnemoen @ 2008-07-04 16:10 UTC (permalink / raw) To: Sosnowski, Maciej Cc: Williams, Dan J, drzeus-list, lkml, linux-embedded, kernel, Nelson, Shannon, david-b On Fri, 4 Jul 2008 16:33:53 +0100 "Sosnowski, Maciej" <maciej.sosnowski@intel.com> wrote: > Couple of questions and comments from my side below. > Apart from that the code looks fine to me. > > Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com> Thanks a lot for reviewing! > > +/* Called with dwc->lock held and bh disabled */ > > +static void dwc_dostart(struct dw_dma_chan *dwc, struct dw_desc > *first) > > +{ > > + struct dw_dma *dw = to_dw_dma(dwc->chan.device); > > + > > + /* ASSERT: channel is idle */ > > + if (dma_readl(dw, CH_EN) & dwc->mask) { > > + dev_err(&dwc->chan.dev, > > + "BUG: Attempted to start non-idle channel\n"); > > + dev_err(&dwc->chan.dev, > > + " SAR: 0x%x DAR: 0x%x LLP: 0x%x CTL: > 0x%x:%08x\n", > > + channel_readl(dwc, SAR), > > + channel_readl(dwc, DAR), > > + channel_readl(dwc, LLP), > > + channel_readl(dwc, CTL_HI), > > + channel_readl(dwc, CTL_LO)); > > + > > + /* The tasklet will hopefully advance the queue... */ > > + return; > > Should not at this point an error status be returned > so that it can be handled accordingly by dwc_dostart() caller? There's not a whole lot of meaningful things to do for the caller. It should never happen in the first place, but if the channel _is_ active at this point, we will eventually get an xfer complete interrupt when the currently pending transfers are done. The descriptors have already been added to the list, so the driver should recover from this kind of bug automatically. I've never actually triggered this code, so I can't really say for certain that it works, but at least in theory it makes much more sense to fix things up when the channel eventually becomes idle. > > + ctllo = DWC_DEFAULT_CTLLO > > + | DWC_CTLL_DST_WIDTH(dst_width) > > + | DWC_CTLL_SRC_WIDTH(src_width) > > + | DWC_CTLL_DST_INC > > + | DWC_CTLL_SRC_INC > > + | DWC_CTLL_FC_M2M; > > + prev = first = NULL; > > + > > + for (offset = 0; offset < len; offset += xfer_count << > src_width) { > > + xfer_count = min_t(size_t, (len - offset) >> > src_width, > > + DWC_MAX_COUNT); > > Here it looks like the maximum xfer_count value can change - it depends > on src_width, > so it may be different for different transactions. > Is that ok? Yes, the maximum transfer count is defined as the maximum number of source transactions on the bus. So if the controller is set up to do 32 bits at a time on the source side, the maximum transfer _length_ is four times the maximum transfer _count_. The value written to the descriptor is also a transaction count, not a byte count. > This driver does not perform any self-test during initialization. > What about adding some initial HW checking? I'm not sure if it makes a lot of sense -- this device is typically integrated on the same silicon as the CPU, so if there are any issues with the DMA controller, they should be caught during production testing. I'm using the dmatest module for validating the driver, so I feel the self-test stuff becomes somewhat redundant. Haavard ^ permalink raw reply [flat|nested] 3+ messages in thread
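The count-versus-length relationship is easy to see with a few lines of plain C that mirror the descriptor loop in dwc_prep_dma_memcpy(). This is a standalone userspace sketch of the arithmetic only, with DWC_MAX_COUNT copied from the driver; it is not driver code.

#include <stdio.h>
#include <stddef.h>

#define DWC_MAX_COUNT 2048U	/* max transfer count per descriptor, from dw_dmac.c */

int main(void)
{
	size_t len = 65536;		/* total request length in bytes */
	unsigned int src_width = 2;	/* 2 == 32-bit items, as chosen for aligned memcpy */
	size_t offset, xfer_count = 0;
	unsigned int descs = 0;

	for (offset = 0; offset < len; offset += xfer_count << src_width) {
		/* the count is in source-width units... */
		xfer_count = (len - offset) >> src_width;
		if (xfer_count > DWC_MAX_COUNT)
			xfer_count = DWC_MAX_COUNT;
		descs++;
	}

	/* ...so each descriptor moves at most DWC_MAX_COUNT << src_width bytes */
	printf("%u descriptors of up to %u bytes each\n",
	       descs, DWC_MAX_COUNT << src_width);
	return 0;
}

With src_width = 2 this prints "8 descriptors of up to 8192 bytes each", matching the "8192 bytes per descriptor" comment in the driver; with src_width = 0 (byte-wide source transactions) each descriptor would be capped at 2048 bytes instead, which is the varying maximum Maciej noticed.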
* [PATCH v4 0/6] dmaengine/mmc: DMA slave interface and two new drivers @ 2008-06-26 13:23 Haavard Skinnemoen 2008-06-26 13:23 ` [PATCH v4 1/6] dmaengine: Add dma_client parameter to device_alloc_chan_resources Haavard Skinnemoen 0 siblings, 1 reply; 3+ messages in thread From: Haavard Skinnemoen @ 2008-06-26 13:23 UTC (permalink / raw) To: Dan Williams, Pierre Ossman Cc: linux-kernel, linux-embedded, kernel, shannon.nelson, David Brownell, Haavard Skinnemoen First of all, I'm sorry it went so much time between v3 and v4 of this patchset. I was hoping to finish this stuff up before all kinds of other tasks started demanding my attention, but I didn't, so I had to put it on hold for a while. Let's try again... This patchset extends the DMA engine API to allow drivers to offer DMA to and from I/O registers with hardware handshaking, aka slave DMA. Such functionality is very common in DMA controllers integrated on SoC devices, and it's typically used to do DMA transfers to/from other on-SoC peripherals, but it can often do DMA transfers to/from externally connected devices as well (e.g. IDE hard drives). The main differences from v3 of this patchset are: * A DMA descriptor can hold a whole scatterlist. This means that clients using slave DMA can submit large requests in a single call to the driver, and they only need to keep track of a single descriptor. * The dma_slave_descriptor struct is gone since clients no longer need to keep track of multiple descriptors. * The drivers perform better and are more stable. The dw_dmac driver depends on this patch: http://lkml.org/lkml/2008/6/25/148 and the atmel-mci driver depends on this series: http://lkml.org/lkml/2008/6/26/158 as well as all preceding patches in this series, of course. Comments are welcome, as usual! Shortlog and diffstat follow. Haavard Skinnemoen (6): dmaengine: Add dma_client parameter to device_alloc_chan_resources dmaengine: Add dma_chan_is_in_use() function dmaengine: Add slave DMA interface dmaengine: Make DMA Engine menu visible for AVR32 users dmaengine: Driver for the Synopsys DesignWare DMA controller Atmel MCI: Driver for Atmel on-chip MMC controllers arch/avr32/boards/atngw100/setup.c | 7 + arch/avr32/boards/atstk1000/atstk1002.c | 3 + arch/avr32/mach-at32ap/at32ap700x.c | 73 ++- drivers/dma/Kconfig | 11 +- drivers/dma/Makefile | 1 + drivers/dma/dmaengine.c | 31 +- drivers/dma/dw_dmac.c | 1105 +++++++++++++++++++++ drivers/dma/dw_dmac_regs.h | 224 +++++ drivers/dma/ioat_dma.c | 5 +- drivers/dma/iop-adma.c | 7 +- drivers/mmc/host/Kconfig | 10 + drivers/mmc/host/Makefile | 1 + drivers/mmc/host/atmel-mci-regs.h | 194 ++++ drivers/mmc/host/atmel-mci.c | 1428 ++++++++++++++++++++++++++++ include/asm-avr32/arch-at32ap/at32ap700x.h | 16 + include/asm-avr32/arch-at32ap/board.h | 6 +- include/asm-avr32/atmel-mci.h | 12 + include/linux/dmaengine.h | 73 ++- include/linux/dw_dmac.h | 62 ++ 19 files changed, 3229 insertions(+), 40 deletions(-) create mode 100644 drivers/dma/dw_dmac.c create mode 100644 drivers/dma/dw_dmac_regs.h create mode 100644 drivers/mmc/host/atmel-mci-regs.h create mode 100644 drivers/mmc/host/atmel-mci.c create mode 100644 include/asm-avr32/atmel-mci.h create mode 100644 include/linux/dw_dmac.h Haavard ^ permalink raw reply [flat|nested] 3+ messages in thread
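To sketch what "submit large requests in a single call" and "keep track of a single descriptor" look like from the client side, here is a hypothetical fragment. It only uses operations that the dw_dmac patch in this series actually wires up (device_prep_slave_sg, the descriptor's callback and tx_submit hooks, device_issue_pending) and assumes the channel was already obtained through a dma_client carrying the controller-specific slave data (patches 1/6 and 3/6); error handling and the surrounding peripheral driver are omitted.

#include <linux/dmaengine.h>
#include <linux/scatterlist.h>

/* Queue one transmit request covering a whole scatterlist on a slave channel. */
static dma_cookie_t example_submit_tx(struct dma_chan *chan,
				      struct scatterlist *sg, unsigned int nents,
				      dma_async_tx_callback done, void *done_arg)
{
	struct dma_async_tx_descriptor *tx;
	dma_cookie_t cookie;

	/* One call, one descriptor for the entire list */
	tx = chan->device->device_prep_slave_sg(chan, sg, nents,
						DMA_TO_DEVICE,
						DMA_PREP_INTERRUPT);
	if (!tx)
		return -EBUSY;

	tx->callback = done;		/* completion callback */
	tx->callback_param = done_arg;

	cookie = tx->tx_submit(tx);	/* usable later with device_is_tx_complete() */

	/* Ask the driver to start anything still sitting on its queue */
	chan->device->device_issue_pending(chan);

	return cookie;
}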
* [PATCH v4 1/6] dmaengine: Add dma_client parameter to device_alloc_chan_resources 2008-06-26 13:23 [PATCH v4 0/6] dmaengine/mmc: DMA slave interface and two new drivers Haavard Skinnemoen @ 2008-06-26 13:23 ` Haavard Skinnemoen 2008-06-26 13:23 ` [PATCH v4 2/6] dmaengine: Add dma_chan_is_in_use() function Haavard Skinnemoen 0 siblings, 1 reply; 3+ messages in thread From: Haavard Skinnemoen @ 2008-06-26 13:23 UTC (permalink / raw) To: Dan Williams, Pierre Ossman Cc: linux-kernel, linux-embedded, kernel, shannon.nelson, David Brownell, Haavard Skinnemoen A DMA controller capable of doing slave transfers may need to know a few things about the slave when preparing the channel. We don't want to add this information to struct dma_channel since the channel hasn't yet been bound to a client at this point. Instead, pass a reference to the client requesting the channel to the driver's device_alloc_chan_resources hook so that it can pick the necessary information from the dma_client struct by itself. Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com> --- drivers/dma/dmaengine.c | 3 ++- drivers/dma/ioat_dma.c | 5 +++-- drivers/dma/iop-adma.c | 7 ++++--- include/linux/dmaengine.h | 3 ++- 4 files changed, 11 insertions(+), 7 deletions(-) diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c index 99c22b4..a57c337 100644 --- a/drivers/dma/dmaengine.c +++ b/drivers/dma/dmaengine.c @@ -174,7 +174,8 @@ static void dma_client_chan_alloc(struct dma_client *client) if (!dma_chan_satisfies_mask(chan, client->cap_mask)) continue; - desc = chan->device->device_alloc_chan_resources(chan); + desc = chan->device->device_alloc_chan_resources( + chan, client); if (desc >= 0) { ack = client->event_callback(client, chan, diff --git a/drivers/dma/ioat_dma.c b/drivers/dma/ioat_dma.c index 318e8a2..90e5b0a 100644 --- a/drivers/dma/ioat_dma.c +++ b/drivers/dma/ioat_dma.c @@ -452,7 +452,8 @@ static void ioat2_dma_massage_chan_desc(struct ioat_dma_chan *ioat_chan) * ioat_dma_alloc_chan_resources - returns the number of allocated descriptors * @chan: the channel to be filled out */ -static int ioat_dma_alloc_chan_resources(struct dma_chan *chan) +static int ioat_dma_alloc_chan_resources(struct dma_chan *chan, + struct dma_client *client) { struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan); struct ioat_desc_sw *desc; @@ -1049,7 +1050,7 @@ static int ioat_dma_self_test(struct ioatdma_device *device) dma_chan = container_of(device->common.channels.next, struct dma_chan, device_node); - if (device->common.device_alloc_chan_resources(dma_chan) < 1) { + if (device->common.device_alloc_chan_resources(dma_chan, NULL) < 1) { dev_err(&device->pdev->dev, "selftest cannot allocate chan resource\n"); err = -ENODEV; diff --git a/drivers/dma/iop-adma.c b/drivers/dma/iop-adma.c index 0ec0f43..2664ea5 100644 --- a/drivers/dma/iop-adma.c +++ b/drivers/dma/iop-adma.c @@ -444,7 +444,8 @@ static void iop_chan_start_null_memcpy(struct iop_adma_chan *iop_chan); static void iop_chan_start_null_xor(struct iop_adma_chan *iop_chan); /* returns the number of allocated descriptors */ -static int iop_adma_alloc_chan_resources(struct dma_chan *chan) +static int iop_adma_alloc_chan_resources(struct dma_chan *chan, + struct dma_client *client) { char *hw_desc; int idx; @@ -838,7 +839,7 @@ static int __devinit iop_adma_memcpy_self_test(struct iop_adma_device *device) dma_chan = container_of(device->common.channels.next, struct dma_chan, device_node); - if (iop_adma_alloc_chan_resources(dma_chan) < 1) { + if 
(iop_adma_alloc_chan_resources(dma_chan, NULL) < 1) { err = -ENODEV; goto out; } @@ -936,7 +937,7 @@ iop_adma_xor_zero_sum_self_test(struct iop_adma_device *device) dma_chan = container_of(device->common.channels.next, struct dma_chan, device_node); - if (iop_adma_alloc_chan_resources(dma_chan) < 1) { + if (iop_adma_alloc_chan_resources(dma_chan, NULL) < 1) { err = -ENODEV; goto out; } diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h index d08a5c5..cffb95f 100644 --- a/include/linux/dmaengine.h +++ b/include/linux/dmaengine.h @@ -279,7 +279,8 @@ struct dma_device { int dev_id; struct device *dev; - int (*device_alloc_chan_resources)(struct dma_chan *chan); + int (*device_alloc_chan_resources)(struct dma_chan *chan, + struct dma_client *client); void (*device_free_chan_resources)(struct dma_chan *chan); struct dma_async_tx_descriptor *(*device_prep_dma_memcpy)( -- 1.5.5.4 ^ permalink raw reply related [flat|nested] 3+ messages in thread
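As a rough sketch of what this parameter enables (not part of the patch itself), a controller driver's alloc hook can now look at the requesting client. All mydmac_* names below are hypothetical, and the client->slave field only exists once patch 3/6 is applied.

static int mydmac_alloc_chan_resources(struct dma_chan *chan,
				       struct dma_client *client)
{
	struct mydmac_chan *mc = to_mydmac_chan(chan);	/* hypothetical */

	/*
	 * client may be NULL (the ioatdma self-test above passes NULL),
	 * so check before dereferencing it.
	 */
	if (client && client->slave)
		mydmac_setup_handshake(mc, client->slave);	/* hypothetical */

	return mydmac_alloc_descriptors(mc);			/* hypothetical */
}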
* [PATCH v4 2/6] dmaengine: Add dma_chan_is_in_use() function 2008-06-26 13:23 ` [PATCH v4 1/6] dmaengine: Add dma_client parameter to device_alloc_chan_resources Haavard Skinnemoen @ 2008-06-26 13:23 ` Haavard Skinnemoen 2008-06-26 13:23 ` [PATCH v4 3/6] dmaengine: Add slave DMA interface Haavard Skinnemoen 0 siblings, 1 reply; 3+ messages in thread From: Haavard Skinnemoen @ 2008-06-26 13:23 UTC (permalink / raw) To: Dan Williams, Pierre Ossman Cc: linux-kernel, linux-embedded, kernel, shannon.nelson, David Brownell, Haavard Skinnemoen This moves the code checking if a DMA channel is in use from show_in_use() into an inline helper function, dma_is_in_use(). DMA controllers can use this in order to give clients exclusive access to channels (usually necessary when setting up slave DMA.) I have to admit that I don't really understand the channel refcounting logic at all... dma_chan_get() simply increments a per-cpu value. How can we be sure that whatever CPU calls dma_chan_is_in_use() sees the same value? Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com> --- drivers/dma/dmaengine.c | 12 +----------- include/linux/dmaengine.h | 17 +++++++++++++++++ 2 files changed, 18 insertions(+), 11 deletions(-) diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c index a57c337..ad8d811 100644 --- a/drivers/dma/dmaengine.c +++ b/drivers/dma/dmaengine.c @@ -105,17 +105,7 @@ static ssize_t show_bytes_transferred(struct device *dev, struct device_attribut static ssize_t show_in_use(struct device *dev, struct device_attribute *attr, char *buf) { struct dma_chan *chan = to_dma_chan(dev); - int in_use = 0; - - if (unlikely(chan->slow_ref) && - atomic_read(&chan->refcount.refcount) > 1) - in_use = 1; - else { - if (local_read(&(per_cpu_ptr(chan->local, - get_cpu())->refcount)) > 0) - in_use = 1; - put_cpu(); - } + int in_use = dma_chan_is_in_use(chan); return sprintf(buf, "%d\n", in_use); } diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h index cffb95f..4b602d3 100644 --- a/include/linux/dmaengine.h +++ b/include/linux/dmaengine.h @@ -180,6 +180,23 @@ static inline void dma_chan_put(struct dma_chan *chan) } } +static inline bool dma_chan_is_in_use(struct dma_chan *chan) +{ + bool in_use = false; + + if (unlikely(chan->slow_ref) && + atomic_read(&chan->refcount.refcount) > 1) + in_use = true; + else { + if (local_read(&(per_cpu_ptr(chan->local, + get_cpu())->refcount)) > 0) + in_use = true; + put_cpu(); + } + + return in_use; +} + /* * typedef dma_event_callback - function pointer to a DMA event callback * For each channel added to the system this routine is called for each client. -- 1.5.5.4 ^ permalink raw reply related [flat|nested] 3+ messages in thread
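A minimal sketch of the intended use (patch 5/6 does this for real in its alloc_chan_resources hook): since a slave channel is tied to one peripheral, a controller driver can refuse to hand out a channel that already has users whenever slave transfers are involved. The mydmac_* wrapper is hypothetical.

#include <linux/dmaengine.h>
#include <linux/errno.h>

static int mydmac_claim_channel(struct dma_chan *chan,
				struct dma_client *client)
{
	/* Slave channels must not be shared between clients. */
	if (client->slave && dma_chan_is_in_use(chan))
		return -EBUSY;

	return 0;
}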
* [PATCH v4 3/6] dmaengine: Add slave DMA interface 2008-06-26 13:23 ` [PATCH v4 2/6] dmaengine: Add dma_chan_is_in_use() function Haavard Skinnemoen @ 2008-06-26 13:23 ` Haavard Skinnemoen 2008-06-26 13:23 ` [PATCH v4 4/6] dmaengine: Make DMA Engine menu visible for AVR32 users Haavard Skinnemoen 0 siblings, 1 reply; 3+ messages in thread From: Haavard Skinnemoen @ 2008-06-26 13:23 UTC (permalink / raw) To: Dan Williams, Pierre Ossman Cc: linux-kernel, linux-embedded, kernel, shannon.nelson, David Brownell, Haavard Skinnemoen This patch adds the necessary interfaces to the DMA Engine framework to use functionality found on most embedded DMA controllers: DMA from and to I/O registers with hardware handshaking. In this context, hardware hanshaking means that the peripheral that owns the I/O registers in question is able to tell the DMA controller when more data is available for reading, or when there is room for more data to be written. This usually happens internally on the chip, but these signals may also be exported outside the chip for things like IDE DMA, etc. A new struct dma_slave is introduced. This contains information that the DMA engine driver needs to set up slave transfers to and from a slave device. Most engines supporting DMA slave transfers will want to extend this structure with controller-specific parameters. This additional information is usually passed from the platform/board code through the client driver. A "slave" pointer is added to the dma_client struct. This must point to a valid dma_slave structure iff the DMA_SLAVE capability is requested. The DMA engine driver may use this information in its device_alloc_chan_resources hook to configure the DMA controller for slave transfers from and to the given slave device. A new struct dma_slave_descriptor is added. This extends the standard dma_async_tx_descriptor with a few members that are needed for doing slave DMA from/to peripherals. A new operation for creating such descriptors is added to struct dma_device. Another new operation for terminating all pending transfers is added as well. The latter is needed because there may be errors outside the scope of the DMA Engine framework that may require DMA operations to be terminated prematurely. DMA Engine drivers may extend the dma_device, dma_chan and/or dma_slave_descriptor structures to allow controller-specific operations. The client driver can detect such extensions by looking at the DMA Engine's struct device, or it can request a specific DMA Engine device by setting the dma_dev field in struct dma_slave. Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com> dmaslave interface changes since v3: * Use dma_data_direction instead of a new enum * Submit slave transfers as scatterlists * Remove the DMA slave descriptor struct dmaslave interface changes since v2: * Add a dma_dev field to struct dma_slave. If set, the client can only be bound to the DMA controller that corresponds to this device. This allows controller-specific extensions of the dma_slave structure; if the device matches, the controller may safely assume its extensions are present. * Move reg_width into struct dma_slave as there are currently no users that need to be able to set the width on a per-transfer basis. dmaslave interface changes since v1: * Drop the set_direction and set_width descriptor hooks. Pass the direction and width to the prep function instead. * Declare a dma_slave struct with fixed information about a slave, i.e. register addresses, handshake interfaces and such. 
* Add pointer to a dma_slave struct to dma_client. Can be NULL if the DMA_SLAVE capability isn't requested. * Drop the set_slave device hook since the alloc_chan_resources hook now has enough information to set up the channel for slave transfers. --- drivers/dma/dmaengine.c | 16 ++++++++++++- include/linux/dmaengine.h | 53 ++++++++++++++++++++++++++++++++++++++++++++- 2 files changed, 67 insertions(+), 2 deletions(-) diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c index ad8d811..2e0035f 100644 --- a/drivers/dma/dmaengine.c +++ b/drivers/dma/dmaengine.c @@ -159,7 +159,12 @@ static void dma_client_chan_alloc(struct dma_client *client) enum dma_state_client ack; /* Find a channel */ - list_for_each_entry(device, &dma_device_list, global_node) + list_for_each_entry(device, &dma_device_list, global_node) { + /* Does the client require a specific DMA controller? */ + if (client->slave && client->slave->dma_dev + && client->slave->dma_dev != device->dev) + continue; + list_for_each_entry(chan, &device->channels, device_node) { if (!dma_chan_satisfies_mask(chan, client->cap_mask)) continue; @@ -180,6 +185,7 @@ static void dma_client_chan_alloc(struct dma_client *client) return; } } + } } enum dma_status dma_sync_wait(struct dma_chan *chan, dma_cookie_t cookie) @@ -276,6 +282,10 @@ static void dma_clients_notify_removed(struct dma_chan *chan) */ void dma_async_client_register(struct dma_client *client) { + /* validate client data */ + BUG_ON(dma_has_cap(DMA_SLAVE, client->cap_mask) && + !client->slave); + mutex_lock(&dma_list_mutex); list_add_tail(&client->global_node, &dma_client_list); mutex_unlock(&dma_list_mutex); @@ -350,6 +360,10 @@ int dma_async_device_register(struct dma_device *device) !device->device_prep_dma_memset); BUG_ON(dma_has_cap(DMA_INTERRUPT, device->cap_mask) && !device->device_prep_dma_interrupt); + BUG_ON(dma_has_cap(DMA_SLAVE, device->cap_mask) && + !device->device_prep_slave_sg); + BUG_ON(dma_has_cap(DMA_SLAVE, device->cap_mask) && + !device->device_terminate_all); BUG_ON(!device->device_alloc_chan_resources); BUG_ON(!device->device_free_chan_resources); diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h index 4b602d3..8ce03e8 100644 --- a/include/linux/dmaengine.h +++ b/include/linux/dmaengine.h @@ -89,10 +89,23 @@ enum dma_transaction_type { DMA_MEMSET, DMA_MEMCPY_CRC32C, DMA_INTERRUPT, + DMA_SLAVE, }; /* last transaction type for creation of the capabilities mask */ -#define DMA_TX_TYPE_END (DMA_INTERRUPT + 1) +#define DMA_TX_TYPE_END (DMA_SLAVE + 1) + +/** + * enum dma_slave_width - DMA slave register access width. + * @DMA_SLAVE_WIDTH_8BIT: Do 8-bit slave register accesses + * @DMA_SLAVE_WIDTH_16BIT: Do 16-bit slave register accesses + * @DMA_SLAVE_WIDTH_32BIT: Do 32-bit slave register accesses + */ +enum dma_slave_width { + DMA_SLAVE_WIDTH_8BIT, + DMA_SLAVE_WIDTH_16BIT, + DMA_SLAVE_WIDTH_32BIT, +}; /** * enum dma_ctrl_flags - DMA flags to augment operation preparation, @@ -115,6 +128,33 @@ enum dma_ctrl_flags { typedef struct { DECLARE_BITMAP(bits, DMA_TX_TYPE_END); } dma_cap_mask_t; /** + * struct dma_slave - Information about a DMA slave + * @dev: device acting as DMA slave + * @dma_dev: required DMA master device. If non-NULL, the client can not be + * bound to other masters than this. 
The master driver may use + * this to determine whether it's safe to access + * @tx_reg: physical address of data register used for + * memory-to-peripheral transfers + * @rx_reg: physical address of data register used for + * peripheral-to-memory transfers + * @reg_width: peripheral register width + * + * If dma_dev is non-NULL, the client can not be bound to other DMA + * masters than the one corresponding to this device. The DMA master + * driver may use this to determine if there is controller-specific + * data wrapped around this struct. Drivers of platform code that sets + * the dma_dev field must therefore make sure to use an appropriate + * controller-specific dma slave structure wrapping this struct. + */ +struct dma_slave { + struct device *dev; + struct device *dma_dev; + dma_addr_t tx_reg; + dma_addr_t rx_reg; + enum dma_slave_width reg_width; +}; + +/** * struct dma_chan_percpu - the per-CPU part of struct dma_chan * @refcount: local_t used for open-coded "bigref" counting * @memcpy_count: transaction counter @@ -219,11 +259,14 @@ typedef enum dma_state_client (*dma_event_callback) (struct dma_client *client, * @event_callback: func ptr to call when something happens * @cap_mask: only return channels that satisfy the requested capabilities * a value of zero corresponds to any capability + * @slave: data for preparing slave transfer. Must be non-NULL iff the + * DMA_SLAVE capability is requested. * @global_node: list_head for global dma_client_list */ struct dma_client { dma_event_callback event_callback; dma_cap_mask_t cap_mask; + struct dma_slave *slave; struct list_head global_node; }; @@ -280,6 +323,8 @@ struct dma_async_tx_descriptor { * @device_prep_dma_zero_sum: prepares a zero_sum operation * @device_prep_dma_memset: prepares a memset operation * @device_prep_dma_interrupt: prepares an end of chain interrupt operation + * @device_prep_slave_sg: prepares a slave dma operation + * @device_terminate_all: terminate all pending operations * @device_issue_pending: push pending transactions to hardware */ struct dma_device { @@ -315,6 +360,12 @@ struct dma_device { struct dma_async_tx_descriptor *(*device_prep_dma_interrupt)( struct dma_chan *chan, unsigned long flags); + struct dma_async_tx_descriptor *(*device_prep_slave_sg)( + struct dma_chan *chan, struct scatterlist *sgl, + unsigned int sg_len, enum dma_data_direction direction, + unsigned long flags); + void (*device_terminate_all)(struct dma_chan *chan); + enum dma_status (*device_is_tx_complete)(struct dma_chan *chan, dma_cookie_t cookie, dma_cookie_t *last, dma_cookie_t *used); -- 1.5.5.4 ^ permalink raw reply related [flat|nested] 3+ messages in thread
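To show how the pieces fit together, here is a rough client-side sketch (not part of the patch): driver or board code fills in a struct dma_slave, hangs it off a dma_client that requests the DMA_SLAVE capability, and receives a channel through the usual event callback. All foo_* names and register addresses are invented; a real driver would take them from its platform resources.

#include <linux/dmaengine.h>

static struct dma_chan *foo_chan;

static struct dma_slave foo_slave = {
	.tx_reg		= 0xfffa0034,		/* hypothetical TX data register */
	.rx_reg		= 0xfffa0030,		/* hypothetical RX data register */
	.reg_width	= DMA_SLAVE_WIDTH_32BIT,
};

static enum dma_state_client foo_dma_event(struct dma_client *client,
					   struct dma_chan *chan,
					   enum dma_state state)
{
	switch (state) {
	case DMA_RESOURCE_AVAILABLE:
		foo_chan = chan;
		return DMA_ACK;		/* take ownership of the channel */
	case DMA_RESOURCE_REMOVED:
		foo_chan = NULL;
		return DMA_ACK;
	default:
		return DMA_DUP;
	}
}

static struct dma_client foo_client = {
	.event_callback	= foo_dma_event,
	.slave		= &foo_slave,
};

static void foo_request_dma(struct device *dev)
{
	foo_slave.dev = dev;
	dma_cap_set(DMA_SLAVE, foo_client.cap_mask);
	dma_async_client_register(&foo_client);
	dma_async_client_chan_request(&foo_client);
}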
* [PATCH v4 4/6] dmaengine: Make DMA Engine menu visible for AVR32 users 2008-06-26 13:23 ` [PATCH v4 3/6] dmaengine: Add slave DMA interface Haavard Skinnemoen @ 2008-06-26 13:23 ` Haavard Skinnemoen 2008-06-26 13:23 ` [PATCH v4 5/6] dmaengine: Driver for the Synopsys DesignWare DMA controller Haavard Skinnemoen 0 siblings, 1 reply; 3+ messages in thread From: Haavard Skinnemoen @ 2008-06-26 13:23 UTC (permalink / raw) To: Dan Williams, Pierre Ossman Cc: linux-kernel, linux-embedded, kernel, shannon.nelson, David Brownell, Haavard Skinnemoen, Adrian Bunk This makes the DMA Engine menu visible on AVR32 by adding AVR32 to the (growing) list of architectures DMADEVICES depends on. Though I'd prefer to remove that whole "depends" line entirely... The DMADEVICES menu used to be available for all architectures, but at some point, we started building a huge dependency list with all the architectures that might have support for this kind of hardware. According to Dan Williams: > Adrian had concerns about users enabling NET_DMA when the hardware > capability is relatively rare. which seems very strange as long as (PCI && X86) is enough to enable this menu. In other words, the vast majority of users will see the menu even though the hardware is rare. Also, all DMA clients depend on DMA_ENGINE being set. This symbol is selected by each DMA Engine driver, so users can't select a DMA client without selecting a specific DMA Engine driver first. So, while this patch solves my immediate problem of making DMA Engines available on AVR32, I'd much rather remove the whole arch dependency list because I think it's bogus. Comments? Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com> Cc: Adrian Bunk <bunk@stusta.de> --- drivers/dma/Kconfig | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig index 18f6ef3..2ac09be 100644 --- a/drivers/dma/Kconfig +++ b/drivers/dma/Kconfig @@ -4,7 +4,7 @@ menuconfig DMADEVICES bool "DMA Engine support" - depends on (PCI && X86) || ARCH_IOP32X || ARCH_IOP33X || ARCH_IOP13XX || PPC + depends on (PCI && X86) || ARCH_IOP32X || ARCH_IOP33X || ARCH_IOP13XX || PPC || AVR32 depends on !HIGHMEM64G help DMA engines can do asynchronous data transfers without -- 1.5.5.4 ^ permalink raw reply related [flat|nested] 3+ messages in thread
* [PATCH v4 5/6] dmaengine: Driver for the Synopsys DesignWare DMA controller 2008-06-26 13:23 ` [PATCH v4 4/6] dmaengine: Make DMA Engine menu visible for AVR32 users Haavard Skinnemoen @ 2008-06-26 13:23 ` Haavard Skinnemoen 0 siblings, 0 replies; 3+ messages in thread From: Haavard Skinnemoen @ 2008-06-26 13:23 UTC (permalink / raw) To: Dan Williams, Pierre Ossman Cc: linux-kernel, linux-embedded, kernel, shannon.nelson, David Brownell, Haavard Skinnemoen This adds a driver for the Synopsys DesignWare DMA controller (aka DMACA on AVR32 systems.) This DMA controller can be found integrated on the AT32AP7000 chip and is primarily meant for peripheral DMA transfer, but can also be used for memory-to-memory transfers. This patch is based on a driver from David Brownell which was based on an older version of the DMA Engine framework. It also implements the proposed extensions to the DMA Engine API for slave DMA operations. The dmatest client shows no problems, but there may still be room for improvement performance-wise. DMA slave transfer performance is definitely "good enough"; reading 100 MiB from an SD card running at ~20 MHz yields ~7.2 MiB/s average transfer rate. Full documentation for this controller can be found in the Synopsys DW AHB DMAC Databook: http://www.synopsys.com/designware/docs/iip/DW_ahb_dmac/latest/doc/dw_ahb_dmac_db.pdf The controller has lots of implementation options, so it's usually a good idea to check the data sheet of the chip it's intergrated on as well. The AT32AP7000 data sheet can be found here: http://www.atmel.com/dyn/products/datasheets.asp?family_id=682 Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com> Changes since v3: * Update to latest DMA engine and DMA slave APIs * Embed the hw descriptor into the sw descriptor * Clean up and update MODULE_DESCRIPTION, copyright date, etc. Changes since v2: * Dequeue all pending transfers in terminate_all() * Rename dw_dmac.h -> dw_dmac_regs.h * Define and use controller-specific dma_slave data * Fix up a few outdated comments * Define hardware registers as structs (doesn't generate better code, unfortunately, but it looks nicer.) * Get number of channels from platform_data instead of hardcoding it based on CONFIG_WHATEVER_CPU. 
* Give slave clients exclusive access to the channel --- arch/avr32/mach-at32ap/at32ap700x.c | 26 +- drivers/dma/Kconfig | 9 + drivers/dma/Makefile | 1 + drivers/dma/dw_dmac.c | 1105 ++++++++++++++++++++++++++++ drivers/dma/dw_dmac_regs.h | 224 ++++++ include/asm-avr32/arch-at32ap/at32ap700x.h | 16 + include/linux/dw_dmac.h | 62 ++ 7 files changed, 1430 insertions(+), 13 deletions(-) create mode 100644 drivers/dma/dw_dmac.c create mode 100644 drivers/dma/dw_dmac_regs.h create mode 100644 include/linux/dw_dmac.h diff --git a/arch/avr32/mach-at32ap/at32ap700x.c b/arch/avr32/mach-at32ap/at32ap700x.c index 0f24b4f..2b92047 100644 --- a/arch/avr32/mach-at32ap/at32ap700x.c +++ b/arch/avr32/mach-at32ap/at32ap700x.c @@ -599,6 +599,17 @@ static void __init genclk_init_parent(struct clk *clk) clk->parent = parent; } +static struct dw_dma_platform_data dw_dmac0_data = { + .nr_channels = 3, +}; + +static struct resource dw_dmac0_resource[] = { + PBMEM(0xff200000), + IRQ(2), +}; +DEFINE_DEV_DATA(dw_dmac, 0); +DEV_CLK(hclk, dw_dmac0, hsb, 10); + /* -------------------------------------------------------------------- * System peripherals * -------------------------------------------------------------------- */ @@ -705,17 +716,6 @@ static struct clk pico_clk = { .users = 1, }; -static struct resource dmaca0_resource[] = { - { - .start = 0xff200000, - .end = 0xff20ffff, - .flags = IORESOURCE_MEM, - }, - IRQ(2), -}; -DEFINE_DEV(dmaca, 0); -DEV_CLK(hclk, dmaca0, hsb, 10); - /* -------------------------------------------------------------------- * HMATRIX * -------------------------------------------------------------------- */ @@ -828,7 +828,7 @@ void __init at32_add_system_devices(void) platform_device_register(&at32_eic0_device); platform_device_register(&smc0_device); platform_device_register(&pdc_device); - platform_device_register(&dmaca0_device); + platform_device_register(&dw_dmac0_device); platform_device_register(&at32_tcb0_device); platform_device_register(&at32_tcb1_device); @@ -1891,7 +1891,7 @@ struct clk *at32_clock_list[] = { &smc0_mck, &pdc_hclk, &pdc_pclk, - &dmaca0_hclk, + &dw_dmac0_hclk, &pico_clk, &pio0_mck, &pio1_mck, diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig index 2ac09be..4fac4e3 100644 --- a/drivers/dma/Kconfig +++ b/drivers/dma/Kconfig @@ -37,6 +37,15 @@ config INTEL_IOP_ADMA help Enable support for the Intel(R) IOP Series RAID engines. +config DW_DMAC + tristate "Synopsys DesignWare AHB DMA support" + depends on AVR32 + select DMA_ENGINE + default y if CPU_AT32AP7000 + help + Support the Synopsys DesignWare AHB DMA controller. This + can be integrated in chips such as the Atmel AT32ap7000. + config FSL_DMA bool "Freescale MPC85xx/MPC83xx DMA support" depends on PPC diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile index 2ff6d7f..beebae4 100644 --- a/drivers/dma/Makefile +++ b/drivers/dma/Makefile @@ -1,6 +1,7 @@ obj-$(CONFIG_DMA_ENGINE) += dmaengine.o obj-$(CONFIG_NET_DMA) += iovlock.o obj-$(CONFIG_INTEL_IOATDMA) += ioatdma.o +obj-$(CONFIG_DW_DMAC) += dw_dmac.o ioatdma-objs := ioat.o ioat_dma.o ioat_dca.o obj-$(CONFIG_INTEL_IOP_ADMA) += iop-adma.o obj-$(CONFIG_FSL_DMA) += fsldma.o diff --git a/drivers/dma/dw_dmac.c b/drivers/dma/dw_dmac.c new file mode 100644 index 0000000..e5389e1 --- /dev/null +++ b/drivers/dma/dw_dmac.c @@ -0,0 +1,1105 @@ +/* + * Driver for the Synopsys DesignWare DMA Controller (aka DMACA on + * AVR32 systems.) 
+ * + * Copyright (C) 2007-2008 Atmel Corporation + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ +#include <linux/clk.h> +#include <linux/delay.h> +#include <linux/dmaengine.h> +#include <linux/dma-mapping.h> +#include <linux/init.h> +#include <linux/interrupt.h> +#include <linux/io.h> +#include <linux/mm.h> +#include <linux/module.h> +#include <linux/platform_device.h> +#include <linux/slab.h> + +#include "dw_dmac_regs.h" + +/* + * This supports the Synopsys "DesignWare AHB Central DMA Controller", + * (DW_ahb_dmac) which is used with various AMBA 2.0 systems (not all + * of which use ARM any more). See the "Databook" from Synopsys for + * information beyond what licensees probably provide. + * + * The driver has currently been tested only with the Atmel AT32AP7000, + * which does not support descriptor writeback. + */ + +/* NOTE: DMS+SMS is system-specific. We should get this information + * from the platform code somehow. + */ +#define DWC_DEFAULT_CTLLO (DWC_CTLL_DST_MSIZE(0) \ + | DWC_CTLL_SRC_MSIZE(0) \ + | DWC_CTLL_DMS(0) \ + | DWC_CTLL_SMS(1) \ + | DWC_CTLL_LLP_D_EN \ + | DWC_CTLL_LLP_S_EN) + +/* + * This is configuration-dependent and usually a funny size like 4095. + * Let's round it down to the nearest power of two. + * + * Note that this is a transfer count, i.e. if we transfer 32-bit + * words, we can do 8192 bytes per descriptor. + * + * This parameter is also system-specific. + */ +#define DWC_MAX_COUNT 2048U + +/* + * Number of descriptors to allocate for each channel. This should be + * made configurable somehow; preferably, the clients (at least the + * ones using slave transfers) should be able to give us a hint. + */ +#define NR_DESCS_PER_CHANNEL 64 + +/*----------------------------------------------------------------------*/ + +/* + * Because we're not relying on writeback from the controller (it may not + * even be configured into the core!) we don't need to use dma_pool. These + * descriptors -- and associated data -- are cacheable. We do need to make + * sure their dcache entries are written back before handing them off to + * the controller, though. + */ + +static struct dw_desc *dwc_first_active(struct dw_dma_chan *dwc) +{ + return list_entry(dwc->active_list.next, struct dw_desc, desc_node); +} + +static struct dw_desc *dwc_first_queued(struct dw_dma_chan *dwc) +{ + return list_entry(dwc->queue.next, struct dw_desc, desc_node); +} + +static struct dw_desc *dwc_desc_get(struct dw_dma_chan *dwc) +{ + struct dw_desc *desc, *_desc; + struct dw_desc *ret = NULL; + unsigned int i = 0; + + spin_lock_bh(&dwc->lock); + list_for_each_entry_safe(desc, _desc, &dwc->free_list, desc_node) { + if (async_tx_test_ack(&desc->txd)) { + list_del(&desc->desc_node); + ret = desc; + break; + } + dev_dbg(&dwc->chan.dev, "desc %p not ACKed\n", desc); + i++; + } + spin_unlock_bh(&dwc->lock); + + dev_vdbg(&dwc->chan.dev, "scanned %u descriptors on freelist\n", i); + + return ret; +} + +static void dwc_sync_desc_for_cpu(struct dw_dma_chan *dwc, struct dw_desc *desc) +{ + struct dw_desc *child; + + list_for_each_entry(child, &desc->txd.tx_list, desc_node) + dma_sync_single_for_cpu(dwc->chan.dev.parent, + child->txd.phys, sizeof(child->lli), + DMA_TO_DEVICE); + dma_sync_single_for_cpu(dwc->chan.dev.parent, + desc->txd.phys, sizeof(desc->lli), + DMA_TO_DEVICE); +} + +/* + * Move a descriptor, including any children, to the free list. 
+ * `desc' must not be on any lists. + */ +static void dwc_desc_put(struct dw_dma_chan *dwc, struct dw_desc *desc) +{ + if (desc) { + struct dw_desc *child; + + dwc_sync_desc_for_cpu(dwc, desc); + + spin_lock_bh(&dwc->lock); + list_for_each_entry(child, &desc->txd.tx_list, desc_node) + dev_vdbg(&dwc->chan.dev, + "moving child desc %p to freelist\n", + child); + list_splice_init(&desc->txd.tx_list, &dwc->free_list); + dev_vdbg(&dwc->chan.dev, "moving desc %p to freelist\n", desc); + list_add(&desc->desc_node, &dwc->free_list); + spin_unlock_bh(&dwc->lock); + } +} + +/* Called with dwc->lock held and bh disabled */ +static dma_cookie_t +dwc_assign_cookie(struct dw_dma_chan *dwc, struct dw_desc *desc) +{ + dma_cookie_t cookie = dwc->chan.cookie; + + if (++cookie < 0) + cookie = 1; + + dwc->chan.cookie = cookie; + desc->txd.cookie = cookie; + + return cookie; +} + +/*----------------------------------------------------------------------*/ + +/* Called with dwc->lock held and bh disabled */ +static void dwc_dostart(struct dw_dma_chan *dwc, struct dw_desc *first) +{ + struct dw_dma *dw = to_dw_dma(dwc->chan.device); + + /* ASSERT: channel is idle */ + if (dma_readl(dw, CH_EN) & dwc->mask) { + dev_err(&dwc->chan.dev, + "BUG: Attempted to start non-idle channel\n"); + dev_err(&dwc->chan.dev, + " SAR: 0x%x DAR: 0x%x LLP: 0x%x CTL: 0x%x:%08x\n", + channel_readl(dwc, SAR), + channel_readl(dwc, DAR), + channel_readl(dwc, LLP), + channel_readl(dwc, CTL_HI), + channel_readl(dwc, CTL_LO)); + + /* The tasklet will hopefully advance the queue... */ + return; + } + + channel_writel(dwc, LLP, first->txd.phys); + channel_writel(dwc, CTL_LO, + DWC_CTLL_LLP_D_EN | DWC_CTLL_LLP_S_EN); + channel_writel(dwc, CTL_HI, 0); + channel_set_bit(dw, CH_EN, dwc->mask); +} + +/*----------------------------------------------------------------------*/ + +static void +dwc_descriptor_complete(struct dw_dma_chan *dwc, struct dw_desc *desc) +{ + dma_async_tx_callback callback; + void *param; + struct dma_async_tx_descriptor *txd = &desc->txd; + + dev_vdbg(&dwc->chan.dev, "descriptor %u complete\n", txd->cookie); + + dwc->completed = txd->cookie; + callback = txd->callback; + param = txd->callback_param; + + dwc_sync_desc_for_cpu(dwc, desc); + list_splice_init(&txd->tx_list, &dwc->free_list); + list_move(&desc->desc_node, &dwc->free_list); + + /* + * The API requires that no submissions are done from a + * callback, so we don't need to drop the lock here + */ + if (callback) + callback(param); +} + +static void dwc_complete_all(struct dw_dma *dw, struct dw_dma_chan *dwc) +{ + struct dw_desc *desc, *_desc; + LIST_HEAD(list); + + if (dma_readl(dw, CH_EN) & dwc->mask) { + dev_err(&dwc->chan.dev, + "BUG: XFER bit set, but channel not idle!\n"); + + /* Try to continue after resetting the channel... */ + channel_clear_bit(dw, CH_EN, dwc->mask); + while (dma_readl(dw, CH_EN) & dwc->mask) + cpu_relax(); + } + + /* + * Submit queued descriptors ASAP, i.e. before we go through + * the completed ones. 
+ */ + if (!list_empty(&dwc->queue)) + dwc_dostart(dwc, dwc_first_queued(dwc)); + list_splice_init(&dwc->active_list, &list); + list_splice_init(&dwc->queue, &dwc->active_list); + + list_for_each_entry_safe(desc, _desc, &list, desc_node) + dwc_descriptor_complete(dwc, desc); +} + +static void dwc_scan_descriptors(struct dw_dma *dw, struct dw_dma_chan *dwc) +{ + dma_addr_t llp; + struct dw_desc *desc, *_desc; + struct dw_desc *child; + u32 status_xfer; + + /* + * Clear block interrupt flag before scanning so that we don't + * miss any, and read LLP before RAW_XFER to ensure it is + * valid if we decide to scan the list. + */ + dma_writel(dw, CLEAR.BLOCK, dwc->mask); + llp = channel_readl(dwc, LLP); + status_xfer = dma_readl(dw, RAW.XFER); + + if (status_xfer & dwc->mask) { + /* Everything we've submitted is done */ + dma_writel(dw, CLEAR.XFER, dwc->mask); + dwc_complete_all(dw, dwc); + return; + } + + dev_vdbg(&dwc->chan.dev, "scan_descriptors: llp=0x%x\n", llp); + + list_for_each_entry_safe(desc, _desc, &dwc->active_list, desc_node) { + if (desc->lli.llp == llp) + /* This one is currently in progress */ + return; + + list_for_each_entry(child, &desc->txd.tx_list, desc_node) + if (child->lli.llp == llp) + /* Currently in progress */ + return; + + /* + * No descriptors so far seem to be in progress, i.e. + * this one must be done. + */ + dwc_descriptor_complete(dwc, desc); + } + + dev_err(&dwc->chan.dev, + "BUG: All descriptors done, but channel not idle!\n"); + + /* Try to continue after resetting the channel... */ + channel_clear_bit(dw, CH_EN, dwc->mask); + while (dma_readl(dw, CH_EN) & dwc->mask) + cpu_relax(); + + if (!list_empty(&dwc->queue)) { + dwc_dostart(dwc, dwc_first_queued(dwc)); + list_splice_init(&dwc->queue, &dwc->active_list); + } +} + +static void dwc_dump_lli(struct dw_dma_chan *dwc, struct dw_lli *lli) +{ + dev_printk(KERN_CRIT, &dwc->chan.dev, + " desc: s0x%x d0x%x l0x%x c0x%x:%x\n", + lli->sar, lli->dar, lli->llp, + lli->ctlhi, lli->ctllo); +} + +static void dwc_handle_error(struct dw_dma *dw, struct dw_dma_chan *dwc) +{ + struct dw_desc *bad_desc; + struct dw_desc *child; + + dwc_scan_descriptors(dw, dwc); + + /* + * The descriptor currently at the head of the active list is + * borked. Since we don't have any way to report errors, we'll + * just have to scream loudly and try to carry on. + */ + bad_desc = dwc_first_active(dwc); + list_del_init(&bad_desc->desc_node); + list_splice_init(&dwc->queue, dwc->active_list.prev); + + /* Clear the error flag and try to restart the controller */ + dma_writel(dw, CLEAR.ERROR, dwc->mask); + if (!list_empty(&dwc->active_list)) + dwc_dostart(dwc, dwc_first_active(dwc)); + + /* + * KERN_CRITICAL may seem harsh, but since this only happens + * when someone submits a bad physical address in a + * descriptor, we should consider ourselves lucky that the + * controller flagged an error instead of scribbling over + * random memory locations. 
+ */ + dev_printk(KERN_CRIT, &dwc->chan.dev, + "Bad descriptor submitted for DMA!\n"); + dev_printk(KERN_CRIT, &dwc->chan.dev, + " cookie: %d\n", bad_desc->txd.cookie); + dwc_dump_lli(dwc, &bad_desc->lli); + list_for_each_entry(child, &bad_desc->txd.tx_list, desc_node) + dwc_dump_lli(dwc, &child->lli); + + /* Pretend the descriptor completed successfully */ + dwc_descriptor_complete(dwc, bad_desc); +} + +static void dw_dma_tasklet(unsigned long data) +{ + struct dw_dma *dw = (struct dw_dma *)data; + struct dw_dma_chan *dwc; + u32 status_block; + u32 status_xfer; + u32 status_err; + int i; + + status_block = dma_readl(dw, RAW.BLOCK); + status_xfer = dma_readl(dw, RAW.BLOCK); + status_err = dma_readl(dw, RAW.ERROR); + + dev_vdbg(dw->dma.dev, "tasklet: status_block=%x status_err=%x\n", + status_block, status_err); + + for (i = 0; i < dw->dma.chancnt; i++) { + dwc = &dw->chan[i]; + spin_lock(&dwc->lock); + if (status_err & (1 << i)) + dwc_handle_error(dw, dwc); + else if ((status_block | status_xfer) & (1 << i)) + dwc_scan_descriptors(dw, dwc); + spin_unlock(&dwc->lock); + } + + /* + * Re-enable interrupts. Block Complete interrupts are only + * enabled if the INT_EN bit in the descriptor is set. This + * will trigger a scan before the whole list is done. + */ + channel_set_bit(dw, MASK.XFER, dw->all_chan_mask); + channel_set_bit(dw, MASK.BLOCK, dw->all_chan_mask); + channel_set_bit(dw, MASK.ERROR, dw->all_chan_mask); +} + +static irqreturn_t dw_dma_interrupt(int irq, void *dev_id) +{ + struct dw_dma *dw = dev_id; + u32 status; + + dev_vdbg(dw->dma.dev, "interrupt: status=0x%x\n", + dma_readl(dw, STATUS_INT)); + + /* + * Just disable the interrupts. We'll turn them back on in the + * softirq handler. + */ + channel_clear_bit(dw, MASK.XFER, dw->all_chan_mask); + channel_clear_bit(dw, MASK.BLOCK, dw->all_chan_mask); + channel_clear_bit(dw, MASK.ERROR, dw->all_chan_mask); + + status = dma_readl(dw, STATUS_INT); + if (status) { + dev_err(dw->dma.dev, + "BUG: Unexpected interrupts pending: 0x%x\n", + status); + + /* Try to recover */ + channel_clear_bit(dw, MASK.XFER, (1 << 8) - 1); + channel_clear_bit(dw, MASK.BLOCK, (1 << 8) - 1); + channel_clear_bit(dw, MASK.SRC_TRAN, (1 << 8) - 1); + channel_clear_bit(dw, MASK.DST_TRAN, (1 << 8) - 1); + channel_clear_bit(dw, MASK.ERROR, (1 << 8) - 1); + } + + tasklet_schedule(&dw->tasklet); + + return IRQ_HANDLED; +} + +/*----------------------------------------------------------------------*/ + +static dma_cookie_t dwc_tx_submit(struct dma_async_tx_descriptor *tx) +{ + struct dw_desc *desc = txd_to_dw_desc(tx); + struct dw_dma_chan *dwc = to_dw_dma_chan(tx->chan); + dma_cookie_t cookie; + + spin_lock_bh(&dwc->lock); + cookie = dwc_assign_cookie(dwc, desc); + + /* + * REVISIT: We should attempt to chain as many descriptors as + * possible, perhaps even appending to those already submitted + * for DMA. But this is hard to do in a race-free manner. 
+ */ + if (list_empty(&dwc->active_list)) { + dev_vdbg(&tx->chan->dev, "tx_submit: started %u\n", + desc->txd.cookie); + dwc_dostart(dwc, desc); + list_add_tail(&desc->desc_node, &dwc->active_list); + } else { + dev_vdbg(&tx->chan->dev, "tx_submit: queued %u\n", + desc->txd.cookie); + + list_add_tail(&desc->desc_node, &dwc->queue); + } + + spin_unlock_bh(&dwc->lock); + + return cookie; +} + +static struct dma_async_tx_descriptor * +dwc_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src, + size_t len, unsigned long flags) +{ + struct dw_dma_chan *dwc = to_dw_dma_chan(chan); + struct dw_desc *desc; + struct dw_desc *first; + struct dw_desc *prev; + size_t xfer_count; + size_t offset; + unsigned int src_width; + unsigned int dst_width; + u32 ctllo; + + dev_vdbg(&chan->dev, "prep_dma_memcpy d0x%x s0x%x l0x%zx f0x%lx\n", + dest, src, len, flags); + + if (unlikely(!len)) { + dev_dbg(&chan->dev, "prep_dma_memcpy: length is zero!\n"); + return NULL; + } + + /* + * We can be a lot more clever here, but this should take care + * of the most common optimization. + */ + if (!((src | dest | len) & 3)) + src_width = dst_width = 2; + else if (!((src | dest | len) & 1)) + src_width = dst_width = 1; + else + src_width = dst_width = 0; + + ctllo = DWC_DEFAULT_CTLLO + | DWC_CTLL_DST_WIDTH(dst_width) + | DWC_CTLL_SRC_WIDTH(src_width) + | DWC_CTLL_DST_INC + | DWC_CTLL_SRC_INC + | DWC_CTLL_FC_M2M; + prev = first = NULL; + + for (offset = 0; offset < len; offset += xfer_count << src_width) { + xfer_count = min_t(size_t, (len - offset) >> src_width, + DWC_MAX_COUNT); + + desc = dwc_desc_get(dwc); + if (!desc) + goto err_desc_get; + + desc->lli.sar = src + offset; + desc->lli.dar = dest + offset; + desc->lli.ctllo = ctllo; + desc->lli.ctlhi = xfer_count; + + if (!first) { + first = desc; + } else { + prev->lli.llp = desc->txd.phys; + dma_sync_single_for_device(chan->dev.parent, + prev->txd.phys, sizeof(prev->lli), + DMA_TO_DEVICE); + list_add_tail(&desc->desc_node, + &first->txd.tx_list); + } + prev = desc; + } + + + if (flags & DMA_PREP_INTERRUPT) + /* Trigger interrupt after last block */ + prev->lli.ctllo |= DWC_CTLL_INT_EN; + + prev->lli.llp = 0; + dma_sync_single_for_device(chan->dev.parent, + prev->txd.phys, sizeof(prev->lli), + DMA_TO_DEVICE); + + first->txd.flags = flags; + + return &first->txd; + +err_desc_get: + dwc_desc_put(dwc, first); + return NULL; +} + +static struct dma_async_tx_descriptor * +dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl, + unsigned int sg_len, enum dma_data_direction direction, + unsigned long flags) +{ + struct dw_dma_chan *dwc = to_dw_dma_chan(chan); + struct dw_dma_slave *dws = dwc->dws; + struct dw_desc *prev; + struct dw_desc *first; + u32 ctllo; + dma_addr_t reg; + unsigned int reg_width; + unsigned int mem_width; + unsigned int i; + struct scatterlist *sg; + + dev_vdbg(&chan->dev, "prep_dma_slave\n"); + + if (unlikely(!dws || !sg_len)) + return NULL; + + reg_width = dws->slave.reg_width; + prev = first = NULL; + + sg_len = dma_map_sg(chan->dev.parent, sgl, sg_len, direction); + + switch (direction) { + case DMA_TO_DEVICE: + ctllo = (DWC_DEFAULT_CTLLO + | DWC_CTLL_DST_WIDTH(reg_width) + | DWC_CTLL_DST_FIX + | DWC_CTLL_SRC_INC + | DWC_CTLL_FC_M2P); + reg = dws->slave.tx_reg; + for_each_sg(sgl, sg, sg_len, i) { + struct dw_desc *desc; + u32 len; + u32 mem; + + desc = dwc_desc_get(dwc); + if (!desc) { + dev_err(&chan->dev, + "not enough descriptors available\n"); + goto err_desc_get; + } + + mem = sg_phys(sg); + len = sg_dma_len(sg); + 
mem_width = 2; + if (unlikely(mem & 3 || len & 3)) + mem_width = 0; + + desc->lli.sar = mem; + desc->lli.dar = reg; + desc->lli.ctllo = ctllo | DWC_CTLL_SRC_WIDTH(mem_width); + desc->lli.ctlhi = len >> mem_width; + + if (!first) { + first = desc; + } else { + prev->lli.llp = desc->txd.phys; + dma_sync_single_for_device(chan->dev.parent, + prev->txd.phys, + sizeof(prev->lli), + DMA_TO_DEVICE); + list_add_tail(&desc->desc_node, + &first->txd.tx_list); + } + prev = desc; + } + break; + case DMA_FROM_DEVICE: + ctllo = (DWC_DEFAULT_CTLLO + | DWC_CTLL_SRC_WIDTH(reg_width) + | DWC_CTLL_DST_INC + | DWC_CTLL_SRC_FIX + | DWC_CTLL_FC_P2M); + + reg = dws->slave.rx_reg; + for_each_sg(sgl, sg, sg_len, i) { + struct dw_desc *desc; + u32 len; + u32 mem; + + desc = dwc_desc_get(dwc); + if (!desc) { + dev_err(&chan->dev, + "not enough descriptors available\n"); + goto err_desc_get; + } + + mem = sg_phys(sg); + len = sg_dma_len(sg); + mem_width = 2; + if (unlikely(mem & 3 || len & 3)) + mem_width = 0; + + desc->lli.sar = reg; + desc->lli.dar = mem; + desc->lli.ctllo = ctllo | DWC_CTLL_DST_WIDTH(mem_width); + desc->lli.ctlhi = len >> reg_width; + + if (!first) { + first = desc; + } else { + prev->lli.llp = desc->txd.phys; + dma_sync_single_for_device(chan->dev.parent, + prev->txd.phys, + sizeof(prev->lli), + DMA_TO_DEVICE); + list_add_tail(&desc->desc_node, + &first->txd.tx_list); + } + prev = desc; + } + break; + default: + return NULL; + } + + if (flags & DMA_PREP_INTERRUPT) + /* Trigger interrupt after last block */ + prev->lli.ctllo |= DWC_CTLL_INT_EN; + + prev->lli.llp = 0; + dma_sync_single_for_device(chan->dev.parent, + prev->txd.phys, sizeof(prev->lli), + DMA_TO_DEVICE); + + return &first->txd; + +err_desc_get: + dwc_desc_put(dwc, first); + return NULL; +} + +static void dwc_terminate_all(struct dma_chan *chan) +{ + struct dw_dma_chan *dwc = to_dw_dma_chan(chan); + struct dw_dma *dw = to_dw_dma(chan->device); + struct dw_desc *desc, *_desc; + LIST_HEAD(list); + + /* + * This is only called when something went wrong elsewhere, so + * we don't really care about the data. Just disable the + * channel. We still have to poll the channel enable bit due + * to AHB/HSB limitations. 
+ */ + spin_lock_bh(&dwc->lock); + + channel_clear_bit(dw, CH_EN, dwc->mask); + + while (dma_readl(dw, CH_EN) & dwc->mask) + cpu_relax(); + + /* active_list entries will end up before queued entries */ + list_splice_init(&dwc->queue, &list); + list_splice_init(&dwc->active_list, &list); + + spin_unlock_bh(&dwc->lock); + + /* Flush all pending and queued descriptors */ + list_for_each_entry_safe(desc, _desc, &list, desc_node) + dwc_descriptor_complete(dwc, desc); +} + +static enum dma_status +dwc_is_tx_complete(struct dma_chan *chan, + dma_cookie_t cookie, + dma_cookie_t *done, dma_cookie_t *used) +{ + struct dw_dma_chan *dwc = to_dw_dma_chan(chan); + dma_cookie_t last_used; + dma_cookie_t last_complete; + int ret; + + last_complete = dwc->completed; + last_used = chan->cookie; + + ret = dma_async_is_complete(cookie, last_complete, last_used); + if (ret != DMA_SUCCESS) { + dwc_scan_descriptors(to_dw_dma(chan->device), dwc); + + last_complete = dwc->completed; + last_used = chan->cookie; + + ret = dma_async_is_complete(cookie, last_complete, last_used); + } + + if (done) + *done = last_complete; + if (used) + *used = last_used; + + return ret; +} + +static void dwc_issue_pending(struct dma_chan *chan) +{ + struct dw_dma_chan *dwc = to_dw_dma_chan(chan); + + spin_lock_bh(&dwc->lock); + if (!list_empty(&dwc->queue)) + dwc_scan_descriptors(to_dw_dma(chan->device), dwc); + spin_unlock_bh(&dwc->lock); +} + +static int dwc_alloc_chan_resources(struct dma_chan *chan, + struct dma_client *client) +{ + struct dw_dma_chan *dwc = to_dw_dma_chan(chan); + struct dw_dma *dw = to_dw_dma(chan->device); + struct dw_desc *desc; + struct dma_slave *slave; + struct dw_dma_slave *dws; + int i; + u32 cfghi; + u32 cfglo; + + dev_vdbg(&chan->dev, "alloc_chan_resources\n"); + + /* Channels doing slave DMA can only handle one client. */ + if (dwc->dws || client->slave) { + if (dma_chan_is_in_use(chan)) + return -EBUSY; + } + + /* ASSERT: channel is idle */ + if (dma_readl(dw, CH_EN) & dwc->mask) { + dev_dbg(&chan->dev, "DMA channel not idle?\n"); + return -EIO; + } + + dwc->completed = chan->cookie = 1; + + cfghi = DWC_CFGH_FIFO_MODE; + cfglo = 0; + + slave = client->slave; + if (slave) { + /* + * We need controller-specific data to set up slave + * transfers. + */ + BUG_ON(!slave->dma_dev || slave->dma_dev != dw->dma.dev); + + dws = container_of(slave, struct dw_dma_slave, slave); + + dwc->dws = dws; + cfghi = dws->cfg_hi; + cfglo = dws->cfg_lo; + } else { + dwc->dws = NULL; + } + + channel_writel(dwc, CFG_LO, cfglo); + channel_writel(dwc, CFG_HI, cfghi); + + /* + * NOTE: some controllers may have additional features that we + * need to initialize here, like "scatter-gather" (which + * doesn't mean what you think it means), and status writeback. 
+ */ + + spin_lock_bh(&dwc->lock); + i = dwc->descs_allocated; + while (dwc->descs_allocated < NR_DESCS_PER_CHANNEL) { + spin_unlock_bh(&dwc->lock); + + desc = kzalloc(sizeof(struct dw_desc), GFP_KERNEL); + if (!desc) { + dev_info(&chan->dev, + "only allocated %d descriptors\n", i); + spin_lock_bh(&dwc->lock); + break; + } + + dma_async_tx_descriptor_init(&desc->txd, chan); + desc->txd.tx_submit = dwc_tx_submit; + desc->txd.flags = DMA_CTRL_ACK; + INIT_LIST_HEAD(&desc->txd.tx_list); + desc->txd.phys = dma_map_single(chan->dev.parent, &desc->lli, + sizeof(desc->lli), DMA_TO_DEVICE); + dwc_desc_put(dwc, desc); + + spin_lock_bh(&dwc->lock); + i = ++dwc->descs_allocated; + } + + /* Enable interrupts */ + channel_set_bit(dw, MASK.XFER, dwc->mask); + channel_set_bit(dw, MASK.BLOCK, dwc->mask); + channel_set_bit(dw, MASK.ERROR, dwc->mask); + + spin_unlock_bh(&dwc->lock); + + dev_dbg(&chan->dev, + "alloc_chan_resources allocated %d descriptors\n", i); + + return i; +} + +static void dwc_free_chan_resources(struct dma_chan *chan) +{ + struct dw_dma_chan *dwc = to_dw_dma_chan(chan); + struct dw_dma *dw = to_dw_dma(chan->device); + struct dw_desc *desc, *_desc; + LIST_HEAD(list); + + dev_dbg(&chan->dev, "free_chan_resources (descs allocated=%u)\n", + dwc->descs_allocated); + + /* ASSERT: channel is idle */ + BUG_ON(!list_empty(&dwc->active_list)); + BUG_ON(!list_empty(&dwc->queue)); + BUG_ON(dma_readl(to_dw_dma(chan->device), CH_EN) & dwc->mask); + + spin_lock_bh(&dwc->lock); + list_splice_init(&dwc->free_list, &list); + dwc->descs_allocated = 0; + dwc->dws = NULL; + + /* Disable interrupts */ + channel_clear_bit(dw, MASK.XFER, dwc->mask); + channel_clear_bit(dw, MASK.BLOCK, dwc->mask); + channel_clear_bit(dw, MASK.ERROR, dwc->mask); + + spin_unlock_bh(&dwc->lock); + + list_for_each_entry_safe(desc, _desc, &list, desc_node) { + dev_vdbg(&chan->dev, " freeing descriptor %p\n", desc); + dma_unmap_single(chan->dev.parent, desc->txd.phys, + sizeof(desc->lli), DMA_TO_DEVICE); + kfree(desc); + } + + dev_vdbg(&chan->dev, "free_chan_resources done\n"); +} + +/*----------------------------------------------------------------------*/ + +static void dw_dma_off(struct dw_dma *dw) +{ + dma_writel(dw, CFG, 0); + + channel_clear_bit(dw, MASK.XFER, dw->all_chan_mask); + channel_clear_bit(dw, MASK.BLOCK, dw->all_chan_mask); + channel_clear_bit(dw, MASK.SRC_TRAN, dw->all_chan_mask); + channel_clear_bit(dw, MASK.DST_TRAN, dw->all_chan_mask); + channel_clear_bit(dw, MASK.ERROR, dw->all_chan_mask); + + while (dma_readl(dw, CFG) & DW_CFG_DMA_EN) + cpu_relax(); +} + +static int __init dw_probe(struct platform_device *pdev) +{ + struct dw_dma_platform_data *pdata; + struct resource *io; + struct dw_dma *dw; + size_t size; + int irq; + int err; + int i; + + pdata = pdev->dev.platform_data; + if (!pdata || pdata->nr_channels > DW_DMA_MAX_NR_CHANNELS) + return -EINVAL; + + io = platform_get_resource(pdev, IORESOURCE_MEM, 0); + if (!io) + return -EINVAL; + + irq = platform_get_irq(pdev, 0); + if (irq < 0) + return irq; + + size = sizeof(struct dw_dma); + size += pdata->nr_channels * sizeof(struct dw_dma_chan); + dw = kzalloc(size, GFP_KERNEL); + if (!dw) + return -ENOMEM; + + if (!request_mem_region(io->start, DW_REGLEN, pdev->dev.driver->name)) { + err = -EBUSY; + goto err_kfree; + } + + memset(dw, 0, sizeof *dw); + + dw->regs = ioremap(io->start, DW_REGLEN); + if (!dw->regs) { + err = -ENOMEM; + goto err_release_r; + } + + dw->clk = clk_get(&pdev->dev, "hclk"); + if (IS_ERR(dw->clk)) { + err = PTR_ERR(dw->clk); + goto 
err_clk; + } + clk_enable(dw->clk); + + /* force dma off, just in case */ + dw_dma_off(dw); + + err = request_irq(irq, dw_dma_interrupt, 0, "dw_dmac", dw); + if (err) + goto err_irq; + + platform_set_drvdata(pdev, dw); + + tasklet_init(&dw->tasklet, dw_dma_tasklet, (unsigned long)dw); + + dw->all_chan_mask = (1 << pdata->nr_channels) - 1; + + INIT_LIST_HEAD(&dw->dma.channels); + for (i = 0; i < pdata->nr_channels; i++, dw->dma.chancnt++) { + struct dw_dma_chan *dwc = &dw->chan[i]; + + dwc->chan.device = &dw->dma; + dwc->chan.cookie = dwc->completed = 1; + dwc->chan.chan_id = i; + list_add_tail(&dwc->chan.device_node, &dw->dma.channels); + + dwc->ch_regs = &__dw_regs(dw)->CHAN[i]; + spin_lock_init(&dwc->lock); + dwc->mask = 1 << i; + + INIT_LIST_HEAD(&dwc->active_list); + INIT_LIST_HEAD(&dwc->queue); + INIT_LIST_HEAD(&dwc->free_list); + + channel_clear_bit(dw, CH_EN, dwc->mask); + } + + /* Clear/disable all interrupts on all channels. */ + dma_writel(dw, CLEAR.XFER, dw->all_chan_mask); + dma_writel(dw, CLEAR.BLOCK, dw->all_chan_mask); + dma_writel(dw, CLEAR.SRC_TRAN, dw->all_chan_mask); + dma_writel(dw, CLEAR.DST_TRAN, dw->all_chan_mask); + dma_writel(dw, CLEAR.ERROR, dw->all_chan_mask); + + channel_clear_bit(dw, MASK.XFER, dw->all_chan_mask); + channel_clear_bit(dw, MASK.BLOCK, dw->all_chan_mask); + channel_clear_bit(dw, MASK.SRC_TRAN, dw->all_chan_mask); + channel_clear_bit(dw, MASK.DST_TRAN, dw->all_chan_mask); + channel_clear_bit(dw, MASK.ERROR, dw->all_chan_mask); + + dma_cap_set(DMA_MEMCPY, dw->dma.cap_mask); + dma_cap_set(DMA_SLAVE, dw->dma.cap_mask); + dw->dma.dev = &pdev->dev; + dw->dma.device_alloc_chan_resources = dwc_alloc_chan_resources; + dw->dma.device_free_chan_resources = dwc_free_chan_resources; + + dw->dma.device_prep_dma_memcpy = dwc_prep_dma_memcpy; + + dw->dma.device_prep_slave_sg = dwc_prep_slave_sg; + dw->dma.device_terminate_all = dwc_terminate_all; + + dw->dma.device_is_tx_complete = dwc_is_tx_complete; + dw->dma.device_issue_pending = dwc_issue_pending; + + dma_writel(dw, CFG, DW_CFG_DMA_EN); + + printk(KERN_INFO "%s: DesignWare DMA Controller, %d channels\n", + pdev->dev.bus_id, dw->dma.chancnt); + + dma_async_device_register(&dw->dma); + + return 0; + +err_irq: + clk_disable(dw->clk); + clk_put(dw->clk); +err_clk: + iounmap(dw->regs); + dw->regs = NULL; +err_release_r: + release_resource(io); +err_kfree: + kfree(dw); + return err; +} + +static int __exit dw_remove(struct platform_device *pdev) +{ + struct dw_dma *dw = platform_get_drvdata(pdev); + struct dw_dma_chan *dwc, *_dwc; + struct resource *io; + + dw_dma_off(dw); + dma_async_device_unregister(&dw->dma); + + free_irq(platform_get_irq(pdev, 0), dw); + tasklet_kill(&dw->tasklet); + + list_for_each_entry_safe(dwc, _dwc, &dw->dma.channels, + chan.device_node) { + list_del(&dwc->chan.device_node); + channel_clear_bit(dw, CH_EN, dwc->mask); + } + + clk_disable(dw->clk); + clk_put(dw->clk); + + iounmap(dw->regs); + dw->regs = NULL; + + io = platform_get_resource(pdev, IORESOURCE_MEM, 0); + release_mem_region(io->start, DW_REGLEN); + + kfree(dw); + + return 0; +} + +static void dw_shutdown(struct platform_device *pdev) +{ + struct dw_dma *dw = platform_get_drvdata(pdev); + + dw_dma_off(platform_get_drvdata(pdev)); + clk_disable(dw->clk); +} + +static int dw_suspend_late(struct platform_device *pdev, pm_message_t mesg) +{ + struct dw_dma *dw = platform_get_drvdata(pdev); + + dw_dma_off(platform_get_drvdata(pdev)); + clk_disable(dw->clk); + return 0; +} + +static int dw_resume_early(struct platform_device *pdev) +{ 
+ struct dw_dma *dw = platform_get_drvdata(pdev); + + clk_enable(dw->clk); + dma_writel(dw, CFG, DW_CFG_DMA_EN); + return 0; + +} + +static struct platform_driver dw_driver = { + .remove = __exit_p(dw_remove), + .shutdown = dw_shutdown, + .suspend_late = dw_suspend_late, + .resume_early = dw_resume_early, + .driver = { + .name = "dw_dmac", + }, +}; + +static int __init dw_init(void) +{ + return platform_driver_probe(&dw_driver, dw_probe); +} +module_init(dw_init); + +static void __exit dw_exit(void) +{ + platform_driver_unregister(&dw_driver); +} +module_exit(dw_exit); + +MODULE_LICENSE("GPL v2"); +MODULE_DESCRIPTION("Synopsys DesignWare DMA Controller driver"); +MODULE_AUTHOR("Haavard Skinnemoen <haavard.skinnemoen@atmel.com>"); diff --git a/drivers/dma/dw_dmac_regs.h b/drivers/dma/dw_dmac_regs.h new file mode 100644 index 0000000..119e65b --- /dev/null +++ b/drivers/dma/dw_dmac_regs.h @@ -0,0 +1,224 @@ +/* + * Driver for the Synopsys DesignWare AHB DMA Controller + * + * Copyright (C) 2005-2007 Atmel Corporation + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ + +#include <linux/dw_dmac.h> + +#define DW_DMA_MAX_NR_CHANNELS 8 + +/* + * Redefine this macro to handle differences between 32- and 64-bit + * addressing, big vs. little endian, etc. + */ +#define DW_REG(name) u32 name; u32 __pad_##name + +/* Hardware register definitions. */ +struct dw_dma_chan_regs { + DW_REG(SAR); /* Source Address Register */ + DW_REG(DAR); /* Destination Address Register */ + DW_REG(LLP); /* Linked List Pointer */ + u32 CTL_LO; /* Control Register Low */ + u32 CTL_HI; /* Control Register High */ + DW_REG(SSTAT); + DW_REG(DSTAT); + DW_REG(SSTATAR); + DW_REG(DSTATAR); + u32 CFG_LO; /* Configuration Register Low */ + u32 CFG_HI; /* Configuration Register High */ + DW_REG(SGR); + DW_REG(DSR); +}; + +struct dw_dma_irq_regs { + DW_REG(XFER); + DW_REG(BLOCK); + DW_REG(SRC_TRAN); + DW_REG(DST_TRAN); + DW_REG(ERROR); +}; + +struct dw_dma_regs { + /* per-channel registers */ + struct dw_dma_chan_regs CHAN[DW_DMA_MAX_NR_CHANNELS]; + + /* irq handling */ + struct dw_dma_irq_regs RAW; /* r */ + struct dw_dma_irq_regs STATUS; /* r (raw & mask) */ + struct dw_dma_irq_regs MASK; /* rw (set = irq enabled) */ + struct dw_dma_irq_regs CLEAR; /* w (ack, affects "raw") */ + + DW_REG(STATUS_INT); /* r */ + + /* software handshaking */ + DW_REG(REQ_SRC); + DW_REG(REQ_DST); + DW_REG(SGL_REQ_SRC); + DW_REG(SGL_REQ_DST); + DW_REG(LAST_SRC); + DW_REG(LAST_DST); + + /* miscellaneous */ + DW_REG(CFG); + DW_REG(CH_EN); + DW_REG(ID); + DW_REG(TEST); + + /* optional encoded params, 0x3c8..0x3 */ +}; + +/* Bitfields in CTL_LO */ +#define DWC_CTLL_INT_EN (1 << 0) /* irqs enabled? 
*/ +#define DWC_CTLL_DST_WIDTH(n) ((n)<<1) /* bytes per element */ +#define DWC_CTLL_SRC_WIDTH(n) ((n)<<4) +#define DWC_CTLL_DST_INC (0<<7) /* DAR update/not */ +#define DWC_CTLL_DST_DEC (1<<7) +#define DWC_CTLL_DST_FIX (2<<7) +#define DWC_CTLL_SRC_INC (0<<7) /* SAR update/not */ +#define DWC_CTLL_SRC_DEC (1<<9) +#define DWC_CTLL_SRC_FIX (2<<9) +#define DWC_CTLL_DST_MSIZE(n) ((n)<<11) /* burst, #elements */ +#define DWC_CTLL_SRC_MSIZE(n) ((n)<<14) +#define DWC_CTLL_S_GATH_EN (1 << 17) /* src gather, !FIX */ +#define DWC_CTLL_D_SCAT_EN (1 << 18) /* dst scatter, !FIX */ +#define DWC_CTLL_FC_M2M (0 << 20) /* mem-to-mem */ +#define DWC_CTLL_FC_M2P (1 << 20) /* mem-to-periph */ +#define DWC_CTLL_FC_P2M (2 << 20) /* periph-to-mem */ +#define DWC_CTLL_FC_P2P (3 << 20) /* periph-to-periph */ +/* plus 4 transfer types for peripheral-as-flow-controller */ +#define DWC_CTLL_DMS(n) ((n)<<23) /* dst master select */ +#define DWC_CTLL_SMS(n) ((n)<<25) /* src master select */ +#define DWC_CTLL_LLP_D_EN (1 << 27) /* dest block chain */ +#define DWC_CTLL_LLP_S_EN (1 << 28) /* src block chain */ + +/* Bitfields in CTL_HI */ +#define DWC_CTLH_DONE 0x00001000 +#define DWC_CTLH_BLOCK_TS_MASK 0x00000fff + +/* Bitfields in CFG_LO. Platform-configurable bits are in <linux/dw_dmac.h> */ +#define DWC_CFGL_CH_SUSP (1 << 8) /* pause xfer */ +#define DWC_CFGL_FIFO_EMPTY (1 << 9) /* pause xfer */ +#define DWC_CFGL_HS_DST (1 << 10) /* handshake w/dst */ +#define DWC_CFGL_HS_SRC (1 << 11) /* handshake w/src */ +#define DWC_CFGL_MAX_BURST(x) ((x) << 20) +#define DWC_CFGL_RELOAD_SAR (1 << 30) +#define DWC_CFGL_RELOAD_DAR (1 << 31) + +/* Bitfields in CFG_HI. Platform-configurable bits are in <linux/dw_dmac.h> */ +#define DWC_CFGH_DS_UPD_EN (1 << 5) +#define DWC_CFGH_SS_UPD_EN (1 << 6) + +/* Bitfields in SGR */ +#define DWC_SGR_SGI(x) ((x) << 0) +#define DWC_SGR_SGC(x) ((x) << 20) + +/* Bitfields in DSR */ +#define DWC_DSR_DSI(x) ((x) << 0) +#define DWC_DSR_DSC(x) ((x) << 20) + +/* Bitfields in CFG */ +#define DW_CFG_DMA_EN (1 << 0) + +#define DW_REGLEN 0x400 + +struct dw_dma_chan { + struct dma_chan chan; + void __iomem *ch_regs; + u8 mask; + + spinlock_t lock; + + /* these other elements are all protected by lock */ + dma_cookie_t completed; + struct list_head active_list; + struct list_head queue; + struct list_head free_list; + + struct dw_dma_slave *dws; + + unsigned int descs_allocated; +}; + +static inline struct dw_dma_chan_regs __iomem * +__dwc_regs(struct dw_dma_chan *dwc) +{ + return dwc->ch_regs; +} + +#define channel_readl(dwc, name) \ + __raw_readl(&(__dwc_regs(dwc)->name)) +#define channel_writel(dwc, name, val) \ + __raw_writel((val), &(__dwc_regs(dwc)->name)) + +static inline struct dw_dma_chan *to_dw_dma_chan(struct dma_chan *chan) +{ + return container_of(chan, struct dw_dma_chan, chan); +} + + +struct dw_dma { + struct dma_device dma; + void __iomem *regs; + struct tasklet_struct tasklet; + struct clk *clk; + + u8 all_chan_mask; + + struct dw_dma_chan chan[0]; +}; + +static inline struct dw_dma_regs __iomem *__dw_regs(struct dw_dma *dw) +{ + return dw->regs; +} + +#define dma_readl(dw, name) \ + __raw_readl(&(__dw_regs(dw)->name)) +#define dma_writel(dw, name, val) \ + __raw_writel((val), &(__dw_regs(dw)->name)) + +#define channel_set_bit(dw, reg, mask) \ + dma_writel(dw, reg, ((mask) << 8) | (mask)) +#define channel_clear_bit(dw, reg, mask) \ + dma_writel(dw, reg, ((mask) << 8) | 0) + +static inline struct dw_dma *to_dw_dma(struct dma_device *ddev) +{ + return container_of(ddev, struct dw_dma, dma); +} + 
+/* LLI == Linked List Item; a.k.a. DMA block descriptor */ +struct dw_lli { + /* values that are not changed by hardware */ + dma_addr_t sar; + dma_addr_t dar; + dma_addr_t llp; /* chain to next lli */ + u32 ctllo; + /* values that may get written back: */ + u32 ctlhi; + /* sstat and dstat can snapshot peripheral register state. + * silicon config may discard either or both... + */ + u32 sstat; + u32 dstat; +}; + +struct dw_desc { + /* FIRST values the hardware uses */ + struct dw_lli lli; + + /* THEN values for driver housekeeping */ + struct list_head desc_node; + struct dma_async_tx_descriptor txd; +}; + +static inline struct dw_desc * +txd_to_dw_desc(struct dma_async_tx_descriptor *txd) +{ + return container_of(txd, struct dw_desc, txd); +} diff --git a/include/asm-avr32/arch-at32ap/at32ap700x.h b/include/asm-avr32/arch-at32ap/at32ap700x.h index 31e48b0..d18a305 100644 --- a/include/asm-avr32/arch-at32ap/at32ap700x.h +++ b/include/asm-avr32/arch-at32ap/at32ap700x.h @@ -30,4 +30,20 @@ #define GPIO_PIN_PD(N) (GPIO_PIOD_BASE + (N)) #define GPIO_PIN_PE(N) (GPIO_PIOE_BASE + (N)) + +/* + * DMAC peripheral hardware handshaking interfaces, used with dw_dmac + */ +#define DMAC_MCI_RX 0 +#define DMAC_MCI_TX 1 +#define DMAC_DAC_TX 2 +#define DMAC_AC97_A_RX 3 +#define DMAC_AC97_A_TX 4 +#define DMAC_AC97_B_RX 5 +#define DMAC_AC97_B_TX 6 +#define DMAC_DMAREQ_0 7 +#define DMAC_DMAREQ_1 8 +#define DMAC_DMAREQ_2 9 +#define DMAC_DMAREQ_3 10 + #endif /* __ASM_ARCH_AT32AP700X_H__ */ diff --git a/include/linux/dw_dmac.h b/include/linux/dw_dmac.h new file mode 100644 index 0000000..04d217b --- /dev/null +++ b/include/linux/dw_dmac.h @@ -0,0 +1,62 @@ +/* + * Driver for the Synopsys DesignWare DMA Controller (aka DMACA on + * AVR32 systems.) + * + * Copyright (C) 2007 Atmel Corporation + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ */ +#ifndef DW_DMAC_H +#define DW_DMAC_H + +#include <linux/dmaengine.h> + +/** + * struct dw_dma_platform_data - Controller configuration parameters + * @nr_channels: Number of channels supported by hardware (max 8) + */ +struct dw_dma_platform_data { + unsigned int nr_channels; +}; + +/** + * struct dw_dma_slave - Controller-specific information about a slave + * @slave: Generic information about the slave + * @ctl_lo: Platform-specific initializer for the CTL_LO register + * @cfg_hi: Platform-specific initializer for the CFG_HI register + * @cfg_lo: Platform-specific initializer for the CFG_LO register + */ +struct dw_dma_slave { + struct dma_slave slave; + u32 cfg_hi; + u32 cfg_lo; +}; + +/* Platform-configurable bits in CFG_HI */ +#define DWC_CFGH_FCMODE (1 << 0) +#define DWC_CFGH_FIFO_MODE (1 << 1) +#define DWC_CFGH_PROTCTL(x) ((x) << 2) +#define DWC_CFGH_SRC_PER(x) ((x) << 7) +#define DWC_CFGH_DST_PER(x) ((x) << 11) + +/* Platform-configurable bits in CFG_LO */ +#define DWC_CFGL_PRIO(x) ((x) << 5) /* priority */ +#define DWC_CFGL_LOCK_CH_XFER (0 << 12) /* scope of LOCK_CH */ +#define DWC_CFGL_LOCK_CH_BLOCK (1 << 12) +#define DWC_CFGL_LOCK_CH_XACT (2 << 12) +#define DWC_CFGL_LOCK_BUS_XFER (0 << 14) /* scope of LOCK_BUS */ +#define DWC_CFGL_LOCK_BUS_BLOCK (1 << 14) +#define DWC_CFGL_LOCK_BUS_XACT (2 << 14) +#define DWC_CFGL_LOCK_CH (1 << 15) /* channel lockout */ +#define DWC_CFGL_LOCK_BUS (1 << 16) /* busmaster lockout */ +#define DWC_CFGL_HS_DST_POL (1 << 18) /* dst handshake active low */ +#define DWC_CFGL_HS_SRC_POL (1 << 19) /* src handshake active low */ + +static inline struct dw_dma_slave *to_dw_dma_slave(struct dma_slave *slave) +{ + return container_of(slave, struct dw_dma_slave, slave); +} + +#endif /* DW_DMAC_H */ -- 1.5.5.4 ^ permalink raw reply related [flat|nested] 3+ messages in thread
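For the controller-specific side, a plausible AT32AP700x board-code fragment (illustrative only, not from this series) wraps the generic dma_slave in struct dw_dma_slave and routes the hardware handshake lines using the DMAC_MCI_* IDs defined in at32ap700x.h above. The mci0_* names and register offsets are invented.

#include <linux/dw_dmac.h>
#include <linux/platform_device.h>
#include <linux/init.h>
/* DMAC_MCI_RX/DMAC_MCI_TX come from the arch header added in this patch */

static struct dw_dma_slave mci0_dma_slave = {
	.slave = {
		.tx_reg		= 0xfff02400 + 0x34,	/* MCI TDR (illustrative) */
		.rx_reg		= 0xfff02400 + 0x30,	/* MCI RDR (illustrative) */
		.reg_width	= DMA_SLAVE_WIDTH_32BIT,
	},
	/* Route the request lines to the MCI handshake interfaces */
	.cfg_hi	= DWC_CFGH_SRC_PER(DMAC_MCI_RX) | DWC_CFGH_DST_PER(DMAC_MCI_TX),
	.cfg_lo	= DWC_CFGL_PRIO(0),
};

static void __init mci0_attach_dma(struct platform_device *dw_dmac0)
{
	/*
	 * Setting dma_dev restricts this client to the dw_dmac
	 * controller, so its alloc_chan_resources hook may safely
	 * container_of() back to struct dw_dma_slave.
	 */
	mci0_dma_slave.slave.dma_dev = &dw_dmac0->dev;
}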