* [PATCH v3 1/2] Documentation: dt: Add Xilinx zynqmp dma device tree binding documentation @ 2015-06-16 2:34 Punnaiah Choudary Kalluri [not found] ` <1434422083-8653-1-git-send-email-punnaia-gjFFaj9aHVfQT0dZR+AlfA@public.gmane.org> 0 siblings, 1 reply; 10+ messages in thread From: Punnaiah Choudary Kalluri @ 2015-06-16 2:34 UTC (permalink / raw) To: robh+dt-DgEjT+Ai2ygdnm+yROfE0A, pawel.moll-5wv7dgnIgG8, mark.rutland-5wv7dgnIgG8, ijc+devicetree-KcIKpvwj1kUDXYZnReoRVg, galak-sgV2jX0FEOL9JmXXK+q4OQ, michal.simek-gjFFaj9aHVfQT0dZR+AlfA, soren.brinkmann-gjFFaj9aHVfQT0dZR+AlfA, vinod.koul-ral2JQCrhuEAvxtiuMwx3w, dan.j.williams-ral2JQCrhuEAvxtiuMwx3w Cc: dmaengine-u79uwXL29TY76Z2rM5mHXA, devicetree-u79uwXL29TY76Z2rM5mHXA, linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, linux-kernel-u79uwXL29TY76Z2rM5mHXA, kalluripunnaiahchoudary-Re5JQEeQqe8AvxtiuMwx3w, kpc528-Re5JQEeQqe8AvxtiuMwx3w, Punnaiah Choudary Kalluri Device-tree binding documentation for Xilinx zynqmp dma engine used in Zynq UltraScale+ MPSoC. Signed-off-by: Punnaiah Choudary Kalluri <punnaia-gjFFaj9aHVfQT0dZR+AlfA@public.gmane.org> --- Changes in v3: - None Changes in v2: - None --- .../devicetree/bindings/dma/xilinx/zynqmp_dma.txt | 61 ++++++++++++++++++++ 1 files changed, 61 insertions(+), 0 deletions(-) create mode 100644 Documentation/devicetree/bindings/dma/xilinx/zynqmp_dma.txt diff --git a/Documentation/devicetree/bindings/dma/xilinx/zynqmp_dma.txt b/Documentation/devicetree/bindings/dma/xilinx/zynqmp_dma.txt new file mode 100644 index 0000000..e4f92b9 --- /dev/null +++ b/Documentation/devicetree/bindings/dma/xilinx/zynqmp_dma.txt @@ -0,0 +1,61 @@ +Xilinx ZynqMP DMA engine, it does support memory to memory transfers, +memory to device and device to memory transfers. It also has flow +control and rate control support for slave/peripheral dma access. 
+
+Required properties:
+- compatible: Should be "xlnx,zynqmp-dma-1.0"
+- #dma-cells: Should be <1>, a single cell holding a line request number
+- reg: Memory map for module access
+- interrupt-parent: Interrupt controller the interrupt is routed through
+- interrupts: Should contain the DMA channel interrupt
+- xlnx,bus-width: AXI bus width in bits. Should be 64 or 128
+
+Optional properties:
+- xlnx,include-sg: If present, the controller operates in scatter-gather
+  DMA mode; otherwise it operates in simple DMA mode
+- xlnx,ratectrl: Scheduling interval, in clock cycles, between source
+  AXI transactions
+- xlnx,overfetch: Allows the channel to over-fetch the data
+- xlnx,src-issue: Number of outstanding AXI transactions on the source side
+- xlnx,desc-axi-cohrnt: Marks the AXI transactions generated for the
+  descriptor read as coherent
+- xlnx,src-axi-cohrnt: Marks the AXI transactions generated for the
+  source data payload as coherent
+- xlnx,dst-axi-cohrnt: Marks the AXI transactions generated for the
+  destination data payload as coherent
+- xlnx,desc-axi-qos: AXI QoS bits to be used for descriptor fetch
+- xlnx,src-axi-qos: AXI QoS bits to be used for data read
+- xlnx,dst-axi-qos: AXI QoS bits to be used for data write
+- xlnx,desc-axi-cache: AXI cache bits to be used for descriptor fetch
+- xlnx,src-axi-cache: AXI cache bits to be used for data read
+- xlnx,dst-axi-cache: AXI cache bits to be used for data write
+- xlnx,src-burst-len: AXI burst length for data read. Supports only
+  power-of-2 values, i.e. 1, 2, 4, 8 and 16
+- xlnx,dst-burst-len: AXI burst length for data write. Supports only
+  power-of-2 values, i.e. 1, 2, 4, 8 and 16
+
+Example:
+++++++++
+fpd_dma_chan1: dma@FD500000 {
+	compatible = "xlnx,zynqmp-dma-1.0";
+	reg = <0x0 0xFD500000 0x1000>;
+	#dma-cells = <1>;
+	interrupt-parent = <&gic>;
+	interrupts = <0 117 4>;
+	xlnx,bus-width = <128>;
+	xlnx,include-sg;
+	xlnx,overfetch;
+	xlnx,ratectrl = <0>;
+	xlnx,src-issue = <16>;
+	xlnx,desc-axi-cohrnt;
+	xlnx,src-axi-cohrnt;
+	xlnx,dst-axi-cohrnt;
+	xlnx,desc-axi-qos = <0>;
+	xlnx,desc-axi-cache = <0>;
+	xlnx,src-axi-qos = <0>;
+	xlnx,src-axi-cache = <2>;
+	xlnx,dst-axi-qos = <0>;
+	xlnx,dst-axi-cache = <2>;
+	xlnx,src-burst-len = <4>;
+	xlnx,dst-burst-len = <4>;
+};
--
1.7.4
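A client device would reference this channel through the standard dmas and
dma-names properties, with the single #dma-cells cell carrying the line
request number. A minimal sketch; the client node, its unit address and the
request name are hypothetical, not taken from the binding above:

```dts
/* Hypothetical consumer of fpd_dma_chan1; assumes #dma-cells = <1> */
dma_client: client@a0000000 {
	dmas = <&fpd_dma_chan1 0>;	/* one cell: line request number */
	dma-names = "rx-tx";
};
```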
* [PATCH v3 2/2] dma: Add Xilinx zynqmp dma engine driver support [not found] ` <1434422083-8653-1-git-send-email-punnaia-gjFFaj9aHVfQT0dZR+AlfA@public.gmane.org> @ 2015-06-16 2:34 ` Punnaiah Choudary Kalluri [not found] ` <1434422083-8653-2-git-send-email-punnaia-gjFFaj9aHVfQT0dZR+AlfA@public.gmane.org> 0 siblings, 1 reply; 10+ messages in thread From: Punnaiah Choudary Kalluri @ 2015-06-16 2:34 UTC (permalink / raw) To: robh+dt-DgEjT+Ai2ygdnm+yROfE0A, pawel.moll-5wv7dgnIgG8, mark.rutland-5wv7dgnIgG8, ijc+devicetree-KcIKpvwj1kUDXYZnReoRVg, galak-sgV2jX0FEOL9JmXXK+q4OQ, michal.simek-gjFFaj9aHVfQT0dZR+AlfA, soren.brinkmann-gjFFaj9aHVfQT0dZR+AlfA, vinod.koul-ral2JQCrhuEAvxtiuMwx3w, dan.j.williams-ral2JQCrhuEAvxtiuMwx3w Cc: dmaengine-u79uwXL29TY76Z2rM5mHXA, devicetree-u79uwXL29TY76Z2rM5mHXA, linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, linux-kernel-u79uwXL29TY76Z2rM5mHXA, kalluripunnaiahchoudary-Re5JQEeQqe8AvxtiuMwx3w, kpc528-Re5JQEeQqe8AvxtiuMwx3w, Punnaiah Choudary Kalluri Added the basic driver for zynqmp dma engine used in Zynq UltraScale+ MPSoC. The initial release of this driver supports only memory to memory transfers. Signed-off-by: Punnaiah Choudary Kalluri <punnaia-gjFFaj9aHVfQT0dZR+AlfA@public.gmane.org> --- Changes in v3: - Modified the zynqmp_dma_chan_is_idle function return type to bool Changes in v2: - Corrected the function header documentation - Framework expects bus-width value in bytes. so, fixed it. 
- Removed magic numbers for bus-width --- drivers/dma/Kconfig | 6 + drivers/dma/xilinx/Makefile | 1 + drivers/dma/xilinx/zynqmp_dma.c | 1216 +++++++++++++++++++++++++++++++++++++++ 3 files changed, 1223 insertions(+), 0 deletions(-) create mode 100644 drivers/dma/xilinx/zynqmp_dma.c diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig index bda2cb0..d5b57fc 100644 --- a/drivers/dma/Kconfig +++ b/drivers/dma/Kconfig @@ -410,6 +410,12 @@ config XILINX_VDMA channels, Memory Mapped to Stream (MM2S) and Stream to Memory Mapped (S2MM) for the data transfers. +config XILINX_ZYNQMP_DMA + tristate "Xilinx ZynqMP DMA Engine" + select DMA_ENGINE + help + Enable support for Xilinx ZynqMP DMA engine in Zynq UltraScale+ MPSoC. + config DMA_SUN6I tristate "Allwinner A31 SoCs DMA support" depends on MACH_SUN6I || MACH_SUN8I || COMPILE_TEST diff --git a/drivers/dma/xilinx/Makefile b/drivers/dma/xilinx/Makefile index 3c4e9f2..95469dc 100644 --- a/drivers/dma/xilinx/Makefile +++ b/drivers/dma/xilinx/Makefile @@ -1 +1,2 @@ obj-$(CONFIG_XILINX_VDMA) += xilinx_vdma.o +obj-$(CONFIG_XILINX_ZYNQMP_DMA) += zynqmp_dma.o diff --git a/drivers/dma/xilinx/zynqmp_dma.c b/drivers/dma/xilinx/zynqmp_dma.c new file mode 100644 index 0000000..cfb169a --- /dev/null +++ b/drivers/dma/xilinx/zynqmp_dma.c @@ -0,0 +1,1216 @@ +/* + * DMA driver for Xilinx ZynqMP DMA Engine + * + * Copyright (C) 2015 Xilinx, Inc. All rights reserved. + * + * This program is free software: you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation, either version 2 of the License, or + * (at your option) any later version. 
+ */ + +#include <linux/bitops.h> +#include <linux/dma-mapping.h> +#include <linux/dmaengine.h> +#include <linux/dmapool.h> +#include <linux/init.h> +#include <linux/interrupt.h> +#include <linux/io.h> +#include <linux/module.h> +#include <linux/of_address.h> +#include <linux/of_dma.h> +#include <linux/of_irq.h> +#include <linux/of_platform.h> +#include <linux/slab.h> + +#include "../dmaengine.h" + +/* Register Offsets */ +#define ISR 0x100 +#define IMR 0x104 +#define IER 0x108 +#define IDS 0x10C +#define CTRL0 0x110 +#define CTRL1 0x114 +#define STATUS 0x11C +#define DATA_ATTR 0x120 +#define DSCR_ATTR 0x124 +#define SRC_DSCR_WRD0 0x128 +#define SRC_DSCR_WRD1 0x12C +#define SRC_DSCR_WRD2 0x130 +#define SRC_DSCR_WRD3 0x134 +#define DST_DSCR_WRD0 0x138 +#define DST_DSCR_WRD1 0x13C +#define DST_DSCR_WRD2 0x140 +#define DST_DSCR_WRD3 0x144 +#define SRC_START_LSB 0x158 +#define SRC_START_MSB 0x15C +#define DST_START_LSB 0x160 +#define DST_START_MSB 0x164 +#define TOTAL_BYTE 0x188 +#define RATE_CTRL 0x18C +#define IRQ_SRC_ACCT 0x190 +#define IRQ_DST_ACCT 0x194 +#define CTRL2 0x200 + +/* Interrupt registers bit field definitions */ +#define DMA_DONE BIT(10) +#define AXI_WR_DATA BIT(9) +#define AXI_RD_DATA BIT(8) +#define AXI_RD_DST_DSCR BIT(7) +#define AXI_RD_SRC_DSCR BIT(6) +#define IRQ_DST_ACCT_ERR BIT(5) +#define IRQ_SRC_ACCT_ERR BIT(4) +#define BYTE_CNT_OVRFL BIT(3) +#define DST_DSCR_DONE BIT(2) +#define INV_APB BIT(0) + +/* Control 0 register bit field definitions */ +#define OVR_FETCH BIT(7) +#define POINT_TYPE_SG BIT(6) +#define RATE_CTRL_EN BIT(3) + +/* Control 1 register bit field definitions */ +#define SRC_ISSUE GENMASK(4, 0) + +/* Channel status register bit field definitions */ +#define STATUS_BUSY 0x2 + +/* Data Attribute register bit field definitions */ +#define ARBURST GENMASK(27, 26) +#define ARCACHE GENMASK(25, 22) +#define ARCACHE_OFST 22 +#define ARQOS GENMASK(21, 18) +#define ARQOS_OFST 18 +#define ARLEN GENMASK(17, 14) +#define ARLEN_OFST 14 
+#define AWBURST GENMASK(13, 12) +#define AWCACHE GENMASK(11, 8) +#define AWCACHE_OFST 8 +#define AWQOS GENMASK(7, 4) +#define AWQOS_OFST 4 +#define AWLEN GENMASK(3, 0) +#define AWLEN_OFST 0 + +/* Descriptor Attribute register bit field definitions */ +#define AXCOHRNT BIT(8) +#define AXCACHE GENMASK(7, 4) +#define AXCACHE_OFST 4 +#define AXQOS GENMASK(3, 0) +#define AXQOS_OFST 0 + +/* Control register 2 bit field definitions */ +#define ENABLE BIT(0) + +/* Buffer Descriptor definitions */ +#define DESC_CTRL_STOP 0x10 +#define DESC_CTRL_COMP_INT 0x4 +#define DESC_CTRL_SIZE_256 0x2 +#define DESC_CTRL_COHRNT 0x1 + +/* Interrupt Mask specific definitions */ +#define INT_ERR (AXI_RD_DATA | AXI_WR_DATA | AXI_RD_DST_DSCR | \ + AXI_RD_SRC_DSCR | INV_APB) +#define INT_OVRFL (BYTE_CNT_OVRFL | IRQ_SRC_ACCT_ERR | IRQ_DST_ACCT_ERR) +#define INT_DONE (DMA_DONE | DST_DSCR_DONE) +#define INT_EN_DEFAULT_MASK (INT_DONE | INT_ERR | INT_OVRFL | DST_DSCR_DONE) + +/* Max number of descriptors per channel */ +#define ZYNQMP_DMA_NUM_DESCS 32 + +/* Max transfer size per descriptor */ +#define ZYNQMP_DMA_MAX_TRANS_LEN 0x40000000 + +/* Reset values for data attributes */ +#define ARCACHE_RST_VAL 0x2 +#define ARLEN_RST_VAL 0xF +#define AWCACHE_RST_VAL 0x2 +#define AWLEN_RST_VAL 0xF + +#define SRC_ISSUE_RST_VAL 0x1F + +#define IDS_DEFAULT_MASK 0xFFF + +/* Bus width in bits */ +#define ZYNQMP_DMA_BUS_WIDTH_64 64 +#define ZYNQMP_DMA_BUS_WIDTH_128 128 + +#define DESC_SIZE(chan) (chan->desc_size) + +#define to_chan(chan) container_of(chan, struct zynqmp_dma_chan, \ + common) +#define tx_to_desc(tx) container_of(tx, struct zynqmp_dma_desc_sw, \ + async_tx) + +/** + * struct zynqmp_dma_desc_ll - Hw linked list descriptor + * @addr: Buffer address + * @size: Size of the buffer + * @ctrl: Control word + * @nxtdscraddr: Next descriptor base address + * @rsvd: Reserved field and for Hw internal use. 
+ */
+struct zynqmp_dma_desc_ll {
+	u64 addr;
+	u32 size;
+	u32 ctrl;
+	u64 nxtdscraddr;
+	u64 rsvd;
+} __aligned(64);
+
+/**
+ * struct zynqmp_dma_chan - Driver specific DMA channel structure
+ * @xdev: Driver specific device structure
+ * @regs: Control registers offset
+ * @lock: Descriptor operation lock
+ * @pending_list: Descriptors waiting
+ * @free_list: Descriptors free
+ * @active_list: Descriptors active
+ * @sw_desc_pool: SW descriptor pool
+ * @done_list: Complete descriptors
+ * @common: DMA common channel
+ * @desc_pool_v: Statically allocated descriptor base
+ * @desc_pool_p: Physical allocated descriptor base
+ * @desc_free_cnt: Descriptor available count
+ * @dev: The dma device
+ * @irq: Channel IRQ
+ * @has_sg: Support scatter gather transfers
+ * @ovrfetch: Overfetch status
+ * @ratectrl: Rate control value
+ * @tasklet: Cleanup work after irq
+ * @src_issue: Outstanding transactions on source
+ * @dst_issue: Outstanding transactions on destination
+ * @desc_size: Size of the low level descriptor
+ * @err: Channel has errors
+ * @bus_width: Bus width
+ * @desc_axi_cohrnt: Descriptor axi coherent status
+ * @desc_axi_cache: Descriptor axi cache attribute
+ * @desc_axi_qos: Descriptor axi qos attribute
+ * @src_axi_cohrnt: Source data axi coherent status
+ * @src_axi_cache: Source data axi cache attribute
+ * @src_axi_qos: Source data axi qos attribute
+ * @dst_axi_cohrnt: Dest data axi coherent status
+ * @dst_axi_cache: Dest data axi cache attribute
+ * @dst_axi_qos: Dest data axi qos attribute
+ * @src_burst_len: Source burst length
+ * @dst_burst_len: Dest burst length
+ */
+struct zynqmp_dma_chan {
+	struct zynqmp_dma_device *xdev;
+	void __iomem *regs;
+	spinlock_t lock;
+	struct list_head pending_list;
+	struct list_head free_list;
+	struct list_head active_list;
+	struct zynqmp_dma_desc_sw *sw_desc_pool;
+	struct list_head done_list;
+	struct dma_chan common;
+	void *desc_pool_v;
+	dma_addr_t desc_pool_p;
+	u32 desc_free_cnt;
+	struct device *dev;
+	int irq;
+	bool has_sg;
+	bool ovrfetch;
+	u32 ratectrl;
+	struct tasklet_struct tasklet;
+	u32 src_issue;
+	u32 dst_issue;
+	u32 desc_size;
+	bool err;
+	u32 bus_width;
+	u32 desc_axi_cohrnt;
+	u32 desc_axi_cache;
+	u32 desc_axi_qos;
+	u32 src_axi_cohrnt;
+	u32 src_axi_cache;
+	u32 src_axi_qos;
+	u32 dst_axi_cohrnt;
+	u32 dst_axi_cache;
+	u32 dst_axi_qos;
+	u32 src_burst_len;
+	u32 dst_burst_len;
+};
+
+/**
+ * struct zynqmp_dma_desc_sw - Per Transaction structure
+ * @src: Source address for simple mode dma
+ * @dst: Destination address for simple mode dma
+ * @len: Transfer length for simple mode dma
+ * @node: Node in the channel descriptor list
+ * @tx_list: List head for the current transfer
+ * @async_tx: Async transaction descriptor
+ * @src_v: Virtual address of the src descriptor
+ * @src_p: Physical address of the src descriptor
+ * @dst_v: Virtual address of the dst descriptor
+ * @dst_p: Physical address of the dst descriptor
+ */
+struct zynqmp_dma_desc_sw {
+	u64 src;
+	u64 dst;
+	u32 len;
+	struct list_head node;
+	struct list_head tx_list;
+	struct dma_async_tx_descriptor async_tx;
+	struct zynqmp_dma_desc_ll *src_v;
+	dma_addr_t src_p;
+	struct zynqmp_dma_desc_ll *dst_v;
+	dma_addr_t dst_p;
+};
+
+/**
+ * struct zynqmp_dma_device - DMA device structure
+ * @dev: Device Structure
+ * @common: DMA device structure
+ * @chan: Driver specific DMA channel
+ */
+struct zynqmp_dma_device {
+	struct device *dev;
+	struct dma_device common;
+	struct zynqmp_dma_chan *chan;
+};
+
+/**
+ * zynqmp_dma_chan_is_idle - Provides the channel idle status
+ * @chan: ZynqMP DMA channel pointer
+ *
+ * Return: 'true' if the channel is idle otherwise 'false'
+ */
+static bool zynqmp_dma_chan_is_idle(struct zynqmp_dma_chan *chan)
+{
+	u32 regval;
+
+	regval = readl(chan->regs + STATUS);
+	if (regval & STATUS_BUSY)
+		return false;
+
+	return true;
+}
+
+/**
+ * zynqmp_dma_update_desc_to_ctrlr - Updates descriptor to the controller
+ * @chan: ZynqMP
DMA channel pointer + * @desc: Transaction descriptor pointer + */ +static void zynqmp_dma_update_desc_to_ctrlr(struct zynqmp_dma_chan *chan, + struct zynqmp_dma_desc_sw *desc) +{ + dma_addr_t addr; + + addr = desc->src_p; + writel(addr, chan->regs + SRC_START_LSB); + writel(upper_32_bits(addr), chan->regs + SRC_START_MSB); + addr = desc->dst_p; + writel(addr, chan->regs + DST_START_LSB); + writel(upper_32_bits(addr), chan->regs + DST_START_MSB); +} + +/** + * zynqmp_dma_desc_config_eod - Mark the descriptor as end descriptor + * @chan: ZynqMP DMA channel pointer + * @desc: Hw descriptor pointer + */ +static void zynqmp_dma_desc_config_eod(struct zynqmp_dma_chan *chan, void *desc) +{ + struct zynqmp_dma_desc_ll *hw = (struct zynqmp_dma_desc_ll *)desc; + + hw->ctrl |= DESC_CTRL_STOP; + hw++; + hw->ctrl |= DESC_CTRL_COMP_INT | DESC_CTRL_STOP; +} + +/** + * zynqmp_dma_config_simple_desc - Configure the transfer params to channel registers + * @chan: ZynqMP DMA channel pointer + * @src: Source buffer address + * @dst: Destination buffer address + * @len: Transfer length + */ +static void zynqmp_dma_config_simple_desc(struct zynqmp_dma_chan *chan, + dma_addr_t src, dma_addr_t dst, + size_t len) +{ + u32 val; + + writel(src, chan->regs + SRC_DSCR_WRD0); + writel(upper_32_bits(src), chan->regs + SRC_DSCR_WRD1); + writel(len, chan->regs + SRC_DSCR_WRD2); + + if (chan->src_axi_cohrnt) + writel(DESC_CTRL_COHRNT, chan->regs + SRC_DSCR_WRD3); + else + writel(0, chan->regs + SRC_DSCR_WRD3); + + writel(dst, chan->regs + DST_DSCR_WRD0); + writel(upper_32_bits(dst), chan->regs + DST_DSCR_WRD1); + writel(len, chan->regs + DST_DSCR_WRD2); + + if (chan->dst_axi_cohrnt) + val = DESC_CTRL_COHRNT | DESC_CTRL_COMP_INT; + else + val = DESC_CTRL_COMP_INT; + writel(val, chan->regs + DST_DSCR_WRD3); +} + +/** + * zynqmp_dma_config_sg_ll_desc - Configure the linked list descriptor + * @chan: ZynqMP DMA channel pointer + * @sdesc: Hw descriptor pointer + * @src: Source buffer address + * @dst: 
Destination buffer address
+ * @len: Transfer length
+ * @prev: Previous hw descriptor pointer
+ */
+static void zynqmp_dma_config_sg_ll_desc(struct zynqmp_dma_chan *chan,
+				struct zynqmp_dma_desc_ll *sdesc,
+				dma_addr_t src, dma_addr_t dst, size_t len,
+				struct zynqmp_dma_desc_ll *prev)
+{
+	struct zynqmp_dma_desc_ll *ddesc = sdesc + 1;
+
+	sdesc->size = ddesc->size = len;
+	sdesc->addr = src;
+	ddesc->addr = dst;
+
+	sdesc->ctrl = ddesc->ctrl = DESC_CTRL_SIZE_256;
+	if (chan->src_axi_cohrnt)
+		sdesc->ctrl |= DESC_CTRL_COHRNT;
+	if (chan->dst_axi_cohrnt)
+		ddesc->ctrl |= DESC_CTRL_COHRNT;
+
+	if (prev) {
+		dma_addr_t addr = chan->desc_pool_p +
+			((dma_addr_t)sdesc - (dma_addr_t)chan->desc_pool_v);
+		ddesc = prev + 1;
+		prev->nxtdscraddr = addr;
+		ddesc->nxtdscraddr = addr + DESC_SIZE(chan);
+	}
+}
+
+/**
+ * zynqmp_dma_init - Initialize the channel
+ * @chan: ZynqMP DMA channel pointer
+ */
+static void zynqmp_dma_init(struct zynqmp_dma_chan *chan)
+{
+	u32 val;
+
+	writel(IDS_DEFAULT_MASK, chan->regs + IDS);
+	val = readl(chan->regs + ISR);
+	writel(val, chan->regs + ISR);
+	writel(0x0, chan->regs + TOTAL_BYTE);
+
+	val = readl(chan->regs + CTRL1);
+	if (chan->src_issue)
+		val = (val & ~SRC_ISSUE) | chan->src_issue;
+	writel(val, chan->regs + CTRL1);
+
+	val = 0;
+	if (chan->ovrfetch)
+		val |= OVR_FETCH;
+	if (chan->has_sg)
+		val |= POINT_TYPE_SG;
+	if (chan->ratectrl) {
+		val |= RATE_CTRL_EN;
+		writel(chan->ratectrl, chan->regs + RATE_CTRL);
+	}
+	writel(val, chan->regs + CTRL0);
+
+	val = 0;
+	if (chan->desc_axi_cohrnt)
+		val |= AXCOHRNT;
+	val = (val & ~AXCACHE) | (chan->desc_axi_cache << AXCACHE_OFST);
+	val = (val & ~AXQOS) | (chan->desc_axi_qos << AXQOS_OFST);
+	writel(val, chan->regs + DSCR_ATTR);
+
+	val = readl(chan->regs + DATA_ATTR);
+	val = (val & ~ARCACHE) | (chan->src_axi_cache << ARCACHE_OFST);
+	val = (val & ~AWCACHE) | (chan->dst_axi_cache << AWCACHE_OFST);
+	val = (val & ~ARQOS) | (chan->src_axi_qos << ARQOS_OFST);
+	val = (val & ~AWQOS) | (chan->dst_axi_qos << AWQOS_OFST);
+	val = (val & ~ARLEN) | (chan->src_burst_len << ARLEN_OFST);
+	val = (val & ~AWLEN) | (chan->dst_burst_len << AWLEN_OFST);
+	writel(val, chan->regs + DATA_ATTR);
+
+	/* Clear the interrupt accounting registers (read-to-clear) */
+	val = readl(chan->regs + IRQ_SRC_ACCT);
+	val = readl(chan->regs + IRQ_DST_ACCT);
+}
+
+/**
+ * zynqmp_dma_tx_submit - Submit DMA transaction
+ * @tx: Async transaction descriptor pointer
+ *
+ * Return: cookie value
+ */
+static dma_cookie_t zynqmp_dma_tx_submit(struct dma_async_tx_descriptor *tx)
+{
+	struct zynqmp_dma_chan *chan = to_chan(tx->chan);
+	struct zynqmp_dma_desc_sw *desc = NULL, *new;
+	dma_cookie_t cookie;
+
+	new = tx_to_desc(tx);
+	spin_lock_bh(&chan->lock);
+	cookie = dma_cookie_assign(tx);
+	if (!list_empty(&chan->pending_list) && chan->has_sg) {
+		desc = list_last_entry(&chan->pending_list,
+				struct zynqmp_dma_desc_sw, node);
+		if (!list_empty(&desc->tx_list))
+			desc = list_last_entry(&desc->tx_list,
+					struct zynqmp_dma_desc_sw, node);
+		desc->src_v->nxtdscraddr = new->src_p;
+		desc->src_v->ctrl &= ~DESC_CTRL_STOP;
+		desc->dst_v->nxtdscraddr = new->dst_p;
+		desc->dst_v->ctrl &= ~DESC_CTRL_STOP;
+	}
+	list_add_tail(&new->node, &chan->pending_list);
+	spin_unlock_bh(&chan->lock);
+
+	return cookie;
+}
+
+/**
+ * zynqmp_dma_get_descriptor - Get the sw descriptor from the pool
+ * @chan: ZynqMP DMA channel pointer
+ *
+ * Return: The sw descriptor
+ */
+static struct zynqmp_dma_desc_sw *
+zynqmp_dma_get_descriptor(struct zynqmp_dma_chan *chan)
+{
+	struct zynqmp_dma_desc_sw *desc;
+
+	spin_lock_bh(&chan->lock);
+	desc = list_first_entry(&chan->free_list, struct zynqmp_dma_desc_sw,
+				node);
+	list_del(&desc->node);
+	spin_unlock_bh(&chan->lock);
+
+	INIT_LIST_HEAD(&desc->tx_list);
+	/* Clear the src and dst descriptor memory */
+	if (chan->has_sg) {
+		memset((void *)desc->src_v, 0, DESC_SIZE(chan));
+		memset((void *)desc->dst_v, 0, DESC_SIZE(chan));
+	}
+
+	return
desc;
+}
+
+/**
+ * zynqmp_dma_free_descriptor - Free descriptor and its children
+ * @chan: ZynqMP DMA channel pointer
+ * @sdesc: Transaction descriptor pointer
+ */
+static void zynqmp_dma_free_descriptor(struct zynqmp_dma_chan *chan,
+				struct zynqmp_dma_desc_sw *sdesc)
+{
+	struct zynqmp_dma_desc_sw *child, *next;
+
+	chan->desc_free_cnt++;
+	list_add_tail(&sdesc->node, &chan->free_list);
+	list_for_each_entry_safe(child, next, &sdesc->tx_list, node) {
+		chan->desc_free_cnt++;
+		list_move_tail(&child->node, &chan->free_list);
+	}
+}
+
+/**
+ * zynqmp_dma_free_desc_list - Free descriptors list
+ * @chan: ZynqMP DMA channel pointer
+ * @list: List to parse and delete the descriptor
+ */
+static void zynqmp_dma_free_desc_list(struct zynqmp_dma_chan *chan,
+				struct list_head *list)
+{
+	struct zynqmp_dma_desc_sw *desc, *next;
+
+	list_for_each_entry_safe(desc, next, list, node)
+		zynqmp_dma_free_descriptor(chan, desc);
+}
+
+/**
+ * zynqmp_dma_alloc_chan_resources - Allocate channel resources
+ * @dchan: DMA channel
+ *
+ * Return: '0' on success and failure value on error
+ */
+static int zynqmp_dma_alloc_chan_resources(struct dma_chan *dchan)
+{
+	struct zynqmp_dma_chan *chan = to_chan(dchan);
+	struct zynqmp_dma_desc_sw *desc;
+	int i;
+
+	chan->sw_desc_pool = kzalloc(sizeof(*desc) * ZYNQMP_DMA_NUM_DESCS,
+				     GFP_KERNEL);
+	if (!chan->sw_desc_pool)
+		return -ENOMEM;
+
+	chan->desc_free_cnt = ZYNQMP_DMA_NUM_DESCS;
+
+	INIT_LIST_HEAD(&chan->free_list);
+
+	for (i = 0; i < ZYNQMP_DMA_NUM_DESCS; i++) {
+		desc = chan->sw_desc_pool + i;
+		dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
+		desc->async_tx.tx_submit = zynqmp_dma_tx_submit;
+		async_tx_ack(&desc->async_tx);
+		list_add_tail(&desc->node, &chan->free_list);
+	}
+
+	if (!chan->has_sg)
+		return 0;
+
+	chan->desc_pool_v = dma_zalloc_coherent(chan->dev,
+				(2 * chan->desc_size * ZYNQMP_DMA_NUM_DESCS),
+				&chan->desc_pool_p, GFP_KERNEL);
+	if (!chan->desc_pool_v)
+		return -ENOMEM;
+
+	for (i = 0; i <
ZYNQMP_DMA_NUM_DESCS; i++) { + desc = chan->sw_desc_pool + i; + desc->src_v = (struct zynqmp_dma_desc_ll *) (chan->desc_pool_v + + (i * DESC_SIZE(chan) * 2)); + desc->dst_v = (struct zynqmp_dma_desc_ll *) (desc->src_v + 1); + desc->src_p = chan->desc_pool_p + (i * DESC_SIZE(chan) * 2); + desc->dst_p = desc->src_p + DESC_SIZE(chan); + } + + return 0; +} + +/** + * zynqmp_dma_start - Start DMA channel + * @chan: ZynqMP DMA channel pointer + */ +static void zynqmp_dma_start(struct zynqmp_dma_chan *chan) +{ + writel(INT_EN_DEFAULT_MASK, chan->regs + IER); + writel(0, chan->regs + TOTAL_BYTE); + writel(ENABLE, chan->regs + CTRL2); +} + +/** + * zynqmp_dma_handle_ovfl_int - Process the overflow interrupt + * @chan: ZynqMP DMA channel pointer + * @status: Interrupt status value + */ +static void zynqmp_dma_handle_ovfl_int(struct zynqmp_dma_chan *chan, u32 status) +{ + u32 val; + + if (status & BYTE_CNT_OVRFL) { + val = readl(chan->regs + TOTAL_BYTE); + writel(0, chan->regs + TOTAL_BYTE); + } + if (status & IRQ_DST_ACCT_ERR) + val = readl(chan->regs + IRQ_DST_ACCT); + if (status & IRQ_SRC_ACCT_ERR) + val = readl(chan->regs + IRQ_SRC_ACCT); +} + +/** + * zynqmp_dma_start_transfer - Initiate the new transfer + * @chan: ZynqMP DMA channel pointer + */ +static void zynqmp_dma_start_transfer(struct zynqmp_dma_chan *chan) +{ + struct zynqmp_dma_desc_sw *desc; + + if (!zynqmp_dma_chan_is_idle(chan)) + return; + + desc = list_first_entry_or_null(&chan->pending_list, + struct zynqmp_dma_desc_sw, node); + if (!desc) + return; + + if (chan->has_sg) + list_splice_tail_init(&chan->pending_list, &chan->active_list); + else + list_move_tail(&desc->node, &chan->active_list); + + if (chan->has_sg) + zynqmp_dma_update_desc_to_ctrlr(chan, desc); + else + zynqmp_dma_config_simple_desc(chan, desc->src, desc->dst, + desc->len); + zynqmp_dma_start(chan); +} + + +/** + * zynqmp_dma_chan_desc_cleanup - Cleanup the completed descriptors + * @chan: ZynqMP DMA channel + */ +static void 
zynqmp_dma_chan_desc_cleanup(struct zynqmp_dma_chan *chan) +{ + struct zynqmp_dma_desc_sw *desc, *next; + + list_for_each_entry_safe(desc, next, &chan->done_list, node) { + dma_async_tx_callback callback; + void *callback_param; + + list_del(&desc->node); + + callback = desc->async_tx.callback; + callback_param = desc->async_tx.callback_param; + if (callback) + callback(callback_param); + + /* Run any dependencies, then free the descriptor */ + dma_run_dependencies(&desc->async_tx); + zynqmp_dma_free_descriptor(chan, desc); + } +} + +/** + * zynqmp_dma_complete_descriptor - Mark the active descriptor as complete + * @chan: ZynqMP DMA channel pointer + */ +static void zynqmp_dma_complete_descriptor(struct zynqmp_dma_chan *chan) +{ + struct zynqmp_dma_desc_sw *desc; + + desc = list_first_entry_or_null(&chan->active_list, + struct zynqmp_dma_desc_sw, node); + if (!desc) + return; + list_del(&desc->node); + dma_cookie_complete(&desc->async_tx); + list_add_tail(&desc->node, &chan->done_list); +} + +/** + * zynqmp_dma_issue_pending - Issue pending transactions + * @dchan: DMA channel pointer + */ +static void zynqmp_dma_issue_pending(struct dma_chan *dchan) +{ + struct zynqmp_dma_chan *chan = to_chan(dchan); + + spin_lock_bh(&chan->lock); + zynqmp_dma_start_transfer(chan); + spin_unlock_bh(&chan->lock); +} + +/** + * zynqmp_dma_free_chan_resources - Free channel resources + * @dchan: DMA channel pointer + */ +static void zynqmp_dma_free_chan_resources(struct dma_chan *dchan) +{ + struct zynqmp_dma_chan *chan = to_chan(dchan); + + spin_lock_bh(&chan->lock); + + zynqmp_dma_free_desc_list(chan, &chan->active_list); + zynqmp_dma_free_desc_list(chan, &chan->done_list); + zynqmp_dma_free_desc_list(chan, &chan->pending_list); + + spin_unlock_bh(&chan->lock); + if (chan->has_sg) + dma_free_coherent(chan->dev, + (2 * DESC_SIZE(chan) * ZYNQMP_DMA_NUM_DESCS), + chan->desc_pool_v, chan->desc_pool_p); + kfree(chan->sw_desc_pool); +} + +/** + * zynqmp_dma_tx_status - Get dma 
transaction status
+ * @dchan: DMA channel pointer
+ * @cookie: Transaction identifier
+ * @txstate: Transaction state
+ *
+ * Return: DMA transaction status
+ */
+static enum dma_status zynqmp_dma_tx_status(struct dma_chan *dchan,
+				dma_cookie_t cookie,
+				struct dma_tx_state *txstate)
+{
+	return dma_cookie_status(dchan, cookie, txstate);
+}
+
+/**
+ * zynqmp_dma_reset - Reset the channel
+ * @chan: ZynqMP DMA channel pointer
+ */
+static void zynqmp_dma_reset(struct zynqmp_dma_chan *chan)
+{
+	writel(IDS_DEFAULT_MASK, chan->regs + IDS);
+
+	zynqmp_dma_complete_descriptor(chan);
+	zynqmp_dma_chan_desc_cleanup(chan);
+
+	zynqmp_dma_free_desc_list(chan, &chan->active_list);
+	zynqmp_dma_free_desc_list(chan, &chan->done_list);
+	zynqmp_dma_free_desc_list(chan, &chan->pending_list);
+	zynqmp_dma_init(chan);
+}
+
+/**
+ * zynqmp_dma_irq_handler - ZynqMP DMA Interrupt handler
+ * @irq: IRQ number
+ * @data: Pointer to the ZynqMP DMA channel structure
+ *
+ * Return: IRQ_HANDLED/IRQ_NONE
+ */
+static irqreturn_t zynqmp_dma_irq_handler(int irq, void *data)
+{
+	struct zynqmp_dma_chan *chan = (struct zynqmp_dma_chan *)data;
+	u32 isr, imr, status;
+	irqreturn_t ret = IRQ_NONE;
+
+	isr = readl(chan->regs + ISR);
+	imr = readl(chan->regs + IMR);
+	status = isr & ~imr;
+	writel(isr, chan->regs + ISR);
+	if (status & INT_DONE) {
+		writel(INT_DONE, chan->regs + IDS);
+		tasklet_schedule(&chan->tasklet);
+		ret = IRQ_HANDLED;
+	}
+
+	if (status & INT_ERR) {
+		chan->err = true;
+		writel(INT_ERR, chan->regs + IDS);
+		tasklet_schedule(&chan->tasklet);
+		dev_err(chan->dev, "Channel %p has errors\n", chan);
+		ret = IRQ_HANDLED;
+	}
+
+	if (status & INT_OVRFL) {
+		writel(INT_OVRFL, chan->regs + IDS);
+		zynqmp_dma_handle_ovfl_int(chan, status);
+		dev_info(chan->dev, "Channel %p overflow interrupt\n", chan);
+		ret = IRQ_HANDLED;
+	}
+
+	return ret;
+}
+
+/**
+ * zynqmp_dma_do_tasklet - Completion tasklet handler
+ * @data: Pointer to the ZynqMP DMA channel structure
+ */
+static void
zynqmp_dma_do_tasklet(unsigned long data) +{ + struct zynqmp_dma_chan *chan = (struct zynqmp_dma_chan *)data; + u32 count; + + spin_lock(&chan->lock); + + if (chan->err) { + zynqmp_dma_reset(chan); + chan->err = false; + goto unlock; + } + + if (zynqmp_dma_chan_is_idle(chan)) + zynqmp_dma_start_transfer(chan); + else + writel(INT_DONE, chan->regs + IER); + + count = readl(chan->regs + IRQ_DST_ACCT); + while (count) { + zynqmp_dma_complete_descriptor(chan); + zynqmp_dma_chan_desc_cleanup(chan); + count--; + } + +unlock: + spin_unlock(&chan->lock); +} + +/** + * zynqmp_dma_device_terminate_all - Aborts all transfers on a channel + * @dchan: DMA channel pointer + * + * Return: Always '0' + */ +static int zynqmp_dma_device_terminate_all(struct dma_chan *dchan) +{ + struct zynqmp_dma_chan *chan = to_chan(dchan); + + spin_lock_bh(&chan->lock); + zynqmp_dma_reset(chan); + spin_unlock_bh(&chan->lock); + + return 0; +} + +/** + * zynqmp_dma_prep_memcpy - prepare descriptors for memcpy transaction + * @dchan: DMA channel + * @dma_dst: Destination buffer address + * @dma_src: Source buffer address + * @len: Transfer length + * @flags: transfer ack flags + * + * Return: Async transaction descriptor on success and NULL on failure + */ +static struct dma_async_tx_descriptor *zynqmp_dma_prep_memcpy( + struct dma_chan *dchan, dma_addr_t dma_dst, + dma_addr_t dma_src, size_t len, ulong flags) +{ + struct zynqmp_dma_chan *chan; + struct zynqmp_dma_desc_sw *new, *first = NULL; + void *desc = NULL, *prev = NULL; + size_t copy; + u32 desc_cnt; + + chan = to_chan(dchan); + + if ((len > ZYNQMP_DMA_MAX_TRANS_LEN) && !chan->has_sg) + return NULL; + + desc_cnt = DIV_ROUND_UP(len, ZYNQMP_DMA_MAX_TRANS_LEN); + + spin_lock_bh(&chan->lock); + if ((desc_cnt > chan->desc_free_cnt) && chan->has_sg) { + spin_unlock_bh(&chan->lock); + dev_dbg(chan->dev, "chan %p descs are not available\n", chan); + return NULL; + } + chan->desc_free_cnt = chan->desc_free_cnt - desc_cnt; + 
spin_unlock_bh(&chan->lock);
+
+	do {
+		new = zynqmp_dma_get_descriptor(chan);
+		copy = min_t(size_t, len, ZYNQMP_DMA_MAX_TRANS_LEN);
+		if (chan->has_sg) {
+			desc = (struct zynqmp_dma_desc_ll *)new->src_v;
+			zynqmp_dma_config_sg_ll_desc(chan, desc, dma_src,
+						     dma_dst, copy, prev);
+		} else {
+			new->src = dma_src;
+			new->dst = dma_dst;
+			new->len = len;
+		}
+
+		prev = desc;
+		len -= copy;
+		dma_src += copy;
+		dma_dst += copy;
+		if (!first)
+			first = new;
+		else
+			list_add_tail(&new->node, &first->tx_list);
+	} while (len);
+
+	if (chan->has_sg)
+		zynqmp_dma_desc_config_eod(chan, desc);
+
+	first->async_tx.flags = flags;
+	return &first->async_tx;
+}
+
+/**
+ * zynqmp_dma_prep_sg - prepare descriptors for a memory sg transaction
+ * @dchan: DMA channel
+ * @dst_sg: Destination scatter list
+ * @dst_sg_len: Number of entries in destination scatter list
+ * @src_sg: Source scatter list
+ * @src_sg_len: Number of entries in source scatter list
+ * @flags: transfer ack flags
+ *
+ * Return: Async transaction descriptor on success and NULL on failure
+ */
+static struct dma_async_tx_descriptor *zynqmp_dma_prep_sg(
+			struct dma_chan *dchan, struct scatterlist *dst_sg,
+			unsigned int dst_sg_len, struct scatterlist *src_sg,
+			unsigned int src_sg_len, unsigned long flags)
+{
+	struct zynqmp_dma_desc_sw *new, *first = NULL;
+	struct zynqmp_dma_chan *chan = to_chan(dchan);
+	void *desc = NULL, *prev = NULL;
+	size_t len, dst_avail, src_avail;
+	dma_addr_t dma_dst, dma_src;
+	u32 desc_cnt = 0, i;
+	struct scatterlist *sg;
+
+	if (!chan->has_sg)
+		return NULL;
+
+	for_each_sg(src_sg, sg, src_sg_len, i)
+		desc_cnt += DIV_ROUND_UP(sg_dma_len(sg),
+					 ZYNQMP_DMA_MAX_TRANS_LEN);
+
+	spin_lock_bh(&chan->lock);
+	if (desc_cnt > chan->desc_free_cnt) {
+		spin_unlock_bh(&chan->lock);
+		dev_dbg(chan->dev, "chan %p descs are not available\n", chan);
+		return NULL;
+	}
+	chan->desc_free_cnt = chan->desc_free_cnt - desc_cnt;
+	spin_unlock_bh(&chan->lock);
+
+	dst_avail =
sg_dma_len(dst_sg); + src_avail = sg_dma_len(src_sg); + + /* Run until we are out of scatterlist entries */ + while (true) { + /* Allocate and populate the descriptor */ + new = zynqmp_dma_get_descriptor(chan); + desc = (struct zynqmp_dma_desc_ll *)new->src_v; + len = min_t(size_t, src_avail, dst_avail); + len = min_t(size_t, len, ZYNQMP_DMA_MAX_TRANS_LEN); + if (len == 0) + goto fetch; + dma_dst = sg_dma_address(dst_sg) + sg_dma_len(dst_sg) - + dst_avail; + dma_src = sg_dma_address(src_sg) + sg_dma_len(src_sg) - + src_avail; + + zynqmp_dma_config_sg_ll_desc(chan, desc, dma_src, dma_dst, + len, prev); + prev = desc; + dst_avail -= len; + src_avail -= len; + + if (!first) + first = new; + else + list_add_tail(&new->node, &first->tx_list); +fetch: + /* Fetch the next dst scatterlist entry */ + if (dst_avail == 0) { + if (dst_sg_len == 0) + break; + dst_sg = sg_next(dst_sg); + if (dst_sg == NULL) + break; + dst_sg_len--; + dst_avail = sg_dma_len(dst_sg); + } + /* Fetch the next src scatterlist entry */ + if (src_avail == 0) { + if (src_sg_len == 0) + break; + src_sg = sg_next(src_sg); + if (src_sg == NULL) + break; + src_sg_len--; + src_avail = sg_dma_len(src_sg); + } + } + + zynqmp_dma_desc_config_eod(chan, desc); + first->async_tx.flags = flags; + return &first->async_tx; +} + +/** + * zynqmp_dma_chan_remove - Channel remove function + * @chan: ZynqMP DMA channel pointer + */ +static void zynqmp_dma_chan_remove(struct zynqmp_dma_chan *chan) +{ + if (!chan) + return; + + tasklet_kill(&chan->tasklet); + list_del(&chan->common.device_node); +} + +/** + * zynqmp_dma_chan_probe - Per Channel Probing + * @xdev: Driver specific device structure + * @pdev: Pointer to the platform_device structure + * + * Return: '0' on success and failure value on error + */ +static int zynqmp_dma_chan_probe(struct zynqmp_dma_device *xdev, + struct platform_device *pdev) +{ + struct zynqmp_dma_chan *chan; + struct resource *res; + struct device_node *node = pdev->dev.of_node; + int err; + + 
chan = devm_kzalloc(xdev->dev, sizeof(*chan), GFP_KERNEL); + if (!chan) + return -ENOMEM; + chan->dev = xdev->dev; + chan->xdev = xdev; + + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + chan->regs = devm_ioremap_resource(&pdev->dev, res); + if (IS_ERR(chan->regs)) + return PTR_ERR(chan->regs); + + chan->bus_width = ZYNQMP_DMA_BUS_WIDTH_64; + chan->src_issue = SRC_ISSUE_RST_VAL; + chan->dst_burst_len = AWLEN_RST_VAL; + chan->src_burst_len = ARLEN_RST_VAL; + chan->dst_axi_cache = AWCACHE_RST_VAL; + chan->src_axi_cache = ARCACHE_RST_VAL; + err = of_property_read_u32(node, "xlnx,bus-width", &chan->bus_width); + if (!err && (chan->bus_width != ZYNQMP_DMA_BUS_WIDTH_64) && + (chan->bus_width != ZYNQMP_DMA_BUS_WIDTH_128)) { + dev_err(xdev->dev, "invalid bus-width value"); + return -EINVAL; + } + chan->has_sg = of_property_read_bool(node, "xlnx,include-sg"); + chan->ovrfetch = of_property_read_bool(node, "xlnx,overfetch"); + chan->desc_axi_cohrnt = + of_property_read_bool(node, "xlnx,desc-axi-cohrnt"); + chan->src_axi_cohrnt = + of_property_read_bool(node, "xlnx,src-axi-cohrnt"); + chan->dst_axi_cohrnt = + of_property_read_bool(node, "xlnx,dst-axi-cohrnt"); + + of_property_read_u32(node, "xlnx,desc-axi-qos", &chan->desc_axi_qos); + of_property_read_u32(node, "xlnx,desc-axi-cache", + &chan->desc_axi_cache); + of_property_read_u32(node, "xlnx,src-axi-qos", &chan->src_axi_qos); + of_property_read_u32(node, "xlnx,src-axi-cache", &chan->src_axi_cache); + of_property_read_u32(node, "xlnx,dst-axi-qos", &chan->dst_axi_qos); + of_property_read_u32(node, "xlnx,dst-axi-cache", &chan->dst_axi_cache); + of_property_read_u32(node, "xlnx,src-burst-len", &chan->src_burst_len); + of_property_read_u32(node, "xlnx,dst-burst-len", &chan->dst_burst_len); + of_property_read_u32(node, "xlnx,ratectrl", &chan->ratectrl); + of_property_read_u32(node, "xlnx,src-issue", &chan->src_issue); + + xdev->chan = chan; + tasklet_init(&chan->tasklet, zynqmp_dma_do_tasklet, (ulong)chan); + 
spin_lock_init(&chan->lock); + INIT_LIST_HEAD(&chan->active_list); + INIT_LIST_HEAD(&chan->pending_list); + INIT_LIST_HEAD(&chan->done_list); + INIT_LIST_HEAD(&chan->free_list); + + dma_cookie_init(&chan->common); + chan->common.device = &xdev->common; + list_add_tail(&chan->common.device_node, &xdev->common.channels); + + zynqmp_dma_init(chan); + chan->irq = platform_get_irq(pdev, 0); + if (chan->irq < 0) + return -ENXIO; + err = devm_request_irq(&pdev->dev, chan->irq, zynqmp_dma_irq_handler, 0, + "zynqmp-dma", chan); + if (err) + return err; + + chan->desc_size = sizeof(struct zynqmp_dma_desc_ll); + return 0; +} + +/** + * of_zynqmp_dma_xlate - Translation function + * @dma_spec: Pointer to DMA specifier as found in the device tree + * @ofdma: Pointer to DMA controller data + * + * Return: DMA channel pointer on success and NULL on error + */ +static struct dma_chan *of_zynqmp_dma_xlate(struct of_phandle_args *dma_spec, + struct of_dma *ofdma) +{ + struct zynqmp_dma_device *xdev = ofdma->of_dma_data; + + return dma_get_slave_channel(&xdev->chan->common); +} + +/** + * zynqmp_dma_probe - Driver probe function + * @pdev: Pointer to the platform_device structure + * + * Return: '0' on success and failure value on error + */ +static int zynqmp_dma_probe(struct platform_device *pdev) +{ + struct zynqmp_dma_device *xdev; + struct dma_device *p; + int ret; + + xdev = devm_kzalloc(&pdev->dev, sizeof(*xdev), GFP_KERNEL); + if (!xdev) + return -ENOMEM; + + xdev->dev = &pdev->dev; + INIT_LIST_HEAD(&xdev->common.channels); + + dma_set_mask(&pdev->dev, DMA_BIT_MASK(44)); + dma_cap_set(DMA_SG, xdev->common.cap_mask); + dma_cap_set(DMA_MEMCPY, xdev->common.cap_mask); + + p = &xdev->common; + p->device_prep_dma_sg = zynqmp_dma_prep_sg; + p->device_prep_dma_memcpy = zynqmp_dma_prep_memcpy; + p->device_terminate_all = zynqmp_dma_device_terminate_all; + p->device_issue_pending = zynqmp_dma_issue_pending; + p->device_alloc_chan_resources = zynqmp_dma_alloc_chan_resources; + 
p->device_free_chan_resources = zynqmp_dma_free_chan_resources; + p->device_tx_status = zynqmp_dma_tx_status; + p->dev = &pdev->dev; + + platform_set_drvdata(pdev, xdev); + + ret = zynqmp_dma_chan_probe(xdev, pdev); + if (ret) { + dev_err(&pdev->dev, "Probing channel failed\n"); + goto free_chan_resources; + } + + p->dst_addr_widths = xdev->chan->bus_width / 8; + p->src_addr_widths = xdev->chan->bus_width / 8; + + dma_async_device_register(&xdev->common); + + ret = of_dma_controller_register(pdev->dev.of_node, + of_zynqmp_dma_xlate, xdev); + if (ret) { + dev_err(&pdev->dev, "Unable to register DMA to DT\n"); + dma_async_device_unregister(&xdev->common); + goto free_chan_resources; + } + + dev_info(&pdev->dev, "ZynqMP DMA driver Probe success\n"); + + return 0; + +free_chan_resources: + zynqmp_dma_chan_remove(xdev->chan); + return ret; +} + +/** + * zynqmp_dma_remove - Driver remove function + * @pdev: Pointer to the platform_device structure + * + * Return: Always '0' + */ +static int zynqmp_dma_remove(struct platform_device *pdev) +{ + struct zynqmp_dma_device *xdev = platform_get_drvdata(pdev); + + of_dma_controller_free(pdev->dev.of_node); + dma_async_device_unregister(&xdev->common); + + zynqmp_dma_chan_remove(xdev->chan); + + return 0; +} + +static const struct of_device_id zynqmp_dma_of_match[] = { + { .compatible = "xlnx,zynqmp-dma-1.0", }, + {} +}; +MODULE_DEVICE_TABLE(of, zynqmp_dma_of_match); + +static struct platform_driver zynqmp_dma_driver = { + .driver = { + .name = "xilinx-zynqmp-dma", + .of_match_table = zynqmp_dma_of_match, + }, + .probe = zynqmp_dma_probe, + .remove = zynqmp_dma_remove, +}; + +module_platform_driver(zynqmp_dma_driver); + +MODULE_AUTHOR("Xilinx, Inc."); +MODULE_DESCRIPTION("Xilinx ZynqMP DMA driver"); +MODULE_LICENSE("GPL"); -- 1.7.4 -- To unsubscribe from this list: send the line "unsubscribe devicetree" in the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org More majordomo info at 
http://vger.kernel.org/majordomo-info.html
* Re: [PATCH v3 2/2] dma: Add Xilinx zynqmp dma engine driver support [not found] ` <1434422083-8653-2-git-send-email-punnaia-gjFFaj9aHVfQT0dZR+AlfA@public.gmane.org> @ 2015-07-16 12:35 ` Vinod Koul 2015-07-17 0:52 ` punnaiah choudary kalluri 2015-07-17 2:33 ` Punnaiah Choudary 0 siblings, 2 replies; 10+ messages in thread From: Vinod Koul @ 2015-07-16 12:35 UTC (permalink / raw) To: Punnaiah Choudary Kalluri Cc: robh+dt-DgEjT+Ai2ygdnm+yROfE0A, pawel.moll-5wv7dgnIgG8, mark.rutland-5wv7dgnIgG8, ijc+devicetree-KcIKpvwj1kUDXYZnReoRVg, galak-sgV2jX0FEOL9JmXXK+q4OQ, michal.simek-gjFFaj9aHVfQT0dZR+AlfA, soren.brinkmann-gjFFaj9aHVfQT0dZR+AlfA, dan.j.williams-ral2JQCrhuEAvxtiuMwx3w, dmaengine-u79uwXL29TY76Z2rM5mHXA, devicetree-u79uwXL29TY76Z2rM5mHXA, linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, linux-kernel-u79uwXL29TY76Z2rM5mHXA, kalluripunnaiahchoudary-Re5JQEeQqe8AvxtiuMwx3w, kpc528-Re5JQEeQqe8AvxtiuMwx3w, Punnaiah Choudary Kalluri On Tue, Jun 16, 2015 at 08:04:43AM +0530, Punnaiah Choudary Kalluri wrote: > +/* Register Offsets */ > +#define ISR 0x100 > +#define IMR 0x104 > +#define IER 0x108 > +#define IDS 0x10C > +#define CTRL0 0x110 > +#define CTRL1 0x114 > +#define STATUS 0x11C > +#define DATA_ATTR 0x120 > +#define DSCR_ATTR 0x124 > +#define SRC_DSCR_WRD0 0x128 > +#define SRC_DSCR_WRD1 0x12C > +#define SRC_DSCR_WRD2 0x130 > +#define SRC_DSCR_WRD3 0x134 > +#define DST_DSCR_WRD0 0x138 > +#define DST_DSCR_WRD1 0x13C > +#define DST_DSCR_WRD2 0x140 > +#define DST_DSCR_WRD3 0x144 > +#define SRC_START_LSB 0x158 > +#define SRC_START_MSB 0x15C > +#define DST_START_LSB 0x160 > +#define DST_START_MSB 0x164 > +#define TOTAL_BYTE 0x188 > +#define RATE_CTRL 0x18C > +#define IRQ_SRC_ACCT 0x190 > +#define IRQ_DST_ACCT 0x194 > +#define CTRL2 0x200 Can you namespace these and other defines > +struct zynqmp_dma_chan { > + struct zynqmp_dma_device *xdev; xdev seems an odd name, i suspect copy paste error! 
> + void __iomem *regs; > + spinlock_t lock; > + struct list_head pending_list; > + struct list_head free_list; > + struct list_head active_list; > + struct zynqmp_dma_desc_sw *sw_desc_pool; > + struct list_head done_list; > + struct dma_chan common; > + void *desc_pool_v; > + dma_addr_t desc_pool_p; why two pools and one void *? > +/** > + * zynqmp_dma_alloc_chan_resources - Allocate channel resources > + * @dchan: DMA channel > + * > + * Return: '0' on success and failure value on error wrong, its number of descriptors which maybe 0 > + */ > +static int zynqmp_dma_alloc_chan_resources(struct dma_chan *dchan) > +{ > + struct zynqmp_dma_chan *chan = to_chan(dchan); > + struct zynqmp_dma_desc_sw *desc; > + int i; > + > + chan->sw_desc_pool = kzalloc(sizeof(*desc) * ZYNQMP_DMA_NUM_DESCS, > + GFP_KERNEL); and you are allocating the number so pls return that Btw is ZYNQMP_DMA_NUM_DESCS sw or HW limit > +static void zynqmp_dma_start_transfer(struct zynqmp_dma_chan *chan) > +{ > + struct zynqmp_dma_desc_sw *desc; > + > + if (!zynqmp_dma_chan_is_idle(chan)) > + return; > + > + desc = list_first_entry_or_null(&chan->pending_list, > + struct zynqmp_dma_desc_sw, node); > + if (!desc) > + return; > + > + if (chan->has_sg) > + list_splice_tail_init(&chan->pending_list, &chan->active_list); > + else > + list_move_tail(&desc->node, &chan->active_list); > + > + if (chan->has_sg) > + zynqmp_dma_update_desc_to_ctrlr(chan, desc); > + else > + zynqmp_dma_config_simple_desc(chan, desc->src, desc->dst, > + desc->len); > + zynqmp_dma_start(chan); > +} Lots of the list management will get simplified if you use the vchan to do so > + > + > +/** > + * zynqmp_dma_chan_desc_cleanup - Cleanup the completed descriptors > + * @chan: ZynqMP DMA channel > + */ > +static void zynqmp_dma_chan_desc_cleanup(struct zynqmp_dma_chan *chan) > +{ > + struct zynqmp_dma_desc_sw *desc, *next; > + > + list_for_each_entry_safe(desc, next, &chan->done_list, node) { > + dma_async_tx_callback callback; > + void 
*callback_param; > + > + list_del(&desc->node); > + > + callback = desc->async_tx.callback; > + callback_param = desc->async_tx.callback_param; > + if (callback) > + callback(callback_param); > + and you are calling the callback with lock held and user can do further submissions so this is wrong! > +static enum dma_status zynqmp_dma_tx_status(struct dma_chan *dchan, > + dma_cookie_t cookie, > + struct dma_tx_state *txstate) > +{ > + return dma_cookie_status(dchan, cookie, txstate); no residue calculation? > +static struct dma_async_tx_descriptor *zynqmp_dma_prep_memcpy( > + struct dma_chan *dchan, dma_addr_t dma_dst, > + dma_addr_t dma_src, size_t len, ulong flags) > +{ > + struct zynqmp_dma_chan *chan; > + struct zynqmp_dma_desc_sw *new, *first = NULL; > + void *desc = NULL, *prev = NULL; > + size_t copy; > + u32 desc_cnt; > + > + chan = to_chan(dchan); > + > + if ((len > ZYNQMP_DMA_MAX_TRANS_LEN) && !chan->has_sg) why sg? > +static int zynqmp_dma_remove(struct platform_device *pdev) > +{ > + struct zynqmp_dma_device *xdev = platform_get_drvdata(pdev); > + > + of_dma_controller_free(pdev->dev.of_node); > + dma_async_device_unregister(&xdev->common); > + > + zynqmp_dma_chan_remove(xdev->chan); Please free up irq here explicitly and also ensure tasklet is not running -- ~Vinod
* Re: [PATCH v3 2/2] dma: Add Xilinx zynqmp dma engine driver support 2015-07-16 12:35 ` Vinod Koul @ 2015-07-17 0:52 ` punnaiah choudary kalluri 2015-07-17 2:33 ` Punnaiah Choudary 1 sibling, 0 replies; 10+ messages in thread From: punnaiah choudary kalluri @ 2015-07-17 0:52 UTC (permalink / raw) To: Vinod Koul Cc: Punnaiah Choudary Kalluri, robh+dt-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org, pawel.moll-5wv7dgnIgG8@public.gmane.org, mark.rutland-5wv7dgnIgG8@public.gmane.org, ijc+devicetree-KcIKpvwj1kUDXYZnReoRVg@public.gmane.org, Kumar Gala, michal.simek-gjFFaj9aHVfQT0dZR+AlfA@public.gmane.org, Sören Brinkmann, dan.j.williams-ral2JQCrhuEAvxtiuMwx3w, dmaengine-u79uwXL29TY76Z2rM5mHXA, devicetree-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, Punnaiah Choudary On Thu, Jul 16, 2015 at 6:05 PM, Vinod Koul <vinod.koul-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org> wrote: > On Tue, Jun 16, 2015 at 08:04:43AM +0530, Punnaiah Choudary Kalluri wrote: >> +/* Register Offsets */ >> +#define ISR 0x100 >> +#define IMR 0x104 >> +#define IER 0x108 >> +#define IDS 0x10C >> +#define CTRL0 0x110 >> +#define CTRL1 0x114 >> +#define STATUS 0x11C >> +#define DATA_ATTR 0x120 >> +#define DSCR_ATTR 0x124 >> +#define SRC_DSCR_WRD0 0x128 >> +#define SRC_DSCR_WRD1 0x12C >> +#define SRC_DSCR_WRD2 0x130 >> +#define SRC_DSCR_WRD3 0x134 >> +#define DST_DSCR_WRD0 0x138 >> +#define DST_DSCR_WRD1 0x13C >> +#define DST_DSCR_WRD2 0x140 >> +#define DST_DSCR_WRD3 0x144 >> +#define SRC_START_LSB 0x158 >> +#define SRC_START_MSB 0x15C >> +#define DST_START_LSB 0x160 >> +#define DST_START_MSB 0x164 >> +#define TOTAL_BYTE 0x188 >> +#define RATE_CTRL 0x18C >> +#define IRQ_SRC_ACCT 0x190 >> +#define IRQ_DST_ACCT 0x194 >> +#define CTRL2 0x200 > Can you namespace these and other defines Ok. just to be clear, you want to me to prefix ZYNQMP_DMA? or you want me to add comments ? 
I have defined the exact register names as per the zynqmp dma spec, > >> +struct zynqmp_dma_chan { >> + struct zynqmp_dma_device *xdev; > xdev seems an odd name, i suspect copy paste error! I will change the name. > >> + void __iomem *regs; >> + spinlock_t lock; >> + struct list_head pending_list; >> + struct list_head free_list; >> + struct list_head active_list; >> + struct zynqmp_dma_desc_sw *sw_desc_pool; >> + struct list_head done_list; >> + struct dma_chan common; >> + void *desc_pool_v; >> + dma_addr_t desc_pool_p; > why two pools and one void *? > >> +/** >> + * zynqmp_dma_alloc_chan_resources - Allocate channel resources >> + * @dchan: DMA channel >> + * >> + * Return: '0' on success and failure value on error > wrong, its number of descriptors which maybe 0 Correct. I Will modify this. > >> + */ >> +static int zynqmp_dma_alloc_chan_resources(struct dma_chan *dchan) >> +{ >> + struct zynqmp_dma_chan *chan = to_chan(dchan); >> + struct zynqmp_dma_desc_sw *desc; >> + int i; >> + >> + chan->sw_desc_pool = kzalloc(sizeof(*desc) * ZYNQMP_DMA_NUM_DESCS, >> + GFP_KERNEL); > and you are allocating the number so pls return that Ok. > > Btw is ZYNQMP_DMA_NUM_DESCS sw or HW limit Sw limit. 
> >> +static void zynqmp_dma_start_transfer(struct zynqmp_dma_chan *chan) >> +{ >> + struct zynqmp_dma_desc_sw *desc; >> + >> + if (!zynqmp_dma_chan_is_idle(chan)) >> + return; >> + >> + desc = list_first_entry_or_null(&chan->pending_list, >> + struct zynqmp_dma_desc_sw, node); >> + if (!desc) >> + return; >> + >> + if (chan->has_sg) >> + list_splice_tail_init(&chan->pending_list, &chan->active_list); >> + else >> + list_move_tail(&desc->node, &chan->active_list); >> + >> + if (chan->has_sg) >> + zynqmp_dma_update_desc_to_ctrlr(chan, desc); >> + else >> + zynqmp_dma_config_simple_desc(chan, desc->src, desc->dst, >> + desc->len); >> + zynqmp_dma_start(chan); >> +} > Lots of the list management will get simplified if you use the vchan to do > so I have explored using the virt-dma to reduce the common list processing, But in this driver descriptor processing and cleaning is happening inside the tasklet context and virt-dma assumes it is happening in interrupt context and using the spin locks accordingly. So, added code for the list management in side the driver. > >> + >> + >> +/** >> + * zynqmp_dma_chan_desc_cleanup - Cleanup the completed descriptors >> + * @chan: ZynqMP DMA channel >> + */ >> +static void zynqmp_dma_chan_desc_cleanup(struct zynqmp_dma_chan *chan) >> +{ >> + struct zynqmp_dma_desc_sw *desc, *next; >> + >> + list_for_each_entry_safe(desc, next, &chan->done_list, node) { >> + dma_async_tx_callback callback; >> + void *callback_param; >> + >> + list_del(&desc->node); >> + >> + callback = desc->async_tx.callback; >> + callback_param = desc->async_tx.callback_param; >> + if (callback) >> + callback(callback_param); >> + > and you are calling the callback with lock held and user can do further > submissions so this is wrong! is it that driver should have separate temp list and fill this list with the descriptors in done_list and process them with out lock held ? please suggest. 
> >> +static enum dma_status zynqmp_dma_tx_status(struct dma_chan *dchan, >> + dma_cookie_t cookie, >> + struct dma_tx_state *txstate) >> +{ >> + return dma_cookie_status(dchan, cookie, txstate); > > no residue calculation? Controller has total byte count register that Indicates total number of bytes transferred since last clear. But since we implemented the Hw queuing it is difficult to get residue calculation by relying on the above register value. So, at least for this version of the driver no residue calculation. > >> +static struct dma_async_tx_descriptor *zynqmp_dma_prep_memcpy( >> + struct dma_chan *dchan, dma_addr_t dma_dst, >> + dma_addr_t dma_src, size_t len, ulong flags) >> +{ >> + struct zynqmp_dma_chan *chan; >> + struct zynqmp_dma_desc_sw *new, *first = NULL; >> + void *desc = NULL, *prev = NULL; >> + size_t copy; >> + u32 desc_cnt; >> + >> + chan = to_chan(dchan); >> + >> + if ((len > ZYNQMP_DMA_MAX_TRANS_LEN) && !chan->has_sg) > why sg? Controller supports two types of modes. Simple dma mode: In this mode, DMA transfer parameters are specified in APB registers So, queuing the new request in hw is not possible when the channel is busy with previous transfer and also it doesn't have SG support. Max transfer size per transfer is 1GB. Scatter gather dma mode: Transfer parameters are specified in the buffer descriptors (BD). Max transfer size per BD is 1GB and Dma transaction can have multiple BDs The above condition is to ensure that when the controller is configured for Simple Dma mode, allow the transfers that are with in 1GB size if not it return error. 
> >> +static int zynqmp_dma_remove(struct platform_device *pdev) >> +{ >> + struct zynqmp_dma_device *xdev = platform_get_drvdata(pdev); >> + >> + of_dma_controller_free(pdev->dev.of_node); >> + dma_async_device_unregister(&xdev->common); >> + >> + zynqmp_dma_chan_remove(xdev->chan); > Please free up irq here explicitly and also ensure tasklet is not running irq free is not required as cleaning will be taken care of by the devm framework. tasklet cleaning is happening inside the zynqmp_chan_remove function. Regards, Punnaiah > > -- > ~Vinod
* Re: [PATCH v3 2/2] dma: Add Xilinx zynqmp dma engine driver support 2015-07-16 12:35 ` Vinod Koul 2015-07-17 0:52 ` punnaiah choudary kalluri @ 2015-07-17 2:33 ` Punnaiah Choudary [not found] ` <CAHLkNH+VLccurDfxEnwSEbr9YFv8ymZuibci7TW=WPutL7qbAQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org> 1 sibling, 1 reply; 10+ messages in thread From: Punnaiah Choudary @ 2015-07-17 2:33 UTC (permalink / raw) To: Vinod Koul Cc: Punnaiah Choudary Kalluri, Rob Herring, Pawel Moll, Mark Rutland, Ian Campbell, Kumar Gala, Michal Simek, soren.brinkmann, dan.j.williams, dmaengine, devicetree@vger.kernel.org, linux-arm-kernel, linux-kernel@vger.kernel.org, punnaia, Punnaiah Choudary Kalluri [-- Attachment #1: Type: text/plain, Size: 6152 bytes --] On Thu, Jul 16, 2015 at 6:05 PM, Vinod Koul <vinod.koul@intel.com> wrote: > On Tue, Jun 16, 2015 at 08:04:43AM +0530, Punnaiah Choudary Kalluri wrote: > > +/* Register Offsets */ > > +#define ISR 0x100 > > +#define IMR 0x104 > > +#define IER 0x108 > > +#define IDS 0x10C > > +#define CTRL0 0x110 > > +#define CTRL1 0x114 > > +#define STATUS 0x11C > > +#define DATA_ATTR 0x120 > > +#define DSCR_ATTR 0x124 > > +#define SRC_DSCR_WRD0 0x128 > > +#define SRC_DSCR_WRD1 0x12C > > +#define SRC_DSCR_WRD2 0x130 > > +#define SRC_DSCR_WRD3 0x134 > > +#define DST_DSCR_WRD0 0x138 > > +#define DST_DSCR_WRD1 0x13C > > +#define DST_DSCR_WRD2 0x140 > > +#define DST_DSCR_WRD3 0x144 > > +#define SRC_START_LSB 0x158 > > +#define SRC_START_MSB 0x15C > > +#define DST_START_LSB 0x160 > > +#define DST_START_MSB 0x164 > > +#define TOTAL_BYTE 0x188 > > +#define RATE_CTRL 0x18C > > +#define IRQ_SRC_ACCT 0x190 > > +#define IRQ_DST_ACCT 0x194 > > +#define CTRL2 0x200 > Can you namespace these and other defines > > > +struct zynqmp_dma_chan { > > + struct zynqmp_dma_device *xdev; > xdev seems an odd name, i suspect copy paste error! 
> > > + void __iomem *regs; > > + spinlock_t lock; > > + struct list_head pending_list; > > + struct list_head free_list; > > + struct list_head active_list; > > + struct zynqmp_dma_desc_sw *sw_desc_pool; > > + struct list_head done_list; > > + struct dma_chan common; > > + void *desc_pool_v; > > + dma_addr_t desc_pool_p; > why two pools and one void *? > One for pre-allocated sw descriptor pool and the other for the Hw BDs For Hw BDs, it is void * because there are two types BDs i.e linear and linked list in Hw with different size. But the current implementation supports only linked list mode, we will add linear mode descriptor support later Regards, Punnaiah > > > +/** > > + * zynqmp_dma_alloc_chan_resources - Allocate channel resources > > + * @dchan: DMA channel > > + * > > + * Return: '0' on success and failure value on error > wrong, its number of descriptors which maybe 0 > > > + */ > > +static int zynqmp_dma_alloc_chan_resources(struct dma_chan *dchan) > > +{ > > + struct zynqmp_dma_chan *chan = to_chan(dchan); > > + struct zynqmp_dma_desc_sw *desc; > > + int i; > > + > > + chan->sw_desc_pool = kzalloc(sizeof(*desc) * ZYNQMP_DMA_NUM_DESCS, > > + GFP_KERNEL); > and you are allocating the number so pls return that > > Btw is ZYNQMP_DMA_NUM_DESCS sw or HW limit > > > +static void zynqmp_dma_start_transfer(struct zynqmp_dma_chan *chan) > > +{ > > + struct zynqmp_dma_desc_sw *desc; > > + > > + if (!zynqmp_dma_chan_is_idle(chan)) > > + return; > > + > > + desc = list_first_entry_or_null(&chan->pending_list, > > + struct zynqmp_dma_desc_sw, node); > > + if (!desc) > > + return; > > + > > + if (chan->has_sg) > > + list_splice_tail_init(&chan->pending_list, > &chan->active_list); > > + else > > + list_move_tail(&desc->node, &chan->active_list); > > + > > + if (chan->has_sg) > > + zynqmp_dma_update_desc_to_ctrlr(chan, desc); > > + else > > + zynqmp_dma_config_simple_desc(chan, desc->src, desc->dst, > > + desc->len); > > + zynqmp_dma_start(chan); > > +} > Lots of the 
list management will get simplified if you use the vchan to do > so > > > + > > + > > +/** > > + * zynqmp_dma_chan_desc_cleanup - Cleanup the completed descriptors > > + * @chan: ZynqMP DMA channel > > + */ > > +static void zynqmp_dma_chan_desc_cleanup(struct zynqmp_dma_chan *chan) > > +{ > > + struct zynqmp_dma_desc_sw *desc, *next; > > + > > + list_for_each_entry_safe(desc, next, &chan->done_list, node) { > > + dma_async_tx_callback callback; > > + void *callback_param; > > + > > + list_del(&desc->node); > > + > > + callback = desc->async_tx.callback; > > + callback_param = desc->async_tx.callback_param; > > + if (callback) > > + callback(callback_param); > > + > and you are calling the callback with lock held and user can do further > submissions so this is wrong! > > > +static enum dma_status zynqmp_dma_tx_status(struct dma_chan *dchan, > > + dma_cookie_t cookie, > > + struct dma_tx_state *txstate) > > +{ > > + return dma_cookie_status(dchan, cookie, txstate); > > no residue calculation? > > > +static struct dma_async_tx_descriptor *zynqmp_dma_prep_memcpy( > > + struct dma_chan *dchan, dma_addr_t dma_dst, > > + dma_addr_t dma_src, size_t len, ulong > flags) > > +{ > > + struct zynqmp_dma_chan *chan; > > + struct zynqmp_dma_desc_sw *new, *first = NULL; > > + void *desc = NULL, *prev = NULL; > > + size_t copy; > > + u32 desc_cnt; > > + > > + chan = to_chan(dchan); > > + > > + if ((len > ZYNQMP_DMA_MAX_TRANS_LEN) && !chan->has_sg) > why sg? > > > +static int zynqmp_dma_remove(struct platform_device *pdev) > > +{ > > + struct zynqmp_dma_device *xdev = platform_get_drvdata(pdev); > > + > > + of_dma_controller_free(pdev->dev.of_node); > > + dma_async_device_unregister(&xdev->common); > > + > > + zynqmp_dma_chan_remove(xdev->chan); > Please free up irq here explictly and also ensure tasklet is not running > > -- > ~Vinod > > [-- Attachment #2: Type: text/html, Size: 8568 bytes --] ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH v3 2/2] dma: Add Xilinx zynqmp dma engine driver support [not found] ` <CAGnW=BbUBOzm5V_LP_YcCfJ9X+Yw8qGp6PE2xSM13E=qUXxUsw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org> @ 2015-07-17 3:05 ` Vinod Koul 2015-07-17 4:24 ` punnaiah choudary kalluri 0 siblings, 1 reply; 10+ messages in thread From: Vinod Koul @ 2015-07-17 3:05 UTC (permalink / raw) To: punnaiah choudary kalluri, Punnaiah Choudary Cc: Punnaiah Choudary Kalluri, robh+dt-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org, pawel.moll-5wv7dgnIgG8@public.gmane.org, mark.rutland-5wv7dgnIgG8@public.gmane.org, ijc+devicetree-KcIKpvwj1kUDXYZnReoRVg@public.gmane.org, Kumar Gala, michal.simek-gjFFaj9aHVfQT0dZR+AlfA@public.gmane.org, Sören Brinkmann, dan.j.williams-ral2JQCrhuEAvxtiuMwx3w, dmaengine-u79uwXL29TY76Z2rM5mHXA, devicetree-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, punnaia On Fri, Jul 17, 2015 at 06:22:42AM +0530, punnaiah choudary kalluri wrote: > On Thu, Jul 16, 2015 at 6:05 PM, Vinod Koul <vinod.koul-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org> wrote: > > On Tue, Jun 16, 2015 at 08:04:43AM +0530, Punnaiah Choudary Kalluri wrote: > >> +/* Register Offsets */ > >> +#define ISR 0x100 > >> +#define IMR 0x104 > >> +#define IER 0x108 > >> +#define IDS 0x10C > >> +#define CTRL0 0x110 > >> +#define CTRL1 0x114 > >> +#define STATUS 0x11C > >> +#define DATA_ATTR 0x120 > >> +#define DSCR_ATTR 0x124 > >> +#define SRC_DSCR_WRD0 0x128 > >> +#define SRC_DSCR_WRD1 0x12C > >> +#define SRC_DSCR_WRD2 0x130 > >> +#define SRC_DSCR_WRD3 0x134 > >> +#define DST_DSCR_WRD0 0x138 > >> +#define DST_DSCR_WRD1 0x13C > >> +#define DST_DSCR_WRD2 0x140 > >> +#define DST_DSCR_WRD3 0x144 > >> +#define SRC_START_LSB 0x158 > >> +#define SRC_START_MSB 0x15C > >> +#define DST_START_LSB 0x160 > >> +#define DST_START_MSB 0x164 > >> +#define TOTAL_BYTE 0x188 > >> +#define RATE_CTRL 0x18C > >> +#define IRQ_SRC_ACCT 0x190 > >> +#define IRQ_DST_ACCT 
0x194 > >> +#define CTRL2 0x200 > > Can you namespace these and other defines > > Ok. just to be clear, you want to me to prefix ZYNQMP_DMA? > or you want me to add comments ? > I have defined the exact register names as per the zynqmp dma > spec, former please > > Btw is ZYNQMP_DMA_NUM_DESCS sw or HW limit > > Sw limit. so why put the limit then? > >> +static void zynqmp_dma_start_transfer(struct zynqmp_dma_chan *chan) > >> +{ > >> + struct zynqmp_dma_desc_sw *desc; > >> + > >> + if (!zynqmp_dma_chan_is_idle(chan)) > >> + return; > >> + > >> + desc = list_first_entry_or_null(&chan->pending_list, > >> + struct zynqmp_dma_desc_sw, node); > >> + if (!desc) > >> + return; > >> + > >> + if (chan->has_sg) > >> + list_splice_tail_init(&chan->pending_list, &chan->active_list); > >> + else > >> + list_move_tail(&desc->node, &chan->active_list); > >> + > >> + if (chan->has_sg) > >> + zynqmp_dma_update_desc_to_ctrlr(chan, desc); > >> + else > >> + zynqmp_dma_config_simple_desc(chan, desc->src, desc->dst, > >> + desc->len); > >> + zynqmp_dma_start(chan); > >> +} > > Lots of the list management will get simplified if you use the vchan to do > > so > > I have explored using the virt-dma to reduce the common list processing, But > in this driver descriptor processing and cleaning is happening inside > the tasklet > context and virt-dma assumes it is happening in interrupt context and using the > spin locks accordingly. So, added code for the list management in side > the driver. And why would it bother you. 
There is a reson for that, it tries to submit next txn as soon as possible, which is the right thing > >> +/** > >> + * zynqmp_dma_chan_desc_cleanup - Cleanup the completed descriptors > >> + * @chan: ZynqMP DMA channel > >> + */ > >> +static void zynqmp_dma_chan_desc_cleanup(struct zynqmp_dma_chan *chan) > >> +{ > >> + struct zynqmp_dma_desc_sw *desc, *next; > >> + > >> + list_for_each_entry_safe(desc, next, &chan->done_list, node) { > >> + dma_async_tx_callback callback; > >> + void *callback_param; > >> + > >> + list_del(&desc->node); > >> + > >> + callback = desc->async_tx.callback; > >> + callback_param = desc->async_tx.callback_param; > >> + if (callback) > >> + callback(callback_param); > >> + > > and you are calling the callback with lock held and user can do further > > submissions so this is wrong! > > is it that driver should have separate temp list and fill this list > with the descriptors in > done_list and process them with out lock held ? > > please suggest. Nope, you need to drop locks while invoking callback > >> +static struct dma_async_tx_descriptor *zynqmp_dma_prep_memcpy( > >> + struct dma_chan *dchan, dma_addr_t dma_dst, > >> + dma_addr_t dma_src, size_t len, ulong flags) > >> +{ > >> + struct zynqmp_dma_chan *chan; > >> + struct zynqmp_dma_desc_sw *new, *first = NULL; > >> + void *desc = NULL, *prev = NULL; > >> + size_t copy; > >> + u32 desc_cnt; > >> + > >> + chan = to_chan(dchan); > >> + > >> + if ((len > ZYNQMP_DMA_MAX_TRANS_LEN) && !chan->has_sg) > > why sg? > > Controller supports two types of modes. > > Simple dma mode: > In this mode, DMA transfer parameters are specified in APB registers > So, queuing the new request in hw is not possible when the channel > is busy with > previous transfer and also it doesn't have SG support. > Max transfer size per transfer is 1GB. > > Scatter gather dma mode: > Transfer parameters are specified in the buffer descriptors (BD). 
> Max transfer size per BD is 1GB and a Dma transaction can have multiple BDs > > The above condition is to ensure that when the controller is > configured for Simple > Dma mode, it allows only transfers that are within 1GB; otherwise it returns an error. But then you can "queue" up in SW and switch from sg to simple mode and vice-versa right? > >> +static int zynqmp_dma_remove(struct platform_device *pdev) > >> +{ > >> + struct zynqmp_dma_device *xdev = platform_get_drvdata(pdev); > >> + > >> + of_dma_controller_free(pdev->dev.of_node); > >> + dma_async_device_unregister(&xdev->common); > >> + > >> + zynqmp_dma_chan_remove(xdev->chan); > > Please free up the irq here explicitly and also ensure the tasklet is not running > > irq free is not required as cleaning will be taken care of by the devm framework. > tasklet cleaning is happening inside the zynqmp_chan_remove function. Wrong on two counts - did you ensure the tasklet would not be triggered again after cleanup, I think no - did you ensure the irq will not be triggered again before you return from remove, I think no. devm_free_irq() should be invoked explicitly here to ensure this -- ~Vinod -- To unsubscribe from this list: send the line "unsubscribe devicetree" in the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH v3 2/2] dma: Add Xilinx zynqmp dma engine driver support 2015-07-17 3:05 ` Vinod Koul @ 2015-07-17 4:24 ` punnaiah choudary kalluri 2015-07-17 9:08 ` Vinod Koul 0 siblings, 1 reply; 10+ messages in thread From: punnaiah choudary kalluri @ 2015-07-17 4:24 UTC (permalink / raw) To: Vinod Koul Cc: Punnaiah Choudary, Punnaiah Choudary Kalluri, robh+dt-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org, pawel.moll-5wv7dgnIgG8@public.gmane.org, mark.rutland-5wv7dgnIgG8@public.gmane.org, ijc+devicetree-KcIKpvwj1kUDXYZnReoRVg@public.gmane.org, Kumar Gala, michal.simek-gjFFaj9aHVfQT0dZR+AlfA@public.gmane.org, Sören Brinkmann, dan.j.williams-ral2JQCrhuEAvxtiuMwx3w, dmaengine-u79uwXL29TY76Z2rM5mHXA, devicetree-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org On Fri, Jul 17, 2015 at 8:35 AM, Vinod Koul <vinod.koul-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org> wrote: > On Fri, Jul 17, 2015 at 06:22:42AM +0530, punnaiah choudary kalluri wrote: >> On Thu, Jul 16, 2015 at 6:05 PM, Vinod Koul <vinod.koul-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org> wrote: >> > On Tue, Jun 16, 2015 at 08:04:43AM +0530, Punnaiah Choudary Kalluri wrote: >> >> +/* Register Offsets */ >> >> +#define ISR 0x100 >> >> +#define IMR 0x104 >> >> +#define IER 0x108 >> >> +#define IDS 0x10C >> >> +#define CTRL0 0x110 >> >> +#define CTRL1 0x114 >> >> +#define STATUS 0x11C >> >> +#define DATA_ATTR 0x120 >> >> +#define DSCR_ATTR 0x124 >> >> +#define SRC_DSCR_WRD0 0x128 >> >> +#define SRC_DSCR_WRD1 0x12C >> >> +#define SRC_DSCR_WRD2 0x130 >> >> +#define SRC_DSCR_WRD3 0x134 >> >> +#define DST_DSCR_WRD0 0x138 >> >> +#define DST_DSCR_WRD1 0x13C >> >> +#define DST_DSCR_WRD2 0x140 >> >> +#define DST_DSCR_WRD3 0x144 >> >> +#define SRC_START_LSB 0x158 >> >> +#define SRC_START_MSB 0x15C >> >> +#define DST_START_LSB 0x160 >> >> +#define DST_START_MSB 0x164 >> >> +#define TOTAL_BYTE 0x188 >> >> +#define RATE_CTRL 0x18C >> >> +#define 
IRQ_SRC_ACCT 0x190 >> >> +#define IRQ_DST_ACCT 0x194 >> >> +#define CTRL2 0x200 >> > Can you namespace these and other defines >> >> Ok. just to be clear, you want to me to prefix ZYNQMP_DMA? >> or you want me to add comments ? >> I have defined the exact register names as per the zynqmp dma >> spec, > > former please > >> > Btw is ZYNQMP_DMA_NUM_DESCS sw or HW limit >> >> Sw limit. > so why put the limit then? Considering the performance, i want to avoid runtime memory allocation for each dma request and want to use pre-allocated pool for each request. Since its a pre-allocated pool during the channel resource allocation, i need to limit the size and one can change this as per the requirement. > >> >> +static void zynqmp_dma_start_transfer(struct zynqmp_dma_chan *chan) >> >> +{ >> >> + struct zynqmp_dma_desc_sw *desc; >> >> + >> >> + if (!zynqmp_dma_chan_is_idle(chan)) >> >> + return; >> >> + >> >> + desc = list_first_entry_or_null(&chan->pending_list, >> >> + struct zynqmp_dma_desc_sw, node); >> >> + if (!desc) >> >> + return; >> >> + >> >> + if (chan->has_sg) >> >> + list_splice_tail_init(&chan->pending_list, &chan->active_list); >> >> + else >> >> + list_move_tail(&desc->node, &chan->active_list); >> >> + >> >> + if (chan->has_sg) >> >> + zynqmp_dma_update_desc_to_ctrlr(chan, desc); >> >> + else >> >> + zynqmp_dma_config_simple_desc(chan, desc->src, desc->dst, >> >> + desc->len); >> >> + zynqmp_dma_start(chan); >> >> +} >> > Lots of the list management will get simplified if you use the vchan to do >> > so >> >> I have explored using the virt-dma to reduce the common list processing, But >> in this driver descriptor processing and cleaning is happening inside >> the tasklet >> context and virt-dma assumes it is happening in interrupt context and using the >> spin locks accordingly. So, added code for the list management in side >> the driver. > And why would it bother you. 
There is a reason for that, it tries to submit > next txn as soon as possible, which is the right thing We have implemented hw descriptor queuing in the tx_submit function, so the controller can process descriptors without interruption. We want to minimize disabling all the interrupts for the purpose of submitting a new req or cleaning the descriptors. Also, if we adopt virt-dma, we need to override the tx_submit function, so we decided to go with a single, cleaner approach. > >> >> +/** >> >> + * zynqmp_dma_chan_desc_cleanup - Cleanup the completed descriptors >> >> + * @chan: ZynqMP DMA channel >> >> + */ >> >> +static void zynqmp_dma_chan_desc_cleanup(struct zynqmp_dma_chan *chan) >> >> +{ >> >> + struct zynqmp_dma_desc_sw *desc, *next; >> >> + >> >> + list_for_each_entry_safe(desc, next, &chan->done_list, node) { >> >> + dma_async_tx_callback callback; >> >> + void *callback_param; >> >> + >> >> + list_del(&desc->node); >> >> + >> >> + callback = desc->async_tx.callback; >> >> + callback_param = desc->async_tx.callback_param; >> >> + if (callback) >> >> + callback(callback_param); >> >> + >> > and you are calling the callback with lock held and user can do further >> > submissions so this is wrong! >> >> is it that the driver should have a separate temp list, fill this list >> with the descriptors in >> done_list and process them without the lock held? >> >> please suggest. > Nope, you need to drop locks while invoking callback Got it > >> >> +static struct dma_async_tx_descriptor *zynqmp_dma_prep_memcpy( >> >> + struct dma_chan *dchan, dma_addr_t dma_dst, >> >> + dma_addr_t dma_src, size_t len, ulong flags) >> >> +{ >> >> + struct zynqmp_dma_chan *chan; >> >> + struct zynqmp_dma_desc_sw *new, *first = NULL; >> >> + void *desc = NULL, *prev = NULL; >> >> + size_t copy; >> >> + u32 desc_cnt; >> >> + >> >> + chan = to_chan(dchan); >> >> + >> >> + if ((len > ZYNQMP_DMA_MAX_TRANS_LEN) && !chan->has_sg) >> > why sg? >> >> Controller supports two types of modes. 
>> >> Simple dma mode: >> In this mode, DMA transfer parameters are specified in APB registers >> So, queuing the new request in hw is not possible when the channel >> is busy with >> previous transfer and also it doesn't have SG support. >> Max transfer size per transfer is 1GB. >> >> Scatter gather dma mode: >> Transfer parameters are specified in the buffer descriptors (BD). >> Max transfer size per BD is 1GB and a Dma transaction can have multiple BDs >> >> The above condition is to ensure that when the controller is >> configured for Simple >> Dma mode, it allows only transfers that are within 1GB; otherwise it returns an error. > But then you can "queue" up in SW and switch from sg to simple mode and > vice-versa right? It can be done. The idea is to upstream a basic version first and add these kinds of features later. > >> >> +static int zynqmp_dma_remove(struct platform_device *pdev) >> >> +{ >> >> + struct zynqmp_dma_device *xdev = platform_get_drvdata(pdev); >> >> + >> >> + of_dma_controller_free(pdev->dev.of_node); >> >> + dma_async_device_unregister(&xdev->common); >> >> + >> >> + zynqmp_dma_chan_remove(xdev->chan); >> > Please free up the irq here explicitly and also ensure the tasklet is not running >> >> irq free is not required as cleaning will be taken care of by the devm framework. >> tasklet cleaning is happening inside the zynqmp_chan_remove function. > Wrong on two counts > - did you ensure the tasklet would not be triggered again after cleanup, I > think no > - did you ensure the irq will not be triggered again before you return from remove, I > think no. > devm_free_irq() should be invoked explicitly here to ensure this Ok. Regards, Punnaiah > > -- > ~Vinod
* Re: [PATCH v3 2/2] dma: Add Xilinx zynqmp dma engine driver support 2015-07-17 4:24 ` punnaiah choudary kalluri @ 2015-07-17 9:08 ` Vinod Koul 2015-07-17 9:45 ` punnaiah choudary kalluri 0 siblings, 1 reply; 10+ messages in thread From: Vinod Koul @ 2015-07-17 9:08 UTC (permalink / raw) To: punnaiah choudary kalluri Cc: Punnaiah Choudary, Punnaiah Choudary Kalluri, robh+dt@kernel.org, pawel.moll@arm.com, mark.rutland@arm.com, ijc+devicetree@hellion.org.uk, Kumar Gala, michal.simek@xilinx.com, Sören Brinkmann, dan.j.williams, dmaengine, devicetree@vger.kernel.org, linux-arm-kernel, linux-kernel@vger.kernel.org On Fri, Jul 17, 2015 at 09:54:48AM +0530, punnaiah choudary kalluri wrote: your MUA is wrapping lines funny, please fix it > >> I have explored using the virt-dma to reduce the common list processing, but > >> in this driver descriptor processing and cleaning is happening inside > >> the tasklet > >> context and virt-dma assumes it is happening in interrupt context and uses the > >> spin locks accordingly. So, added code for the list management inside > >> the driver. > > And why would it bother you. There is a reason for that, it tries to submit > > next txn as soon as possible, which is the right thing > We have implemented hw descriptor queuing in the tx_submit function, so the > controller can process descriptors without interruption. We want to > minimize disabling all the interrupts for the purpose of submitting a new > req or cleaning the descriptors. Also, if we adopt virt-dma, we need to > override the tx_submit function, so we decided to go with a single, cleaner > approach. okay this doesn't seem right, tx_submit is supposed to submit the descriptor to the pending queue of the driver (sw) and then issue_pending will push it. This is the flow as expected and documented by the dmaengine API, please follow that. And to follow it you will get tons of help from vchan -- ~Vinod ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH v3 2/2] dma: Add Xilinx zynqmp dma engine driver support 2015-07-17 9:08 ` Vinod Koul @ 2015-07-17 9:45 ` punnaiah choudary kalluri 0 siblings, 0 replies; 10+ messages in thread From: punnaiah choudary kalluri @ 2015-07-17 9:45 UTC (permalink / raw) To: Vinod Koul Cc: Punnaiah Choudary, Punnaiah Choudary Kalluri, robh+dt-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org, pawel.moll-5wv7dgnIgG8@public.gmane.org, mark.rutland-5wv7dgnIgG8@public.gmane.org, ijc+devicetree-KcIKpvwj1kUDXYZnReoRVg@public.gmane.org, Kumar Gala, michal.simek-gjFFaj9aHVfQT0dZR+AlfA@public.gmane.org, Sören Brinkmann, dan.j.williams-ral2JQCrhuEAvxtiuMwx3w, dmaengine-u79uwXL29TY76Z2rM5mHXA, devicetree-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org On Fri, Jul 17, 2015 at 2:38 PM, Vinod Koul <vinod.koul-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org> wrote: > On Fri, Jul 17, 2015 at 09:54:48AM +0530, punnaiah choudary kalluri wrote: > your MUA is wrapping lines funny, please fix it > >> >> I have explored using the virt-dma to reduce the common list processing, But >> >> in this driver descriptor processing and cleaning is happening inside >> >> the tasklet >> >> context and virt-dma assumes it is happening in interrupt context and using the >> >> spin locks accordingly. So, added code for the list management in side >> >> the driver. >> > And why would it bother you. There is a reson for that, it tries to submit >> > next txn as soon as possible, which is the right thing >> We have implemented hw descriptor queuing in tx_submit function, so >> controller can process descriptors with out interruption. We want to >> minimize disabling all the interrupts for the purpose of submitting new >> req or cleaning the descriptors. Also if we adopt virt-dma, need to >> override the tx_submit function So, decided to go with any one cleaner >> approach. 
> okay this doesn't seem right, tx_submit is supposed to submit the descriptor > to the pending queue of the driver (sw) and then issue_pending will push it. > This is the flow as expected and documented by the dmaengine API, please follow > that Correct, and this is what is implemented in the driver. Sorry for the confusion. What I am trying to say is that in tx_submit we link the new request/descriptor to the tail descriptor of the pending list so that it becomes a single request to the hw, and once this request is pushed to the Hw in issue_pending, the driver need not update this request to the Hw in the ISR. Since our controller supports interrupt accounting, we will come to know in the tasklet how many requests were processed by the hw and free them. I understand that with virt-dma, list management and descriptor clean up would be offloaded, but for the above reason I am not using it. Regards, Punnaiah > > And to follow it you will get tons of help from vchan > > -- > ~Vinod
* [PATCH v3 1/2] Documentation: dt: Add Xilinx zynqmp dma device tree binding documentation @ 2015-07-14 3:36 Punnaiah Choudary Kalluri 2015-07-14 3:36 ` [PATCH v3 2/2] dma: Add Xilinx zynqmp dma engine driver support Punnaiah Choudary Kalluri 0 siblings, 1 reply; 10+ messages in thread From: Punnaiah Choudary Kalluri @ 2015-07-14 3:36 UTC (permalink / raw) To: robh+dt-DgEjT+Ai2ygdnm+yROfE0A, pawel.moll-5wv7dgnIgG8, mark.rutland-5wv7dgnIgG8, ijc+devicetree-KcIKpvwj1kUDXYZnReoRVg, galak-sgV2jX0FEOL9JmXXK+q4OQ, michal.simek-gjFFaj9aHVfQT0dZR+AlfA, soren.brinkmann-gjFFaj9aHVfQT0dZR+AlfA, vinod.koul-ral2JQCrhuEAvxtiuMwx3w, dan.j.williams-ral2JQCrhuEAvxtiuMwx3w Cc: dmaengine-u79uwXL29TY76Z2rM5mHXA, devicetree-u79uwXL29TY76Z2rM5mHXA, linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, linux-kernel-u79uwXL29TY76Z2rM5mHXA, kalluripunnaiahchoudary-Re5JQEeQqe8AvxtiuMwx3w, kpc528-Re5JQEeQqe8AvxtiuMwx3w, Punnaiah Choudary Kalluri Device-tree binding documentation for Xilinx zynqmp dma engine used in Zynq UltraScale+ MPSoC. Signed-off-by: Punnaiah Choudary Kalluri <punnaia-gjFFaj9aHVfQT0dZR+AlfA@public.gmane.org> --- Changes in v3: - None Changes in v2: - None --- .../devicetree/bindings/dma/xilinx/zynqmp_dma.txt | 61 ++++++++++++++++++++ 1 files changed, 61 insertions(+), 0 deletions(-) create mode 100644 Documentation/devicetree/bindings/dma/xilinx/zynqmp_dma.txt diff --git a/Documentation/devicetree/bindings/dma/xilinx/zynqmp_dma.txt b/Documentation/devicetree/bindings/dma/xilinx/zynqmp_dma.txt new file mode 100644 index 0000000..e4f92b9 --- /dev/null +++ b/Documentation/devicetree/bindings/dma/xilinx/zynqmp_dma.txt @@ -0,0 +1,61 @@ +Xilinx ZynqMP DMA engine, it does support memory to memory transfers, +memory to device and device to memory transfers. It also has flow +control and rate control support for slave/peripheral dma access. 
+ +Required properties: +- compatible: Should be "xlnx,zynqmp-dma-1.0" +- #dma-cells: Should be <1>, a single cell holding a line request number +- reg: Memory map for module access +- interrupt-parent: Interrupt controller the interrupt is routed through +- interrupts: Should contain the DMA channel interrupt +- xlnx,bus-width: AXI bus width in bits. Should contain 128 or 64 + +Optional properties: +- xlnx,include-sg: Indicates whether the controller operates in simple or scatter + gather dma mode +- xlnx,ratectrl: Scheduling interval in terms of clock cycles for + source AXI transactions +- xlnx,overfetch: Tells whether the channel is allowed to overfetch the data +- xlnx,src-issue: Number of outstanding AXI transactions on the source side +- xlnx,desc-axi-cohrnt: Tells whether the AXI transactions generated for the + descriptor read are marked Non-coherent +- xlnx,src-axi-cohrnt: Tells whether the AXI transactions generated for the + source descriptor payload are marked Non-coherent +- xlnx,dst-axi-cohrnt: Tells whether the AXI transactions generated for the + dst descriptor payload are marked Non-coherent +- xlnx,desc-axi-qos: AXI QOS bits to be used for descriptor fetch +- xlnx,src-axi-qos: AXI QOS bits to be used for data read +- xlnx,dst-axi-qos: AXI QOS bits to be used for data write +- xlnx,desc-axi-cache: AXI cache bits to be used for descriptor fetch +- xlnx,src-axi-cache: AXI cache bits to be used for data read +- xlnx,dst-axi-cache: AXI cache bits to be used for data write +- xlnx,src-burst-len: AXI length for data read. Supports only power of 2 values, + i.e. 1, 2, 4, 8 and 16 +- xlnx,dst-burst-len: AXI length for data write. 
Supports only power of 2 values, + i.e. 1, 2, 4, 8 and 16 + +Example: +++++++++ +fpd_dma_chan1: dma@FD500000 { + compatible = "xlnx,zynqmp-dma-1.0"; + reg = <0x0 0xFD500000 0x1000>; + #dma-cells = <1>; + interrupt-parent = <&gic>; + interrupts = <0 117 4>; + xlnx,bus-width = <128>; + xlnx,include-sg; + xlnx,overfetch; + xlnx,ratectrl = <0>; + xlnx,src-issue = <16>; + xlnx,desc-axi-cohrnt; + xlnx,src-axi-cohrnt; + xlnx,dst-axi-cohrnt; + xlnx,desc-axi-qos = <0>; + xlnx,desc-axi-cache = <0>; + xlnx,src-axi-qos = <0>; + xlnx,src-axi-cache = <2>; + xlnx,dst-axi-qos = <0>; + xlnx,dst-axi-cache = <2>; + xlnx,src-burst-len = <4>; + xlnx,dst-burst-len = <4>; +}; -- 1.7.4
* [PATCH v3 2/2] dma: Add Xilinx zynqmp dma engine driver support 2015-07-14 3:36 [PATCH v3 1/2] Documentation: dt: Add Xilinx zynqmp dma device tree binding documentation Punnaiah Choudary Kalluri @ 2015-07-14 3:36 ` Punnaiah Choudary Kalluri 0 siblings, 0 replies; 10+ messages in thread From: Punnaiah Choudary Kalluri @ 2015-07-14 3:36 UTC (permalink / raw) To: robh+dt, pawel.moll, mark.rutland, ijc+devicetree, galak, michal.simek, soren.brinkmann, vinod.koul, dan.j.williams Cc: dmaengine, devicetree, linux-arm-kernel, linux-kernel, kalluripunnaiahchoudary, kpc528, Punnaiah Choudary Kalluri Added the basic driver for zynqmp dma engine used in Zynq UltraScale+ MPSoC. The initial release of this driver supports only memory to memory transfers. Signed-off-by: Punnaiah Choudary Kalluri <punnaia@xilinx.com> --- Changes in v3: - Modified the zynqmp_dma_chan_is_idle function return type to bool Changes in v2: - Corrected the function header documentation - Framework expects bus-width value in bytes. so, fixed it. - Removed magic numbers for bus-width --- drivers/dma/Kconfig | 6 + drivers/dma/xilinx/Makefile | 1 + drivers/dma/xilinx/zynqmp_dma.c | 1216 +++++++++++++++++++++++++++++++++++++++ 3 files changed, 1223 insertions(+), 0 deletions(-) create mode 100644 drivers/dma/xilinx/zynqmp_dma.c diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig index bda2cb0..d5b57fc 100644 --- a/drivers/dma/Kconfig +++ b/drivers/dma/Kconfig @@ -410,6 +410,12 @@ config XILINX_VDMA channels, Memory Mapped to Stream (MM2S) and Stream to Memory Mapped (S2MM) for the data transfers. +config XILINX_ZYNQMP_DMA + tristate "Xilinx ZynqMP DMA Engine" + select DMA_ENGINE + help + Enable support for Xilinx ZynqMP DMA engine in Zynq UltraScale+ MPSoC. 
+ config DMA_SUN6I tristate "Allwinner A31 SoCs DMA support" depends on MACH_SUN6I || MACH_SUN8I || COMPILE_TEST diff --git a/drivers/dma/xilinx/Makefile b/drivers/dma/xilinx/Makefile index 3c4e9f2..95469dc 100644 --- a/drivers/dma/xilinx/Makefile +++ b/drivers/dma/xilinx/Makefile @@ -1 +1,2 @@ obj-$(CONFIG_XILINX_VDMA) += xilinx_vdma.o +obj-$(CONFIG_XILINX_ZYNQMP_DMA) += zynqmp_dma.o diff --git a/drivers/dma/xilinx/zynqmp_dma.c b/drivers/dma/xilinx/zynqmp_dma.c new file mode 100644 index 0000000..cfb169a --- /dev/null +++ b/drivers/dma/xilinx/zynqmp_dma.c @@ -0,0 +1,1216 @@ +/* + * DMA driver for Xilinx ZynqMP DMA Engine + * + * Copyright (C) 2015 Xilinx, Inc. All rights reserved. + * + * This program is free software: you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation, either version 2 of the License, or + * (at your option) any later version. + */ + +#include <linux/bitops.h> +#include <linux/dma-mapping.h> +#include <linux/dmaengine.h> +#include <linux/dmapool.h> +#include <linux/init.h> +#include <linux/interrupt.h> +#include <linux/io.h> +#include <linux/module.h> +#include <linux/of_address.h> +#include <linux/of_dma.h> +#include <linux/of_irq.h> +#include <linux/of_platform.h> +#include <linux/slab.h> + +#include "../dmaengine.h" + +/* Register Offsets */ +#define ISR 0x100 +#define IMR 0x104 +#define IER 0x108 +#define IDS 0x10C +#define CTRL0 0x110 +#define CTRL1 0x114 +#define STATUS 0x11C +#define DATA_ATTR 0x120 +#define DSCR_ATTR 0x124 +#define SRC_DSCR_WRD0 0x128 +#define SRC_DSCR_WRD1 0x12C +#define SRC_DSCR_WRD2 0x130 +#define SRC_DSCR_WRD3 0x134 +#define DST_DSCR_WRD0 0x138 +#define DST_DSCR_WRD1 0x13C +#define DST_DSCR_WRD2 0x140 +#define DST_DSCR_WRD3 0x144 +#define SRC_START_LSB 0x158 +#define SRC_START_MSB 0x15C +#define DST_START_LSB 0x160 +#define DST_START_MSB 0x164 +#define TOTAL_BYTE 0x188 +#define RATE_CTRL 0x18C +#define IRQ_SRC_ACCT 0x190 
+#define IRQ_DST_ACCT 0x194 +#define CTRL2 0x200 + +/* Interrupt registers bit field definitions */ +#define DMA_DONE BIT(10) +#define AXI_WR_DATA BIT(9) +#define AXI_RD_DATA BIT(8) +#define AXI_RD_DST_DSCR BIT(7) +#define AXI_RD_SRC_DSCR BIT(6) +#define IRQ_DST_ACCT_ERR BIT(5) +#define IRQ_SRC_ACCT_ERR BIT(4) +#define BYTE_CNT_OVRFL BIT(3) +#define DST_DSCR_DONE BIT(2) +#define INV_APB BIT(0) + +/* Control 0 register bit field definitions */ +#define OVR_FETCH BIT(7) +#define POINT_TYPE_SG BIT(6) +#define RATE_CTRL_EN BIT(3) + +/* Control 1 register bit field definitions */ +#define SRC_ISSUE GENMASK(4, 0) + +/* Channel status register bit field definitions */ +#define STATUS_BUSY 0x2 + +/* Data Attribute register bit field definitions */ +#define ARBURST GENMASK(27, 26) +#define ARCACHE GENMASK(25, 22) +#define ARCACHE_OFST 22 +#define ARQOS GENMASK(21, 18) +#define ARQOS_OFST 18 +#define ARLEN GENMASK(17, 14) +#define ARLEN_OFST 14 +#define AWBURST GENMASK(13, 12) +#define AWCACHE GENMASK(11, 8) +#define AWCACHE_OFST 8 +#define AWQOS GENMASK(7, 4) +#define AWQOS_OFST 4 +#define AWLEN GENMASK(3, 0) +#define AWLEN_OFST 0 + +/* Descriptor Attribute register bit field definitions */ +#define AXCOHRNT BIT(8) +#define AXCACHE GENMASK(7, 4) +#define AXCACHE_OFST 4 +#define AXQOS GENMASK(3, 0) +#define AXQOS_OFST 0 + +/* Control register 2 bit field definitions */ +#define ENABLE BIT(0) + +/* Buffer Descriptor definitions */ +#define DESC_CTRL_STOP 0x10 +#define DESC_CTRL_COMP_INT 0x4 +#define DESC_CTRL_SIZE_256 0x2 +#define DESC_CTRL_COHRNT 0x1 + +/* Interrupt Mask specific definitions */ +#define INT_ERR (AXI_RD_DATA | AXI_WR_DATA | AXI_RD_DST_DSCR | \ + AXI_RD_SRC_DSCR | INV_APB) +#define INT_OVRFL (BYTE_CNT_OVRFL | IRQ_SRC_ACCT_ERR | IRQ_DST_ACCT_ERR) +#define INT_DONE (DMA_DONE | DST_DSCR_DONE) +#define INT_EN_DEFAULT_MASK (INT_DONE | INT_ERR | INT_OVRFL | DST_DSCR_DONE) + +/* Max number of descriptors per channel */ +#define ZYNQMP_DMA_NUM_DESCS 32 + +/* Max 
transfer size per descriptor */ +#define ZYNQMP_DMA_MAX_TRANS_LEN 0x40000000 + +/* Reset values for data attributes */ +#define ARCACHE_RST_VAL 0x2 +#define ARLEN_RST_VAL 0xF +#define AWCACHE_RST_VAL 0x2 +#define AWLEN_RST_VAL 0xF + +#define SRC_ISSUE_RST_VAL 0x1F + +#define IDS_DEFAULT_MASK 0xFFF + +/* Bus width in bits */ +#define ZYNQMP_DMA_BUS_WIDTH_64 64 +#define ZYNQMP_DMA_BUS_WIDTH_128 128 + +#define DESC_SIZE(chan) (chan->desc_size) + +#define to_chan(chan) container_of(chan, struct zynqmp_dma_chan, \ + common) +#define tx_to_desc(tx) container_of(tx, struct zynqmp_dma_desc_sw, \ + async_tx) + +/** + * struct zynqmp_dma_desc_ll - Hw linked list descriptor + * @addr: Buffer address + * @size: Size of the buffer + * @ctrl: Control word + * @nxtdscraddr: Next descriptor base address + * @rsvd: Reserved field and for Hw internal use. + */ +struct zynqmp_dma_desc_ll { + u64 addr; + u32 size; + u32 ctrl; + u64 nxtdscraddr; + u64 rsvd; +} __aligned(64); + +/** + * struct zynqmp_dma_chan - Driver specific DMA channel structure + * @xdev: Driver specific device structure + * @regs: Control registers offset + * @lock: Descriptor operation lock + * @pending_list: Descriptors waiting + * @free_list: Descriptors free + * @active_list: Descriptors active + * @sw_desc_pool: SW descriptor pool + * @done_list: Complete descriptors + * @common: DMA common channel + * @desc_pool_v: Statically allocated descriptor base + * @desc_pool_p: Physical allocated descriptor base + * @desc_free_cnt: Descriptor available count + * @dev: The dma device + * @irq: Channel IRQ + * @has_sg: Support scatter gather transfers + * @ovrfetch: Overfetch status + * @ratectrl: Rate control value + * @tasklet: Cleanup work after irq + * @src_issue: Outstanding transactions on source + * @dst_issue: Outstanding transactions on destination + * @desc_size: Size of the low level descriptor + * @err: Channel has errors + * @bus_width: Bus width + * @desc_axi_cohrnt: Descriptor axi coherent status + * 
@desc_axi_cache: Descriptor axi cache attribute + * @desc_axi_qos: Descriptor axi qos attribute + * @src_axi_cohrnt: Source data axi coherent status + * @src_axi_cache: Source data axi cache attribute + * @src_axi_qos: Source data axi qos attribute + * @dst_axi_cohrnt: Dest data axi coherent status + * @dst_axi_cache: Dest data axi cache attribute + * @dst_axi_qos: Dest data axi qos attribute + * @src_burst_len: Source burst length + * @dst_burst_len: Dest burst length + */ +struct zynqmp_dma_chan { + struct zynqmp_dma_device *xdev; + void __iomem *regs; + spinlock_t lock; + struct list_head pending_list; + struct list_head free_list; + struct list_head active_list; + struct zynqmp_dma_desc_sw *sw_desc_pool; + struct list_head done_list; + struct dma_chan common; + void *desc_pool_v; + dma_addr_t desc_pool_p; + u32 desc_free_cnt; + struct device *dev; + int irq; + bool has_sg; + bool ovrfetch; + u32 ratectrl; + struct tasklet_struct tasklet; + u32 src_issue; + u32 dst_issue; + u32 desc_size; + bool err; + u32 bus_width; + u32 desc_axi_cohrnt; + u32 desc_axi_cache; + u32 desc_axi_qos; + u32 src_axi_cohrnt; + u32 src_axi_cache; + u32 src_axi_qos; + u32 dst_axi_cohrnt; + u32 dst_axi_cache; + u32 dst_axi_qos; + u32 src_burst_len; + u32 dst_burst_len; +}; + +/** + * struct zynqmp_dma_desc_sw - Per Transaction structure + * @src: Source address for simple mode dma + * @dst: Destination address for simple mode dma + * @len: Transfer length for simple mode dma + * @node: Node in the channel descriptor list + * @tx_list: List head for the current transfer + * @async_tx: Async transaction descriptor + * @src_v: Virtual address of the src descriptor + * @src_p: Physical address of the src descriptor + * @dst_v: Virtual address of the dst descriptor + * @dst_p: Physical address of the dst descriptor + */ +struct zynqmp_dma_desc_sw { + u64 src; + u64 dst; + u32 len; + struct list_head node; + struct list_head tx_list; + struct dma_async_tx_descriptor async_tx; + struct 
zynqmp_dma_desc_ll *src_v; + dma_addr_t src_p; + struct zynqmp_dma_desc_ll *dst_v; + dma_addr_t dst_p; +}; + +/** + * struct zynqmp_dma_device - DMA device structure + * @dev: Device Structure + * @common: DMA device structure + * @chan: Driver specific DMA channel + */ +struct zynqmp_dma_device { + struct device *dev; + struct dma_device common; + struct zynqmp_dma_chan *chan; +}; + +/** + * zynqmp_dma_chan_is_idle - Provides the channel idle status + * @chan: ZynqMP DMA DMA channel pointer + * + * Return: 'true' if the channel is idle otherwise 'false' + */ +static bool zynqmp_dma_chan_is_idle(struct zynqmp_dma_chan *chan) +{ + u32 regval; + + regval = readl(chan->regs + STATUS); + if (regval & STATUS_BUSY) + return false; + + return true; +} + +/** + * zynqmp_dma_update_desc_to_ctrlr - Updates descriptor to the controller + * @chan: ZynqMP DMA DMA channel pointer + * @desc: Transaction descriptor pointer + */ +static void zynqmp_dma_update_desc_to_ctrlr(struct zynqmp_dma_chan *chan, + struct zynqmp_dma_desc_sw *desc) +{ + dma_addr_t addr; + + addr = desc->src_p; + writel(addr, chan->regs + SRC_START_LSB); + writel(upper_32_bits(addr), chan->regs + SRC_START_MSB); + addr = desc->dst_p; + writel(addr, chan->regs + DST_START_LSB); + writel(upper_32_bits(addr), chan->regs + DST_START_MSB); +} + +/** + * zynqmp_dma_desc_config_eod - Mark the descriptor as end descriptor + * @chan: ZynqMP DMA channel pointer + * @desc: Hw descriptor pointer + */ +static void zynqmp_dma_desc_config_eod(struct zynqmp_dma_chan *chan, void *desc) +{ + struct zynqmp_dma_desc_ll *hw = (struct zynqmp_dma_desc_ll *)desc; + + hw->ctrl |= DESC_CTRL_STOP; + hw++; + hw->ctrl |= DESC_CTRL_COMP_INT | DESC_CTRL_STOP; +} + +/** + * zynqmp_dma_config_simple_desc - Configure the transfer params to channel registers + * @chan: ZynqMP DMA channel pointer + * @src: Source buffer address + * @dst: Destination buffer address + * @len: Transfer length + */ +static void zynqmp_dma_config_simple_desc(struct 
zynqmp_dma_chan *chan,
+					  dma_addr_t src, dma_addr_t dst,
+					  size_t len)
+{
+	u32 val;
+
+	writel(src, chan->regs + SRC_DSCR_WRD0);
+	writel(upper_32_bits(src), chan->regs + SRC_DSCR_WRD1);
+	writel(len, chan->regs + SRC_DSCR_WRD2);
+
+	if (chan->src_axi_cohrnt)
+		writel(DESC_CTRL_COHRNT, chan->regs + SRC_DSCR_WRD3);
+	else
+		writel(0, chan->regs + SRC_DSCR_WRD3);
+
+	writel(dst, chan->regs + DST_DSCR_WRD0);
+	writel(upper_32_bits(dst), chan->regs + DST_DSCR_WRD1);
+	writel(len, chan->regs + DST_DSCR_WRD2);
+
+	if (chan->dst_axi_cohrnt)
+		val = DESC_CTRL_COHRNT | DESC_CTRL_COMP_INT;
+	else
+		val = DESC_CTRL_COMP_INT;
+	writel(val, chan->regs + DST_DSCR_WRD3);
+}
+
+/**
+ * zynqmp_dma_config_sg_ll_desc - Configure the linked list descriptor
+ * @chan: ZynqMP DMA channel pointer
+ * @sdesc: Hw descriptor pointer
+ * @src: Source buffer address
+ * @dst: Destination buffer address
+ * @len: Transfer length
+ * @prev: Previous hw descriptor pointer
+ */
+static void zynqmp_dma_config_sg_ll_desc(struct zynqmp_dma_chan *chan,
+					 struct zynqmp_dma_desc_ll *sdesc,
+					 dma_addr_t src, dma_addr_t dst, size_t len,
+					 struct zynqmp_dma_desc_ll *prev)
+{
+	struct zynqmp_dma_desc_ll *ddesc = sdesc + 1;
+
+	sdesc->size = ddesc->size = len;
+	sdesc->addr = src;
+	ddesc->addr = dst;
+
+	sdesc->ctrl = ddesc->ctrl = DESC_CTRL_SIZE_256;
+	if (chan->src_axi_cohrnt)
+		sdesc->ctrl |= DESC_CTRL_COHRNT;
+	else
+		ddesc->ctrl |= DESC_CTRL_COHRNT;
+
+	if (prev) {
+		dma_addr_t addr = chan->desc_pool_p +
+			((dma_addr_t)sdesc - (dma_addr_t)chan->desc_pool_v);
+		ddesc = prev + 1;
+		prev->nxtdscraddr = addr;
+		ddesc->nxtdscraddr = addr + DESC_SIZE(chan);
+	}
+}
+
+/**
+ * zynqmp_dma_init - Initialize the channel
+ * @chan: ZynqMP DMA channel pointer
+ */
+static void zynqmp_dma_init(struct zynqmp_dma_chan *chan)
+{
+	u32 val;
+
+	writel(IDS_DEFAULT_MASK, chan->regs + IDS);
+	val = readl(chan->regs + ISR);
+	writel(val, chan->regs + ISR);
+	writel(0x0, chan->regs + TOTAL_BYTE);
+
+	val = readl(chan->regs + CTRL1);
+	if (chan->src_issue)
+		val = (val & ~SRC_ISSUE) | chan->src_issue;
+	writel(val, chan->regs + CTRL1);
+
+	val = 0;
+	if (chan->ovrfetch)
+		val |= OVR_FETCH;
+	if (chan->has_sg)
+		val |= POINT_TYPE_SG;
+	if (chan->ratectrl) {
+		val |= RATE_CTRL_EN;
+		writel(chan->ratectrl, chan->regs + RATE_CTRL);
+	}
+	writel(val, chan->regs + CTRL0);
+
+	val = 0;
+	if (chan->desc_axi_cohrnt)
+		val |= AXCOHRNT;
+	val |= chan->desc_axi_cache;
+	val = (val & ~AXCACHE) | (chan->desc_axi_cache << AXCACHE_OFST);
+	val |= chan->desc_axi_qos;
+	val = (val & ~AXQOS) | (chan->desc_axi_qos << AXQOS_OFST);
+	writel(val, chan->regs + DSCR_ATTR);
+
+	val = readl(chan->regs + DATA_ATTR);
+	val = (val & ~ARCACHE) | (chan->src_axi_cache << ARCACHE_OFST);
+	val = (val & ~AWCACHE) | (chan->dst_axi_cache << AWCACHE_OFST);
+	val = (val & ~ARQOS) | (chan->src_axi_qos << ARQOS_OFST);
+	val = (val & ~AWQOS) | (chan->dst_axi_qos << AWQOS_OFST);
+	val = (val & ~ARLEN) | (chan->src_burst_len << ARLEN_OFST);
+	val = (val & ~AWLEN) | (chan->dst_burst_len << AWLEN_OFST);
+	writel(val, chan->regs + DATA_ATTR);
+
+	/* Clear the interrupt accounting registers */
+	val = readl(chan->regs + IRQ_SRC_ACCT);
+	val = readl(chan->regs + IRQ_DST_ACCT);
+}
+
+/**
+ * zynqmp_dma_tx_submit - Submit DMA transaction
+ * @tx: Async transaction descriptor pointer
+ *
+ * Return: cookie value
+ */
+static dma_cookie_t zynqmp_dma_tx_submit(struct dma_async_tx_descriptor *tx)
+{
+	struct zynqmp_dma_chan *chan = to_chan(tx->chan);
+	struct zynqmp_dma_desc_sw *desc = NULL, *new;
+	dma_cookie_t cookie;
+
+	new = tx_to_desc(tx);
+	spin_lock_bh(&chan->lock);
+	cookie = dma_cookie_assign(tx);
+	if (!list_empty(&chan->pending_list) && chan->has_sg) {
+		desc = list_last_entry(&chan->pending_list,
+				       struct zynqmp_dma_desc_sw, node);
+		if (!list_empty(&desc->tx_list))
+			desc = list_last_entry(&desc->tx_list,
+					       struct zynqmp_dma_desc_sw, node);
+		desc->src_v->nxtdscraddr = new->src_p;
+		desc->src_v->ctrl &= ~DESC_CTRL_STOP;
+		desc->dst_v->nxtdscraddr = new->dst_p;
+		desc->dst_v->ctrl &= ~DESC_CTRL_STOP;
+	}
+	list_add_tail(&new->node, &chan->pending_list);
+	spin_unlock_bh(&chan->lock);
+
+	return cookie;
+}
+
+/**
+ * zynqmp_dma_get_descriptor - Get the sw descriptor from the pool
+ * @chan: ZynqMP DMA channel pointer
+ *
+ * Return: The sw descriptor
+ */
+static struct zynqmp_dma_desc_sw *
+zynqmp_dma_get_descriptor(struct zynqmp_dma_chan *chan)
+{
+	struct zynqmp_dma_desc_sw *desc;
+
+	spin_lock_bh(&chan->lock);
+	desc = list_first_entry(&chan->free_list, struct zynqmp_dma_desc_sw,
+				node);
+	list_del(&desc->node);
+	spin_unlock_bh(&chan->lock);
+
+	INIT_LIST_HEAD(&desc->tx_list);
+	/* Clear the src and dst descriptor memory */
+	if (chan->has_sg) {
+		memset((void *)desc->src_v, 0, DESC_SIZE(chan));
+		memset((void *)desc->dst_v, 0, DESC_SIZE(chan));
+	}
+
+	return desc;
+}
+
+/**
+ * zynqmp_dma_free_descriptor - Free the transaction descriptor
+ * @chan: ZynqMP DMA channel pointer
+ * @sdesc: Transaction descriptor pointer
+ */
+static void zynqmp_dma_free_descriptor(struct zynqmp_dma_chan *chan,
+				       struct zynqmp_dma_desc_sw *sdesc)
+{
+	struct zynqmp_dma_desc_sw *child, *next;
+
+	chan->desc_free_cnt++;
+	list_add_tail(&sdesc->node, &chan->free_list);
+	list_for_each_entry_safe(child, next, &sdesc->tx_list, node) {
+		chan->desc_free_cnt++;
+		list_move_tail(&child->node, &chan->free_list);
+	}
+}
+
+/**
+ * zynqmp_dma_free_desc_list - Free descriptors list
+ * @chan: ZynqMP DMA channel pointer
+ * @list: List to parse and delete the descriptor
+ */
+static void zynqmp_dma_free_desc_list(struct zynqmp_dma_chan *chan,
+				      struct list_head *list)
+{
+	struct zynqmp_dma_desc_sw *desc, *next;
+
+	list_for_each_entry_safe(desc, next, list, node)
+		zynqmp_dma_free_descriptor(chan, desc);
+}
+
+/**
+ * zynqmp_dma_alloc_chan_resources - Allocate channel resources
+ * @dchan: DMA channel
+ *
+ * Return: '0' on success and failure value on error
+ */
+static int zynqmp_dma_alloc_chan_resources(struct dma_chan *dchan)
+{
+	struct zynqmp_dma_chan *chan = to_chan(dchan);
+	struct zynqmp_dma_desc_sw *desc;
+	int i;
+
+	chan->sw_desc_pool = kzalloc(sizeof(*desc) * ZYNQMP_DMA_NUM_DESCS,
+				     GFP_KERNEL);
+	if (!chan->sw_desc_pool)
+		return -ENOMEM;
+
+	chan->desc_free_cnt = ZYNQMP_DMA_NUM_DESCS;
+
+	INIT_LIST_HEAD(&chan->free_list);
+
+	for (i = 0; i < ZYNQMP_DMA_NUM_DESCS; i++) {
+		desc = chan->sw_desc_pool + i;
+		dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
+		desc->async_tx.tx_submit = zynqmp_dma_tx_submit;
+		async_tx_ack(&desc->async_tx);
+		list_add_tail(&desc->node, &chan->free_list);
+	}
+
+	if (!chan->has_sg)
+		return 0;
+
+	chan->desc_pool_v = dma_zalloc_coherent(chan->dev,
+				(2 * chan->desc_size * ZYNQMP_DMA_NUM_DESCS),
+				&chan->desc_pool_p, GFP_KERNEL);
+	if (!chan->desc_pool_v)
+		return -ENOMEM;
+
+	for (i = 0; i < ZYNQMP_DMA_NUM_DESCS; i++) {
+		desc = chan->sw_desc_pool + i;
+		desc->src_v = (struct zynqmp_dma_desc_ll *) (chan->desc_pool_v +
+					(i * DESC_SIZE(chan) * 2));
+		desc->dst_v = (struct zynqmp_dma_desc_ll *) (desc->src_v + 1);
+		desc->src_p = chan->desc_pool_p + (i * DESC_SIZE(chan) * 2);
+		desc->dst_p = desc->src_p + DESC_SIZE(chan);
+	}
+
+	return 0;
+}
+
+/**
+ * zynqmp_dma_start - Start DMA channel
+ * @chan: ZynqMP DMA channel pointer
+ */
+static void zynqmp_dma_start(struct zynqmp_dma_chan *chan)
+{
+	writel(INT_EN_DEFAULT_MASK, chan->regs + IER);
+	writel(0, chan->regs + TOTAL_BYTE);
+	writel(ENABLE, chan->regs + CTRL2);
+}
+
+/**
+ * zynqmp_dma_handle_ovfl_int - Process the overflow interrupt
+ * @chan: ZynqMP DMA channel pointer
+ * @status: Interrupt status value
+ */
+static void zynqmp_dma_handle_ovfl_int(struct zynqmp_dma_chan *chan, u32 status)
+{
+	u32 val;
+
+	if (status & BYTE_CNT_OVRFL) {
+		val = readl(chan->regs + TOTAL_BYTE);
+		writel(0, chan->regs + TOTAL_BYTE);
+	}
+	if (status & IRQ_DST_ACCT_ERR)
+		val = readl(chan->regs + IRQ_DST_ACCT);
+	if (status & IRQ_SRC_ACCT_ERR)
+		val = readl(chan->regs + IRQ_SRC_ACCT);
+}
+
+/**
+ * zynqmp_dma_start_transfer - Initiate the new transfer
+ * @chan: ZynqMP DMA channel pointer
+ */
+static void zynqmp_dma_start_transfer(struct zynqmp_dma_chan *chan)
+{
+	struct zynqmp_dma_desc_sw *desc;
+
+	if (!zynqmp_dma_chan_is_idle(chan))
+		return;
+
+	desc = list_first_entry_or_null(&chan->pending_list,
+					struct zynqmp_dma_desc_sw, node);
+	if (!desc)
+		return;
+
+	if (chan->has_sg)
+		list_splice_tail_init(&chan->pending_list, &chan->active_list);
+	else
+		list_move_tail(&desc->node, &chan->active_list);
+
+	if (chan->has_sg)
+		zynqmp_dma_update_desc_to_ctrlr(chan, desc);
+	else
+		zynqmp_dma_config_simple_desc(chan, desc->src, desc->dst,
+					      desc->len);
+	zynqmp_dma_start(chan);
+}
+
+/**
+ * zynqmp_dma_chan_desc_cleanup - Cleanup the completed descriptors
+ * @chan: ZynqMP DMA channel
+ */
+static void zynqmp_dma_chan_desc_cleanup(struct zynqmp_dma_chan *chan)
+{
+	struct zynqmp_dma_desc_sw *desc, *next;
+
+	list_for_each_entry_safe(desc, next, &chan->done_list, node) {
+		dma_async_tx_callback callback;
+		void *callback_param;
+
+		list_del(&desc->node);
+
+		callback = desc->async_tx.callback;
+		callback_param = desc->async_tx.callback_param;
+		if (callback)
+			callback(callback_param);
+
+		/* Run any dependencies, then free the descriptor */
+		dma_run_dependencies(&desc->async_tx);
+		zynqmp_dma_free_descriptor(chan, desc);
+	}
+}
+
+/**
+ * zynqmp_dma_complete_descriptor - Mark the active descriptor as complete
+ * @chan: ZynqMP DMA channel pointer
+ */
+static void zynqmp_dma_complete_descriptor(struct zynqmp_dma_chan *chan)
+{
+	struct zynqmp_dma_desc_sw *desc;
+
+	desc = list_first_entry_or_null(&chan->active_list,
+					struct zynqmp_dma_desc_sw, node);
+	if (!desc)
+		return;
+	list_del(&desc->node);
+	dma_cookie_complete(&desc->async_tx);
+	list_add_tail(&desc->node, &chan->done_list);
+}
+
+/**
+ * zynqmp_dma_issue_pending - Issue pending transactions
+ * @dchan: DMA channel pointer
+ */
+static void zynqmp_dma_issue_pending(struct dma_chan *dchan)
+{
+	struct zynqmp_dma_chan *chan = to_chan(dchan);
+
+	spin_lock_bh(&chan->lock);
+	zynqmp_dma_start_transfer(chan);
+	spin_unlock_bh(&chan->lock);
+}
+
+/**
+ * zynqmp_dma_free_chan_resources - Free channel resources
+ * @dchan: DMA channel pointer
+ */
+static void zynqmp_dma_free_chan_resources(struct dma_chan *dchan)
+{
+	struct zynqmp_dma_chan *chan = to_chan(dchan);
+
+	spin_lock_bh(&chan->lock);
+
+	zynqmp_dma_free_desc_list(chan, &chan->active_list);
+	zynqmp_dma_free_desc_list(chan, &chan->done_list);
+	zynqmp_dma_free_desc_list(chan, &chan->pending_list);
+
+	spin_unlock_bh(&chan->lock);
+	if (chan->has_sg)
+		dma_free_coherent(chan->dev,
+				  (2 * DESC_SIZE(chan) * ZYNQMP_DMA_NUM_DESCS),
+				  chan->desc_pool_v, chan->desc_pool_p);
+	kfree(chan->sw_desc_pool);
+}
+
+/**
+ * zynqmp_dma_tx_status - Get dma transaction status
+ * @dchan: DMA channel pointer
+ * @cookie: Transaction identifier
+ * @txstate: Transaction state
+ *
+ * Return: DMA transaction status
+ */
+static enum dma_status zynqmp_dma_tx_status(struct dma_chan *dchan,
+					    dma_cookie_t cookie,
+					    struct dma_tx_state *txstate)
+{
+	return dma_cookie_status(dchan, cookie, txstate);
+}
+
+/**
+ * zynqmp_dma_reset - Reset the channel
+ * @chan: ZynqMP DMA channel pointer
+ */
+static void zynqmp_dma_reset(struct zynqmp_dma_chan *chan)
+{
+	writel(IDS_DEFAULT_MASK, chan->regs + IDS);
+
+	zynqmp_dma_complete_descriptor(chan);
+	zynqmp_dma_chan_desc_cleanup(chan);
+
+	zynqmp_dma_free_desc_list(chan, &chan->active_list);
+	zynqmp_dma_free_desc_list(chan, &chan->done_list);
+	zynqmp_dma_free_desc_list(chan, &chan->pending_list);
+	zynqmp_dma_init(chan);
+}
+
+/**
+ * zynqmp_dma_irq_handler - ZynqMP DMA Interrupt handler
+ * @irq: IRQ number
+ * @data: Pointer to the ZynqMP DMA channel structure
+ *
+ * Return: IRQ_HANDLED/IRQ_NONE
+ */
+static irqreturn_t zynqmp_dma_irq_handler(int irq, void *data)
+{
+	struct zynqmp_dma_chan *chan = (struct zynqmp_dma_chan *)data;
+	u32 isr, imr, status;
+	irqreturn_t ret = IRQ_NONE;
+
+	isr = readl(chan->regs + ISR);
+	imr = readl(chan->regs + IMR);
+	status = isr & ~imr;
+	writel(isr, chan->regs + ISR);
+	if (status & INT_DONE) {
+		writel(INT_DONE, chan->regs + IDS);
+		tasklet_schedule(&chan->tasklet);
+		ret = IRQ_HANDLED;
+	}
+
+	if (status & INT_ERR) {
+		chan->err = true;
+		writel(INT_ERR, chan->regs + IDS);
+		tasklet_schedule(&chan->tasklet);
+		dev_err(chan->dev, "Channel %p has errors\n", chan);
+		ret = IRQ_HANDLED;
+	}
+
+	if (status & INT_OVRFL) {
+		writel(INT_OVRFL, chan->regs + IDS);
+		zynqmp_dma_handle_ovfl_int(chan, status);
+		dev_info(chan->dev, "Channel %p overflow interrupt\n", chan);
+		ret = IRQ_HANDLED;
+	}
+
+	return ret;
+}
+
+/**
+ * zynqmp_dma_do_tasklet - Completion tasklet handler
+ * @data: Pointer to the ZynqMP DMA channel structure
+ */
+static void zynqmp_dma_do_tasklet(unsigned long data)
+{
+	struct zynqmp_dma_chan *chan = (struct zynqmp_dma_chan *)data;
+	u32 count;
+
+	spin_lock(&chan->lock);
+
+	if (chan->err) {
+		zynqmp_dma_reset(chan);
+		chan->err = false;
+		goto unlock;
+	}
+
+	if (zynqmp_dma_chan_is_idle(chan))
+		zynqmp_dma_start_transfer(chan);
+	else
+		writel(INT_DONE, chan->regs + IER);
+
+	count = readl(chan->regs + IRQ_DST_ACCT);
+	while (count) {
+		zynqmp_dma_complete_descriptor(chan);
+		zynqmp_dma_chan_desc_cleanup(chan);
+		count--;
+	}
+
+unlock:
+	spin_unlock(&chan->lock);
+}
+
+/**
+ * zynqmp_dma_device_terminate_all - Aborts all transfers on a channel
+ * @dchan: DMA channel pointer
+ *
+ * Return: Always '0'
+ */
+static int zynqmp_dma_device_terminate_all(struct dma_chan *dchan)
+{
+	struct zynqmp_dma_chan *chan = to_chan(dchan);
+
+	spin_lock_bh(&chan->lock);
+	zynqmp_dma_reset(chan);
+	spin_unlock_bh(&chan->lock);
+
+	return 0;
+}
+
+/**
+ * zynqmp_dma_prep_memcpy - prepare descriptors for memcpy transaction
+ * @dchan: DMA channel
+ * @dma_dst: Destination buffer address
+ * @dma_src: Source buffer address
+ * @len: Transfer length
+ * @flags: transfer ack flags
+ *
+ * Return: Async transaction descriptor on success and NULL on failure
+ */
+static struct dma_async_tx_descriptor *zynqmp_dma_prep_memcpy(
+				struct dma_chan *dchan, dma_addr_t dma_dst,
+				dma_addr_t dma_src, size_t len, ulong flags)
+{
+	struct zynqmp_dma_chan *chan;
+	struct zynqmp_dma_desc_sw *new, *first = NULL;
+	void *desc = NULL, *prev = NULL;
+	size_t copy;
+	u32 desc_cnt;
+
+	chan = to_chan(dchan);
+
+	if ((len > ZYNQMP_DMA_MAX_TRANS_LEN) && !chan->has_sg)
+		return NULL;
+
+	desc_cnt = DIV_ROUND_UP(len, ZYNQMP_DMA_MAX_TRANS_LEN);
+
+	spin_lock_bh(&chan->lock);
+	if ((desc_cnt > chan->desc_free_cnt) && chan->has_sg) {
+		spin_unlock_bh(&chan->lock);
+		dev_dbg(chan->dev, "chan %p descs are not available\n", chan);
+		return NULL;
+	}
+	chan->desc_free_cnt = chan->desc_free_cnt - desc_cnt;
+	spin_unlock_bh(&chan->lock);
+
+	do {
+		new = zynqmp_dma_get_descriptor(chan);
+		copy = min_t(size_t, len, ZYNQMP_DMA_MAX_TRANS_LEN);
+		if (chan->has_sg) {
+			desc = (struct zynqmp_dma_desc_ll *)new->src_v;
+			zynqmp_dma_config_sg_ll_desc(chan, desc, dma_src,
+						     dma_dst, copy, prev);
+		} else {
+			new->src = dma_src;
+			new->dst = dma_dst;
+			new->len = len;
+		}
+
+		prev = desc;
+		len -= copy;
+		dma_src += copy;
+		dma_dst += copy;
+		if (!first)
+			first = new;
+		else
+			list_add_tail(&new->node, &first->tx_list);
+	} while (len);
+
+	if (chan->has_sg)
+		zynqmp_dma_desc_config_eod(chan, desc);
+
+	first->async_tx.flags = flags;
+	return &first->async_tx;
+}
+
+/**
+ * zynqmp_dma_prep_sg - prepare descriptors for a memory sg transaction
+ * @dchan: DMA channel
+ * @dst_sg: Destination scatter list
+ * @dst_sg_len: Number of entries in destination scatter list
+ * @src_sg: Source scatter list
+ * @src_sg_len: Number of entries in source scatter list
+ * @flags: transfer ack flags
+ *
+ * Return: Async transaction descriptor on success and NULL on failure
+ */
+static struct dma_async_tx_descriptor *zynqmp_dma_prep_sg(
+			struct dma_chan *dchan, struct scatterlist *dst_sg,
+			unsigned int dst_sg_len, struct scatterlist *src_sg,
+			unsigned int src_sg_len, unsigned long flags)
+{
+	struct zynqmp_dma_desc_sw *new, *first = NULL;
+	struct zynqmp_dma_chan *chan = to_chan(dchan);
+	void *desc = NULL, *prev = NULL;
+	size_t len, dst_avail, src_avail;
+	dma_addr_t dma_dst, dma_src;
+	u32 desc_cnt = 0, i;
+	struct scatterlist *sg;
+
+	if (!chan->has_sg)
+		return NULL;
+
+	for_each_sg(src_sg, sg, src_sg_len, i)
+		desc_cnt += DIV_ROUND_UP(sg_dma_len(sg),
+					 ZYNQMP_DMA_MAX_TRANS_LEN);
+
+	spin_lock_bh(&chan->lock);
+	if (desc_cnt > chan->desc_free_cnt) {
+		spin_unlock_bh(&chan->lock);
+		dev_dbg(chan->dev, "chan %p descs are not available\n", chan);
+		return NULL;
+	}
+	chan->desc_free_cnt = chan->desc_free_cnt - desc_cnt;
+	spin_unlock_bh(&chan->lock);
+
+	dst_avail = sg_dma_len(dst_sg);
+	src_avail = sg_dma_len(src_sg);
+
+	/* Run until we are out of scatterlist entries */
+	while (true) {
+		/* Allocate and populate the descriptor */
+		new = zynqmp_dma_get_descriptor(chan);
+		desc = (struct zynqmp_dma_desc_ll *)new->src_v;
+		len = min_t(size_t, src_avail, dst_avail);
+		len = min_t(size_t, len, ZYNQMP_DMA_MAX_TRANS_LEN);
+		if (len == 0)
+			goto fetch;
+		dma_dst = sg_dma_address(dst_sg) + sg_dma_len(dst_sg) -
+			dst_avail;
+		dma_src = sg_dma_address(src_sg) + sg_dma_len(src_sg) -
+			src_avail;
+
+		zynqmp_dma_config_sg_ll_desc(chan, desc, dma_src, dma_dst,
+					     len, prev);
+		prev = desc;
+		dst_avail -= len;
+		src_avail -= len;
+
+		if (!first)
+			first = new;
+		else
+			list_add_tail(&new->node, &first->tx_list);
+fetch:
+		/* Fetch the next dst scatterlist entry */
+		if (dst_avail == 0) {
+			if (dst_sg_len == 0)
+				break;
+			dst_sg = sg_next(dst_sg);
+			if (dst_sg == NULL)
+				break;
+			dst_sg_len--;
+			dst_avail = sg_dma_len(dst_sg);
+		}
+		/* Fetch the next src scatterlist entry */
+		if (src_avail == 0) {
+			if (src_sg_len == 0)
+				break;
+			src_sg = sg_next(src_sg);
+			if (src_sg == NULL)
+				break;
+			src_sg_len--;
+			src_avail = sg_dma_len(src_sg);
+		}
+	}
+
+	zynqmp_dma_desc_config_eod(chan, desc);
+	first->async_tx.flags = flags;
+	return &first->async_tx;
+}
+
+/**
+ * zynqmp_dma_chan_remove - Channel remove function
+ * @chan: ZynqMP DMA channel pointer
+ */
+static void zynqmp_dma_chan_remove(struct zynqmp_dma_chan *chan)
+{
+	if (!chan)
+		return;
+
+	tasklet_kill(&chan->tasklet);
+	list_del(&chan->common.device_node);
+}
+
+/**
+ * zynqmp_dma_chan_probe - Per Channel Probing
+ * @xdev: Driver specific device structure
+ * @pdev: Pointer to the platform_device structure
+ *
+ * Return: '0' on success and failure value on error
+ */
+static int zynqmp_dma_chan_probe(struct zynqmp_dma_device *xdev,
+				 struct platform_device *pdev)
+{
+	struct zynqmp_dma_chan *chan;
+	struct resource *res;
+	struct device_node *node = pdev->dev.of_node;
+	int err;
+
+	chan = devm_kzalloc(xdev->dev, sizeof(*chan), GFP_KERNEL);
+	if (!chan)
+		return -ENOMEM;
+	chan->dev = xdev->dev;
+	chan->xdev = xdev;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	chan->regs = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(chan->regs))
+		return PTR_ERR(chan->regs);
+
+	chan->bus_width = ZYNQMP_DMA_BUS_WIDTH_64;
+	chan->src_issue = SRC_ISSUE_RST_VAL;
+	chan->dst_burst_len = AWLEN_RST_VAL;
+	chan->src_burst_len = ARLEN_RST_VAL;
+	chan->dst_axi_cache = AWCACHE_RST_VAL;
+	chan->src_axi_cache = ARCACHE_RST_VAL;
+	err = of_property_read_u32(node, "xlnx,bus-width", &chan->bus_width);
+	if ((err < 0) && ((chan->bus_width != ZYNQMP_DMA_BUS_WIDTH_64) ||
+			  (chan->bus_width != ZYNQMP_DMA_BUS_WIDTH_128))) {
+		dev_err(xdev->dev, "invalid bus-width value");
+		return err;
+	}
+	chan->has_sg = of_property_read_bool(node, "xlnx,include-sg");
+	chan->ovrfetch = of_property_read_bool(node, "xlnx,overfetch");
+	chan->desc_axi_cohrnt =
+		of_property_read_bool(node, "xlnx,desc-axi-cohrnt");
+	chan->src_axi_cohrnt =
+		of_property_read_bool(node, "xlnx,src-axi-cohrnt");
+	chan->dst_axi_cohrnt =
+		of_property_read_bool(node, "xlnx,dst-axi-cohrnt");
+
+	of_property_read_u32(node, "xlnx,desc-axi-qos", &chan->desc_axi_qos);
+	of_property_read_u32(node, "xlnx,desc-axi-cache",
+			     &chan->desc_axi_cache);
+	of_property_read_u32(node, "xlnx,src-axi-qos", &chan->src_axi_qos);
+	of_property_read_u32(node, "xlnx,src-axi-cache", &chan->src_axi_cache);
+	of_property_read_u32(node, "xlnx,dst-axi-qos", &chan->dst_axi_qos);
+	of_property_read_u32(node, "xlnx,dst-axi-cache", &chan->dst_axi_cache);
+	of_property_read_u32(node, "xlnx,src-burst-len", &chan->src_burst_len);
+	of_property_read_u32(node, "xlnx,dst-burst-len", &chan->dst_burst_len);
+	of_property_read_u32(node, "xlnx,ratectrl", &chan->ratectrl);
+	of_property_read_u32(node, "xlnx,src-issue", &chan->src_issue);
+
+	xdev->chan = chan;
+	tasklet_init(&chan->tasklet, zynqmp_dma_do_tasklet, (ulong)chan);
+	spin_lock_init(&chan->lock);
+	INIT_LIST_HEAD(&chan->active_list);
+	INIT_LIST_HEAD(&chan->pending_list);
+	INIT_LIST_HEAD(&chan->done_list);
+	INIT_LIST_HEAD(&chan->free_list);
+
+	dma_cookie_init(&chan->common);
+	chan->common.device = &xdev->common;
+	list_add_tail(&chan->common.device_node, &xdev->common.channels);
+
+	zynqmp_dma_init(chan);
+	chan->irq = platform_get_irq(pdev, 0);
+	if (chan->irq < 0)
+		return -ENXIO;
+	err = devm_request_irq(&pdev->dev, chan->irq, zynqmp_dma_irq_handler, 0,
+			       "zynqmp-dma", chan);
+	if (err)
+		return err;
+
+	chan->desc_size = sizeof(struct zynqmp_dma_desc_ll);
+	return 0;
+}
+
+/**
+ * of_zynqmp_dma_xlate - Translation function
+ * @dma_spec: Pointer to DMA specifier as found in the device tree
+ * @ofdma: Pointer to DMA controller data
+ *
+ * Return: DMA channel pointer on success and NULL on error
+ */
+static struct dma_chan *of_zynqmp_dma_xlate(struct of_phandle_args *dma_spec,
+					    struct of_dma *ofdma)
+{
+	struct zynqmp_dma_device *xdev = ofdma->of_dma_data;
+
+	return dma_get_slave_channel(&xdev->chan->common);
+}
+
+/**
+ * zynqmp_dma_probe - Driver probe function
+ * @pdev: Pointer to the platform_device structure
+ *
+ * Return: '0' on success and failure value on error
+ */
+static int zynqmp_dma_probe(struct platform_device *pdev)
+{
+	struct zynqmp_dma_device *xdev;
+	struct dma_device *p;
+	int ret;
+
+	xdev = devm_kzalloc(&pdev->dev, sizeof(*xdev), GFP_KERNEL);
+	if (!xdev)
+		return -ENOMEM;
+
+	xdev->dev = &pdev->dev;
+	INIT_LIST_HEAD(&xdev->common.channels);
+
+	dma_set_mask(&pdev->dev, DMA_BIT_MASK(44));
+	dma_cap_set(DMA_SG, xdev->common.cap_mask);
+	dma_cap_set(DMA_MEMCPY, xdev->common.cap_mask);
+
+	p = &xdev->common;
+	p->device_prep_dma_sg = zynqmp_dma_prep_sg;
+	p->device_prep_dma_memcpy = zynqmp_dma_prep_memcpy;
+	p->device_terminate_all = zynqmp_dma_device_terminate_all;
+	p->device_issue_pending = zynqmp_dma_issue_pending;
+	p->device_alloc_chan_resources = zynqmp_dma_alloc_chan_resources;
+	p->device_free_chan_resources = zynqmp_dma_free_chan_resources;
+	p->device_tx_status = zynqmp_dma_tx_status;
+	p->dev = &pdev->dev;
+
+	platform_set_drvdata(pdev, xdev);
+
+	ret = zynqmp_dma_chan_probe(xdev, pdev);
+	if (ret) {
+		dev_err(&pdev->dev, "Probing channel failed\n");
+		goto free_chan_resources;
+	}
+
+	p->dst_addr_widths = xdev->chan->bus_width / 8;
+	p->src_addr_widths = xdev->chan->bus_width / 8;
+
+	dma_async_device_register(&xdev->common);
+
+	ret = of_dma_controller_register(pdev->dev.of_node,
+					 of_zynqmp_dma_xlate, xdev);
+	if (ret) {
+		dev_err(&pdev->dev, "Unable to register DMA to DT\n");
+		dma_async_device_unregister(&xdev->common);
+		goto free_chan_resources;
+	}
+
+	dev_info(&pdev->dev, "ZynqMP DMA driver Probe success\n");
+
+	return 0;
+
+free_chan_resources:
+	zynqmp_dma_chan_remove(xdev->chan);
+	return ret;
+}
+
+/**
+ * zynqmp_dma_remove - Driver remove function
+ * @pdev: Pointer to the platform_device structure
+ *
+ * Return: Always '0'
+ */
+static int zynqmp_dma_remove(struct platform_device *pdev)
+{
+	struct zynqmp_dma_device *xdev = platform_get_drvdata(pdev);
+
+	of_dma_controller_free(pdev->dev.of_node);
+	dma_async_device_unregister(&xdev->common);
+
+	zynqmp_dma_chan_remove(xdev->chan);
+
+	return 0;
+}
+
+static const struct of_device_id zynqmp_dma_of_match[] = {
+	{ .compatible = "xlnx,zynqmp-dma-1.0", },
+	{}
+};
+MODULE_DEVICE_TABLE(of, zynqmp_dma_of_match);
+
+static struct platform_driver zynqmp_dma_driver = {
+	.driver = {
+		.name = "xilinx-zynqmp-dma",
+		.of_match_table = zynqmp_dma_of_match,
+	},
+	.probe = zynqmp_dma_probe,
+	.remove = zynqmp_dma_remove,
+};
+
+module_platform_driver(zynqmp_dma_driver);
+
+MODULE_AUTHOR("Xilinx, Inc.");
+MODULE_DESCRIPTION("Xilinx ZynqMP DMA driver");
+MODULE_LICENSE("GPL");
--
1.7.4
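For reference, a device node matching the binding in patch 1/2 might look like the following. The node name, register range, and interrupt number here are illustrative placeholders, not values taken from the patch; only the property names and the compatible string come from the binding document:

```dts
fpd_dma: dma@fd500000 {
	compatible = "xlnx,zynqmp-dma-1.0";
	reg = <0x0 0xfd500000 0x0 0x1000>;	/* illustrative address */
	interrupt-parent = <&gic>;
	interrupts = <0 117 4>;			/* illustrative interrupt */
	#dma-cells = <1>;
	xlnx,bus-width = <64>;
	/* optional tunables from the binding, values illustrative */
	xlnx,include-sg;
	xlnx,overfetch;
	xlnx,src-issue = <16>;
};
```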