Linux kernel and device drivers for NXP i.MX platforms
* [PATCH RFC 00/12] dmaengine: introduce common linked-list DMA controller framework
@ 2026-01-28 18:05 Frank Li
  2026-01-28 18:05 ` [PATCH RFC 01/12] dmaengine: Extend virt_chan for link list based DMA engines Frank Li
                   ` (11 more replies)
  0 siblings, 12 replies; 13+ messages in thread
From: Frank Li @ 2026-01-28 18:05 UTC (permalink / raw)
  To: Vinod Koul; +Cc: linux-kernel, dmaengine, imx, joy.zou, Frank Li

Many DMA engines, such as fsl-edma, at-hdmac, and ste-dma40, use
linked-list descriptors for data transfers and share a large amount of
common logic.

Introduce a basic framework for linked-list–based DMA controllers as a
first step toward a common library.

All DMA descriptors have a uniform size and are chained as follows:

    ┌──────┐    ┌──────┐    ┌──────┐
    │      │ ┌─►│      │ ┌─►│      │
    │      │ │  │      │ │  │      │
    ├──────┤ │  ├──────┤ │  ├──────┤
    │ Next ├─┘  │ Next ├─┘  │ Next │
    └──────┘    └──────┘    └──────┘

The framework is derived from the fsl-edma implementation and provides
common descriptor allocation/free and helpers for prep_memcpy(),
prep_slave_sg(), and prep_slave_cyclic().

This patch series is based on:
  https://lore.kernel.org/dmaengine/20260114-dma_common_config-v1-0-64feb836ff04@nxp.com/

Additional functionality can be added if everyone agrees on this approach.

Signed-off-by: Frank Li <Frank.Li@nxp.com>
---
Frank Li (12):
      dmaengine: Extend virt_chan for link list based DMA engines
      dmaengine: Add common dma_ll_desc and dma_linklist_item for link-list controllers
      dmaengine: fsl-edma: Remove redundant echan from struct fsl_edma_desc
      dmaengine: fsl-edma: Use common dma_ll_desc in vchan
      dmaengine: Add DMA pool allocation in vchan_dma_ll_init() and API vchan_dma_ll_free()
      dmaengine: Move fsl_edma_(alloc|free)_desc() to common library
      dmaengine: virt-dma: split vchan_tx_prep() into init and internal helpers
      dmaengine: Factor out fsl-edma prep_memcpy into common vchan helper
      dmaengine: ll-dma: support multi-descriptor memcpy for large transfers
      dmaengine: move fsl-edma dma_[un]map_resource() to linked list library
      dmaengine: fsl-edma: use local soff/doff variables
      dmaengine: add vchan_dma_ll_prep_slave_{sg,cyclic} API

 drivers/dma/Kconfig           |   4 +
 drivers/dma/Makefile          |   1 +
 drivers/dma/fsl-edma-common.c | 448 +++++++++++++-----------------------------
 drivers/dma/fsl-edma-common.h |  35 +---
 drivers/dma/fsl-edma-main.c   |   8 +-
 drivers/dma/ll-dma.c          | 353 +++++++++++++++++++++++++++++++++
 drivers/dma/mcf-edma-main.c   |   6 +-
 drivers/dma/virt-dma.h        | 114 +++++++++--
 8 files changed, 606 insertions(+), 363 deletions(-)
---
base-commit: daf496ddce1475078b500e30e0a16de3d8fb9c8a
change-id: 20260119-dma_ll_comlib-ba69c554917d

Best regards,
--
Frank Li <Frank.Li@nxp.com>


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH RFC 01/12] dmaengine: Extend virt_chan for link list based DMA engines
  2026-01-28 18:05 [PATCH RFC 00/12] dmaengine: introduce common linked-list DMA controller framework Frank Li
@ 2026-01-28 18:05 ` Frank Li
  2026-01-28 18:05 ` [PATCH RFC 02/12] dmaengine: Add common dma_ll_desc and dma_linklist_item for link-list controllers Frank Li
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Frank Li @ 2026-01-28 18:05 UTC (permalink / raw)
  To: Vinod Koul; +Cc: linux-kernel, dmaengine, imx, joy.zou, Frank Li

Many DMA engines (such as fsl-edma, at-hdmac, and ste-dma40) use
linked-list descriptors for data transfers and share a large amount of
common logic. Add a basic framework to support these linked-list based DMA
engines and prepare for a common library.

Introduce vchan_dma_ll_terminate_all() as the first shared helper.
Additional common functionality will be added in follow-up patches.

Signed-off-by: Frank Li <Frank.Li@nxp.com>
---
 drivers/dma/Kconfig           |  4 +++
 drivers/dma/Makefile          |  1 +
 drivers/dma/fsl-edma-common.c | 35 ++++++++++++++++++-------
 drivers/dma/ll-dma.c          | 61 +++++++++++++++++++++++++++++++++++++++++++
 drivers/dma/virt-dma.h        | 19 ++++++++++++++
 5 files changed, 111 insertions(+), 9 deletions(-)

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index 8bb0a119ecd48a6695404d43fce225987c9c69ff..5a61907d4d9631e61cf0c44d4104983e9113f28f 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -47,6 +47,9 @@ config DMA_ENGINE
 config DMA_VIRTUAL_CHANNELS
 	tristate
 
+config DMA_LINKLIST
+	tristate
+
 config DMA_ACPI
 	def_bool y
 	depends on ACPI
@@ -221,6 +224,7 @@ config FSL_EDMA
 	depends on HAS_IOMEM
 	select DMA_ENGINE
 	select DMA_VIRTUAL_CHANNELS
+	select DMA_LINKLIST
 	help
 	  Support the Freescale eDMA engine with programmable channel
 	  multiplexing capability for DMA request sources(slot).
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index a54d7688392b1a0e956fa5d23633507f52f017d9..f1db081a8d2487968f0ca110b80706901f9903ae 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -6,6 +6,7 @@ subdir-ccflags-$(CONFIG_DMADEVICES_VDEBUG) += -DVERBOSE_DEBUG
 #core
 obj-$(CONFIG_DMA_ENGINE) += dmaengine.o
 obj-$(CONFIG_DMA_VIRTUAL_CHANNELS) += virt-dma.o
+obj-$(CONFIG_DMA_LINKLIST) += ll-dma.o
 obj-$(CONFIG_DMA_ACPI) += acpi-dma.o
 obj-$(CONFIG_DMA_OF) += of-dma.o
 
diff --git a/drivers/dma/fsl-edma-common.c b/drivers/dma/fsl-edma-common.c
index c4ac63d9612ce9f1f5826a2186938a785ed529d1..396ff6dfa99a150f9ce34effd64534e3d8e8576b 100644
--- a/drivers/dma/fsl-edma-common.c
+++ b/drivers/dma/fsl-edma-common.c
@@ -236,16 +236,11 @@ void fsl_edma_free_desc(struct virt_dma_desc *vdesc)
 int fsl_edma_terminate_all(struct dma_chan *chan)
 {
 	struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
-	unsigned long flags;
-	LIST_HEAD(head);
+	int ret;
 
-	spin_lock_irqsave(&fsl_chan->vchan.lock, flags);
-	fsl_edma_disable_request(fsl_chan);
-	fsl_chan->edesc = NULL;
-	fsl_chan->status = DMA_COMPLETE;
-	vchan_get_all_descriptors(&fsl_chan->vchan, &head);
-	spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
-	vchan_dma_desc_free_list(&fsl_chan->vchan, &head);
+	ret = vchan_dma_ll_terminate_all(chan);
+	if (ret)
+		return ret;
 
 	if (fsl_edma_drvflags(fsl_chan) & FSL_EDMA_DRV_HAS_PD)
 		pm_runtime_allow(fsl_chan->pd_dev);
@@ -830,6 +825,21 @@ void fsl_edma_issue_pending(struct dma_chan *chan)
 	spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
 }
 
+static int fsl_edma_ll_stop(struct dma_chan *chan)
+{
+	struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
+
+	fsl_edma_disable_request(fsl_chan);
+	fsl_chan->edesc = NULL;
+	fsl_chan->status = DMA_COMPLETE;
+
+	return 0;
+}
+
+static const struct dma_linklist_ops fsl_edma_ll_ops = {
+	.stop = fsl_edma_ll_stop,
+};
+
 int fsl_edma_alloc_chan_resources(struct dma_chan *chan)
 {
 	struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
@@ -838,6 +848,13 @@ int fsl_edma_alloc_chan_resources(struct dma_chan *chan)
 	if (fsl_edma_drvflags(fsl_chan) & FSL_EDMA_DRV_HAS_CHCLK)
 		clk_prepare_enable(fsl_chan->clk);
 
+	ret = vchan_dma_ll_init(to_virt_chan(chan), &fsl_edma_ll_ops,
+				fsl_edma_drvflags(fsl_chan) & FSL_EDMA_DRV_TCD64 ?
+				sizeof(struct fsl_edma_hw_tcd64) : sizeof(struct fsl_edma_hw_tcd),
+				32, 0);
+	if (ret)
+		return ret;
+
 	fsl_chan->tcd_pool = dma_pool_create("tcd_pool", chan->device->dev,
 				fsl_edma_drvflags(fsl_chan) & FSL_EDMA_DRV_TCD64 ?
 				sizeof(struct fsl_edma_hw_tcd64) : sizeof(struct fsl_edma_hw_tcd),
diff --git a/drivers/dma/ll-dma.c b/drivers/dma/ll-dma.c
new file mode 100644
index 0000000000000000000000000000000000000000..3845cca7926eb71f008cb98d8c622cb28a2369a5
--- /dev/null
+++ b/drivers/dma/ll-dma.c
@@ -0,0 +1,61 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Common library for DMA controllers that use linked-list DMA descriptors.
+ *
+ * In such controllers, each DMA descriptor contains a field that points
+ * to the next descriptor in the chain.
+ *
+ * All DMA descriptors have the same size.
+ *
+ *	┌──────┐    ┌──────┐    ┌──────┐
+ *	│      │ ┌─►│      │ ┌─►│      │
+ *	│      │ │  │      │ │  │      │
+ *	├──────┤ │  ├──────┤ │  ├──────┤
+ *	│ Next ├─┘  │ Next ├─┘  │ Next │
+ *	└──────┘    └──────┘    └──────┘
+ *
+ */
+#include <linux/cleanup.h>
+#include <linux/device.h>
+#include <linux/dmaengine.h>
+#include <linux/module.h>
+#include <linux/spinlock.h>
+
+#include "virt-dma.h"
+
+int vchan_dma_ll_init(struct virt_dma_chan *vc,
+		      const struct dma_linklist_ops *ops, size_t size,
+		      size_t align, size_t boundary)
+{
+	if (!ops || !ops->stop)
+		return -EINVAL;
+
+	vc->ll.ops = ops;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(vchan_dma_ll_init);
+
+int vchan_dma_ll_terminate_all(struct dma_chan *chan)
+{
+	struct virt_dma_chan *vchan = to_virt_chan(chan);
+	LIST_HEAD(head);
+	int ret;
+
+	scoped_guard(spinlock_irqsave, &vchan->lock) {
+		ret = vchan->ll.ops->stop(chan);
+		if (ret)
+			return ret;
+
+		vchan_get_all_descriptors(vchan, &head);
+	}
+
+	vchan_dma_desc_free_list(vchan, &head);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(vchan_dma_ll_terminate_all);
+
+MODULE_AUTHOR("Frank Li");
+MODULE_DESCRIPTION("Common library for linked-list DMA descriptors");
+MODULE_LICENSE("GPL");
diff --git a/drivers/dma/virt-dma.h b/drivers/dma/virt-dma.h
index 59d9eabc8b6744a439aeed3114164c5203341a6c..081eb910d0b0cd2b60232736587c698fff787cb9 100644
--- a/drivers/dma/virt-dma.h
+++ b/drivers/dma/virt-dma.h
@@ -19,11 +19,23 @@ struct virt_dma_desc {
 	struct list_head node;
 };
 
+struct dma_linklist_ops {
+	int (*stop)(struct dma_chan *chan);
+};
+
+struct dma_linklist {
+	const struct dma_linklist_ops *ops;
+};
+
 struct virt_dma_chan {
 	struct dma_chan	chan;
 	struct tasklet_struct task;
 	void (*desc_free)(struct virt_dma_desc *);
 
+#if IS_ENABLED(CONFIG_DMA_LINKLIST)
+	struct dma_linklist ll;
+#endif
+
 	spinlock_t lock;
 
 	/* protected by vc.lock */
@@ -234,4 +246,11 @@ static inline void vchan_synchronize(struct virt_dma_chan *vc)
 	vchan_dma_desc_free_list(vc, &head);
 }
 
+#if IS_ENABLED(CONFIG_DMA_LINKLIST)
+int vchan_dma_ll_init(struct virt_dma_chan *vc,
+		      const struct dma_linklist_ops *ops, size_t size,
+		      size_t align, size_t boundary);
+int vchan_dma_ll_terminate_all(struct dma_chan *chan);
+#endif
+
 #endif

-- 
2.34.1



* [PATCH RFC 02/12] dmaengine: Add common dma_ll_desc and dma_linklist_item for link-list controllers
  2026-01-28 18:05 [PATCH RFC 00/12] dmaengine: introduce common linked-list DMA controller framework Frank Li
  2026-01-28 18:05 ` [PATCH RFC 01/12] dmaengine: Extend virt_chan for link list based DMA engines Frank Li
@ 2026-01-28 18:05 ` Frank Li
  2026-01-28 18:05 ` [PATCH RFC 03/12] dmaengine: fsl-edma: Remove redundant echan from struct fsl_edma_desc Frank Li
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Frank Li @ 2026-01-28 18:05 UTC (permalink / raw)
  To: Vinod Koul; +Cc: linux-kernel, dmaengine, imx, joy.zou, Frank Li

Introduce common dma_ll_desc and dma_linklist_item structures for
link-list–based DMA controllers. This lays the groundwork for adding more
shared APIs to a common DMA link-list library and reduces duplication
across drivers.

Signed-off-by: Frank Li <Frank.Li@nxp.com>
---
 drivers/dma/virt-dma.h | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/drivers/dma/virt-dma.h b/drivers/dma/virt-dma.h
index 081eb910d0b0cd2b60232736587c698fff787cb9..82f3f8244f6eca036a027c9a4c9339fcb87e8d2c 100644
--- a/drivers/dma/virt-dma.h
+++ b/drivers/dma/virt-dma.h
@@ -19,11 +19,32 @@ struct virt_dma_desc {
 	struct list_head node;
 };
 
+struct dma_linklist_item {
+	dma_addr_t paddr;
+	void *vaddr;
+};
+
+/*
+ * Must be placed as the last member when a driver extends it:
+ *   struct vendor_dma_ll_desc {
+ *	...
+ *	struct dma_ll_desc ldesc;
+ *   }
+ */
+struct dma_ll_desc {
+	struct virt_dma_desc vdesc;
+	bool iscyclic;
+	enum dma_transfer_direction dir;
+	u32 n_its;
+	struct dma_linklist_item its[];
+};
+
 struct dma_linklist_ops {
 	int (*stop)(struct dma_chan *chan);
 };
 
 struct dma_linklist {
+	struct dma_pool *pool;
 	const struct dma_linklist_ops *ops;
 };
 
@@ -247,6 +268,11 @@ static inline void vchan_synchronize(struct virt_dma_chan *vc)
 }
 
 #if IS_ENABLED(CONFIG_DMA_LINKLIST)
+static inline struct dma_ll_desc *to_dma_ll_desc(struct virt_dma_desc *vdesc)
+{
+	return container_of(vdesc, struct dma_ll_desc, vdesc);
+}
+
 int vchan_dma_ll_init(struct virt_dma_chan *vc,
 		      const struct dma_linklist_ops *ops, size_t size,
 		      size_t align, size_t boundary);

-- 
2.34.1



* [PATCH RFC 03/12] dmaengine: fsl-edma: Remove redundant echan from struct fsl_edma_desc
  2026-01-28 18:05 [PATCH RFC 00/12] dmaengine: introduce common linked-list DMA controller framework Frank Li
  2026-01-28 18:05 ` [PATCH RFC 01/12] dmaengine: Extend virt_chan for link list based DMA engines Frank Li
  2026-01-28 18:05 ` [PATCH RFC 02/12] dmaengine: Add common dma_ll_desc and dma_linklist_item for link-list controllers Frank Li
@ 2026-01-28 18:05 ` Frank Li
  2026-01-28 18:05 ` [PATCH RFC 04/12] dmaengine: fsl-edma: Use common dma_ll_desc in vchan Frank Li
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Frank Li @ 2026-01-28 18:05 UTC (permalink / raw)
  To: Vinod Koul; +Cc: linux-kernel, dmaengine, imx, joy.zou, Frank Li

The echan pointer can be obtained from the dma_async_tx_descriptor embedded
in struct virt_dma_desc, so remove echan from struct fsl_edma_desc.

Signed-off-by: Frank Li <Frank.Li@nxp.com>
---
 drivers/dma/fsl-edma-common.c | 4 ++--
 drivers/dma/fsl-edma-common.h | 1 -
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/dma/fsl-edma-common.c b/drivers/dma/fsl-edma-common.c
index 396ff6dfa99a150f9ce34effd64534e3d8e8576b..61387c4edc910c8a806cc2c6f0fee2e690424bac 100644
--- a/drivers/dma/fsl-edma-common.c
+++ b/drivers/dma/fsl-edma-common.c
@@ -228,7 +228,8 @@ void fsl_edma_free_desc(struct virt_dma_desc *vdesc)
 
 	fsl_desc = to_fsl_edma_desc(vdesc);
 	for (i = 0; i < fsl_desc->n_tcds; i++)
-		dma_pool_free(fsl_desc->echan->tcd_pool, fsl_desc->tcd[i].vtcd,
+		dma_pool_free(to_fsl_edma_chan(vdesc->tx.chan)->tcd_pool,
+			      fsl_desc->tcd[i].vtcd,
 			      fsl_desc->tcd[i].ptcd);
 	kfree(fsl_desc);
 }
@@ -555,7 +556,6 @@ static struct fsl_edma_desc *fsl_edma_alloc_desc(struct fsl_edma_chan *fsl_chan,
 	if (!fsl_desc)
 		return NULL;
 
-	fsl_desc->echan = fsl_chan;
 	fsl_desc->n_tcds = sg_len;
 	for (i = 0; i < sg_len; i++) {
 		fsl_desc->tcd[i].vtcd = dma_pool_alloc(fsl_chan->tcd_pool,
diff --git a/drivers/dma/fsl-edma-common.h b/drivers/dma/fsl-edma-common.h
index 8e45770a0d3960ee34361fe5884a169de64e14a7..a0d83ad783f7a53caab93d280c6e40f63b8e9e5c 100644
--- a/drivers/dma/fsl-edma-common.h
+++ b/drivers/dma/fsl-edma-common.h
@@ -196,7 +196,6 @@ struct fsl_edma_chan {
 
 struct fsl_edma_desc {
 	struct virt_dma_desc		vdesc;
-	struct fsl_edma_chan		*echan;
 	bool				iscyclic;
 	enum dma_transfer_direction	dirn;
 	unsigned int			n_tcds;

-- 
2.34.1



* [PATCH RFC 04/12] dmaengine: fsl-edma: Use common dma_ll_desc in vchan
  2026-01-28 18:05 [PATCH RFC 00/12] dmaengine: introduce common linked-list DMA controller framework Frank Li
                   ` (2 preceding siblings ...)
  2026-01-28 18:05 ` [PATCH RFC 03/12] dmaengine: fsl-edma: Remove redundant echan from struct fsl_edma_desc Frank Li
@ 2026-01-28 18:05 ` Frank Li
  2026-01-28 18:05 ` [PATCH RFC 05/12] dmaengine: Add DMA pool allocation in vchan_dma_ll_init() and API vchan_dma_ll_free() Frank Li
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Frank Li @ 2026-01-28 18:05 UTC (permalink / raw)
  To: Vinod Koul; +Cc: linux-kernel, dmaengine, imx, joy.zou, Frank Li

Use the common dma_ll_desc structure in the virtual channel implementation
to prepare for adding more shared APIs to the DMA link-list library.

No functional change.

Signed-off-by: Frank Li <Frank.Li@nxp.com>
---
 drivers/dma/fsl-edma-common.c | 95 ++++++++++++++++++++++---------------------
 drivers/dma/fsl-edma-common.h | 20 +--------
 2 files changed, 50 insertions(+), 65 deletions(-)

diff --git a/drivers/dma/fsl-edma-common.c b/drivers/dma/fsl-edma-common.c
index 61387c4edc910c8a806cc2c6f0fee2e690424bac..17a8e28037f5e61d4aafbd7f32bde407ecc01a4d 100644
--- a/drivers/dma/fsl-edma-common.c
+++ b/drivers/dma/fsl-edma-common.c
@@ -223,14 +223,14 @@ static unsigned int fsl_edma_get_tcd_attr(enum dma_slave_buswidth src_addr_width
 
 void fsl_edma_free_desc(struct virt_dma_desc *vdesc)
 {
-	struct fsl_edma_desc *fsl_desc;
+	struct dma_ll_desc *fsl_desc;
 	int i;
 
-	fsl_desc = to_fsl_edma_desc(vdesc);
-	for (i = 0; i < fsl_desc->n_tcds; i++)
-		dma_pool_free(to_fsl_edma_chan(vdesc->tx.chan)->tcd_pool,
-			      fsl_desc->tcd[i].vtcd,
-			      fsl_desc->tcd[i].ptcd);
+	fsl_desc = to_dma_ll_desc(vdesc);
+	for (i = 0; i < fsl_desc->n_its; i++)
+		dma_pool_free(to_virt_chan(vdesc->tx.chan)->ll.pool,
+			      fsl_desc->its[i].vaddr,
+			      fsl_desc->its[i].paddr);
 	kfree(fsl_desc);
 }
 
@@ -342,19 +342,19 @@ int fsl_edma_slave_config(struct dma_chan *chan,
 static size_t fsl_edma_desc_residue(struct fsl_edma_chan *fsl_chan,
 		struct virt_dma_desc *vdesc, bool in_progress)
 {
-	struct fsl_edma_desc *edesc = fsl_chan->edesc;
-	enum dma_transfer_direction dir = edesc->dirn;
+	struct dma_ll_desc *edesc = fsl_chan->edesc;
+	enum dma_transfer_direction dir = edesc->dir;
 	dma_addr_t cur_addr, dma_addr, old_addr;
 	size_t len, size;
 	u32 nbytes = 0;
 	int i;
 
 	/* calculate the total size in this desc */
-	for (len = i = 0; i < fsl_chan->edesc->n_tcds; i++) {
-		nbytes = fsl_edma_get_tcd_to_cpu(fsl_chan, edesc->tcd[i].vtcd, nbytes);
+	for (len = i = 0; i < fsl_chan->edesc->n_its; i++) {
+		nbytes = fsl_edma_get_tcd_to_cpu(fsl_chan, edesc->its[i].vaddr, nbytes);
 		if (nbytes & (EDMA_V3_TCD_NBYTES_DMLOE | EDMA_V3_TCD_NBYTES_SMLOE))
 			nbytes = EDMA_V3_TCD_NBYTES_MLOFF_NBYTES(nbytes);
-		len += nbytes * fsl_edma_get_tcd_to_cpu(fsl_chan, edesc->tcd[i].vtcd, biter);
+		len += nbytes * fsl_edma_get_tcd_to_cpu(fsl_chan, edesc->its[i].vaddr, biter);
 	}
 
 	if (!in_progress)
@@ -372,17 +372,17 @@ static size_t fsl_edma_desc_residue(struct fsl_edma_chan *fsl_chan,
 	} while (upper_32_bits(cur_addr) != upper_32_bits(old_addr));
 
 	/* figure out the finished and calculate the residue */
-	for (i = 0; i < fsl_chan->edesc->n_tcds; i++) {
-		nbytes = fsl_edma_get_tcd_to_cpu(fsl_chan, edesc->tcd[i].vtcd, nbytes);
+	for (i = 0; i < fsl_chan->edesc->n_its; i++) {
+		nbytes = fsl_edma_get_tcd_to_cpu(fsl_chan, edesc->its[i].vaddr, nbytes);
 		if (nbytes & (EDMA_V3_TCD_NBYTES_DMLOE | EDMA_V3_TCD_NBYTES_SMLOE))
 			nbytes = EDMA_V3_TCD_NBYTES_MLOFF_NBYTES(nbytes);
 
-		size = nbytes * fsl_edma_get_tcd_to_cpu(fsl_chan, edesc->tcd[i].vtcd, biter);
+		size = nbytes * fsl_edma_get_tcd_to_cpu(fsl_chan, edesc->its[i].vaddr, biter);
 
 		if (dir == DMA_MEM_TO_DEV)
-			dma_addr = fsl_edma_get_tcd_to_cpu(fsl_chan, edesc->tcd[i].vtcd, saddr);
+			dma_addr = fsl_edma_get_tcd_to_cpu(fsl_chan, edesc->its[i].vaddr, saddr);
 		else
-			dma_addr = fsl_edma_get_tcd_to_cpu(fsl_chan, edesc->tcd[i].vtcd, daddr);
+			dma_addr = fsl_edma_get_tcd_to_cpu(fsl_chan, edesc->its[i].vaddr, daddr);
 
 		len -= size;
 		if (cur_addr >= dma_addr && cur_addr < dma_addr + size) {
@@ -546,29 +546,30 @@ void fsl_edma_fill_tcd(struct fsl_edma_chan *fsl_chan,
 	trace_edma_fill_tcd(fsl_chan, tcd);
 }
 
-static struct fsl_edma_desc *fsl_edma_alloc_desc(struct fsl_edma_chan *fsl_chan,
-		int sg_len)
+static struct dma_ll_desc *
+fsl_edma_alloc_desc(struct fsl_edma_chan *fsl_chan, int sg_len)
 {
-	struct fsl_edma_desc *fsl_desc;
+	struct dma_ll_desc *fsl_desc;
 	int i;
 
-	fsl_desc = kzalloc(struct_size(fsl_desc, tcd, sg_len), GFP_NOWAIT);
+	fsl_desc = kzalloc(struct_size(fsl_desc, its, sg_len), GFP_NOWAIT);
 	if (!fsl_desc)
 		return NULL;
 
-	fsl_desc->n_tcds = sg_len;
+	fsl_desc->n_its = sg_len;
 	for (i = 0; i < sg_len; i++) {
-		fsl_desc->tcd[i].vtcd = dma_pool_alloc(fsl_chan->tcd_pool,
-					GFP_NOWAIT, &fsl_desc->tcd[i].ptcd);
-		if (!fsl_desc->tcd[i].vtcd)
+		fsl_desc->its[i].vaddr = dma_pool_alloc(fsl_chan->vchan.ll.pool,
+							GFP_NOWAIT,
+							&fsl_desc->its[i].paddr);
+		if (!fsl_desc->its[i].vaddr)
 			goto err;
 	}
 	return fsl_desc;
 
 err:
 	while (--i >= 0)
-		dma_pool_free(fsl_chan->tcd_pool, fsl_desc->tcd[i].vtcd,
-				fsl_desc->tcd[i].ptcd);
+		dma_pool_free(fsl_chan->vchan.ll.pool, fsl_desc->its[i].vaddr,
+			      fsl_desc->its[i].paddr);
 	kfree(fsl_desc);
 	return NULL;
 }
@@ -580,7 +581,7 @@ struct dma_async_tx_descriptor *fsl_edma_prep_dma_cyclic(
 {
 	struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
 	struct dma_slave_config *cfg = &chan->config;
-	struct fsl_edma_desc *fsl_desc;
+	struct dma_ll_desc *fsl_desc;
 	dma_addr_t dma_buf_next;
 	bool major_int = true;
 	int sg_len, i;
@@ -599,7 +600,7 @@ struct dma_async_tx_descriptor *fsl_edma_prep_dma_cyclic(
 	if (!fsl_desc)
 		return NULL;
 	fsl_desc->iscyclic = true;
-	fsl_desc->dirn = direction;
+	fsl_desc->dir = direction;
 
 	dma_buf_next = dma_addr;
 	if (direction == DMA_MEM_TO_DEV) {
@@ -625,7 +626,7 @@ struct dma_async_tx_descriptor *fsl_edma_prep_dma_cyclic(
 			dma_buf_next = dma_addr;
 
 		/* get next sg's physical address */
-		last_sg = fsl_desc->tcd[(i + 1) % sg_len].ptcd;
+		last_sg = fsl_desc->its[(i + 1) % sg_len].paddr;
 
 		if (direction == DMA_MEM_TO_DEV) {
 			src_addr = dma_buf_next;
@@ -649,7 +650,7 @@ struct dma_async_tx_descriptor *fsl_edma_prep_dma_cyclic(
 			major_int = false;
 		}
 
-		fsl_edma_fill_tcd(fsl_chan, fsl_desc->tcd[i].vtcd, src_addr, dst_addr,
+		fsl_edma_fill_tcd(fsl_chan, fsl_desc->its[i].vaddr, src_addr, dst_addr,
 				  fsl_chan->attr, soff, nbytes, 0, iter,
 				  iter, doff, last_sg, major_int, false, true);
 		dma_buf_next += period_len;
@@ -665,7 +666,7 @@ struct dma_async_tx_descriptor *fsl_edma_prep_slave_sg(
 {
 	struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
 	struct dma_slave_config *cfg = &chan->config;
-	struct fsl_edma_desc *fsl_desc;
+	struct dma_ll_desc *fsl_desc;
 	struct scatterlist *sg;
 	dma_addr_t src_addr, dst_addr, last_sg;
 	u16 soff, doff, iter;
@@ -682,7 +683,7 @@ struct dma_async_tx_descriptor *fsl_edma_prep_slave_sg(
 	if (!fsl_desc)
 		return NULL;
 	fsl_desc->iscyclic = false;
-	fsl_desc->dirn = direction;
+	fsl_desc->dir = direction;
 
 	if (direction == DMA_MEM_TO_DEV) {
 		if (!cfg->src_addr_width)
@@ -745,14 +746,14 @@ struct dma_async_tx_descriptor *fsl_edma_prep_slave_sg(
 		}
 		iter = sg_dma_len(sg) / nbytes;
 		if (i < sg_len - 1) {
-			last_sg = fsl_desc->tcd[(i + 1)].ptcd;
-			fsl_edma_fill_tcd(fsl_chan, fsl_desc->tcd[i].vtcd, src_addr,
+			last_sg = fsl_desc->its[(i + 1)].paddr;
+			fsl_edma_fill_tcd(fsl_chan, fsl_desc->its[i].vaddr, src_addr,
 					  dst_addr, fsl_chan->attr, soff,
 					  nbytes, 0, iter, iter, doff, last_sg,
 					  false, false, true);
 		} else {
 			last_sg = 0;
-			fsl_edma_fill_tcd(fsl_chan, fsl_desc->tcd[i].vtcd, src_addr,
+			fsl_edma_fill_tcd(fsl_chan, fsl_desc->its[i].vaddr, src_addr,
 					  dst_addr, fsl_chan->attr, soff,
 					  nbytes, 0, iter, iter, doff, last_sg,
 					  true, true, false);
@@ -767,7 +768,7 @@ struct dma_async_tx_descriptor *fsl_edma_prep_memcpy(struct dma_chan *chan,
 						     size_t len, unsigned long flags)
 {
 	struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
-	struct fsl_edma_desc *fsl_desc;
+	struct dma_ll_desc *fsl_desc;
 	u32 src_bus_width, dst_bus_width;
 
 	src_bus_width = min_t(u32, DMA_SLAVE_BUSWIDTH_32_BYTES, 1 << (ffs(dma_src) - 1));
@@ -783,10 +784,10 @@ struct dma_async_tx_descriptor *fsl_edma_prep_memcpy(struct dma_chan *chan,
 		fsl_chan->is_remote = true;
 
 	/* To match with copy_align and max_seg_size so 1 tcd is enough */
-	fsl_edma_fill_tcd(fsl_chan, fsl_desc->tcd[0].vtcd, dma_src, dma_dst,
-			fsl_edma_get_tcd_attr(src_bus_width, dst_bus_width),
-			src_bus_width, len, 0, 1, 1, dst_bus_width, 0, true,
-			true, false);
+	fsl_edma_fill_tcd(fsl_chan, fsl_desc->its[0].vaddr, dma_src, dma_dst,
+			  fsl_edma_get_tcd_attr(src_bus_width, dst_bus_width),
+			  src_bus_width, len, 0, 1, 1, dst_bus_width, 0, true,
+			  true, false);
 
 	return vchan_tx_prep(&fsl_chan->vchan, &fsl_desc->vdesc, flags);
 }
@@ -800,8 +801,8 @@ void fsl_edma_xfer_desc(struct fsl_edma_chan *fsl_chan)
 	vdesc = vchan_next_desc(&fsl_chan->vchan);
 	if (!vdesc)
 		return;
-	fsl_chan->edesc = to_fsl_edma_desc(vdesc);
-	fsl_edma_set_tcd_regs(fsl_chan, fsl_chan->edesc->tcd[0].vtcd);
+	fsl_chan->edesc = to_dma_ll_desc(vdesc);
+	fsl_edma_set_tcd_regs(fsl_chan, fsl_chan->edesc->its[0].vaddr);
 	fsl_edma_enable_request(fsl_chan);
 	fsl_chan->status = DMA_IN_PROGRESS;
 }
@@ -855,7 +856,8 @@ int fsl_edma_alloc_chan_resources(struct dma_chan *chan)
 	if (ret)
 		return ret;
 
-	fsl_chan->tcd_pool = dma_pool_create("tcd_pool", chan->device->dev,
+	fsl_chan->vchan.ll.pool =
+		dma_pool_create("tcd_pool", chan->device->dev,
 				fsl_edma_drvflags(fsl_chan) & FSL_EDMA_DRV_TCD64 ?
 				sizeof(struct fsl_edma_hw_tcd64) : sizeof(struct fsl_edma_hw_tcd),
 				32, 0);
@@ -880,7 +882,8 @@ int fsl_edma_alloc_chan_resources(struct dma_chan *chan)
 	if (fsl_chan->txirq)
 		free_irq(fsl_chan->txirq, fsl_chan);
 err_txirq:
-	dma_pool_destroy(fsl_chan->tcd_pool);
+	dma_pool_destroy(fsl_chan->vchan.ll.pool);
+	clk_disable_unprepare(fsl_chan->clk);
 
 	return ret;
 }
@@ -907,8 +910,8 @@ void fsl_edma_free_chan_resources(struct dma_chan *chan)
 		free_irq(fsl_chan->errirq, fsl_chan);
 
 	vchan_dma_desc_free_list(&fsl_chan->vchan, &head);
-	dma_pool_destroy(fsl_chan->tcd_pool);
-	fsl_chan->tcd_pool = NULL;
+	dma_pool_destroy(fsl_chan->vchan.ll.pool);
+	fsl_chan->vchan.ll.pool = NULL;
 	fsl_chan->is_sw = false;
 	fsl_chan->srcid = 0;
 	fsl_chan->is_remote = false;
diff --git a/drivers/dma/fsl-edma-common.h b/drivers/dma/fsl-edma-common.h
index a0d83ad783f7a53caab93d280c6e40f63b8e9e5c..56d219d57b852e0769cbead11fadac89913747e2 100644
--- a/drivers/dma/fsl-edma-common.h
+++ b/drivers/dma/fsl-edma-common.h
@@ -155,17 +155,12 @@ struct edma_regs {
 	void __iomem *errl;
 };
 
-struct fsl_edma_sw_tcd {
-	dma_addr_t			ptcd;
-	void				*vtcd;
-};
-
 struct fsl_edma_chan {
 	struct virt_dma_chan		vchan;
 	enum dma_status			status;
 	enum fsl_edma_pm_state		pm_state;
 	struct fsl_edma_engine		*edma;
-	struct fsl_edma_desc		*edesc;
+	struct dma_ll_desc		*edesc;
 	u32				attr;
 	bool                            is_sw;
 	struct dma_pool			*tcd_pool;
@@ -194,14 +189,6 @@ struct fsl_edma_chan {
 	bool				is_multi_fifo;
 };
 
-struct fsl_edma_desc {
-	struct virt_dma_desc		vdesc;
-	bool				iscyclic;
-	enum dma_transfer_direction	dirn;
-	unsigned int			n_tcds;
-	struct fsl_edma_sw_tcd		tcd[];
-};
-
 #define FSL_EDMA_DRV_HAS_DMACLK		BIT(0)
 #define FSL_EDMA_DRV_MUX_SWAP		BIT(1)
 #define FSL_EDMA_DRV_CONFIG32		BIT(2)
@@ -468,11 +455,6 @@ static inline struct fsl_edma_chan *to_fsl_edma_chan(struct dma_chan *chan)
 	return container_of(chan, struct fsl_edma_chan, vchan.chan);
 }
 
-static inline struct fsl_edma_desc *to_fsl_edma_desc(struct virt_dma_desc *vd)
-{
-	return container_of(vd, struct fsl_edma_desc, vdesc);
-}
-
 static inline void fsl_edma_err_chan_handler(struct fsl_edma_chan *fsl_chan)
 {
 	fsl_chan->status = DMA_ERROR;

-- 
2.34.1



* [PATCH RFC 05/12] dmaengine: Add DMA pool allocation in vchan_dma_ll_init() and API vchan_dma_ll_free()
  2026-01-28 18:05 [PATCH RFC 00/12] dmaengine: introduce common linked-list DMA controller framework Frank Li
                   ` (3 preceding siblings ...)
  2026-01-28 18:05 ` [PATCH RFC 04/12] dmaengine: fsl-edma: Use common dma_ll_desc in vchan Frank Li
@ 2026-01-28 18:05 ` Frank Li
  2026-01-28 18:05 ` [PATCH RFC 06/12] dmaengine: Move fsl_edma_(alloc|free)_desc() to common library Frank Li
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Frank Li @ 2026-01-28 18:05 UTC (permalink / raw)
  To: Vinod Koul; +Cc: linux-kernel, dmaengine, imx, joy.zou, Frank Li

Add DMA pool allocation in vchan_dma_ll_init() and API vchan_dma_ll_free().
Update fsl-edma to remove its local DMA pool create/free logic, as this is
now handled by the common library.

No functional change.

Signed-off-by: Frank Li <Frank.Li@nxp.com>
---
 drivers/dma/fsl-edma-common.c | 11 ++---------
 drivers/dma/ll-dma.c          | 17 +++++++++++++++++
 drivers/dma/virt-dma.h        |  1 +
 3 files changed, 20 insertions(+), 9 deletions(-)

diff --git a/drivers/dma/fsl-edma-common.c b/drivers/dma/fsl-edma-common.c
index 17a8e28037f5e61d4aafbd7f32bde407ecc01a4d..1b5dcb4c333e7b9a0b1b3bd7964dcff94641bd79 100644
--- a/drivers/dma/fsl-edma-common.c
+++ b/drivers/dma/fsl-edma-common.c
@@ -856,12 +856,6 @@ int fsl_edma_alloc_chan_resources(struct dma_chan *chan)
 	if (ret)
 		return ret;
 
-	fsl_chan->vchan.ll.pool =
-		dma_pool_create("tcd_pool", chan->device->dev,
-				fsl_edma_drvflags(fsl_chan) & FSL_EDMA_DRV_TCD64 ?
-				sizeof(struct fsl_edma_hw_tcd64) : sizeof(struct fsl_edma_hw_tcd),
-				32, 0);
-
 	if (fsl_chan->txirq)
 		ret = request_irq(fsl_chan->txirq, fsl_chan->irq_handler, IRQF_SHARED,
 				 fsl_chan->chan_name, fsl_chan);
@@ -882,7 +876,7 @@ int fsl_edma_alloc_chan_resources(struct dma_chan *chan)
 	if (fsl_chan->txirq)
 		free_irq(fsl_chan->txirq, fsl_chan);
 err_txirq:
-	dma_pool_destroy(fsl_chan->vchan.ll.pool);
+	vchan_dma_ll_free(&fsl_chan->vchan);
 	clk_disable_unprepare(fsl_chan->clk);
 
 	return ret;
@@ -910,8 +904,7 @@ void fsl_edma_free_chan_resources(struct dma_chan *chan)
 		free_irq(fsl_chan->errirq, fsl_chan);
 
 	vchan_dma_desc_free_list(&fsl_chan->vchan, &head);
-	dma_pool_destroy(fsl_chan->vchan.ll.pool);
-	fsl_chan->vchan.ll.pool = NULL;
+	vchan_dma_ll_free(&fsl_chan->vchan);
 	fsl_chan->is_sw = false;
 	fsl_chan->srcid = 0;
 	fsl_chan->is_remote = false;
diff --git a/drivers/dma/ll-dma.c b/drivers/dma/ll-dma.c
index 3845cca7926eb71f008cb98d8c622cb28a2369a5..3b6de65ae83c070d2ca588abf6bca2c49c1d7bd2 100644
--- a/drivers/dma/ll-dma.c
+++ b/drivers/dma/ll-dma.c
@@ -17,6 +17,7 @@
  */
 #include <linux/cleanup.h>
 #include <linux/device.h>
+#include <linux/dmapool.h>
 #include <linux/dmaengine.h>
 #include <linux/module.h>
 #include <linux/spinlock.h>
@@ -32,10 +33,26 @@ int vchan_dma_ll_init(struct virt_dma_chan *vc,
 
 	vc->ll.ops = ops;
 
+	vc->ll.pool = dma_pool_create(dev_name(vc->chan.device->dev),
+				      vc->chan.device->dev, size, align,
+				      boundary);
+	if (!vc->ll.pool) {
+		dev_err(&vc->chan.dev->device,
+			"Unable to allocate descriptor pool\n");
+		return -ENOMEM;
+	}
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(vchan_dma_ll_init);
 
+void vchan_dma_ll_free(struct virt_dma_chan *vc)
+{
+	dma_pool_destroy(vc->ll.pool);
+	vc->ll.pool = NULL;
+}
+EXPORT_SYMBOL_GPL(vchan_dma_ll_free);
+
 int vchan_dma_ll_terminate_all(struct dma_chan *chan)
 {
 	struct virt_dma_chan *vchan = to_virt_chan(chan);
diff --git a/drivers/dma/virt-dma.h b/drivers/dma/virt-dma.h
index 82f3f8244f6eca036a027c9a4c9339fcb87e8d2c..e3311be3d917ea1e0d5f4fb0e6781c7d0737c0a5 100644
--- a/drivers/dma/virt-dma.h
+++ b/drivers/dma/virt-dma.h
@@ -276,6 +276,7 @@ static inline struct dma_ll_desc *to_dma_ll_desc(struct virt_dma_desc *vdesc)
 int vchan_dma_ll_init(struct virt_dma_chan *vc,
 		      const struct dma_linklist_ops *ops, size_t size,
 		      size_t align, size_t boundary);
+void vchan_dma_ll_free(struct virt_dma_chan *vc);
 int vchan_dma_ll_terminate_all(struct dma_chan *chan);
 #endif
 

-- 
2.34.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH RFC 06/12] dmaengine: Move fsl_edma_(alloc|free)_desc() to common library
  2026-01-28 18:05 [PATCH RFC 00/12] dmaengine: introduce common linked-list DMA controller framework Frank Li
                   ` (4 preceding siblings ...)
  2026-01-28 18:05 ` [PATCH RFC 05/12] dmaengine: Add DMA pool allocation in vchan_dma_ll_init() and API vchan_dma_ll_free() Frank Li
@ 2026-01-28 18:05 ` Frank Li
  2026-01-28 18:05 ` [PATCH RFC 07/12] dmaengine: virt-dma: split vchan_tx_prep() into init and internal helpers Frank Li
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Frank Li @ 2026-01-28 18:05 UTC (permalink / raw)
  To: Vinod Koul; +Cc: linux-kernel, dmaengine, imx, joy.zou, Frank Li

Move fsl_edma_(alloc|free)_desc() to the common DMA link-list library and
rename them to vchan_dma_ll_(alloc|free)_desc().

Remove the "fsl_" prefix from local variables accordingly.

No functional change.

Signed-off-by: Frank Li <Frank.Li@nxp.com>
---
 drivers/dma/fsl-edma-common.c | 47 +++----------------------------------------
 drivers/dma/fsl-edma-common.h |  1 -
 drivers/dma/fsl-edma-main.c   |  2 +-
 drivers/dma/ll-dma.c          | 43 +++++++++++++++++++++++++++++++++++++++
 drivers/dma/mcf-edma-main.c   |  2 +-
 drivers/dma/virt-dma.h        |  2 ++
 6 files changed, 50 insertions(+), 47 deletions(-)

diff --git a/drivers/dma/fsl-edma-common.c b/drivers/dma/fsl-edma-common.c
index 1b5dcb4c333e7b9a0b1b3bd7964dcff94641bd79..20b954221c2e9b3b3a6849c1f0d4ca68efecb32e 100644
--- a/drivers/dma/fsl-edma-common.c
+++ b/drivers/dma/fsl-edma-common.c
@@ -221,19 +221,6 @@ static unsigned int fsl_edma_get_tcd_attr(enum dma_slave_buswidth src_addr_width
 	return dst_val | (src_val << 8);
 }
 
-void fsl_edma_free_desc(struct virt_dma_desc *vdesc)
-{
-	struct dma_ll_desc *fsl_desc;
-	int i;
-
-	fsl_desc = to_dma_ll_desc(vdesc);
-	for (i = 0; i < fsl_desc->n_its; i++)
-		dma_pool_free(to_virt_chan(vdesc->tx.chan)->ll.pool,
-			      fsl_desc->its[i].vaddr,
-			      fsl_desc->its[i].paddr);
-	kfree(fsl_desc);
-}
-
 int fsl_edma_terminate_all(struct dma_chan *chan)
 {
 	struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
@@ -546,34 +533,6 @@ void fsl_edma_fill_tcd(struct fsl_edma_chan *fsl_chan,
 	trace_edma_fill_tcd(fsl_chan, tcd);
 }
 
-static struct dma_ll_desc *
-fsl_edma_alloc_desc(struct fsl_edma_chan *fsl_chan, int sg_len)
-{
-	struct dma_ll_desc *fsl_desc;
-	int i;
-
-	fsl_desc = kzalloc(struct_size(fsl_desc, its, sg_len), GFP_NOWAIT);
-	if (!fsl_desc)
-		return NULL;
-
-	fsl_desc->n_its = sg_len;
-	for (i = 0; i < sg_len; i++) {
-		fsl_desc->its[i].vaddr = dma_pool_alloc(fsl_chan->vchan.ll.pool,
-							GFP_NOWAIT,
-							&fsl_desc->its[i].paddr);
-		if (!fsl_desc->its[i].vaddr)
-			goto err;
-	}
-	return fsl_desc;
-
-err:
-	while (--i >= 0)
-		dma_pool_free(fsl_chan->vchan.ll.pool, fsl_desc->its[i].vaddr,
-			      fsl_desc->its[i].paddr);
-	kfree(fsl_desc);
-	return NULL;
-}
-
 struct dma_async_tx_descriptor *fsl_edma_prep_dma_cyclic(
 		struct dma_chan *chan, dma_addr_t dma_addr, size_t buf_len,
 		size_t period_len, enum dma_transfer_direction direction,
@@ -596,7 +555,7 @@ struct dma_async_tx_descriptor *fsl_edma_prep_dma_cyclic(
 		return NULL;
 
 	sg_len = buf_len / period_len;
-	fsl_desc = fsl_edma_alloc_desc(fsl_chan, sg_len);
+	fsl_desc = vchan_dma_ll_alloc_desc(chan, sg_len);
 	if (!fsl_desc)
 		return NULL;
 	fsl_desc->iscyclic = true;
@@ -679,7 +638,7 @@ struct dma_async_tx_descriptor *fsl_edma_prep_slave_sg(
 	if (!fsl_edma_prep_slave_dma(fsl_chan, direction))
 		return NULL;
 
-	fsl_desc = fsl_edma_alloc_desc(fsl_chan, sg_len);
+	fsl_desc = vchan_dma_ll_alloc_desc(chan, sg_len);
 	if (!fsl_desc)
 		return NULL;
 	fsl_desc->iscyclic = false;
@@ -774,7 +733,7 @@ struct dma_async_tx_descriptor *fsl_edma_prep_memcpy(struct dma_chan *chan,
 	src_bus_width = min_t(u32, DMA_SLAVE_BUSWIDTH_32_BYTES, 1 << (ffs(dma_src) - 1));
 	dst_bus_width = min_t(u32, DMA_SLAVE_BUSWIDTH_32_BYTES, 1 << (ffs(dma_dst) - 1));
 
-	fsl_desc = fsl_edma_alloc_desc(fsl_chan, 1);
+	fsl_desc = vchan_dma_ll_alloc_desc(chan, 1);
 	if (!fsl_desc)
 		return NULL;
 	fsl_desc->iscyclic = false;
diff --git a/drivers/dma/fsl-edma-common.h b/drivers/dma/fsl-edma-common.h
index 56d219d57b852e0769cbead11fadac89913747e2..654d05f06b2c1817e68e7afaf9de3439285d2978 100644
--- a/drivers/dma/fsl-edma-common.h
+++ b/drivers/dma/fsl-edma-common.h
@@ -464,7 +464,6 @@ void fsl_edma_tx_chan_handler(struct fsl_edma_chan *fsl_chan);
 void fsl_edma_disable_request(struct fsl_edma_chan *fsl_chan);
 void fsl_edma_chan_mux(struct fsl_edma_chan *fsl_chan,
 			unsigned int slot, bool enable);
-void fsl_edma_free_desc(struct virt_dma_desc *vdesc);
 int fsl_edma_terminate_all(struct dma_chan *chan);
 int fsl_edma_pause(struct dma_chan *chan);
 int fsl_edma_resume(struct dma_chan *chan);
diff --git a/drivers/dma/fsl-edma-main.c b/drivers/dma/fsl-edma-main.c
index a753b7cbfa7a3369d17314bc5bc9139c9f8e5c27..354e4ac5e46c920dd66ec1479a64c75a609c186d 100644
--- a/drivers/dma/fsl-edma-main.c
+++ b/drivers/dma/fsl-edma-main.c
@@ -808,7 +808,7 @@ static int fsl_edma_probe(struct platform_device *pdev)
 		fsl_chan->pm_state = RUNNING;
 		fsl_chan->srcid = 0;
 		fsl_chan->dma_dir = DMA_NONE;
-		fsl_chan->vchan.desc_free = fsl_edma_free_desc;
+		fsl_chan->vchan.desc_free = vchan_dma_ll_free_desc;
 
 		len = (drvdata->flags & FSL_EDMA_DRV_SPLIT_REG) ?
 				offsetof(struct fsl_edma3_ch_reg, tcd) : 0;
diff --git a/drivers/dma/ll-dma.c b/drivers/dma/ll-dma.c
index 3b6de65ae83c070d2ca588abf6bca2c49c1d7bd2..ff9eac43886255c18550c978184c0801456fefe9 100644
--- a/drivers/dma/ll-dma.c
+++ b/drivers/dma/ll-dma.c
@@ -53,6 +53,49 @@ void vchan_dma_ll_free(struct virt_dma_chan *vc)
 }
 EXPORT_SYMBOL_GPL(vchan_dma_ll_free);
 
+struct dma_ll_desc *vchan_dma_ll_alloc_desc(struct dma_chan *chan, u32 n)
+{
+	struct virt_dma_chan *vchan = to_virt_chan(chan);
+	struct dma_ll_desc *desc;
+	int i;
+
+	desc = kzalloc(struct_size(desc, its, n), GFP_NOWAIT);
+	if (!desc)
+		return NULL;
+
+	desc->n_its = n;
+
+	for (i = 0; i < n; i++) {
+		desc->its[i].vaddr = dma_pool_alloc(vchan->ll.pool, GFP_NOWAIT,
+						    &desc->its[i].paddr);
+		if (!desc->its[i].vaddr)
+			goto err;
+	}
+
+	return desc;
+
+err:
+	while (--i >= 0)
+		dma_pool_free(vchan->ll.pool, desc->its[i].vaddr,
+			      desc->its[i].paddr);
+	kfree(desc);
+	return NULL;
+}
+EXPORT_SYMBOL_GPL(vchan_dma_ll_alloc_desc);
+
+void vchan_dma_ll_free_desc(struct virt_dma_desc *vdesc)
+{
+	struct dma_ll_desc *desc = to_dma_ll_desc(vdesc);
+	struct virt_dma_chan *vchan = to_virt_chan(vdesc->tx.chan);
+	int i;
+
+	for (i = 0; i < desc->n_its; i++)
+		dma_pool_free(vchan->ll.pool, desc->its[i].vaddr,
+			      desc->its[i].paddr);
+	kfree(desc);
+}
+EXPORT_SYMBOL_GPL(vchan_dma_ll_free_desc);
+
 int vchan_dma_ll_terminate_all(struct dma_chan *chan)
 {
 	struct virt_dma_chan *vchan = to_virt_chan(chan);
diff --git a/drivers/dma/mcf-edma-main.c b/drivers/dma/mcf-edma-main.c
index 9e1c6400c77be237684855759382d7b7bd2e6ea0..60c5b928ade74d36c8f4206777921544787f6cd8 100644
--- a/drivers/dma/mcf-edma-main.c
+++ b/drivers/dma/mcf-edma-main.c
@@ -196,7 +196,7 @@ static int mcf_edma_probe(struct platform_device *pdev)
 		mcf_chan->edma = mcf_edma;
 		mcf_chan->srcid = i;
 		mcf_chan->dma_dir = DMA_NONE;
-		mcf_chan->vchan.desc_free = fsl_edma_free_desc;
+		mcf_chan->vchan.desc_free = vchan_dma_ll_free_desc;
 		vchan_init(&mcf_chan->vchan, &mcf_edma->dma_dev);
 		mcf_chan->tcd = mcf_edma->membase + EDMA_TCD
 				+ i * sizeof(struct fsl_edma_hw_tcd);
diff --git a/drivers/dma/virt-dma.h b/drivers/dma/virt-dma.h
index e3311be3d917ea1e0d5f4fb0e6781c7d0737c0a5..a15f9e318ca5ec7fd3c4e6fc6864ad3d1dc3eaa5 100644
--- a/drivers/dma/virt-dma.h
+++ b/drivers/dma/virt-dma.h
@@ -277,6 +277,8 @@ int vchan_dma_ll_init(struct virt_dma_chan *vc,
 		      const struct dma_linklist_ops *ops, size_t size,
 		      size_t align, size_t boundary);
 void vchan_dma_ll_free(struct virt_dma_chan *vc);
+struct dma_ll_desc *vchan_dma_ll_alloc_desc(struct dma_chan *chan, u32 n);
+void vchan_dma_ll_free_desc(struct virt_dma_desc *vdesc);
 int vchan_dma_ll_terminate_all(struct dma_chan *chan);
 #endif
 

-- 
2.34.1



* [PATCH RFC 07/12] dmaengine: virt-dma: split vchan_tx_prep() into init and internal helpers
  2026-01-28 18:05 [PATCH RFC 00/12] dmaengine: introduce common linked-list DMA controller framework Frank Li
                   ` (5 preceding siblings ...)
  2026-01-28 18:05 ` [PATCH RFC 06/12] dmaengine: Move fsl_edma_(alloc|free)_desc() to common library Frank Li
@ 2026-01-28 18:05 ` Frank Li
  2026-01-28 18:05 ` [PATCH RFC 08/12] dmaengine: Factor out fsl-edma prep_memcpy into common vchan helper Frank Li
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Frank Li @ 2026-01-28 18:05 UTC (permalink / raw)
  To: Vinod Koul; +Cc: linux-kernel, dmaengine, imx, joy.zou, Frank Li

Split vchan_tx_prep() into vchan_init_dma_async_tx() and __vchan_tx_prep()
to prepare for supporting the common linked-list DMA library.

struct dma_async_tx_descriptor already contains the dma_chan pointer, so
drivers do not need to duplicate it in their own descriptor structures
derived from vchan_desc.

Previously, the descriptor's dma_chan pointer was NULL during the prepare
phase, because dma_async_tx_descriptor_init() only ran from vchan_tx_prep(),
which drivers call at the very end of preparation. Initializing the
dma_async_tx_descriptor earlier allows drivers to access dma_chan directly
while building the descriptor.

No functional change.

Signed-off-by: Frank Li <Frank.Li@nxp.com>
---
 drivers/dma/virt-dma.h | 38 ++++++++++++++++++++++++++------------
 1 file changed, 26 insertions(+), 12 deletions(-)

diff --git a/drivers/dma/virt-dma.h b/drivers/dma/virt-dma.h
index a15f9e318ca5ec7fd3c4e6fc6864ad3d1dc3eaa5..ad5ce489cf8e52aa02a0129bc5657fadd6070da2 100644
--- a/drivers/dma/virt-dma.h
+++ b/drivers/dma/virt-dma.h
@@ -80,17 +80,22 @@ struct virt_dma_desc *vchan_find_desc(struct virt_dma_chan *, dma_cookie_t);
 extern dma_cookie_t vchan_tx_submit(struct dma_async_tx_descriptor *);
 extern int vchan_tx_desc_free(struct dma_async_tx_descriptor *);
 
-/**
- * vchan_tx_prep - prepare a descriptor
- * @vc: virtual channel allocating this descriptor
- * @vd: virtual descriptor to prepare
- * @tx_flags: flags argument passed in to prepare function
- */
-static inline struct dma_async_tx_descriptor *vchan_tx_prep(struct virt_dma_chan *vc,
-	struct virt_dma_desc *vd, unsigned long tx_flags)
+static inline struct dma_async_tx_descriptor *
+__vchan_tx_prep(struct virt_dma_chan *vc, struct virt_dma_desc *vd)
 {
 	unsigned long flags;
 
+	spin_lock_irqsave(&vc->lock, flags);
+	list_add_tail(&vd->node, &vc->desc_allocated);
+	spin_unlock_irqrestore(&vc->lock, flags);
+
+	return &vd->tx;
+}
+
+static inline void
+vchan_init_dma_async_tx(struct virt_dma_chan *vc, struct virt_dma_desc *vd,
+			unsigned long tx_flags)
+{
 	dma_async_tx_descriptor_init(&vd->tx, &vc->chan);
 	vd->tx.flags = tx_flags;
 	vd->tx.tx_submit = vchan_tx_submit;
@@ -98,12 +103,21 @@ static inline struct dma_async_tx_descriptor *vchan_tx_prep(struct virt_dma_chan
 
 	vd->tx_result.result = DMA_TRANS_NOERROR;
 	vd->tx_result.residue = 0;
+}
 
-	spin_lock_irqsave(&vc->lock, flags);
-	list_add_tail(&vd->node, &vc->desc_allocated);
-	spin_unlock_irqrestore(&vc->lock, flags);
+/**
+ * vchan_tx_prep - prepare a descriptor
+ * @vc: virtual channel allocating this descriptor
+ * @vd: virtual descriptor to prepare
+ * @tx_flags: flags argument passed in to prepare function
+ */
+static inline struct dma_async_tx_descriptor *
+vchan_tx_prep(struct virt_dma_chan *vc, struct virt_dma_desc *vd,
+	      unsigned long tx_flags)
+{
+	vchan_init_dma_async_tx(vc, vd, tx_flags);
 
-	return &vd->tx;
+	return __vchan_tx_prep(vc, vd);
 }
 
 /**

-- 
2.34.1



* [PATCH RFC 08/12] dmaengine: Factor out fsl-edma prep_memcpy into common vchan helper
  2026-01-28 18:05 [PATCH RFC 00/12] dmaengine: introduce common linked-list DMA controller framework Frank Li
                   ` (6 preceding siblings ...)
  2026-01-28 18:05 ` [PATCH RFC 07/12] dmaengine: virt-dma: split vchan_tx_prep() into init and internal helpers Frank Li
@ 2026-01-28 18:05 ` Frank Li
  2026-01-28 18:05 ` [PATCH RFC 09/12] dmaengine: ll-dma: support multi-descriptor memcpy for large transfers Frank Li
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Frank Li @ 2026-01-28 18:05 UTC (permalink / raw)
  To: Vinod Koul; +Cc: linux-kernel, dmaengine, imx, joy.zou, Frank Li

Create a common vchan_dma_ll_prep_memcpy() based on
fsl_edma_prep_memcpy().

Add .set_ll_next() and .set_lli() callbacks to abstract DMA descriptor
format differences between controllers, allowing DMA engine drivers to
focus solely on hardware-specific descriptor programming.

Signed-off-by: Frank Li <Frank.Li@nxp.com>
---
 drivers/dma/fsl-edma-common.c | 123 ++++++++++++++++++++++++++----------------
 drivers/dma/fsl-edma-common.h |   3 --
 drivers/dma/fsl-edma-main.c   |   2 +-
 drivers/dma/ll-dma.c          |  39 +++++++++++++-
 drivers/dma/virt-dma.h        |  12 ++++-
 5 files changed, 127 insertions(+), 52 deletions(-)

diff --git a/drivers/dma/fsl-edma-common.c b/drivers/dma/fsl-edma-common.c
index 20b954221c2e9b3b3a6849c1f0d4ca68efecb32e..a8f29830e0172b7e818d209f20145121631743c3 100644
--- a/drivers/dma/fsl-edma-common.c
+++ b/drivers/dma/fsl-edma-common.c
@@ -461,17 +461,17 @@ static void fsl_edma_set_tcd_regs(struct fsl_edma_chan *fsl_chan, void *tcd)
 	edma_cp_tcd_to_reg(fsl_chan, tcd, csr);
 }
 
-static inline
-void fsl_edma_fill_tcd(struct fsl_edma_chan *fsl_chan,
-		       struct fsl_edma_hw_tcd *tcd, dma_addr_t src, dma_addr_t dst,
-		       u16 attr, u16 soff, u32 nbytes, dma_addr_t slast, u16 citer,
-		       u16 biter, u16 doff, dma_addr_t dlast_sga, bool major_int,
-		       bool disable_req, bool enable_sg)
+static inline void
+__fsl_edma_fill_tcd(struct fsl_edma_chan *fsl_chan,
+		    struct fsl_edma_hw_tcd *tcd, dma_addr_t src, dma_addr_t dst,
+		    u16 attr, u16 soff, u32 nbytes, dma_addr_t slast, u16 citer,
+		    u16 biter, u16 doff, bool major_int,
+		    bool disable_req)
 {
 	struct dma_slave_config *cfg = &fsl_chan->vchan.chan.config;
 	struct dma_slave_cfg *c = dma_slave_get_cfg(cfg, cfg->direction);
+	u16 csr = fsl_edma_get_tcd_to_cpu(fsl_chan, tcd, csr);
 	u32 burst = 0;
-	u16 csr = 0;
 
 	/*
 	 * eDMA hardware SGs require the TCDs to be stored in little
@@ -509,8 +509,6 @@ void fsl_edma_fill_tcd(struct fsl_edma_chan *fsl_chan,
 	fsl_edma_set_tcd_to_le(fsl_chan, tcd, EDMA_TCD_CITER_CITER(citer), citer);
 	fsl_edma_set_tcd_to_le(fsl_chan, tcd, doff, doff);
 
-	fsl_edma_set_tcd_to_le(fsl_chan, tcd, dlast_sga, dlast_sga);
-
 	fsl_edma_set_tcd_to_le(fsl_chan, tcd, EDMA_TCD_BITER_BITER(biter), biter);
 
 	if (major_int)
@@ -519,9 +517,6 @@ void fsl_edma_fill_tcd(struct fsl_edma_chan *fsl_chan,
 	if (disable_req)
 		csr |= EDMA_TCD_CSR_D_REQ;
 
-	if (enable_sg)
-		csr |= EDMA_TCD_CSR_E_SG;
-
 	if (fsl_chan->is_rxchan)
 		csr |= EDMA_TCD_CSR_ACTIVE;
 
@@ -533,6 +528,71 @@ void fsl_edma_fill_tcd(struct fsl_edma_chan *fsl_chan,
 	trace_edma_fill_tcd(fsl_chan, tcd);
 }
 
+static void
+__fsl_edma_set_ll_next(struct fsl_edma_chan *fsl_chan, void *tcd, dma_addr_t next)
+{
+	u32 csr = fsl_edma_get_tcd_to_cpu(fsl_chan, tcd, csr);
+
+	fsl_edma_set_tcd_to_le(fsl_chan, tcd, next, dlast_sga);
+
+	csr |= EDMA_TCD_CSR_E_SG;
+	fsl_edma_set_tcd_to_le(fsl_chan, tcd, csr, csr);
+}
+
+static int
+fsl_edma_set_ll_next(struct dma_ll_desc *desc, u32 idx, dma_addr_t next)
+{
+	struct dma_chan *chan = desc->vdesc.tx.chan;
+	struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
+	void *tcd = desc->its[idx].vaddr;
+
+	__fsl_edma_set_ll_next(fsl_chan, tcd, next);
+
+	return 0;
+}
+
+static inline
+void fsl_edma_fill_tcd(struct fsl_edma_chan *fsl_chan,
+		       struct fsl_edma_hw_tcd *tcd, dma_addr_t src, dma_addr_t dst,
+		       u16 attr, u16 soff, u32 nbytes, dma_addr_t slast, u16 citer,
+		       u16 biter, u16 doff, dma_addr_t dlast_sga, bool major_int,
+		       bool disable_req, bool enable_sg)
+{
+	__fsl_edma_fill_tcd(fsl_chan, tcd, src, dst, attr, soff, nbytes, slast,
+			    citer, biter, doff, major_int, disable_req);
+
+	if (enable_sg)
+		__fsl_edma_set_ll_next(fsl_chan, tcd, dlast_sga);
+}
+
+static int fsl_edma_set_lli(struct dma_ll_desc *desc, u32 idx,
+			    dma_addr_t src, dma_addr_t dst, size_t len, bool irq,
+			    struct dma_slave_config *config)
+{
+	struct dma_chan *chan = desc->vdesc.tx.chan;
+	struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
+	void *tcd = desc->its[idx].vaddr;
+	u32 src_bus_width = 0, dst_bus_width = 0;
+
+	/* Memory to memory */
+	if (!config) {
+		src_bus_width = min_t(u32, DMA_SLAVE_BUSWIDTH_32_BYTES, 1 << (ffs(src) - 1));
+		dst_bus_width = min_t(u32, DMA_SLAVE_BUSWIDTH_32_BYTES, 1 << (ffs(dst) - 1));
+
+		fsl_chan->is_sw = true;
+	}
+
+	if (fsl_edma_drvflags(fsl_chan) & FSL_EDMA_DRV_MEM_REMOTE)
+		fsl_chan->is_remote = true;
+
+	/* To match with copy_align and max_seg_size so 1 tcd is enough */
+	__fsl_edma_fill_tcd(fsl_chan, tcd, src, dst,
+			    fsl_edma_get_tcd_attr(src_bus_width, dst_bus_width),
+			    src_bus_width, len, 0, 1, 1, dst_bus_width, irq, true);
+
+	return 0;
+}
+
 struct dma_async_tx_descriptor *fsl_edma_prep_dma_cyclic(
 		struct dma_chan *chan, dma_addr_t dma_addr, size_t buf_len,
 		size_t period_len, enum dma_transfer_direction direction,
@@ -555,7 +615,7 @@ struct dma_async_tx_descriptor *fsl_edma_prep_dma_cyclic(
 		return NULL;
 
 	sg_len = buf_len / period_len;
-	fsl_desc = vchan_dma_ll_alloc_desc(chan, sg_len);
+	fsl_desc = vchan_dma_ll_alloc_desc(chan, sg_len, flags);
 	if (!fsl_desc)
 		return NULL;
 	fsl_desc->iscyclic = true;
@@ -615,7 +675,7 @@ struct dma_async_tx_descriptor *fsl_edma_prep_dma_cyclic(
 		dma_buf_next += period_len;
 	}
 
-	return vchan_tx_prep(&fsl_chan->vchan, &fsl_desc->vdesc, flags);
+	return __vchan_tx_prep(&fsl_chan->vchan, &fsl_desc->vdesc);
 }
 
 struct dma_async_tx_descriptor *fsl_edma_prep_slave_sg(
@@ -638,7 +698,7 @@ struct dma_async_tx_descriptor *fsl_edma_prep_slave_sg(
 	if (!fsl_edma_prep_slave_dma(fsl_chan, direction))
 		return NULL;
 
-	fsl_desc = vchan_dma_ll_alloc_desc(chan, sg_len);
+	fsl_desc = vchan_dma_ll_alloc_desc(chan, sg_len, flags);
 	if (!fsl_desc)
 		return NULL;
 	fsl_desc->iscyclic = false;
@@ -719,36 +779,7 @@ struct dma_async_tx_descriptor *fsl_edma_prep_slave_sg(
 		}
 	}
 
-	return vchan_tx_prep(&fsl_chan->vchan, &fsl_desc->vdesc, flags);
-}
-
-struct dma_async_tx_descriptor *fsl_edma_prep_memcpy(struct dma_chan *chan,
-						     dma_addr_t dma_dst, dma_addr_t dma_src,
-						     size_t len, unsigned long flags)
-{
-	struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
-	struct dma_ll_desc *fsl_desc;
-	u32 src_bus_width, dst_bus_width;
-
-	src_bus_width = min_t(u32, DMA_SLAVE_BUSWIDTH_32_BYTES, 1 << (ffs(dma_src) - 1));
-	dst_bus_width = min_t(u32, DMA_SLAVE_BUSWIDTH_32_BYTES, 1 << (ffs(dma_dst) - 1));
-
-	fsl_desc = vchan_dma_ll_alloc_desc(chan, 1);
-	if (!fsl_desc)
-		return NULL;
-	fsl_desc->iscyclic = false;
-
-	fsl_chan->is_sw = true;
-	if (fsl_edma_drvflags(fsl_chan) & FSL_EDMA_DRV_MEM_REMOTE)
-		fsl_chan->is_remote = true;
-
-	/* To match with copy_align and max_seg_size so 1 tcd is enough */
-	fsl_edma_fill_tcd(fsl_chan, fsl_desc->its[0].vaddr, dma_src, dma_dst,
-			  fsl_edma_get_tcd_attr(src_bus_width, dst_bus_width),
-			  src_bus_width, len, 0, 1, 1, dst_bus_width, 0, true,
-			  true, false);
-
-	return vchan_tx_prep(&fsl_chan->vchan, &fsl_desc->vdesc, flags);
+	return __vchan_tx_prep(&fsl_chan->vchan, &fsl_desc->vdesc);
 }
 
 void fsl_edma_xfer_desc(struct fsl_edma_chan *fsl_chan)
@@ -797,6 +828,8 @@ static int fsl_edma_ll_stop(struct dma_chan *chan)
 }
 
 static const struct dma_linklist_ops fsl_edma_ll_ops = {
+	.set_ll_next = fsl_edma_set_ll_next,
+	.set_lli = fsl_edma_set_lli,
 	.stop = fsl_edma_ll_stop,
 };
 
diff --git a/drivers/dma/fsl-edma-common.h b/drivers/dma/fsl-edma-common.h
index 654d05f06b2c1817e68e7afaf9de3439285d2978..f2c346cb84f5f15d333cf8547963ea7a717f4d5f 100644
--- a/drivers/dma/fsl-edma-common.h
+++ b/drivers/dma/fsl-edma-common.h
@@ -479,9 +479,6 @@ struct dma_async_tx_descriptor *fsl_edma_prep_slave_sg(
 		struct dma_chan *chan, struct scatterlist *sgl,
 		unsigned int sg_len, enum dma_transfer_direction direction,
 		unsigned long flags, void *context);
-struct dma_async_tx_descriptor *fsl_edma_prep_memcpy(
-		struct dma_chan *chan, dma_addr_t dma_dst, dma_addr_t dma_src,
-		size_t len, unsigned long flags);
 void fsl_edma_xfer_desc(struct fsl_edma_chan *fsl_chan);
 void fsl_edma_issue_pending(struct dma_chan *chan);
 int fsl_edma_alloc_chan_resources(struct dma_chan *chan);
diff --git a/drivers/dma/fsl-edma-main.c b/drivers/dma/fsl-edma-main.c
index 354e4ac5e46c920dd66ec1479a64c75a609c186d..1724a2d1449fe1850d460cefae5899a5ab828afd 100644
--- a/drivers/dma/fsl-edma-main.c
+++ b/drivers/dma/fsl-edma-main.c
@@ -850,7 +850,7 @@ static int fsl_edma_probe(struct platform_device *pdev)
 	fsl_edma->dma_dev.device_tx_status = fsl_edma_tx_status;
 	fsl_edma->dma_dev.device_prep_slave_sg = fsl_edma_prep_slave_sg;
 	fsl_edma->dma_dev.device_prep_dma_cyclic = fsl_edma_prep_dma_cyclic;
-	fsl_edma->dma_dev.device_prep_dma_memcpy = fsl_edma_prep_memcpy;
+	fsl_edma->dma_dev.device_prep_dma_memcpy = vchan_dma_ll_prep_memcpy;
 	fsl_edma->dma_dev.device_config = fsl_edma_slave_config;
 	fsl_edma->dma_dev.device_pause = fsl_edma_pause;
 	fsl_edma->dma_dev.device_resume = fsl_edma_resume;
diff --git a/drivers/dma/ll-dma.c b/drivers/dma/ll-dma.c
index ff9eac43886255c18550c978184c0801456fefe9..da13ba4dcdfe403af0ad3678bf4c0ff60f715a63 100644
--- a/drivers/dma/ll-dma.c
+++ b/drivers/dma/ll-dma.c
@@ -28,10 +28,11 @@ int vchan_dma_ll_init(struct virt_dma_chan *vc,
 		      const struct dma_linklist_ops *ops, size_t size,
 		      size_t align, size_t boundary)
 {
-	if (!ops || !ops->stop)
+	if (!ops || !ops->stop || !ops->set_ll_next || !ops->set_lli)
 		return -EINVAL;
 
 	vc->ll.ops = ops;
+	vc->ll.size = size;
 
 	vc->ll.pool = dma_pool_create(dev_name(vc->chan.device->dev),
 				      vc->chan.device->dev, size, align,
@@ -53,7 +54,8 @@ void vchan_dma_ll_free(struct virt_dma_chan *vc)
 }
 EXPORT_SYMBOL_GPL(vchan_dma_ll_free);
 
-struct dma_ll_desc *vchan_dma_ll_alloc_desc(struct dma_chan *chan, u32 n)
+struct dma_ll_desc *vchan_dma_ll_alloc_desc(struct dma_chan *chan, u32 n,
+					    unsigned long flags)
 {
 	struct virt_dma_chan *vchan = to_virt_chan(chan);
 	struct dma_ll_desc *desc;
@@ -65,11 +67,15 @@ struct dma_ll_desc *vchan_dma_ll_alloc_desc(struct dma_chan *chan, u32 n)
 
 	desc->n_its = n;
 
+	vchan_init_dma_async_tx(vchan, &desc->vdesc, flags);
+
 	for (i = 0; i < n; i++) {
 		desc->its[i].vaddr = dma_pool_alloc(vchan->ll.pool, GFP_NOWAIT,
 						    &desc->its[i].paddr);
 		if (!desc->its[i].vaddr)
 			goto err;
+
+		memset(desc->its[i].vaddr, 0, vchan->ll.size);
 	}
 
 	return desc;
@@ -96,6 +102,35 @@ void vchan_dma_ll_free_desc(struct virt_dma_desc *vdesc)
 }
 EXPORT_SYMBOL_GPL(vchan_dma_ll_free_desc);
 
+struct dma_async_tx_descriptor *
+vchan_dma_ll_prep_memcpy(struct dma_chan *chan,
+			 dma_addr_t dma_dst, dma_addr_t dma_src, size_t len,
+			 unsigned long flags)
+{
+	struct virt_dma_chan *vchan = to_virt_chan(chan);
+	struct dma_ll_desc *desc;
+	int ret;
+
+	desc = vchan_dma_ll_alloc_desc(chan, 1, flags);
+	if (!desc)
+		return NULL;
+
+	desc->iscyclic = false;
+
+	ret = vchan->ll.ops->set_lli(desc, 0, dma_src, dma_dst,
+				     len, true, NULL);
+
+	if (ret)
+		goto err;
+
+	return __vchan_tx_prep(vchan, &desc->vdesc);
+
+err:
+	vchan_dma_ll_free_desc(&desc->vdesc);
+	return NULL;
+}
+EXPORT_SYMBOL_GPL(vchan_dma_ll_prep_memcpy);
+
 int vchan_dma_ll_terminate_all(struct dma_chan *chan)
 {
 	struct virt_dma_chan *vchan = to_virt_chan(chan);
diff --git a/drivers/dma/virt-dma.h b/drivers/dma/virt-dma.h
index ad5ce489cf8e52aa02a0129bc5657fadd6070da2..f4aec6eb3c3900a5473c8feedc16b06e29751deb 100644
--- a/drivers/dma/virt-dma.h
+++ b/drivers/dma/virt-dma.h
@@ -40,11 +40,16 @@ struct dma_ll_desc {
 };
 
 struct dma_linklist_ops {
+	int (*set_ll_next)(struct dma_ll_desc *desc, u32 idx, dma_addr_t next);
+	int (*set_lli)(struct dma_ll_desc *desc, u32 idx,
+		       dma_addr_t src, dma_addr_t dst, size_t size,
+		       bool irq, struct dma_slave_config *config);
 	int (*stop)(struct dma_chan *chan);
 };
 
 struct dma_linklist {
 	struct dma_pool *pool;
+	size_t	size;
 	const struct dma_linklist_ops *ops;
 };
 
@@ -291,7 +296,12 @@ int vchan_dma_ll_init(struct virt_dma_chan *vc,
 		      const struct dma_linklist_ops *ops, size_t size,
 		      size_t align, size_t boundary);
 void vchan_dma_ll_free(struct virt_dma_chan *vc);
-struct dma_ll_desc *vchan_dma_ll_alloc_desc(struct dma_chan *chan, u32 n);
+struct dma_ll_desc *vchan_dma_ll_alloc_desc(struct dma_chan *chan, u32 n,
+					    unsigned long flags);
+struct dma_async_tx_descriptor *
+vchan_dma_ll_prep_memcpy(struct dma_chan *chan,
+			 dma_addr_t dma_dst, dma_addr_t dma_src, size_t len,
+			 unsigned long flags);
 void vchan_dma_ll_free_desc(struct virt_dma_desc *vdesc);
 int vchan_dma_ll_terminate_all(struct dma_chan *chan);
 #endif

-- 
2.34.1



* [PATCH RFC 09/12] dmaengine: ll-dma: support multi-descriptor memcpy for large transfers
  2026-01-28 18:05 [PATCH RFC 00/12] dmaengine: introduce common linked-list DMA controller framework Frank Li
                   ` (7 preceding siblings ...)
  2026-01-28 18:05 ` [PATCH RFC 08/12] dmaengine: Factor out fsl-edma prep_memcpy into common vchan helper Frank Li
@ 2026-01-28 18:05 ` Frank Li
  2026-01-28 18:05 ` [PATCH RFC 10/12] dmaengine: move fsl-edma dma_[un]map_resource() to linked list library Frank Li
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Frank Li @ 2026-01-28 18:05 UTC (permalink / raw)
  To: Vinod Koul; +Cc: linux-kernel, dmaengine, imx, joy.zou, Frank Li

Each DMA descriptor has a maximum transfer length limitation.
When a memcpy request exceeds this limit, split it into multiple
linked DMA descriptors to complete the transfer.

Signed-off-by: Frank Li <Frank.Li@nxp.com>
---
 drivers/dma/ll-dma.c | 33 +++++++++++++++++++++++++++------
 1 file changed, 27 insertions(+), 6 deletions(-)

diff --git a/drivers/dma/ll-dma.c b/drivers/dma/ll-dma.c
index da13ba4dcdfe403af0ad3678bf4c0ff60f715a63..313ca274df945081fc569ddb6a172298c25bc11c 100644
--- a/drivers/dma/ll-dma.c
+++ b/drivers/dma/ll-dma.c
@@ -17,6 +17,7 @@
  */
 #include <linux/cleanup.h>
 #include <linux/device.h>
+#include <linux/dma-mapping.h>
 #include <linux/dmapool.h>
 #include <linux/dmaengine.h>
 #include <linux/module.h>
@@ -76,6 +77,15 @@ struct dma_ll_desc *vchan_dma_ll_alloc_desc(struct dma_chan *chan, u32 n,
 			goto err;
 
 		memset(desc->its[i].vaddr, 0, vchan->ll.size);
+
+		if (!i)
+			continue;
+
+		if (vchan->ll.ops->set_ll_next(desc, i - 1,
+					       desc->its[i].paddr)) {
+			i++; /* the current item is allocated; unwind it too */
+			goto err;
+		}
 	}
 
 	return desc;
@@ -107,21 +117,32 @@ vchan_dma_ll_prep_memcpy(struct dma_chan *chan,
 			 dma_addr_t dma_dst, dma_addr_t dma_src, size_t len,
 			 unsigned long flags)
 {
+	unsigned int max_seg = dma_get_max_seg_size(chan->device->dev);
 	struct virt_dma_chan *vchan = to_virt_chan(chan);
 	struct dma_ll_desc *desc;
-	int ret;
+	unsigned int ndesc;
+	int i, ret;
 
-	desc = vchan_dma_ll_alloc_desc(chan, 1, flags);
+	ndesc = (len + max_seg - 1) / max_seg;
+
+	desc = vchan_dma_ll_alloc_desc(chan, ndesc, flags);
 	if (!desc)
 		return NULL;
 
 	desc->iscyclic = false;
 
-	ret = vchan->ll.ops->set_lli(desc, 0, dma_src, dma_dst,
-				     len, true, NULL);
+	for (i = 0; i < ndesc; i++) {
+		ret = vchan->ll.ops->set_lli(desc, i,
+					     dma_src, dma_dst,
+					     min(len, max_seg),
+					     i == ndesc - 1, NULL);
+		if (ret)
+			goto err;
 
-	if (ret)
-		goto err;
+		dma_src += max_seg;
+		dma_dst += max_seg;
+		len -= max_seg;
+	}
 
 	return __vchan_tx_prep(vchan, &desc->vdesc);
 

-- 
2.34.1



* [PATCH RFC 10/12] dmaengine: move fsl-edma dma_[un]map_resource() to linked list library
  2026-01-28 18:05 [PATCH RFC 00/12] dmaengine: introduce common linked-list DMA controller framework Frank Li
                   ` (8 preceding siblings ...)
  2026-01-28 18:05 ` [PATCH RFC 09/12] dmaengine: ll-dma: support multi-descriptor memcpy for large transfers Frank Li
@ 2026-01-28 18:05 ` Frank Li
  2026-01-28 18:05 ` [PATCH RFC 11/12] dmaengine: fsl-edma: use local soff/doff variables Frank Li
  2026-01-28 18:05 ` [PATCH RFC 12/12] dmaengine: add vchan_dma_ll_prep_slave_{sg,cyclic} API Frank Li
  11 siblings, 0 replies; 13+ messages in thread
From: Frank Li @ 2026-01-28 18:05 UTC (permalink / raw)
  To: Vinod Koul; +Cc: linux-kernel, dmaengine, imx, joy.zou, Frank Li

Move fsl-edma dma_[un]map_resource() into the common linked list library.
These helpers touch no controller-specific hardware and can be reused by
other DMA engine controller drivers.

Signed-off-by: Frank Li <Frank.Li@nxp.com>
---
 drivers/dma/fsl-edma-common.c | 85 +++++++++----------------------------------
 drivers/dma/fsl-edma-common.h |  2 -
 drivers/dma/ll-dma.c          | 64 ++++++++++++++++++++++++++++++++
 drivers/dma/virt-dma.h        |  9 +++++
 4 files changed, 91 insertions(+), 69 deletions(-)

diff --git a/drivers/dma/fsl-edma-common.c b/drivers/dma/fsl-edma-common.c
index a8f29830e0172b7e818d209f20145121631743c3..ff1ef067cfcffef876eefd30c62d630c77ac537a 100644
--- a/drivers/dma/fsl-edma-common.c
+++ b/drivers/dma/fsl-edma-common.c
@@ -264,65 +264,9 @@ int fsl_edma_resume(struct dma_chan *chan)
 	return 0;
 }
 
-static void fsl_edma_unprep_slave_dma(struct fsl_edma_chan *fsl_chan)
-{
-	if (fsl_chan->dma_dir != DMA_NONE)
-		dma_unmap_resource(fsl_chan->vchan.chan.device->dev,
-				   fsl_chan->dma_dev_addr,
-				   fsl_chan->dma_dev_size,
-				   fsl_chan->dma_dir, 0);
-	fsl_chan->dma_dir = DMA_NONE;
-}
-
-static enum dma_data_direction
-fsl_dma_dir_trans_to_data(enum dma_transfer_direction dir)
-{
-	if (dir == DMA_MEM_TO_DEV)
-		return DMA_FROM_DEVICE;
-
-	if (dir ==  DMA_DEV_TO_MEM)
-		return DMA_TO_DEVICE;
-
-	return DMA_NONE;
-}
-
-static bool fsl_edma_prep_slave_dma(struct fsl_edma_chan *fsl_chan,
-				    enum dma_transfer_direction dir)
-{
-	struct dma_slave_config *cfg = &fsl_chan->vchan.chan.config;
-	struct dma_slave_cfg *c = dma_slave_get_cfg(cfg, dir);
-	struct device *dev = fsl_chan->vchan.chan.device->dev;
-	enum dma_data_direction dma_dir;
-	phys_addr_t addr = 0;
-	u32 size = 0;
-
-	dma_dir = fsl_dma_dir_trans_to_data(dir);
-
-	addr = c->addr;
-	size = c->maxburst;
-
-	/* Already mapped for this config? */
-	if (fsl_chan->dma_dir == dma_dir)
-		return true;
-
-	fsl_edma_unprep_slave_dma(fsl_chan);
-
-	fsl_chan->dma_dev_addr = dma_map_resource(dev, addr, size, dma_dir, 0);
-	if (dma_mapping_error(dev, fsl_chan->dma_dev_addr))
-		return false;
-	fsl_chan->dma_dev_size = size;
-	fsl_chan->dma_dir = dma_dir;
-
-	return true;
-}
-
 int fsl_edma_slave_config(struct dma_chan *chan,
 				 struct dma_slave_config *cfg)
 {
-	struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
-
-	fsl_edma_unprep_slave_dma(fsl_chan);
-
 	return 0;
 }
 
@@ -611,9 +555,6 @@ struct dma_async_tx_descriptor *fsl_edma_prep_dma_cyclic(
 	if (!is_slave_direction(direction))
 		return NULL;
 
-	if (!fsl_edma_prep_slave_dma(fsl_chan, direction))
-		return NULL;
-
 	sg_len = buf_len / period_len;
 	fsl_desc = vchan_dma_ll_alloc_desc(chan, sg_len, flags);
 	if (!fsl_desc)
@@ -621,6 +562,9 @@ struct dma_async_tx_descriptor *fsl_edma_prep_dma_cyclic(
 	fsl_desc->iscyclic = true;
 	fsl_desc->dir = direction;
 
+	if (vchan_dma_ll_map_slave_addr(chan, fsl_desc, direction, cfg))
+		goto err;
+
 	dma_buf_next = dma_addr;
 	if (direction == DMA_MEM_TO_DEV) {
 		if (!cfg->src_addr_width)
@@ -649,13 +593,13 @@ struct dma_async_tx_descriptor *fsl_edma_prep_dma_cyclic(
 
 		if (direction == DMA_MEM_TO_DEV) {
 			src_addr = dma_buf_next;
-			dst_addr = fsl_chan->dma_dev_addr;
+			dst_addr = fsl_desc->dst.addr;
 			soff = cfg->dst_addr_width;
 			doff = fsl_chan->is_multi_fifo ? 4 : 0;
 			if (cfg->dst_port_window_size)
 				doff = cfg->dst_addr_width;
 		} else if (direction == DMA_DEV_TO_MEM) {
-			src_addr = fsl_chan->dma_dev_addr;
+			src_addr = fsl_desc->src.addr;
 			dst_addr = dma_buf_next;
 			soff = fsl_chan->is_multi_fifo ? 4 : 0;
 			doff = cfg->src_addr_width;
@@ -676,6 +620,10 @@ struct dma_async_tx_descriptor *fsl_edma_prep_dma_cyclic(
 	}
 
 	return __vchan_tx_prep(&fsl_chan->vchan, &fsl_desc->vdesc);
+
+err:
+	vchan_dma_ll_free_desc(&fsl_desc->vdesc);
+	return NULL;
 }
 
 struct dma_async_tx_descriptor *fsl_edma_prep_slave_sg(
@@ -695,15 +643,15 @@ struct dma_async_tx_descriptor *fsl_edma_prep_slave_sg(
 	if (!is_slave_direction(direction))
 		return NULL;
 
-	if (!fsl_edma_prep_slave_dma(fsl_chan, direction))
-		return NULL;
-
 	fsl_desc = vchan_dma_ll_alloc_desc(chan, sg_len, flags);
 	if (!fsl_desc)
 		return NULL;
 	fsl_desc->iscyclic = false;
 	fsl_desc->dir = direction;
 
+	if (vchan_dma_ll_map_slave_addr(chan, fsl_desc, direction, cfg))
+		goto err;
+
 	if (direction == DMA_MEM_TO_DEV) {
 		if (!cfg->src_addr_width)
 			cfg->src_addr_width = cfg->dst_addr_width;
@@ -725,11 +673,11 @@ struct dma_async_tx_descriptor *fsl_edma_prep_slave_sg(
 	for_each_sg(sgl, sg, sg_len, i) {
 		if (direction == DMA_MEM_TO_DEV) {
 			src_addr = sg_dma_address(sg);
-			dst_addr = fsl_chan->dma_dev_addr;
+			dst_addr = fsl_desc->dst.addr;
 			soff = cfg->dst_addr_width;
 			doff = 0;
 		} else if (direction == DMA_DEV_TO_MEM) {
-			src_addr = fsl_chan->dma_dev_addr;
+			src_addr = fsl_desc->src.addr;
 			dst_addr = sg_dma_address(sg);
 			soff = 0;
 			doff = cfg->src_addr_width;
@@ -780,6 +728,10 @@ struct dma_async_tx_descriptor *fsl_edma_prep_slave_sg(
 	}
 
 	return __vchan_tx_prep(&fsl_chan->vchan, &fsl_desc->vdesc);
+
+err:
+	vchan_dma_ll_free_desc(&fsl_desc->vdesc);
+	return NULL;
 }
 
 void fsl_edma_xfer_desc(struct fsl_edma_chan *fsl_chan)
@@ -887,7 +839,6 @@ void fsl_edma_free_chan_resources(struct dma_chan *chan)
 		fsl_edma_chan_mux(fsl_chan, 0, false);
 	fsl_chan->edesc = NULL;
 	vchan_get_all_descriptors(&fsl_chan->vchan, &head);
-	fsl_edma_unprep_slave_dma(fsl_chan);
 	spin_unlock_irqrestore(&fsl_chan->vchan.lock, flags);
 
 	if (fsl_chan->txirq)
diff --git a/drivers/dma/fsl-edma-common.h b/drivers/dma/fsl-edma-common.h
index f2c346cb84f5f15d333cf8547963ea7a717f4d5f..7cba3bc0d39537e675167b42dda644647bf63819 100644
--- a/drivers/dma/fsl-edma-common.h
+++ b/drivers/dma/fsl-edma-common.h
@@ -164,8 +164,6 @@ struct fsl_edma_chan {
 	u32				attr;
 	bool                            is_sw;
 	struct dma_pool			*tcd_pool;
-	dma_addr_t			dma_dev_addr;
-	u32				dma_dev_size;
 	enum dma_data_direction		dma_dir;
 	char				chan_name[32];
 	char				errirq_name[36];
diff --git a/drivers/dma/ll-dma.c b/drivers/dma/ll-dma.c
index 313ca274df945081fc569ddb6a172298c25bc11c..66e4222ac528f871c75a508c68895078fa38cf7b 100644
--- a/drivers/dma/ll-dma.c
+++ b/drivers/dma/ll-dma.c
@@ -99,12 +99,35 @@ struct dma_ll_desc *vchan_dma_ll_alloc_desc(struct dma_chan *chan, u32 n,
 }
 EXPORT_SYMBOL_GPL(vchan_dma_ll_alloc_desc);
 
+static void
+vchan_dma_ll_unmap_slave_addr_one(struct device *dev,
+				  struct dma_slave_map_addr *map,
+				  enum dma_data_direction dir)
+{
+	if (!dma_mapping_error(dev, map->addr) && map->size)
+		dma_unmap_resource(dev, map->addr, map->size, dir, 0);
+}
+
+static void
+vchan_dma_ll_unmap_slave_addr(struct dma_chan *chan, struct dma_ll_desc *desc)
+{
+	struct device *dev = chan->device->dev;
+
+	if (desc->dir == DMA_MEM_TO_DEV || desc->dir == DMA_DEV_TO_DEV)
+		vchan_dma_ll_unmap_slave_addr_one(dev, &desc->dst, DMA_TO_DEVICE);
+
+	if (desc->dir == DMA_DEV_TO_MEM || desc->dir == DMA_DEV_TO_DEV)
+		vchan_dma_ll_unmap_slave_addr_one(dev, &desc->src, DMA_FROM_DEVICE);
+}
+
 void vchan_dma_ll_free_desc(struct virt_dma_desc *vdesc)
 {
 	struct dma_ll_desc *desc = to_dma_ll_desc(vdesc);
 	struct virt_dma_chan *vchan = to_virt_chan(vdesc->tx.chan);
 	int i;
 
+	vchan_dma_ll_unmap_slave_addr(&vchan->chan, desc);
+
 	for (i = 0; i < desc->n_its; i++)
 		dma_pool_free(vchan->ll.pool, desc->its[i].vaddr,
 			      desc->its[i].paddr);
@@ -112,6 +135,47 @@ void vchan_dma_ll_free_desc(struct virt_dma_desc *vdesc)
 }
 EXPORT_SYMBOL_GPL(vchan_dma_ll_free_desc);
 
+static int
+vchan_dma_ll_map_slave_addr_one(struct device *dev,
+				struct dma_slave_map_addr *map,
+				enum dma_transfer_direction tran_dir,
+				enum dma_data_direction data_dir,
+				struct dma_slave_cfg *cfg)
+{
+	map->addr = dma_map_resource(dev, cfg->addr, cfg->maxburst, data_dir, 0);
+	if (dma_mapping_error(dev, map->addr))
+		return -ENOMEM;
+
+	map->size = cfg->maxburst;
+	return 0;
+}
+
+int vchan_dma_ll_map_slave_addr(struct dma_chan *chan, struct dma_ll_desc *desc,
+				enum dma_transfer_direction dir,
+				struct dma_slave_config *cfg)
+{
+	struct device *dev = chan->device->dev;
+
+	if (dir == DMA_MEM_TO_DEV || dir == DMA_DEV_TO_DEV) {
+		if (vchan_dma_ll_map_slave_addr_one(dev, &desc->dst, dir,
+						    DMA_TO_DEVICE, &cfg->dst))
+			goto err;
+	}
+
+	if (dir == DMA_DEV_TO_MEM || dir == DMA_DEV_TO_DEV) {
+		if (vchan_dma_ll_map_slave_addr_one(dev, &desc->src, dir,
+						    DMA_FROM_DEVICE, &cfg->src))
+			goto err;
+	}
+
+	return 0;
+
+err:
+	vchan_dma_ll_unmap_slave_addr(chan, desc);
+	return -ENOMEM;
+}
+EXPORT_SYMBOL_GPL(vchan_dma_ll_map_slave_addr);
+
 struct dma_async_tx_descriptor *
 vchan_dma_ll_prep_memcpy(struct dma_chan *chan,
 			 dma_addr_t dma_dst, dma_addr_t dma_src, size_t len,
diff --git a/drivers/dma/virt-dma.h b/drivers/dma/virt-dma.h
index f4aec6eb3c3900a5473c8feedc16b06e29751deb..0a18663dc95f323f7a9bab76f2d730701277371a 100644
--- a/drivers/dma/virt-dma.h
+++ b/drivers/dma/virt-dma.h
@@ -24,6 +24,10 @@ struct dma_linklist_item {
 	void *vaddr;
 };
 
+struct dma_slave_map_addr {
+	dma_addr_t addr;
+	size_t size;
+};
 /*
  * Must put to last one if need extend it
  *   struct vendor_dma_ll_desc {
@@ -35,6 +39,8 @@ struct dma_ll_desc {
 	struct virt_dma_desc vdesc;
 	bool iscyclic;
 	enum dma_transfer_direction dir;
+	struct dma_slave_map_addr src;
+	struct dma_slave_map_addr dst;
 	u32 n_its;
 	struct dma_linklist_item its[];
 };
@@ -304,6 +310,9 @@ vchan_dma_ll_prep_memcpy(struct dma_chan *chan,
 			 unsigned long flags);
 void vchan_dma_ll_free_desc(struct virt_dma_desc *vdesc);
 int vchan_dma_ll_terminate_all(struct dma_chan *chan);
+int vchan_dma_ll_map_slave_addr(struct dma_chan *chan, struct dma_ll_desc *desc,
+				enum dma_transfer_direction dir,
+				struct dma_slave_config *cfg);
 #endif
 
 #endif

-- 
2.34.1


* [PATCH RFC 11/12] dmaengine: fsl-edma: use local soff/doff variables
  2026-01-28 18:05 [PATCH RFC 00/12] dmaengine: introduce common linked-list DMA controller framework Frank Li
                   ` (9 preceding siblings ...)
  2026-01-28 18:05 ` [PATCH RFC 10/12] dmaengine: move fsl-edma dma_[un]map_resource() to linked list library Frank Li
@ 2026-01-28 18:05 ` Frank Li
  2026-01-28 18:05 ` [PATCH RFC 12/12] dmaengine: add vchan_dma_ll_prep_slave_{sg,cyclic} API Frank Li
  11 siblings, 0 replies; 13+ messages in thread
From: Frank Li @ 2026-01-28 18:05 UTC (permalink / raw)
  To: Vinod Koul; +Cc: linux-kernel, dmaengine, imx, joy.zou, Frank Li

Introduce local soff and doff variables (and related fields) so that memcpy
set_lli() handling matches the sg and cyclic cases.

Prepare the fsl-edma driver for moving prep_slave_{sg,cyclic} into the
common linked-list library.

Signed-off-by: Frank Li <Frank.Li@nxp.com>
---
 drivers/dma/fsl-edma-common.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/drivers/dma/fsl-edma-common.c b/drivers/dma/fsl-edma-common.c
index ff1ef067cfcffef876eefd30c62d630c77ac537a..fdac0518316914d59df592ad26f6000d2034bcb9 100644
--- a/drivers/dma/fsl-edma-common.c
+++ b/drivers/dma/fsl-edma-common.c
@@ -517,6 +517,10 @@ static int fsl_edma_set_lli(struct dma_ll_desc *desc, u32 idx,
 	struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
 	void *tcd = desc->its[idx].vaddr;
 	u32 src_bus_width, dst_bus_width;
+	bool disable_req;
+	u32 soff, doff;
+	u32 nbytes;
+	u16 iter;
 
 	/* Memory to memory */
 	if (!config) {
@@ -524,6 +528,12 @@ static int fsl_edma_set_lli(struct dma_ll_desc *desc, u32 idx,
 		dst_bus_width = min_t(u32, DMA_SLAVE_BUSWIDTH_32_BYTES, 1 << (ffs(dst) - 1));
 
 		fsl_chan->is_sw = true;
+
+		soff = src_bus_width;
+		doff = dst_bus_width;
+		iter = 1;
+		disable_req = true;
+		nbytes = len;
 	}
 
 	if (fsl_edma_drvflags(fsl_chan) & FSL_EDMA_DRV_MEM_REMOTE)
@@ -532,7 +542,7 @@ static int fsl_edma_set_lli(struct dma_ll_desc *desc, u32 idx,
 	/* To match with copy_align and max_seg_size so 1 tcd is enough */
 	__fsl_edma_fill_tcd(fsl_chan, tcd, src, dst,
 			    fsl_edma_get_tcd_attr(src_bus_width, dst_bus_width),
-			    src_bus_width, len, 0, 1, 1, dst_bus_width, irq, true);
+			    soff, nbytes, 0, iter, iter, doff, irq, disable_req);
 
 	return 0;
 }

-- 
2.34.1


* [PATCH RFC 12/12] dmaengine: add vchan_dma_ll_prep_slave_{sg,cyclic} API
  2026-01-28 18:05 [PATCH RFC 00/12] dmaengine: introduce common linked-list DMA controller framework Frank Li
                   ` (10 preceding siblings ...)
  2026-01-28 18:05 ` [PATCH RFC 11/12] dmaengine: fsl-edma: use local soff/doff variables Frank Li
@ 2026-01-28 18:05 ` Frank Li
  11 siblings, 0 replies; 13+ messages in thread
From: Frank Li @ 2026-01-28 18:05 UTC (permalink / raw)
  To: Vinod Koul; +Cc: linux-kernel, dmaengine, imx, joy.zou, Frank Li

Create a common vchan_dma_ll_prep_slave_{sg,cyclic} API in the DMA
linked-list library, based on the existing fsl-edma implementation.

Update the fsl-edma driver to use the common API instead of maintaining
its own implementation.

Signed-off-by: Frank Li <Frank.Li@nxp.com>
---
 drivers/dma/fsl-edma-common.c | 226 ++++++++----------------------------------
 drivers/dma/fsl-edma-common.h |   8 --
 drivers/dma/fsl-edma-main.c   |   4 +-
 drivers/dma/ll-dma.c          | 112 +++++++++++++++++++++
 drivers/dma/mcf-edma-main.c   |   4 +-
 drivers/dma/virt-dma.h        |   9 ++
 6 files changed, 169 insertions(+), 194 deletions(-)

diff --git a/drivers/dma/fsl-edma-common.c b/drivers/dma/fsl-edma-common.c
index fdac0518316914d59df592ad26f6000d2034bcb9..643e8bd30b88a2cf66eebf024505428365b8f0ae 100644
--- a/drivers/dma/fsl-edma-common.c
+++ b/drivers/dma/fsl-edma-common.c
@@ -534,185 +534,55 @@ static int fsl_edma_set_lli(struct dma_ll_desc *desc, u32 idx,
 		iter = 1;
 		disable_req = true;
 		nbytes = len;
-	}
-
-	if (fsl_edma_drvflags(fsl_chan) & FSL_EDMA_DRV_MEM_REMOTE)
-		fsl_chan->is_remote = true;
-
-	/* To match with copy_align and max_seg_size so 1 tcd is enough */
-	__fsl_edma_fill_tcd(fsl_chan, tcd, src, dst,
-			    fsl_edma_get_tcd_attr(src_bus_width, dst_bus_width),
-			    soff, nbytes, 0, iter, iter, doff, irq, disable_req);
-
-	return 0;
-}
-
-struct dma_async_tx_descriptor *fsl_edma_prep_dma_cyclic(
-		struct dma_chan *chan, dma_addr_t dma_addr, size_t buf_len,
-		size_t period_len, enum dma_transfer_direction direction,
-		unsigned long flags)
-{
-	struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
-	struct dma_slave_config *cfg = &chan->config;
-	struct dma_ll_desc *fsl_desc;
-	dma_addr_t dma_buf_next;
-	bool major_int = true;
-	int sg_len, i;
-	dma_addr_t src_addr, dst_addr, last_sg;
-	u16 soff, doff, iter;
-	u32 nbytes;
-
-	if (!is_slave_direction(direction))
-		return NULL;
-
-	sg_len = buf_len / period_len;
-	fsl_desc = vchan_dma_ll_alloc_desc(chan, sg_len, flags);
-	if (!fsl_desc)
-		return NULL;
-	fsl_desc->iscyclic = true;
-	fsl_desc->dir = direction;
-
-	if (vchan_dma_ll_map_slave_addr(chan, fsl_desc, direction, cfg))
-		goto err;
-
-	dma_buf_next = dma_addr;
-	if (direction == DMA_MEM_TO_DEV) {
-		if (!cfg->src_addr_width)
-			cfg->src_addr_width = cfg->dst_addr_width;
-		fsl_chan->attr =
-			fsl_edma_get_tcd_attr(cfg->src_addr_width,
-					      cfg->dst_addr_width);
-		nbytes = cfg->dst_addr_width * cfg->dst_maxburst;
 	} else {
-		if (!cfg->dst_addr_width)
-			cfg->dst_addr_width = cfg->src_addr_width;
-		fsl_chan->attr =
-			fsl_edma_get_tcd_attr(cfg->src_addr_width,
-					      cfg->dst_addr_width);
-		nbytes = cfg->src_addr_width * cfg->src_maxburst;
-	}
+		enum dma_transfer_direction dir = config->direction;
 
-	iter = period_len / nbytes;
+		if (!desc->iscyclic && idx == desc->n_its - 1)
+			disable_req = true;
+		else
+			disable_req = false;
 
-	for (i = 0; i < sg_len; i++) {
-		if (dma_buf_next >= dma_addr + buf_len)
-			dma_buf_next = dma_addr;
+		fsl_chan->is_sw = false;
 
-		/* get next sg's physical address */
-		last_sg = fsl_desc->its[(i + 1) % sg_len].paddr;
+		if (dir == DMA_MEM_TO_DEV) {
+			dst_bus_width = config->dst_addr_width;
+			src_bus_width = config->src_addr_width ? :
+					config->dst_addr_width;
+			nbytes = config->dst_addr_width * config->dst_maxburst;
 
-		if (direction == DMA_MEM_TO_DEV) {
-			src_addr = dma_buf_next;
-			dst_addr = fsl_desc->dst.addr;
-			soff = cfg->dst_addr_width;
+			soff = config->dst_addr_width;
 			doff = fsl_chan->is_multi_fifo ? 4 : 0;
-			if (cfg->dst_port_window_size)
-				doff = cfg->dst_addr_width;
-		} else if (direction == DMA_DEV_TO_MEM) {
-			src_addr = fsl_desc->src.addr;
-			dst_addr = dma_buf_next;
+			if (config->dst_port_window_size)
+				doff = config->dst_addr_width;
+		} else if (dir == DMA_DEV_TO_MEM) {
+			src_bus_width = config->src_addr_width;
+			dst_bus_width = config->dst_addr_width ? :
+					config->src_addr_width;
+			nbytes = config->src_addr_width * config->src_maxburst;
 			soff = fsl_chan->is_multi_fifo ? 4 : 0;
-			doff = cfg->src_addr_width;
-			if (cfg->src_port_window_size)
-				soff = cfg->src_addr_width;
+			doff = config->src_addr_width;
+			if (config->src_port_window_size)
+				soff = config->src_addr_width;
 		} else {
 			/* DMA_DEV_TO_DEV */
-			src_addr = cfg->src_addr;
-			dst_addr = cfg->dst_addr;
 			soff = doff = 0;
-			major_int = false;
-		}
-
-		fsl_edma_fill_tcd(fsl_chan, fsl_desc->its[i].vaddr, src_addr, dst_addr,
-				  fsl_chan->attr, soff, nbytes, 0, iter,
-				  iter, doff, last_sg, major_int, false, true);
-		dma_buf_next += period_len;
-	}
-
-	return __vchan_tx_prep(&fsl_chan->vchan, &fsl_desc->vdesc);
-
-err:
-	vchan_dma_ll_free_desc(&fsl_desc->vdesc);
-	return NULL;
-}
-
-struct dma_async_tx_descriptor *fsl_edma_prep_slave_sg(
-		struct dma_chan *chan, struct scatterlist *sgl,
-		unsigned int sg_len, enum dma_transfer_direction direction,
-		unsigned long flags, void *context)
-{
-	struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
-	struct dma_slave_config *cfg = &chan->config;
-	struct dma_ll_desc *fsl_desc;
-	struct scatterlist *sg;
-	dma_addr_t src_addr, dst_addr, last_sg;
-	u16 soff, doff, iter;
-	u32 nbytes;
-	int i;
-
-	if (!is_slave_direction(direction))
-		return NULL;
-
-	fsl_desc = vchan_dma_ll_alloc_desc(chan, sg_len, flags);
-	if (!fsl_desc)
-		return NULL;
-	fsl_desc->iscyclic = false;
-	fsl_desc->dir = direction;
-
-	if (vchan_dma_ll_map_slave_addr(chan, fsl_desc, direction, cfg))
-		goto err;
-
-	if (direction == DMA_MEM_TO_DEV) {
-		if (!cfg->src_addr_width)
-			cfg->src_addr_width = cfg->dst_addr_width;
-		fsl_chan->attr =
-			fsl_edma_get_tcd_attr(cfg->src_addr_width,
-					      cfg->dst_addr_width);
-		nbytes = cfg->dst_addr_width *
-			cfg->dst_maxburst;
-	} else {
-		if (!cfg->dst_addr_width)
-			cfg->dst_addr_width = cfg->src_addr_width;
-		fsl_chan->attr =
-			fsl_edma_get_tcd_attr(cfg->src_addr_width,
-					      cfg->dst_addr_width);
-		nbytes = cfg->src_addr_width *
-			cfg->src_maxburst;
-	}
-
-	for_each_sg(sgl, sg, sg_len, i) {
-		if (direction == DMA_MEM_TO_DEV) {
-			src_addr = sg_dma_address(sg);
-			dst_addr = fsl_desc->dst.addr;
-			soff = cfg->dst_addr_width;
-			doff = 0;
-		} else if (direction == DMA_DEV_TO_MEM) {
-			src_addr = fsl_desc->src.addr;
-			dst_addr = sg_dma_address(sg);
-			soff = 0;
-			doff = cfg->src_addr_width;
-		} else {
-			/* DMA_DEV_TO_DEV */
-			src_addr = cfg->src_addr;
-			dst_addr = cfg->dst_addr;
-			soff = 0;
-			doff = 0;
+			irq = false;
 		}
 
-		/*
-		 * Choose the suitable burst length if sg_dma_len is not
-		 * multiple of burst length so that the whole transfer length is
-		 * multiple of minor loop(burst length).
-		 */
-		if (sg_dma_len(sg) % nbytes) {
-			u32 width = (direction == DMA_DEV_TO_MEM) ? doff : soff;
-			u32 burst = (direction == DMA_DEV_TO_MEM) ?
-						cfg->src_maxburst :
-						cfg->dst_maxburst;
+		/*
+		 * Choose a suitable burst length if len is not a multiple of
+		 * the burst length, so that the whole transfer length is a
+		 * multiple of the minor loop (burst length).
+		 */
+		if (len % nbytes) {
+			u32 width = (dir == DMA_DEV_TO_MEM) ? doff : soff;
+			u32 burst = (dir == DMA_DEV_TO_MEM) ?
+					config->src_maxburst :
+					config->dst_maxburst;
 			int j;
 
 			for (j = burst; j > 1; j--) {
-				if (!(sg_dma_len(sg) % (j * width))) {
+				if (!(len % (j * width))) {
 					nbytes = j * width;
 					break;
 				}
@@ -721,27 +591,19 @@ struct dma_async_tx_descriptor *fsl_edma_prep_slave_sg(
 			if (j == 1)
 				nbytes = width;
 		}
-		iter = sg_dma_len(sg) / nbytes;
-		if (i < sg_len - 1) {
-			last_sg = fsl_desc->its[(i + 1)].paddr;
-			fsl_edma_fill_tcd(fsl_chan, fsl_desc->its[i].vaddr, src_addr,
-					  dst_addr, fsl_chan->attr, soff,
-					  nbytes, 0, iter, iter, doff, last_sg,
-					  false, false, true);
-		} else {
-			last_sg = 0;
-			fsl_edma_fill_tcd(fsl_chan, fsl_desc->its[i].vaddr, src_addr,
-					  dst_addr, fsl_chan->attr, soff,
-					  nbytes, 0, iter, iter, doff, last_sg,
-					  true, true, false);
-		}
+
+		iter = len / nbytes;
 	}
 
-	return __vchan_tx_prep(&fsl_chan->vchan, &fsl_desc->vdesc);
+	if (fsl_edma_drvflags(fsl_chan) & FSL_EDMA_DRV_MEM_REMOTE)
+		fsl_chan->is_remote = true;
+
+	/* To match with copy_align and max_seg_size so 1 tcd is enough */
+	__fsl_edma_fill_tcd(fsl_chan, tcd, src, dst,
+			    fsl_edma_get_tcd_attr(src_bus_width, dst_bus_width),
+			    soff, nbytes, 0, iter, iter, doff, irq, disable_req);
 
-err:
-	vchan_dma_ll_free_desc(&fsl_desc->vdesc);
-	return NULL;
+	return 0;
 }
 
 void fsl_edma_xfer_desc(struct fsl_edma_chan *fsl_chan)
diff --git a/drivers/dma/fsl-edma-common.h b/drivers/dma/fsl-edma-common.h
index 7cba3bc0d39537e675167b42dda644647bf63819..b5bfd3162237bb9dd585bbf91e6f9f73f0376112 100644
--- a/drivers/dma/fsl-edma-common.h
+++ b/drivers/dma/fsl-edma-common.h
@@ -469,14 +469,6 @@ int fsl_edma_slave_config(struct dma_chan *chan,
 				 struct dma_slave_config *cfg);
 enum dma_status fsl_edma_tx_status(struct dma_chan *chan,
 		dma_cookie_t cookie, struct dma_tx_state *txstate);
-struct dma_async_tx_descriptor *fsl_edma_prep_dma_cyclic(
-		struct dma_chan *chan, dma_addr_t dma_addr, size_t buf_len,
-		size_t period_len, enum dma_transfer_direction direction,
-		unsigned long flags);
-struct dma_async_tx_descriptor *fsl_edma_prep_slave_sg(
-		struct dma_chan *chan, struct scatterlist *sgl,
-		unsigned int sg_len, enum dma_transfer_direction direction,
-		unsigned long flags, void *context);
 void fsl_edma_xfer_desc(struct fsl_edma_chan *fsl_chan);
 void fsl_edma_issue_pending(struct dma_chan *chan);
 int fsl_edma_alloc_chan_resources(struct dma_chan *chan);
diff --git a/drivers/dma/fsl-edma-main.c b/drivers/dma/fsl-edma-main.c
index 1724a2d1449fe1850d460cefae5899a5ab828afd..e405aa96e625702673b5fc64e1102b50d18eb894 100644
--- a/drivers/dma/fsl-edma-main.c
+++ b/drivers/dma/fsl-edma-main.c
@@ -848,8 +848,8 @@ static int fsl_edma_probe(struct platform_device *pdev)
 	fsl_edma->dma_dev.device_free_chan_resources
 		= fsl_edma_free_chan_resources;
 	fsl_edma->dma_dev.device_tx_status = fsl_edma_tx_status;
-	fsl_edma->dma_dev.device_prep_slave_sg = fsl_edma_prep_slave_sg;
-	fsl_edma->dma_dev.device_prep_dma_cyclic = fsl_edma_prep_dma_cyclic;
+	fsl_edma->dma_dev.device_prep_slave_sg = vchan_dma_ll_prep_slave_sg;
+	fsl_edma->dma_dev.device_prep_dma_cyclic = vchan_dma_ll_prep_slave_cyclic;
 	fsl_edma->dma_dev.device_prep_dma_memcpy = vchan_dma_ll_prep_memcpy;
 	fsl_edma->dma_dev.device_config = fsl_edma_slave_config;
 	fsl_edma->dma_dev.device_pause = fsl_edma_pause;
diff --git a/drivers/dma/ll-dma.c b/drivers/dma/ll-dma.c
index 66e4222ac528f871c75a508c68895078fa38cf7b..de289e10468b9c0e6ab6c15b1bd49ab2b627e59d 100644
--- a/drivers/dma/ll-dma.c
+++ b/drivers/dma/ll-dma.c
@@ -216,6 +216,118 @@ vchan_dma_ll_prep_memcpy(struct dma_chan *chan,
 }
 EXPORT_SYMBOL_GPL(vchan_dma_ll_prep_memcpy);
 
+static dma_addr_t
+vchan_dma_get_src_addr(struct dma_ll_desc *desc, dma_addr_t addr,
+		       enum dma_transfer_direction dir)
+{
+	if (dir == DMA_DEV_TO_MEM || dir == DMA_DEV_TO_DEV)
+		return desc->src.addr;
+
+	return addr;
+}
+
+static dma_addr_t
+vchan_dma_get_dst_addr(struct dma_ll_desc *desc, dma_addr_t addr,
+		       enum dma_transfer_direction dir)
+{
+	if (dir == DMA_MEM_TO_DEV || dir == DMA_DEV_TO_DEV)
+		return desc->dst.addr;
+
+	return addr;
+}
+
+struct dma_async_tx_descriptor *
+vchan_dma_ll_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+			   unsigned int sg_len, enum dma_transfer_direction dir,
+			   unsigned long flags, void *context)
+{
+	struct virt_dma_chan *vchan = to_virt_chan(chan);
+	struct dma_slave_config *config = &chan->config;
+	struct dma_ll_desc *desc;
+	struct scatterlist *sg;
+	int i, ret;
+
+	if (!is_slave_direction(dir))
+		return NULL;
+
+	desc = vchan_dma_ll_alloc_desc(chan, sg_len, flags);
+	if (!desc)
+		return NULL;
+	desc->iscyclic = false;
+	desc->dir = dir;
+
+	if (vchan_dma_ll_map_slave_addr(chan, desc, dir, config))
+		goto err;
+
+	for_each_sg(sgl, sg, sg_len, i) {
+		dma_addr_t addr = sg_dma_address(sg);
+
+		ret = vchan->ll.ops->set_lli(desc, i,
+					     vchan_dma_get_src_addr(desc, addr, dir),
+					     vchan_dma_get_dst_addr(desc, addr, dir),
+					     sg_dma_len(sg),
+					     i == sg_len - 1, config);
+		if (ret)
+			goto err;
+	}
+
+	return __vchan_tx_prep(vchan, &desc->vdesc);
+
+err:
+	vchan_dma_ll_free_desc(&desc->vdesc);
+	return NULL;
+}
+EXPORT_SYMBOL_GPL(vchan_dma_ll_prep_slave_sg);
+
+struct dma_async_tx_descriptor *
+vchan_dma_ll_prep_slave_cyclic(struct dma_chan *chan, dma_addr_t dma_addr,
+			       size_t buf_len, size_t period_len,
+			       enum dma_transfer_direction dir,
+			       unsigned long flags)
+{
+	struct virt_dma_chan *vchan = to_virt_chan(chan);
+	struct dma_slave_config *config = &chan->config;
+	dma_addr_t addr = dma_addr;
+	struct dma_ll_desc *desc;
+	int n_items;
+	int i, ret;
+
+	if (!is_slave_direction(dir))
+		return NULL;
+
+	n_items = buf_len / period_len;
+	desc = vchan_dma_ll_alloc_desc(chan, n_items, flags);
+	if (!desc)
+		return NULL;
+	desc->iscyclic = true;
+	desc->dir = dir;
+
+	if (vchan_dma_ll_map_slave_addr(chan, desc, dir, config))
+		goto err;
+
+	for (i = 0; i < n_items; i++) {
+		ret = vchan->ll.ops->set_lli(desc, i,
+					     vchan_dma_get_src_addr(desc, addr, dir),
+					     vchan_dma_get_dst_addr(desc, addr, dir),
+					     period_len, true, config);
+		if (ret)
+			goto err;
+
+		addr += period_len;
+	}
+
+	ret = vchan->ll.ops->set_ll_next(desc, n_items - 1, desc->its[0].paddr);
+	if (ret)
+		goto err;
+
+	return __vchan_tx_prep(vchan, &desc->vdesc);
+
+err:
+	vchan_dma_ll_free_desc(&desc->vdesc);
+	return NULL;
+}
+EXPORT_SYMBOL_GPL(vchan_dma_ll_prep_slave_cyclic);
+
 int vchan_dma_ll_terminate_all(struct dma_chan *chan)
 {
 	struct virt_dma_chan *vchan = to_virt_chan(chan);
diff --git a/drivers/dma/mcf-edma-main.c b/drivers/dma/mcf-edma-main.c
index 60c5b928ade74d36c8f4206777921544787f6cd8..6d68dfd97b47c88d2499540b10564b964820b807 100644
--- a/drivers/dma/mcf-edma-main.c
+++ b/drivers/dma/mcf-edma-main.c
@@ -221,8 +221,8 @@ static int mcf_edma_probe(struct platform_device *pdev)
 			fsl_edma_free_chan_resources;
 	mcf_edma->dma_dev.device_config = fsl_edma_slave_config;
 	mcf_edma->dma_dev.device_prep_dma_cyclic =
-			fsl_edma_prep_dma_cyclic;
-	mcf_edma->dma_dev.device_prep_slave_sg = fsl_edma_prep_slave_sg;
+			vchan_dma_ll_prep_slave_cyclic;
+	mcf_edma->dma_dev.device_prep_slave_sg = vchan_dma_ll_prep_slave_sg;
 	mcf_edma->dma_dev.device_tx_status = fsl_edma_tx_status;
 	mcf_edma->dma_dev.device_pause = fsl_edma_pause;
 	mcf_edma->dma_dev.device_resume = fsl_edma_resume;
diff --git a/drivers/dma/virt-dma.h b/drivers/dma/virt-dma.h
index 0a18663dc95f323f7a9bab76f2d730701277371a..d1bb130f0fd798f8ec78cc8f88da3f8d1ae74625 100644
--- a/drivers/dma/virt-dma.h
+++ b/drivers/dma/virt-dma.h
@@ -308,6 +308,15 @@ struct dma_async_tx_descriptor *
 vchan_dma_ll_prep_memcpy(struct dma_chan *chan,
 			 dma_addr_t dma_dst, dma_addr_t dma_src, size_t len,
 			 unsigned long flags);
+struct dma_async_tx_descriptor *
+vchan_dma_ll_prep_slave_cyclic(struct dma_chan *chan, dma_addr_t dma_addr,
+			       size_t buf_len, size_t period_len,
+			       enum dma_transfer_direction dir,
+			       unsigned long flags);
+struct dma_async_tx_descriptor *
+vchan_dma_ll_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+			   unsigned int sg_len, enum dma_transfer_direction dir,
+			   unsigned long flags, void *context);
 void vchan_dma_ll_free_desc(struct virt_dma_desc *vdesc);
 int vchan_dma_ll_terminate_all(struct dma_chan *chan);
 int vchan_dma_ll_map_slave_addr(struct dma_chan *chan, struct dma_ll_desc *desc,

-- 
2.34.1

