public inbox for dmaengine@vger.kernel.org
 help / color / mirror / Atom feed
* [PATCH 0/5] dma: dma-axi-dmac: Add cyclic transfer support and graceful termination
@ 2026-01-27 14:28 Nuno Sá via B4 Relay
  2026-01-27 14:28 ` [PATCH 1/5] dmaengine: Document cyclic transfer for dmaengine_prep_peripheral_dma_vec() Nuno Sá via B4 Relay
                   ` (5 more replies)
  0 siblings, 6 replies; 13+ messages in thread
From: Nuno Sá via B4 Relay @ 2026-01-27 14:28 UTC (permalink / raw)
  To: dmaengine; +Cc: Lars-Peter Clausen, Vinod Koul

This series adds support for cyclic transfers in the .device_prep_peripheral_dma_vec()
callback and implements graceful termination of cyclic transfers using the
DMA_PREP_LOAD_EOT flag. The use of DMA_PREP_REPEAT and DMA_PREP_LOAD_EOT
is based on the discussion in [1].

Currently, the only way to stop a cyclic transfer is through brute force using
.device_terminate_all(), which terminates all pending transfers. This series
introduces a mechanism to gracefully terminate individual cyclic transfers when
a new transfer flagged with DMA_PREP_LOAD_EOT is queued.

We need two different approaches:

1. Software-managed cyclic transfers: These generate EOT (End-Of-Transfer)
   interrupts for each cycle. Hence, termination can be handled directly
   in the interrupt handler when the EOT interrupt fires, making the
   transition to the next transfer straightforward.

2. Hardware-managed cyclic transfers: These are optimized to avoid interrupt
   overhead by suppressing EOT interrupts. Since there are no EOT interrupts,
   termination must be detected at SOF (Start-Of-Frame) when new transfers
   are being considered. The transfer is marked for termination and the
   hardware is configured to end the current cycle gracefully.

For HW-managed cyclic mode, the series handles both scatter-gather and non-SG
variants. With SG support, the last segment's flags are modified to trigger EOT.
Without SG, the CYCLIC flag is cleared to allow natural completion. A workaround
is included for older IP cores (pre-4.6.a) that can prefetch data incorrectly
when clearing the CYCLIC flag, requiring a core disable/enable cycle.

[1]: https://lore.kernel.org/dmaengine/ZhJW9JEqN2wrejvC@matsya/

---
Nuno Sá (5):
      dmaengine: Document cyclic transfer for dmaengine_prep_peripheral_dma_vec()
      dma: dma-axi-dmac: add cyclic transfers in .device_prep_peripheral_dma_vec()
      dma: dma-axi-dmac: add helper for getting next desc
      dma: dma-axi-dmac: Gracefully terminate SW cyclic transfers
      dma: dma-axi-dmac: gracefully terminate HW cyclic transfers

 drivers/dma/dma-axi-dmac.c | 170 +++++++++++++++++++++++++++++++++++++++------
 include/linux/dmaengine.h  |   3 +-
 2 files changed, 151 insertions(+), 22 deletions(-)
---
base-commit: 3c8a86ed002ab8fb287ee4ec92f0fd6ac5b291d2
change-id: 20260126-axi-dac-cyclic-support-a06721b2e107
--

Thanks!
- Nuno Sá



^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH 1/5] dmaengine: Document cyclic transfer for dmaengine_prep_peripheral_dma_vec()
  2026-01-27 14:28 [PATCH 0/5] dma: dma-axi-dmac: Add cyclic transfer support and graceful termination Nuno Sá via B4 Relay
@ 2026-01-27 14:28 ` Nuno Sá via B4 Relay
  2026-02-25 16:50   ` Frank Li
  2026-01-27 14:28 ` [PATCH 2/5] dma: dma-axi-dmac: add cyclic transfers in .device_prep_peripheral_dma_vec() Nuno Sá via B4 Relay
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 13+ messages in thread
From: Nuno Sá via B4 Relay @ 2026-01-27 14:28 UTC (permalink / raw)
  To: dmaengine; +Cc: Lars-Peter Clausen, Vinod Koul

From: Nuno Sá <nuno.sa@analog.com>

Document that the DMA_PREP_REPEAT flag can be used with
dmaengine_prep_peripheral_dma_vec() to mark a transfer as cyclic,
similar to dmaengine_prep_dma_cyclic().

Signed-off-by: Nuno Sá <nuno.sa@analog.com>
---
 include/linux/dmaengine.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index 99efe2b9b4ea..b3d251c9734e 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -996,7 +996,8 @@ static inline struct dma_async_tx_descriptor *dmaengine_prep_slave_single(
  * @vecs: The array of DMA vectors that should be transferred
  * @nents: The number of DMA vectors in the array
  * @dir: Specifies the direction of the data transfer
- * @flags: DMA engine flags
+ * @flags: DMA engine flags - DMA_PREP_REPEAT can be used to mark a cyclic
+ *         DMA transfer
  */
 static inline struct dma_async_tx_descriptor *dmaengine_prep_peripheral_dma_vec(
 	struct dma_chan *chan, const struct dma_vec *vecs, size_t nents,

-- 
2.52.0



^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 2/5] dma: dma-axi-dmac: add cyclic transfers in .device_prep_peripheral_dma_vec()
  2026-01-27 14:28 [PATCH 0/5] dma: dma-axi-dmac: Add cyclic transfer support and graceful termination Nuno Sá via B4 Relay
  2026-01-27 14:28 ` [PATCH 1/5] dmaengine: Document cyclic transfer for dmaengine_prep_peripheral_dma_vec() Nuno Sá via B4 Relay
@ 2026-01-27 14:28 ` Nuno Sá via B4 Relay
  2026-02-25 16:51   ` Frank Li
  2026-01-27 14:28 ` [PATCH 3/5] dma: dma-axi-dmac: add helper for getting next desc Nuno Sá via B4 Relay
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 13+ messages in thread
From: Nuno Sá via B4 Relay @ 2026-01-27 14:28 UTC (permalink / raw)
  To: dmaengine; +Cc: Lars-Peter Clausen, Vinod Koul

From: Nuno Sá <nuno.sa@analog.com>

Add support for cyclic transfers by checking the DMA_PREP_REPEAT
flag. If it is set, close the loop and clear the "last" flag for the
final segment (the same as for .device_prep_dma_cyclic()).

Signed-off-by: Nuno Sá <nuno.sa@analog.com>
---
 drivers/dma/dma-axi-dmac.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/dma/dma-axi-dmac.c b/drivers/dma/dma-axi-dmac.c
index f5caf75dc0e7..b083b6176593 100644
--- a/drivers/dma/dma-axi-dmac.c
+++ b/drivers/dma/dma-axi-dmac.c
@@ -657,7 +657,12 @@ axi_dmac_prep_peripheral_dma_vec(struct dma_chan *c, const struct dma_vec *vecs,
 					      vecs[i].len, dsg);
 	}
 
-	desc->cyclic = false;
+	desc->cyclic = flags & DMA_PREP_REPEAT;
+	if (desc->cyclic) {
+		/* Chain the last descriptor to the first, and remove its "last" flag */
+		desc->sg[num_sgs - 1].hw->flags &= ~AXI_DMAC_HW_FLAG_LAST;
+		desc->sg[num_sgs - 1].hw->next_sg_addr = desc->sg[0].hw_phys;
+	}
 
 	return vchan_tx_prep(&chan->vchan, &desc->vdesc, flags);
 }

-- 
2.52.0



^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 3/5] dma: dma-axi-dmac: add helper for getting next desc
  2026-01-27 14:28 [PATCH 0/5] dma: dma-axi-dmac: Add cyclic transfer support and graceful termination Nuno Sá via B4 Relay
  2026-01-27 14:28 ` [PATCH 1/5] dmaengine: Document cyclic transfer for dmaengine_prep_peripheral_dma_vec() Nuno Sá via B4 Relay
  2026-01-27 14:28 ` [PATCH 2/5] dma: dma-axi-dmac: add cyclic transfers in .device_prep_peripheral_dma_vec() Nuno Sá via B4 Relay
@ 2026-01-27 14:28 ` Nuno Sá via B4 Relay
  2026-02-25 16:54   ` Frank Li
  2026-01-27 14:28 ` [PATCH 4/5] dma: dma-axi-dmac: Gracefully terminate SW cyclic transfers Nuno Sá via B4 Relay
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 13+ messages in thread
From: Nuno Sá via B4 Relay @ 2026-01-27 14:28 UTC (permalink / raw)
  To: dmaengine; +Cc: Lars-Peter Clausen, Vinod Koul

From: Nuno Sá <nuno.sa@analog.com>

Add a new helper for getting the next valid struct axi_dmac_desc. This
will be extended in follow-up patches to support graceful termination
of cyclic transfers.

Signed-off-by: Nuno Sá <nuno.sa@analog.com>
---
 drivers/dma/dma-axi-dmac.c | 33 +++++++++++++++++++++++----------
 1 file changed, 23 insertions(+), 10 deletions(-)

diff --git a/drivers/dma/dma-axi-dmac.c b/drivers/dma/dma-axi-dmac.c
index b083b6176593..3984236717a6 100644
--- a/drivers/dma/dma-axi-dmac.c
+++ b/drivers/dma/dma-axi-dmac.c
@@ -227,10 +227,29 @@ static bool axi_dmac_check_addr(struct axi_dmac_chan *chan, dma_addr_t addr)
 	return true;
 }
 
+static struct axi_dmac_desc *axi_dmac_get_next_desc(struct axi_dmac *dmac,
+						    struct axi_dmac_chan *chan)
+{
+	struct virt_dma_desc *vdesc;
+	struct axi_dmac_desc *desc;
+
+	if (chan->next_desc)
+		return chan->next_desc;
+
+	vdesc = vchan_next_desc(&chan->vchan);
+	if (!vdesc)
+		return NULL;
+
+	list_move_tail(&vdesc->node, &chan->active_descs);
+	desc = to_axi_dmac_desc(vdesc);
+	chan->next_desc = desc;
+
+	return desc;
+}
+
 static void axi_dmac_start_transfer(struct axi_dmac_chan *chan)
 {
 	struct axi_dmac *dmac = chan_to_axi_dmac(chan);
-	struct virt_dma_desc *vdesc;
 	struct axi_dmac_desc *desc;
 	struct axi_dmac_sg *sg;
 	unsigned int flags = 0;
@@ -240,16 +259,10 @@ static void axi_dmac_start_transfer(struct axi_dmac_chan *chan)
 	if (val) /* Queue is full, wait for the next SOT IRQ */
 		return;
 
-	desc = chan->next_desc;
+	desc = axi_dmac_get_next_desc(dmac, chan);
+	if (!desc)
+		return;
 
-	if (!desc) {
-		vdesc = vchan_next_desc(&chan->vchan);
-		if (!vdesc)
-			return;
-		list_move_tail(&vdesc->node, &chan->active_descs);
-		desc = to_axi_dmac_desc(vdesc);
-		chan->next_desc = desc;
-	}
 	sg = &desc->sg[desc->num_submitted];
 
 	/* Already queued in cyclic mode. Wait for it to finish */

-- 
2.52.0



^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 4/5] dma: dma-axi-dmac: Gracefully terminate SW cyclic transfers
  2026-01-27 14:28 [PATCH 0/5] dma: dma-axi-dmac: Add cyclic transfer support and graceful termination Nuno Sá via B4 Relay
                   ` (2 preceding siblings ...)
  2026-01-27 14:28 ` [PATCH 3/5] dma: dma-axi-dmac: add helper for getting next desc Nuno Sá via B4 Relay
@ 2026-01-27 14:28 ` Nuno Sá via B4 Relay
  2026-02-25 16:57   ` Frank Li
  2026-01-27 14:28 ` [PATCH 5/5] dma: dma-axi-dmac: gracefully terminate HW " Nuno Sá via B4 Relay
  2026-02-27  2:17 ` [PATCH 0/5] dma: dma-axi-dmac: Add cyclic transfer support and graceful termination Vinod Koul
  5 siblings, 1 reply; 13+ messages in thread
From: Nuno Sá via B4 Relay @ 2026-01-27 14:28 UTC (permalink / raw)
  To: dmaengine; +Cc: Lars-Peter Clausen, Vinod Koul

From: Nuno Sá <nuno.sa@analog.com>

As of now, terminating a cyclic transfer pretty much requires brute
force: .device_terminate_all() kills all pending transfers. With this
change, when a cyclic transfer completes a cycle, check whether any
pending transfer has the DMA_PREP_LOAD_EOT flag set. If so, terminate
the cyclic transfer and prepare to start the next one; otherwise,
discard the pending transfer.

Signed-off-by: Nuno Sá <nuno.sa@analog.com>
---
 drivers/dma/dma-axi-dmac.c | 34 +++++++++++++++++++++++++++++++++-
 1 file changed, 33 insertions(+), 1 deletion(-)

diff --git a/drivers/dma/dma-axi-dmac.c b/drivers/dma/dma-axi-dmac.c
index 3984236717a6..638625647152 100644
--- a/drivers/dma/dma-axi-dmac.c
+++ b/drivers/dma/dma-axi-dmac.c
@@ -233,6 +233,11 @@ static struct axi_dmac_desc *axi_dmac_get_next_desc(struct axi_dmac *dmac,
 	struct virt_dma_desc *vdesc;
 	struct axi_dmac_desc *desc;
 
+	/*
+	 * A non-NULL next_desc means a SW cyclic transfer is in flight, so
+	 * just return the same descriptor. SW cyclic transfer termination
+	 * is handled in axi_dmac_transfer_done().
+	 */
 	if (chan->next_desc)
 		return chan->next_desc;
 
@@ -411,6 +416,32 @@ static void axi_dmac_compute_residue(struct axi_dmac_chan *chan,
 	}
 }
 
+static bool axi_dmac_handle_cyclic_eot(struct axi_dmac_chan *chan,
+				       struct axi_dmac_desc *active)
+{
+	struct device *dev = chan_to_axi_dmac(chan)->dma_dev.dev;
+	struct virt_dma_desc *vdesc;
+
+	/* wrap around */
+	active->num_completed = 0;
+
+	vdesc = vchan_next_desc(&chan->vchan);
+	if (!vdesc)
+		return false;
+	if (!(vdesc->tx.flags & DMA_PREP_LOAD_EOT)) {
+		dev_warn(dev, "Discarding non EOT transfer after cyclic\n");
+		list_del(&vdesc->node);
+		return false;
+	}
+
+	/* then let's end the cyclic transfer */
+	chan->next_desc = NULL;
+	list_del(&active->vdesc.node);
+	vchan_cookie_complete(&active->vdesc);
+
+	return true;
+}
+
 static bool axi_dmac_transfer_done(struct axi_dmac_chan *chan,
 	unsigned int completed_transfers)
 {
@@ -458,7 +489,8 @@ static bool axi_dmac_transfer_done(struct axi_dmac_chan *chan,
 			if (active->num_completed == active->num_sgs ||
 			    sg->partial_len) {
 				if (active->cyclic) {
-					active->num_completed = 0; /* wrap around */
+					/* keep start_next as is, if already true... */
+					start_next |= axi_dmac_handle_cyclic_eot(chan, active);
 				} else {
 					list_del(&active->vdesc.node);
 					vchan_cookie_complete(&active->vdesc);

-- 
2.52.0



^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 5/5] dma: dma-axi-dmac: gracefully terminate HW cyclic transfers
  2026-01-27 14:28 [PATCH 0/5] dma: dma-axi-dmac: Add cyclic transfer support and graceful termination Nuno Sá via B4 Relay
                   ` (3 preceding siblings ...)
  2026-01-27 14:28 ` [PATCH 4/5] dma: dma-axi-dmac: Gracefully terminate SW cyclic transfers Nuno Sá via B4 Relay
@ 2026-01-27 14:28 ` Nuno Sá via B4 Relay
  2026-02-27  2:17 ` [PATCH 0/5] dma: dma-axi-dmac: Add cyclic transfer support and graceful termination Vinod Koul
  5 siblings, 0 replies; 13+ messages in thread
From: Nuno Sá via B4 Relay @ 2026-01-27 14:28 UTC (permalink / raw)
  To: dmaengine; +Cc: Lars-Peter Clausen, Vinod Koul

From: Nuno Sá <nuno.sa@analog.com>

Add support for gracefully terminating hardware cyclic DMA transfers when
a new transfer is queued and flagged with DMA_PREP_LOAD_EOT. Without
this, cyclic transfers continue indefinitely until forcefully stopped
with .device_terminate_all().

When a new descriptor is queued while a cyclic transfer is active, mark
the cyclic transfer for termination. For hardware with scatter-gather
support, modify the last segment flags to trigger end-of-transfer. For
non-SG hardware, clear the CYCLIC flag to allow natural completion.

Older IP core versions (pre-4.6.a) can prefetch data when clearing the
CYCLIC flag, causing corruption in the next transfer. Work around this
by disabling and re-enabling the core to flush prefetched data.

The cyclic_eot flag tracks transfers marked for termination, preventing
new transfers from starting until the cyclic one completes. Non-EOT
transfers submitted after cyclic transfers are discarded with a warning.

Also note that for hardware cyclic transfers not using SG, chan->next_desc
must also be set to NULL (so possible EOT transfers can be looked at).
The queue-full check also needs to move after axi_dmac_get_next_desc(),
because with hardware-based cyclic transfers the queue can be reported
as full, which would otherwise prevent checking for cyclic termination.

Signed-off-by: Nuno Sá <nuno.sa@analog.com>
---
 drivers/dma/dma-axi-dmac.c | 104 +++++++++++++++++++++++++++++++++++++++------
 1 file changed, 91 insertions(+), 13 deletions(-)

diff --git a/drivers/dma/dma-axi-dmac.c b/drivers/dma/dma-axi-dmac.c
index 638625647152..731fa59dc3e7 100644
--- a/drivers/dma/dma-axi-dmac.c
+++ b/drivers/dma/dma-axi-dmac.c
@@ -134,6 +134,7 @@ struct axi_dmac_desc {
 	struct axi_dmac_chan *chan;
 
 	bool cyclic;
+	bool cyclic_eot;
 	bool have_partial_xfer;
 
 	unsigned int num_submitted;
@@ -162,6 +163,7 @@ struct axi_dmac_chan {
 	bool hw_cyclic;
 	bool hw_2d;
 	bool hw_sg;
+	bool hw_cyclic_hotfix;
 };
 
 struct axi_dmac {
@@ -227,11 +229,26 @@ static bool axi_dmac_check_addr(struct axi_dmac_chan *chan, dma_addr_t addr)
 	return true;
 }
 
+static struct axi_dmac_desc *axi_dmac_active_desc(struct axi_dmac_chan *chan)
+{
+	return list_first_entry_or_null(&chan->active_descs,
+					struct axi_dmac_desc, vdesc.node);
+}
+
 static struct axi_dmac_desc *axi_dmac_get_next_desc(struct axi_dmac *dmac,
 						    struct axi_dmac_chan *chan)
 {
+	struct axi_dmac_desc *active = axi_dmac_active_desc(chan);
 	struct virt_dma_desc *vdesc;
 	struct axi_dmac_desc *desc;
+	unsigned int val;
+
+	/*
+	 * Just play it safe and ignore any SOF if there's an active cyclic
+	 * transfer flagged to end. The next transfer starts once it ends.
+	 */
+	if (active && active->cyclic_eot)
+		return NULL;
 
 	/*
 	 * It means a SW cyclic transfer is in place so we should just return
@@ -245,11 +262,43 @@ static struct axi_dmac_desc *axi_dmac_get_next_desc(struct axi_dmac *dmac,
 	if (!vdesc)
 		return NULL;
 
+	if (active && active->cyclic && !(vdesc->tx.flags & DMA_PREP_LOAD_EOT)) {
+		struct device *dev = chan_to_axi_dmac(chan)->dma_dev.dev;
+
+		dev_warn(dev, "Discarding non EOT transfer after cyclic\n");
+		list_del(&vdesc->node);
+		return NULL;
+	}
+
 	list_move_tail(&vdesc->node, &chan->active_descs);
 	desc = to_axi_dmac_desc(vdesc);
 	chan->next_desc = desc;
 
-	return desc;
+	if (!active || !active->cyclic)
+		return desc;
+
+	active->cyclic_eot = true;
+
+	if (chan->hw_sg) {
+		unsigned long flags = AXI_DMAC_HW_FLAG_IRQ | AXI_DMAC_HW_FLAG_LAST;
+		/*
+		 * Let's then stop the current cyclic transfer by making sure we
+		 * get an EOT interrupt and opening the cyclic loop by marking
+		 * the last segment.
+		 */
+		active->sg[active->num_sgs - 1].hw->flags = flags;
+		return NULL;
+	}
+
+	/*
+	 * Clear the cyclic bit if there's no Scatter-Gather HW so that we get
+	 * an EOT interrupt at the end of the transfer.
+	 */
+	val = axi_dmac_read(dmac, AXI_DMAC_REG_FLAGS);
+	val &= ~AXI_DMAC_FLAG_CYCLIC;
+	axi_dmac_write(dmac, AXI_DMAC_REG_FLAGS, val);
+
+	return NULL;
 }
 
 static void axi_dmac_start_transfer(struct axi_dmac_chan *chan)
@@ -260,14 +309,14 @@ static void axi_dmac_start_transfer(struct axi_dmac_chan *chan)
 	unsigned int flags = 0;
 	unsigned int val;
 
-	val = axi_dmac_read(dmac, AXI_DMAC_REG_START_TRANSFER);
-	if (val) /* Queue is full, wait for the next SOT IRQ */
-		return;
-
 	desc = axi_dmac_get_next_desc(dmac, chan);
 	if (!desc)
 		return;
 
+	val = axi_dmac_read(dmac, AXI_DMAC_REG_START_TRANSFER);
+	if (val) /* Queue is full, wait for the next SOT IRQ */
+		return;
+
 	sg = &desc->sg[desc->num_submitted];
 
 	/* Already queued in cyclic mode. Wait for it to finish */
@@ -309,10 +358,12 @@ static void axi_dmac_start_transfer(struct axi_dmac_chan *chan)
 	 * call, enable hw cyclic mode to avoid unnecessary interrupts.
 	 */
 	if (chan->hw_cyclic && desc->cyclic && !desc->vdesc.tx.callback) {
-		if (chan->hw_sg)
+		if (chan->hw_sg) {
 			desc->sg[desc->num_sgs - 1].hw->flags &= ~AXI_DMAC_HW_FLAG_IRQ;
-		else if (desc->num_sgs == 1)
+		} else if (desc->num_sgs == 1) {
+			chan->next_desc = NULL;
 			flags |= AXI_DMAC_FLAG_CYCLIC;
+		}
 	}
 
 	if (chan->hw_partial_xfer)
@@ -330,12 +381,6 @@ static void axi_dmac_start_transfer(struct axi_dmac_chan *chan)
 	axi_dmac_write(dmac, AXI_DMAC_REG_START_TRANSFER, 1);
 }
 
-static struct axi_dmac_desc *axi_dmac_active_desc(struct axi_dmac_chan *chan)
-{
-	return list_first_entry_or_null(&chan->active_descs,
-		struct axi_dmac_desc, vdesc.node);
-}
-
 static inline unsigned int axi_dmac_total_sg_bytes(struct axi_dmac_chan *chan,
 	struct axi_dmac_sg *sg)
 {
@@ -425,6 +470,35 @@ static bool axi_dmac_handle_cyclic_eot(struct axi_dmac_chan *chan,
 	/* wrap around */
 	active->num_completed = 0;
 
+	if (active->cyclic_eot) {
+		/*
+		 * It means an HW cyclic transfer was marked to stop. And we
+		 * know we have something to schedule, so start the next
+		 * transfer now that the cyclic one is done.
+		 */
+		list_del(&active->vdesc.node);
+		vchan_cookie_complete(&active->vdesc);
+
+		if (chan->hw_cyclic_hotfix) {
+			struct axi_dmac *dmac = chan_to_axi_dmac(chan);
+			/*
+			 * In older IP cores, ending a cyclic transfer by clearing
+			 * the CYCLIC flag does not guarantee a graceful end.
+			 * It can happen that some data (of the next frame) is
+			 * already prefetched and will be wrongly visible in the
+			 * next transfer. To work around this, we need to re-enable
+			 * the core so everything is flushed. Newer cores handle
+			 * this correctly and do not require this "hotfix". The
+			 * SG IP also does not require this.
+			 */
+			dev_dbg(dev, "HW cyclic hotfix\n");
+			axi_dmac_write(dmac, AXI_DMAC_REG_CTRL, 0);
+			axi_dmac_write(dmac, AXI_DMAC_REG_CTRL, AXI_DMAC_CTRL_ENABLE);
+		}
+
+		return true;
+	}
+
 	vdesc = vchan_next_desc(&chan->vchan);
 	if (!vdesc)
 		return false;
@@ -460,6 +534,7 @@ static bool axi_dmac_transfer_done(struct axi_dmac_chan *chan,
 	if (chan->hw_sg) {
 		if (active->cyclic) {
 			vchan_cyclic_callback(&active->vdesc);
+			start_next = axi_dmac_handle_cyclic_eot(chan, active);
 		} else {
 			list_del(&active->vdesc.node);
 			vchan_cookie_complete(&active->vdesc);
@@ -1103,6 +1178,9 @@ static int axi_dmac_detect_caps(struct axi_dmac *dmac, unsigned int version)
 		chan->length_align_mask = chan->address_align_mask;
 	}
 
+	if (version < ADI_AXI_PCORE_VER(4, 6, 'a') && !chan->hw_sg)
+		chan->hw_cyclic_hotfix = true;
+
 	return 0;
 }
 

-- 
2.52.0



^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH 1/5] dmaengine: Document cyclic transfer for dmaengine_prep_peripheral_dma_vec()
  2026-01-27 14:28 ` [PATCH 1/5] dmaengine: Document cyclic transfer for dmaengine_prep_peripheral_dma_vec() Nuno Sá via B4 Relay
@ 2026-02-25 16:50   ` Frank Li
  0 siblings, 0 replies; 13+ messages in thread
From: Frank Li @ 2026-02-25 16:50 UTC (permalink / raw)
  To: Nuno Sá; +Cc: dmaengine, Lars-Peter Clausen, Vinod Koul

On Tue, Jan 27, 2026 at 02:28:22PM +0000, Nuno Sá wrote:
> Document that the DMA_PREP_REPEAT flag can be used with the
> dmaengine_prep_peripheral_dma_vec() to mark a transfer as cyclic similar
> to dmaengine_prep_dma_cyclic().
>
> Signed-off-by: Nuno Sá <nuno.sa@analog.com>
> ---
Reviewed-by: Frank Li <Frank.Li@nxp.com>
>  include/linux/dmaengine.h | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
> index 99efe2b9b4ea..b3d251c9734e 100644
> --- a/include/linux/dmaengine.h
> +++ b/include/linux/dmaengine.h
> @@ -996,7 +996,8 @@ static inline struct dma_async_tx_descriptor *dmaengine_prep_slave_single(
>   * @vecs: The array of DMA vectors that should be transferred
>   * @nents: The number of DMA vectors in the array
>   * @dir: Specifies the direction of the data transfer
> - * @flags: DMA engine flags
> + * @flags: DMA engine flags - DMA_PREP_REPEAT can be used to mark a cyclic
> + *         DMA transfer
>   */
>  static inline struct dma_async_tx_descriptor *dmaengine_prep_peripheral_dma_vec(
>  	struct dma_chan *chan, const struct dma_vec *vecs, size_t nents,
>
> --
> 2.52.0
>

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 2/5] dma: dma-axi-dmac: add cyclic transfers in .device_prep_peripheral_dma_vec()
  2026-01-27 14:28 ` [PATCH 2/5] dma: dma-axi-dmac: add cyclic transfers in .device_prep_peripheral_dma_vec() Nuno Sá via B4 Relay
@ 2026-02-25 16:51   ` Frank Li
  0 siblings, 0 replies; 13+ messages in thread
From: Frank Li @ 2026-02-25 16:51 UTC (permalink / raw)
  To: Nuno Sá; +Cc: dmaengine, Lars-Peter Clausen, Vinod Koul

On Tue, Jan 27, 2026 at 02:28:23PM +0000, Nuno Sá wrote:
> Add support for cyclic transfers by checking the DMA_PREP_REPEAT
> flag. If we do see it, close the loop and clear the flag for the last
> segment (the same as we do for .device_prep_dma_cyclic().
>
> Signed-off-by: Nuno Sá <nuno.sa@analog.com>
> ---
Reviewed-by: Frank Li <Frank.Li@nxp.com>
>  drivers/dma/dma-axi-dmac.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/dma/dma-axi-dmac.c b/drivers/dma/dma-axi-dmac.c
> index f5caf75dc0e7..b083b6176593 100644
> --- a/drivers/dma/dma-axi-dmac.c
> +++ b/drivers/dma/dma-axi-dmac.c
> @@ -657,7 +657,12 @@ axi_dmac_prep_peripheral_dma_vec(struct dma_chan *c, const struct dma_vec *vecs,
>  					      vecs[i].len, dsg);
>  	}
>
> -	desc->cyclic = false;
> +	desc->cyclic = flags & DMA_PREP_REPEAT;
> +	if (desc->cyclic) {
> +		/* Chain the last descriptor to the first, and remove its "last" flag */
> +		desc->sg[num_sgs - 1].hw->flags &= ~AXI_DMAC_HW_FLAG_LAST;
> +		desc->sg[num_sgs - 1].hw->next_sg_addr = desc->sg[0].hw_phys;
> +	}
>
>  	return vchan_tx_prep(&chan->vchan, &desc->vdesc, flags);
>  }
>
> --
> 2.52.0
>

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 3/5] dma: dma-axi-dmac: add helper for getting next desc
  2026-01-27 14:28 ` [PATCH 3/5] dma: dma-axi-dmac: add helper for getting next desc Nuno Sá via B4 Relay
@ 2026-02-25 16:54   ` Frank Li
  0 siblings, 0 replies; 13+ messages in thread
From: Frank Li @ 2026-02-25 16:54 UTC (permalink / raw)
  To: Nuno Sá; +Cc: dmaengine, Lars-Peter Clausen, Vinod Koul

On Tue, Jan 27, 2026 at 02:28:24PM +0000, Nuno Sá wrote:
> Add a new helper for getting the next valid struct axi_dmac_desc. This
> will be extended in follow up patches to support to gracefully terminate
> cyclic transfers.
>
> Signed-off-by: Nuno Sá <nuno.sa@analog.com>
> ---
>  drivers/dma/dma-axi-dmac.c | 33 +++++++++++++++++++++++----------
>  1 file changed, 23 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/dma/dma-axi-dmac.c b/drivers/dma/dma-axi-dmac.c
> index b083b6176593..3984236717a6 100644
> --- a/drivers/dma/dma-axi-dmac.c
> +++ b/drivers/dma/dma-axi-dmac.c
> @@ -227,10 +227,29 @@ static bool axi_dmac_check_addr(struct axi_dmac_chan *chan, dma_addr_t addr)
>  	return true;
>  }
>
> +static struct axi_dmac_desc *axi_dmac_get_next_desc(struct axi_dmac *dmac,
> +						    struct axi_dmac_chan *chan)

Nit: if need respin,

static struct axi_dmac_desc *
axi_dmac_get_next_desc(struct axi_dmac *dmac, struct axi_dmac_chan *chan)

This is also fine.

Reviewed-by: Frank Li <Frank.Li@nxp.com>
> +{
> +	struct virt_dma_desc *vdesc;
> +	struct axi_dmac_desc *desc;
> +
> +	if (chan->next_desc)
> +		return chan->next_desc;
> +
> +	vdesc = vchan_next_desc(&chan->vchan);
> +	if (!vdesc)
> +		return NULL;
> +
> +	list_move_tail(&vdesc->node, &chan->active_descs);
> +	desc = to_axi_dmac_desc(vdesc);
> +	chan->next_desc = desc;
> +
> +	return desc;
> +}
> +
>  static void axi_dmac_start_transfer(struct axi_dmac_chan *chan)
>  {
>  	struct axi_dmac *dmac = chan_to_axi_dmac(chan);
> -	struct virt_dma_desc *vdesc;
>  	struct axi_dmac_desc *desc;
>  	struct axi_dmac_sg *sg;
>  	unsigned int flags = 0;
> @@ -240,16 +259,10 @@ static void axi_dmac_start_transfer(struct axi_dmac_chan *chan)
>  	if (val) /* Queue is full, wait for the next SOT IRQ */
>  		return;
>
> -	desc = chan->next_desc;
> +	desc = axi_dmac_get_next_desc(dmac, chan);
> +	if (!desc)
> +		return;
>
> -	if (!desc) {
> -		vdesc = vchan_next_desc(&chan->vchan);
> -		if (!vdesc)
> -			return;
> -		list_move_tail(&vdesc->node, &chan->active_descs);
> -		desc = to_axi_dmac_desc(vdesc);
> -		chan->next_desc = desc;
> -	}
>  	sg = &desc->sg[desc->num_submitted];
>
>  	/* Already queued in cyclic mode. Wait for it to finish */
>
> --
> 2.52.0
>

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 4/5] dma: dma-axi-dmac: Gracefully terminate SW cyclic transfers
  2026-01-27 14:28 ` [PATCH 4/5] dma: dma-axi-dmac: Gracefully terminate SW cyclic transfers Nuno Sá via B4 Relay
@ 2026-02-25 16:57   ` Frank Li
  2026-02-26  9:35     ` Nuno Sá
  0 siblings, 1 reply; 13+ messages in thread
From: Frank Li @ 2026-02-25 16:57 UTC (permalink / raw)
  To: Nuno Sá; +Cc: dmaengine, Lars-Peter Clausen, Vinod Koul

On Tue, Jan 27, 2026 at 02:28:25PM +0000, Nuno Sá wrote:
> As of now, to terminate a cyclic transfer, one pretty much needs to use
> brute force and terminate all transfers with .device_terminate_all().
> With this change, when a cyclic transfer terminates we look and see if
> we have any pending transfer with the DMA_PREP_LOAD_EOT flag set. If
> we do, we terminate the cyclic transfer and prepare to start the next
> one. If we don't see the flag we'll ignore that transfer.

Can you rephrase it to avoid using "we"?

for example,

Ignore that transfer if flag not set.

Frank

>
> Signed-off-by: Nuno Sá <nuno.sa@analog.com>
> ---
>  drivers/dma/dma-axi-dmac.c | 34 +++++++++++++++++++++++++++++++++-
>  1 file changed, 33 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/dma/dma-axi-dmac.c b/drivers/dma/dma-axi-dmac.c
> index 3984236717a6..638625647152 100644
> --- a/drivers/dma/dma-axi-dmac.c
> +++ b/drivers/dma/dma-axi-dmac.c
> @@ -233,6 +233,11 @@ static struct axi_dmac_desc *axi_dmac_get_next_desc(struct axi_dmac *dmac,
>  	struct virt_dma_desc *vdesc;
>  	struct axi_dmac_desc *desc;
>
> +	/*
> +	 * It means a SW cyclic transfer is in place so we should just return
> +	 * the same descriptor. SW cyclic transfer termination is handled
> +	 * in axi_dmac_transfer_done().
> +	 */
>  	if (chan->next_desc)
>  		return chan->next_desc;
>
> @@ -411,6 +416,32 @@ static void axi_dmac_compute_residue(struct axi_dmac_chan *chan,
>  	}
>  }
>
> +static bool axi_dmac_handle_cyclic_eot(struct axi_dmac_chan *chan,
> +				       struct axi_dmac_desc *active)
> +{
> +	struct device *dev = chan_to_axi_dmac(chan)->dma_dev.dev;
> +	struct virt_dma_desc *vdesc;
> +
> +	/* wrap around */
> +	active->num_completed = 0;
> +
> +	vdesc = vchan_next_desc(&chan->vchan);
> +	if (!vdesc)
> +		return false;
> +	if (!(vdesc->tx.flags & DMA_PREP_LOAD_EOT)) {
> +		dev_warn(dev, "Discarding non EOT transfer after cyclic\n");
> +		list_del(&vdesc->node);
> +		return false;
> +	}
> +
> +	/* then let's end the cyclic transfer */
> +	chan->next_desc = NULL;
> +	list_del(&active->vdesc.node);
> +	vchan_cookie_complete(&active->vdesc);
> +
> +	return true;
> +}
> +
>  static bool axi_dmac_transfer_done(struct axi_dmac_chan *chan,
>  	unsigned int completed_transfers)
>  {
> @@ -458,7 +489,8 @@ static bool axi_dmac_transfer_done(struct axi_dmac_chan *chan,
>  			if (active->num_completed == active->num_sgs ||
>  			    sg->partial_len) {
>  				if (active->cyclic) {
> -					active->num_completed = 0; /* wrap around */
> +					/* keep start_next as is, if already true... */
> +					start_next |= axi_dmac_handle_cyclic_eot(chan, active);
>  				} else {
>  					list_del(&active->vdesc.node);
>  					vchan_cookie_complete(&active->vdesc);
>
> --
> 2.52.0
>

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 4/5] dma: dma-axi-dmac: Gracefully terminate SW cyclic transfers
  2026-02-25 16:57   ` Frank Li
@ 2026-02-26  9:35     ` Nuno Sá
  0 siblings, 0 replies; 13+ messages in thread
From: Nuno Sá @ 2026-02-26  9:35 UTC (permalink / raw)
  To: Frank Li, Nuno Sá; +Cc: dmaengine, Lars-Peter Clausen, Vinod Koul

On Wed, 2026-02-25 at 11:57 -0500, Frank Li wrote:
> On Tue, Jan 27, 2026 at 02:28:25PM +0000, Nuno Sá wrote:
> > As of now, to terminate a cyclic transfer, one pretty much needs to use
> > brute force and terminate all transfers with .device_terminate_all().
> > With this change, when a cyclic transfer terminates we look and see if
> > we have any pending transfer with the DMA_PREP_LOAD_EOT flag set. If
> > we do, we terminate the cyclic transfer and prepare to start the next
> > one. If we don't see the flag we'll ignore that transfer.
> 
> Can you rephrase it to avoid use "we".
> 

Sure!

Thanks for looking at this.

- Nuno Sá

> for example,
> 
> Ignore that transfer if flag not set.
> 
> Frank
> 
> > 
> > Signed-off-by: Nuno Sá <nuno.sa@analog.com>
> > ---
> >  drivers/dma/dma-axi-dmac.c | 34 +++++++++++++++++++++++++++++++++-
> >  1 file changed, 33 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/dma/dma-axi-dmac.c b/drivers/dma/dma-axi-dmac.c
> > index 3984236717a6..638625647152 100644
> > --- a/drivers/dma/dma-axi-dmac.c
> > +++ b/drivers/dma/dma-axi-dmac.c
> > @@ -233,6 +233,11 @@ static struct axi_dmac_desc *axi_dmac_get_next_desc(struct axi_dmac *dmac,
> >  	struct virt_dma_desc *vdesc;
> >  	struct axi_dmac_desc *desc;
> > 
> > +	/*
> > +	 * It means a SW cyclic transfer is in place so we should just return
> > +	 * the same descriptor. SW cyclic transfer termination is handled
> > +	 * in axi_dmac_transfer_done().
> > +	 */
> >  	if (chan->next_desc)
> >  		return chan->next_desc;
> > 
> > @@ -411,6 +416,32 @@ static void axi_dmac_compute_residue(struct axi_dmac_chan *chan,
> >  	}
> >  }
> > 
> > +static bool axi_dmac_handle_cyclic_eot(struct axi_dmac_chan *chan,
> > +				       struct axi_dmac_desc *active)
> > +{
> > +	struct device *dev = chan_to_axi_dmac(chan)->dma_dev.dev;
> > +	struct virt_dma_desc *vdesc;
> > +
> > +	/* wrap around */
> > +	active->num_completed = 0;
> > +
> > +	vdesc = vchan_next_desc(&chan->vchan);
> > +	if (!vdesc)
> > +		return false;
> > +	if (!(vdesc->tx.flags & DMA_PREP_LOAD_EOT)) {
> > +		dev_warn(dev, "Discarding non EOT transfer after cyclic\n");
> > +		list_del(&vdesc->node);
> > +		return false;
> > +	}
> > +
> > +	/* then let's end the cyclic transfer */
> > +	chan->next_desc = NULL;
> > +	list_del(&active->vdesc.node);
> > +	vchan_cookie_complete(&active->vdesc);
> > +
> > +	return true;
> > +}
> > +
> >  static bool axi_dmac_transfer_done(struct axi_dmac_chan *chan,
> >  	unsigned int completed_transfers)
> >  {
> > @@ -458,7 +489,8 @@ static bool axi_dmac_transfer_done(struct axi_dmac_chan *chan,
> >  			if (active->num_completed == active->num_sgs ||
> >  			    sg->partial_len) {
> >  				if (active->cyclic) {
> > -					active->num_completed = 0; /* wrap around */
> > +					/* keep start_next as is, if already true... */
> > +					start_next |= axi_dmac_handle_cyclic_eot(chan, active);
> >  				} else {
> >  					list_del(&active->vdesc.node);
> >  					vchan_cookie_complete(&active->vdesc);
> > 
> > --
> > 2.52.0
> > 

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 0/5] dma: dma-axi-dmac: Add cyclic transfer support and graceful termination
  2026-01-27 14:28 [PATCH 0/5] dma: dma-axi-dmac: Add cyclic transfer support and graceful termination Nuno Sá via B4 Relay
                   ` (4 preceding siblings ...)
  2026-01-27 14:28 ` [PATCH 5/5] dma: dma-axi-dmac: gracefully terminate HW " Nuno Sá via B4 Relay
@ 2026-02-27  2:17 ` Vinod Koul
  2026-02-27 12:41   ` Nuno Sá
  5 siblings, 1 reply; 13+ messages in thread
From: Vinod Koul @ 2026-02-27  2:17 UTC (permalink / raw)
  To: Nuno Sá via B4 Relay; +Cc: dmaengine, Lars-Peter Clausen

On 27-01-26, 14:28, Nuno Sá via B4 Relay wrote:
> This series adds support for cyclic transfers in the .device_prep_peripheral_dma_vec()
> callback and implements graceful termination of cyclic transfers using the
> DMA_PREP_LOAD_EOT flag. Using DMA_PREP_REPEAT and DMA_PREP_LOAD_EOT is
> based on the discussion in [1].
> 
> Currently, the only way to stop a cyclic transfer is through brute force using
> .device_terminate_all(), which terminates all pending transfers. This series
> introduces a mechanism to gracefully terminate individual cyclic transfers when
> a new transfer flagged with DMA_PREP_LOAD_EOT is queued.
> 
> We need two different approaches:
> 
> 1. Software-managed cyclic transfers: These generate EOT (End-Of-Transfer)
>    interrupts for each cycle. Hence, termination can be handled directly
>    in the interrupt handler when the EOT interrupt fires, making the
>    transition to the next transfer straightforward.
> 
> 2. Hardware-managed cyclic transfers: These are optimized to avoid interrupt
>    overhead by suppressing EOT interrupts. Since there are no EOT interrupts,
>    termination must be detected at SOF (Start-Of-Frame) when new transfers
>    are being considered. The transfer is marked for termination and the
>    hardware is configured to end the current cycle gracefully.
> 
> For HW-managed cyclic mode, the series handles both scatter-gather and non-SG
> variants. With SG support, the last segment flags are modified to trigger EOT.
> Without SG, the CYCLIC flag is cleared to allow natural completion. A workaround
> is included for older IP cores (pre-4.6.a) that can prefetch data incorrectly
> when clearing the CYCLIC flag, requiring a core disable/enable cycle.
> 
> [1]: https://lore.kernel.org/dmaengine/ZhJW9JEqN2wrejvC@matsya/
> 
> ---
> Nuno Sá (5):
>       dmaengine: Document cyclic transfer for dmaengine_prep_peripheral_dma_vec()
>       dma: dma-axi-dmac: add cyclic transfers in .device_prep_peripheral_dma_vec()
>       dma: dma-axi-dmac: add helper for getting next desc
>       dma: dma-axi-dmac: Gracefully terminate SW cyclic transfers
>       dma: dma-axi-dmac: gracefully terminate HW cyclic transfers

Please be consistent in naming, it should dmaengine: dma-axi-dmac: xxx
everywhere!

-- 
~Vinod

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 0/5] dma: dma-axi-dmac: Add cyclic transfer support and graceful termination
  2026-02-27  2:17 ` [PATCH 0/5] dma: dma-axi-dmac: Add cyclic transfer support and graceful termination Vinod Koul
@ 2026-02-27 12:41   ` Nuno Sá
  0 siblings, 0 replies; 13+ messages in thread
From: Nuno Sá @ 2026-02-27 12:41 UTC (permalink / raw)
  To: Vinod Koul, Nuno Sá via B4 Relay; +Cc: dmaengine, Lars-Peter Clausen

On Fri, 2026-02-27 at 07:47 +0530, Vinod Koul wrote:
> On 27-01-26, 14:28, Nuno Sá via B4 Relay wrote:
> > This series adds support for cyclic transfers in the .device_prep_peripheral_dma_vec()
> > callback and implements graceful termination of cyclic transfers using the
> > DMA_PREP_LOAD_EOT flag. Using DMA_PREP_REPEAT and DMA_PREP_LOAD_EOT is
> > based on the discussion in [1].
> > 
> > Currently, the only way to stop a cyclic transfer is through brute force using
> > .device_terminate_all(), which terminates all pending transfers. This series
> > introduces a mechanism to gracefully terminate individual cyclic transfers when
> > a new transfer flagged with DMA_PREP_LOAD_EOT is queued.
> > 
> > We need two different approaches:
> > 
> > 1. Software-managed cyclic transfers: These generate EOT (End-Of-Transfer)
> >    interrupts for each cycle. Hence, termination can be handled directly
> >    in the interrupt handler when the EOT interrupt fires, making the
> >    transition to the next transfer straightforward.
> > 
> > 2. Hardware-managed cyclic transfers: These are optimized to avoid interrupt
> >    overhead by suppressing EOT interrupts. Since there are no EOT interrupts,
> >    termination must be detected at SOF (Start-Of-Frame) when new transfers
> >    are being considered. The transfer is marked for termination and the
> >    hardware is configured to end the current cycle gracefully.
> > 
> > For HW-managed cyclic mode, the series handles both scatter-gather and non-SG
> > variants. With SG support, the last segment flags are modified to trigger EOT.
> > Without SG, the CYCLIC flag is cleared to allow natural completion. A workaround
> > is included for older IP cores (pre-4.6.a) that can prefetch data incorrectly
> > when clearing the CYCLIC flag, requiring a core disable/enable cycle.
> > 
> > [1]: https://lore.kernel.org/dmaengine/ZhJW9JEqN2wrejvC@matsya/
> > 
> > ---
> > Nuno Sá (5):
> >       dmaengine: Document cyclic transfer for dmaengine_prep_peripheral_dma_vec()
> >       dma: dma-axi-dmac: add cyclic transfers in .device_prep_peripheral_dma_vec()
> >       dma: dma-axi-dmac: add helper for getting next desc
> >       dma: dma-axi-dmac: Gracefully terminate SW cyclic transfers
> >       dma: dma-axi-dmac: gracefully terminate HW cyclic transfers
> 
> Please be consistent in naming, it should dmaengine: dma-axi-dmac: xxx
> everywhere!

Ahh sure! Sorry, not sure how I did not notice such an obvious thing!

- Nuno Sá

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2026-02-27 12:41 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-01-27 14:28 [PATCH 0/5] dma: dma-axi-dmac: Add cyclic transfer support and graceful termination Nuno Sá via B4 Relay
2026-01-27 14:28 ` [PATCH 1/5] dmaengine: Document cyclic transfer for dmaengine_prep_peripheral_dma_vec() Nuno Sá via B4 Relay
2026-02-25 16:50   ` Frank Li
2026-01-27 14:28 ` [PATCH 2/5] dma: dma-axi-dmac: add cyclic transfers in .device_prep_peripheral_dma_vec() Nuno Sá via B4 Relay
2026-02-25 16:51   ` Frank Li
2026-01-27 14:28 ` [PATCH 3/5] dma: dma-axi-dmac: add helper for getting next desc Nuno Sá via B4 Relay
2026-02-25 16:54   ` Frank Li
2026-01-27 14:28 ` [PATCH 4/5] dma: dma-axi-dmac: Gracefully terminate SW cyclic transfers Nuno Sá via B4 Relay
2026-02-25 16:57   ` Frank Li
2026-02-26  9:35     ` Nuno Sá
2026-01-27 14:28 ` [PATCH 5/5] dma: dma-axi-dmac: gracefully terminate HW " Nuno Sá via B4 Relay
2026-02-27  2:17 ` [PATCH 0/5] dma: dma-axi-dmac: Add cyclic transfer support and graceful termination Vinod Koul
2026-02-27 12:41   ` Nuno Sá

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox