DMA Engine development
* [PATCH v5 00/17] Renesas: dmaengine and ASoC fixes
@ 2026-05-12 12:12 Claudiu Beznea
  2026-05-12 12:12 ` [PATCH v5 01/17] dmaengine: sh: rz-dmac: Move interrupt request after everything is set up Claudiu Beznea
                   ` (16 more replies)
  0 siblings, 17 replies; 52+ messages in thread
From: Claudiu Beznea @ 2026-05-12 12:12 UTC (permalink / raw)
  To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur
  Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc, Claudiu Beznea

Hi,

This series addresses issues identified in the DMA engine and RZ SSI
drivers.

As described in the patch "dmaengine: sh: rz-dmac: Set the Link End (LE)
bit on the last descriptor", stress testing on the Renesas RZ/G2L SoC
showed that starting all available DMA channels could cause the system
to stall after several hours of operation. This issue was resolved by
setting the Link End bit on the last descriptor of a DMA transfer.

However, after applying that fix, the SSI audio driver began to suffer
from frequent overruns and underruns. This was caused by the way the SSI
driver emulated cyclic DMA transfers: at the start of playback/capture
it enqueued 4 DMA descriptors as single SG transfers, and upon
completion of each descriptor, a new one was enqueued. Since there was
no indication to the DMA hardware where the descriptor list ended
(through the LE bit), the DMA engine continued transferring until the
audio stream was stopped. From time to time, audio signal spikes were
observed in the recorded file with this approach.

To address these issues, cyclic DMA support was added to the DMA engine
driver, and the SSI audio driver was reworked to use this support via
the generic PCM dmaengine APIs.

Due to the behavior described above, no Fixes tags were added to the
patches in this series, and all patches should be merged through the
same tree.

If this series is merged this release cycle, since the audio patches are
acked, it would be best to take it through the DMA tree.

Thank you,
Claudiu

Changes in v5:
- dropped patch "dmaengine: sh: rz-dmac: Do not disable the channel on error"
- added patch "dmaengine: sh: rz-dmac: Add runtime PM support"

Changes in v4:
- collected tags
- addressed review comments received from sashiko.dev. For this:
- added patches:
-- dmaengine: sh: rz-dmac: Move interrupt request after everything is set up
-- dmaengine: sh: rz-dmac: Fix incorrect NULL check on list_first_entry()

Changes in v3:
- addressed review comments received from sashiko.dev. For this:
- added patches 1-9
- added patch "ASoC: renesas: rz-ssi: Add pause support"
- dropped patches:
-- dmaengine: sh: rz-dmac: Add enable status bit
-- dmaengine: sh: rz-dmac: Add pause status bit

Changes in v2:
- fixed typos in patch descriptions and patch titles
- updated "ASoC: renesas: rz-ssi: Use generic PCM dmaengine APIs"
  to fix the PIO mode
- in patch "dmaengine: sh: rz-dmac: Add suspend to RAM support"
  clear the RZ_DMAC_CHAN_STATUS_SYS_SUSPENDED status bit for
  channel w/o RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL
- per-patch updates can be found in each patch's changelog
- rebased on top of next-20260319
- updated the cover letter

Claudiu Beznea (17):
  dmaengine: sh: rz-dmac: Move interrupt request after everything is set
    up
  dmaengine: sh: rz-dmac: Fix incorrect NULL check on list_first_entry()
  dmaengine: sh: rz-dmac: Use list_first_entry_or_null()
  dmaengine: sh: rz-dmac: Use rz_dmac_disable_hw()
  dmaengine: sh: rz-dmac: Add helper to compute the lmdesc address
  dmaengine: sh: rz-dmac: Save the start LM descriptor
  dmaengine: sh: rz-dmac: Add helper to check if the channel is enabled
  dmaengine: sh: rz-dmac: Add helper to check if the channel is paused
  dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor
    processing
  dmaengine: sh: rz-dmac: Refactor pause/resume code
  dmaengine: sh: rz-dmac: Drop the update of channel->chctrl with
    CHCTRL_SETEN
  dmaengine: sh: rz-dmac: Add cyclic DMA support
  dmaengine: sh: rz-dmac: Add runtime PM support
  dmaengine: sh: rz-dmac: Add suspend to RAM support
  ASoC: renesas: rz-ssi: Add pause support
  ASoC: renesas: rz-ssi: Use generic PCM dmaengine APIs
  dmaengine: sh: rz-dmac: Set the Link End (LE) bit on the last
    descriptor

 drivers/dma/sh/rz-dmac.c   | 827 ++++++++++++++++++++++++++-----------
 sound/soc/renesas/Kconfig  |   1 +
 sound/soc/renesas/rz-ssi.c | 393 ++++++------------
 3 files changed, 726 insertions(+), 495 deletions(-)

-- 
2.43.0



* [PATCH v5 01/17] dmaengine: sh: rz-dmac: Move interrupt request after everything is set up
  2026-05-12 12:12 [PATCH v5 00/17] Renesas: dmaengine and ASoC fixes Claudiu Beznea
@ 2026-05-12 12:12 ` Claudiu Beznea
  2026-05-12 20:28   ` Frank Li
  2026-05-13 21:44   ` sashiko-bot
  2026-05-12 12:12 ` [PATCH v5 02/17] dmaengine: sh: rz-dmac: Fix incorrect NULL check on list_first_entry() Claudiu Beznea
                   ` (15 subsequent siblings)
  16 siblings, 2 replies; 52+ messages in thread
From: Claudiu Beznea @ 2026-05-12 12:12 UTC (permalink / raw)
  To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur
  Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc, Claudiu Beznea, stable

Once the interrupt is requested, the interrupt handler may run immediately.
Since the IRQ handler can access channel->ch_base, which is initialized
only after requesting the IRQ, this may lead to invalid memory access.
Likewise, the IRQ thread may access uninitialized data (the ld_free,
ld_queue, and ld_active lists), which may also lead to issues.

Request the interrupts only after everything is set up. To keep the error
path simpler, use dmam_alloc_coherent() instead of dma_alloc_coherent().

Fixes: 5000d37042a6 ("dmaengine: sh: Add DMAC driver for RZ/G2L SoC")
Cc: stable@vger.kernel.org
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---

Changes in v5:
- none

Changes in v4:
- none, this patch is new

 drivers/dma/sh/rz-dmac.c | 88 +++++++++++++++-------------------------
 1 file changed, 33 insertions(+), 55 deletions(-)

diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 625ff29024de..9f206a33dcc6 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -981,25 +981,6 @@ static int rz_dmac_chan_probe(struct rz_dmac *dmac,
 	channel->index = index;
 	channel->mid_rid = -EINVAL;
 
-	/* Request the channel interrupt. */
-	scnprintf(pdev_irqname, sizeof(pdev_irqname), "ch%u", index);
-	irq = platform_get_irq_byname(pdev, pdev_irqname);
-	if (irq < 0)
-		return irq;
-
-	irqname = devm_kasprintf(dmac->dev, GFP_KERNEL, "%s:%u",
-				 dev_name(dmac->dev), index);
-	if (!irqname)
-		return -ENOMEM;
-
-	ret = devm_request_threaded_irq(dmac->dev, irq, rz_dmac_irq_handler,
-					rz_dmac_irq_handler_thread, 0,
-					irqname, channel);
-	if (ret) {
-		dev_err(dmac->dev, "failed to request IRQ %u (%d)\n", irq, ret);
-		return ret;
-	}
-
 	/* Set io base address for each channel */
 	if (index < 8) {
 		channel->ch_base = dmac->base + CHANNEL_0_7_OFFSET +
@@ -1012,9 +993,9 @@ static int rz_dmac_chan_probe(struct rz_dmac *dmac,
 	}
 
 	/* Allocate descriptors */
-	lmdesc = dma_alloc_coherent(&pdev->dev,
-				    sizeof(struct rz_lmdesc) * DMAC_NR_LMDESC,
-				    &channel->lmdesc.base_dma, GFP_KERNEL);
+	lmdesc = dmam_alloc_coherent(&pdev->dev,
+				     sizeof(struct rz_lmdesc) * DMAC_NR_LMDESC,
+				     &channel->lmdesc.base_dma, GFP_KERNEL);
 	if (!lmdesc) {
 		dev_err(&pdev->dev, "Can't allocate memory (lmdesc)\n");
 		return -ENOMEM;
@@ -1030,7 +1011,24 @@ static int rz_dmac_chan_probe(struct rz_dmac *dmac,
 	INIT_LIST_HEAD(&channel->ld_free);
 	INIT_LIST_HEAD(&channel->ld_active);
 
-	return 0;
+	/* Request the channel interrupt. */
+	scnprintf(pdev_irqname, sizeof(pdev_irqname), "ch%u", index);
+	irq = platform_get_irq_byname(pdev, pdev_irqname);
+	if (irq < 0)
+		return irq;
+
+	irqname = devm_kasprintf(dmac->dev, GFP_KERNEL, "%s:%u",
+				 dev_name(dmac->dev), index);
+	if (!irqname)
+		return -ENOMEM;
+
+	ret = devm_request_threaded_irq(dmac->dev, irq, rz_dmac_irq_handler,
+					rz_dmac_irq_handler_thread, 0,
+					irqname, channel);
+	if (ret)
+		dev_err(dmac->dev, "failed to request IRQ %u (%d)\n", irq, ret);
+
+	return ret;
 }
 
 static void rz_dmac_put_device(void *_dev)
@@ -1099,7 +1097,6 @@ static int rz_dmac_probe(struct platform_device *pdev)
 	const char *irqname = "error";
 	struct dma_device *engine;
 	struct rz_dmac *dmac;
-	int channel_num;
 	int ret;
 	int irq;
 	u8 i;
@@ -1132,18 +1129,6 @@ static int rz_dmac_probe(struct platform_device *pdev)
 			return PTR_ERR(dmac->ext_base);
 	}
 
-	/* Register interrupt handler for error */
-	irq = platform_get_irq_byname_optional(pdev, irqname);
-	if (irq > 0) {
-		ret = devm_request_irq(&pdev->dev, irq, rz_dmac_irq_handler, 0,
-				       irqname, NULL);
-		if (ret) {
-			dev_err(&pdev->dev, "failed to request IRQ %u (%d)\n",
-				irq, ret);
-			return ret;
-		}
-	}
-
 	/* Initialize the channels. */
 	INIT_LIST_HEAD(&dmac->engine.channels);
 
@@ -1169,6 +1154,18 @@ static int rz_dmac_probe(struct platform_device *pdev)
 			goto err;
 	}
 
+	/* Register interrupt handler for error */
+	irq = platform_get_irq_byname_optional(pdev, irqname);
+	if (irq > 0) {
+		ret = devm_request_irq(&pdev->dev, irq, rz_dmac_irq_handler, 0,
+				       irqname, NULL);
+		if (ret) {
+			dev_err(&pdev->dev, "failed to request IRQ %u (%d)\n",
+				irq, ret);
+			goto err;
+		}
+	}
+
 	/* Register the DMAC as a DMA provider for DT. */
 	ret = of_dma_controller_register(pdev->dev.of_node, rz_dmac_of_xlate,
 					 NULL);
@@ -1210,16 +1207,6 @@ static int rz_dmac_probe(struct platform_device *pdev)
 dma_register_err:
 	of_dma_controller_free(pdev->dev.of_node);
 err:
-	channel_num = i ? i - 1 : 0;
-	for (i = 0; i < channel_num; i++) {
-		struct rz_dmac_chan *channel = &dmac->channels[i];
-
-		dma_free_coherent(&pdev->dev,
-				  sizeof(struct rz_lmdesc) * DMAC_NR_LMDESC,
-				  channel->lmdesc.base,
-				  channel->lmdesc.base_dma);
-	}
-
 	reset_control_assert(dmac->rstc);
 err_pm_runtime_put:
 	pm_runtime_put(&pdev->dev);
@@ -1232,18 +1219,9 @@ static int rz_dmac_probe(struct platform_device *pdev)
 static void rz_dmac_remove(struct platform_device *pdev)
 {
 	struct rz_dmac *dmac = platform_get_drvdata(pdev);
-	unsigned int i;
 
 	dma_async_device_unregister(&dmac->engine);
 	of_dma_controller_free(pdev->dev.of_node);
-	for (i = 0; i < dmac->n_channels; i++) {
-		struct rz_dmac_chan *channel = &dmac->channels[i];
-
-		dma_free_coherent(&pdev->dev,
-				  sizeof(struct rz_lmdesc) * DMAC_NR_LMDESC,
-				  channel->lmdesc.base,
-				  channel->lmdesc.base_dma);
-	}
 	reset_control_assert(dmac->rstc);
 	pm_runtime_put(&pdev->dev);
 	pm_runtime_disable(&pdev->dev);
-- 
2.43.0



* [PATCH v5 02/17] dmaengine: sh: rz-dmac: Fix incorrect NULL check on list_first_entry()
  2026-05-12 12:12 [PATCH v5 00/17] Renesas: dmaengine and ASoC fixes Claudiu Beznea
  2026-05-12 12:12 ` [PATCH v5 01/17] dmaengine: sh: rz-dmac: Move interrupt request after everything is set up Claudiu Beznea
@ 2026-05-12 12:12 ` Claudiu Beznea
  2026-05-12 20:35   ` Frank Li
  2026-05-13 22:00   ` sashiko-bot
  2026-05-12 12:12 ` [PATCH v5 03/17] dmaengine: sh: rz-dmac: Use list_first_entry_or_null() Claudiu Beznea
                   ` (14 subsequent siblings)
  16 siblings, 2 replies; 52+ messages in thread
From: Claudiu Beznea @ 2026-05-12 12:12 UTC (permalink / raw)
  To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur
  Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc, Claudiu Beznea, stable

The list passed to list_first_entry() must not be empty. Use
list_first_entry_or_null() instead, to avoid dereferencing invalid
memory when the list is empty.

Fixes: 21323b118c16 ("dmaengine: sh: rz-dmac: Add device_tx_status() callback")
Cc: stable@vger.kernel.org
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---

Changes in v5:
- none

Changes in v4:
- none, this patch is new

 drivers/dma/sh/rz-dmac.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 9f206a33dcc6..6d80cb668957 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -723,8 +723,8 @@ static u32 rz_dmac_chan_get_residue(struct rz_dmac_chan *channel,
 	u32 crla, crtb, i;
 
 	/* Get current processing virtual descriptor */
-	current_desc = list_first_entry(&channel->ld_active,
-					struct rz_dmac_desc, node);
+	current_desc = list_first_entry_or_null(&channel->ld_active,
+						struct rz_dmac_desc, node);
 	if (!current_desc)
 		return 0;
 
-- 
2.43.0



* [PATCH v5 03/17] dmaengine: sh: rz-dmac: Use list_first_entry_or_null()
  2026-05-12 12:12 [PATCH v5 00/17] Renesas: dmaengine and ASoC fixes Claudiu Beznea
  2026-05-12 12:12 ` [PATCH v5 01/17] dmaengine: sh: rz-dmac: Move interrupt request after everything is set up Claudiu Beznea
  2026-05-12 12:12 ` [PATCH v5 02/17] dmaengine: sh: rz-dmac: Fix incorrect NULL check on list_first_entry() Claudiu Beznea
@ 2026-05-12 12:12 ` Claudiu Beznea
  2026-05-12 20:38   ` Frank Li
  2026-05-13 22:18   ` sashiko-bot
  2026-05-12 12:12 ` [PATCH v5 04/17] dmaengine: sh: rz-dmac: Use rz_dmac_disable_hw() Claudiu Beznea
                   ` (13 subsequent siblings)
  16 siblings, 2 replies; 52+ messages in thread
From: Claudiu Beznea @ 2026-05-12 12:12 UTC (permalink / raw)
  To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur
  Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc, Claudiu Beznea

Use list_first_entry_or_null() instead of open-coding it with a
list_empty() check and list_first_entry(). This simplifies the code.

Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---

Changes in v5:
- none

Changes in v4:
- none

Changes in v3:
- none, this patch is new

 drivers/dma/sh/rz-dmac.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 6d80cb668957..1717b407ab9e 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -503,11 +503,10 @@ rz_dmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
 		__func__, channel->index, &src, &dest, len);
 
 	scoped_guard(spinlock_irqsave, &channel->vc.lock) {
-		if (list_empty(&channel->ld_free))
+		desc = list_first_entry_or_null(&channel->ld_free, struct rz_dmac_desc, node);
+		if (!desc)
 			return NULL;
 
-		desc = list_first_entry(&channel->ld_free, struct rz_dmac_desc, node);
-
 		desc->type = RZ_DMAC_DESC_MEMCPY;
 		desc->src = src;
 		desc->dest = dest;
@@ -533,11 +532,10 @@ rz_dmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
 	int i = 0;
 
 	scoped_guard(spinlock_irqsave, &channel->vc.lock) {
-		if (list_empty(&channel->ld_free))
+		desc = list_first_entry_or_null(&channel->ld_free, struct rz_dmac_desc, node);
+		if (!desc)
 			return NULL;
 
-		desc = list_first_entry(&channel->ld_free, struct rz_dmac_desc, node);
-
 		for_each_sg(sgl, sg, sg_len, i)
 			dma_length += sg_dma_len(sg);
 
-- 
2.43.0



* [PATCH v5 04/17] dmaengine: sh: rz-dmac: Use rz_dmac_disable_hw()
  2026-05-12 12:12 [PATCH v5 00/17] Renesas: dmaengine and ASoC fixes Claudiu Beznea
                   ` (2 preceding siblings ...)
  2026-05-12 12:12 ` [PATCH v5 03/17] dmaengine: sh: rz-dmac: Use list_first_entry_or_null() Claudiu Beznea
@ 2026-05-12 12:12 ` Claudiu Beznea
  2026-05-12 20:42   ` Frank Li
  2026-05-12 12:12 ` [PATCH v5 05/17] dmaengine: sh: rz-dmac: Add helper to compute the lmdesc address Claudiu Beznea
                   ` (12 subsequent siblings)
  16 siblings, 1 reply; 52+ messages in thread
From: Claudiu Beznea @ 2026-05-12 12:12 UTC (permalink / raw)
  To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur
  Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc, Claudiu Beznea

Use rz_dmac_disable_hw() instead of open coding it. This unifies the
code and prepares it for the addition of suspend to RAM and cyclic DMA.

The rz_dmac_disable_hw() call in rz_dmac_chan_probe() was moved after
vchan_init(), as the latter initializes channel->vc.chan.device, which
rz_dmac_disable_hw() uses.

Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---

Changes in v5:
- none

Changes in v4:
- in rz_dmac_chan_probe(): moved rz_dmac_disable_hw() after the
  vchan_init(&channel->vc, &dmac->engine) call as this is the one which
  initializes data structures used by the debug code from
  rz_dmac_disable_hw(); updated the patch description to reflect this
 
Changes in v3:
- none, this patch is new

 drivers/dma/sh/rz-dmac.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 1717b407ab9e..40ddf534c094 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -873,7 +873,7 @@ static void rz_dmac_irq_handle_channel(struct rz_dmac_chan *channel)
 			channel->index, chstat);
 
 		scoped_guard(spinlock_irqsave, &channel->vc.lock)
-			rz_dmac_ch_writel(channel, CHCTRL_DEFAULT, CHCTRL, 1);
+			rz_dmac_disable_hw(channel);
 		return;
 	}
 
@@ -1000,15 +1000,15 @@ static int rz_dmac_chan_probe(struct rz_dmac *dmac,
 	}
 	rz_lmdesc_setup(channel, lmdesc);
 
-	/* Initialize register for each channel */
-	rz_dmac_ch_writel(channel, CHCTRL_DEFAULT, CHCTRL, 1);
-
 	channel->vc.desc_free = rz_dmac_virt_desc_free;
 	vchan_init(&channel->vc, &dmac->engine);
 	INIT_LIST_HEAD(&channel->ld_queue);
 	INIT_LIST_HEAD(&channel->ld_free);
 	INIT_LIST_HEAD(&channel->ld_active);
 
+	/* Initialize register for each channel */
+	rz_dmac_disable_hw(channel);
+
 	/* Request the channel interrupt. */
 	scnprintf(pdev_irqname, sizeof(pdev_irqname), "ch%u", index);
 	irq = platform_get_irq_byname(pdev, pdev_irqname);
-- 
2.43.0



* [PATCH v5 05/17] dmaengine: sh: rz-dmac: Add helper to compute the lmdesc address
  2026-05-12 12:12 [PATCH v5 00/17] Renesas: dmaengine and ASoC fixes Claudiu Beznea
                   ` (3 preceding siblings ...)
  2026-05-12 12:12 ` [PATCH v5 04/17] dmaengine: sh: rz-dmac: Use rz_dmac_disable_hw() Claudiu Beznea
@ 2026-05-12 12:12 ` Claudiu Beznea
  2026-05-12 20:44   ` Frank Li
  2026-05-12 12:12 ` [PATCH v5 06/17] dmaengine: sh: rz-dmac: Save the start LM descriptor Claudiu Beznea
                   ` (11 subsequent siblings)
  16 siblings, 1 reply; 52+ messages in thread
From: Claudiu Beznea @ 2026-05-12 12:12 UTC (permalink / raw)
  To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur
  Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc, Claudiu Beznea

Add a helper function to compute the lmdesc address. This makes the
code easier to understand, and the helper will be used in subsequent
patches.

Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---

Changes in v5:
- none

Changes in v4:
- none

Changes in v3:
- none, this patch is new

 drivers/dma/sh/rz-dmac.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 40ddf534c094..c48858b68dee 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -259,6 +259,12 @@ static void rz_lmdesc_setup(struct rz_dmac_chan *channel,
  * Descriptors preparation
  */
 
+static u32 rz_dmac_lmdesc_addr(struct rz_dmac_chan *channel, struct rz_lmdesc *lmdesc)
+{
+	return channel->lmdesc.base_dma +
+	       (sizeof(struct rz_lmdesc) * (lmdesc - channel->lmdesc.base));
+}
+
 static void rz_dmac_lmdesc_recycle(struct rz_dmac_chan *channel)
 {
 	struct rz_lmdesc *lmdesc = channel->lmdesc.head;
@@ -284,9 +290,7 @@ static void rz_dmac_enable_hw(struct rz_dmac_chan *channel)
 
 	rz_dmac_lmdesc_recycle(channel);
 
-	nxla = channel->lmdesc.base_dma +
-		(sizeof(struct rz_lmdesc) * (channel->lmdesc.head -
-					     channel->lmdesc.base));
+	nxla = rz_dmac_lmdesc_addr(channel, channel->lmdesc.head);
 
 	chstat = rz_dmac_ch_readl(channel, CHSTAT, 1);
 	if (!(chstat & CHSTAT_EN)) {
-- 
2.43.0



* [PATCH v5 06/17] dmaengine: sh: rz-dmac: Save the start LM descriptor
  2026-05-12 12:12 [PATCH v5 00/17] Renesas: dmaengine and ASoC fixes Claudiu Beznea
                   ` (4 preceding siblings ...)
  2026-05-12 12:12 ` [PATCH v5 05/17] dmaengine: sh: rz-dmac: Add helper to compute the lmdesc address Claudiu Beznea
@ 2026-05-12 12:12 ` Claudiu Beznea
  2026-05-12 20:48   ` Frank Li
  2026-05-13 23:52   ` sashiko-bot
  2026-05-12 12:12 ` [PATCH v5 07/17] dmaengine: sh: rz-dmac: Add helper to check if the channel is enabled Claudiu Beznea
                   ` (10 subsequent siblings)
  16 siblings, 2 replies; 52+ messages in thread
From: Claudiu Beznea @ 2026-05-12 12:12 UTC (permalink / raw)
  To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur
  Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc, Claudiu Beznea

Save the start LM descriptor so that residue computation no longer has
to loop through the channel's entire LM descriptor list. This avoids
unnecessary iterations.

Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---

Changes in v5:
- none

Changes in v4:
- none

Changes in v3:
- none, this patch is new

 drivers/dma/sh/rz-dmac.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index c48858b68dee..d3926ecd63ac 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -58,6 +58,7 @@ struct rz_dmac_desc {
 	/* For slave sg */
 	struct scatterlist *sg;
 	unsigned int sgcount;
+	struct rz_lmdesc *start_lmdesc;
 };
 
 #define to_rz_dmac_desc(d)	container_of(d, struct rz_dmac_desc, vd)
@@ -343,6 +344,8 @@ static void rz_dmac_prepare_desc_for_memcpy(struct rz_dmac_chan *channel)
 	struct rz_dmac_desc *d = channel->desc;
 	u32 chcfg = CHCFG_MEM_COPY;
 
+	d->start_lmdesc = lmdesc;
+
 	/* prepare descriptor */
 	lmdesc->sa = d->src;
 	lmdesc->da = d->dest;
@@ -377,6 +380,7 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
 	}
 
 	lmdesc = channel->lmdesc.tail;
+	d->start_lmdesc = lmdesc;
 
 	for (i = 0, sg = sgl; i < sg_len; i++, sg = sg_next(sg)) {
 		if (d->direction == DMA_DEV_TO_MEM) {
@@ -693,9 +697,10 @@ rz_dmac_get_next_lmdesc(struct rz_lmdesc *base, struct rz_lmdesc *lmdesc)
 	return next;
 }
 
-static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel, u32 crla)
+static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel,
+						 struct rz_dmac_desc *desc, u32 crla)
 {
-	struct rz_lmdesc *lmdesc = channel->lmdesc.head;
+	struct rz_lmdesc *lmdesc = desc->start_lmdesc;
 	struct dma_chan *chan = &channel->vc.chan;
 	struct rz_dmac *dmac = to_rz_dmac(chan->device);
 	u32 residue = 0, i = 0;
@@ -794,7 +799,7 @@ static u32 rz_dmac_chan_get_residue(struct rz_dmac_chan *channel,
 	 * Calculate number of bytes transferred in processing virtual descriptor.
 	 * One virtual descriptor can have many lmdesc.
 	 */
-	return crtb + rz_dmac_calculate_residue_bytes_in_vd(channel, crla);
+	return crtb + rz_dmac_calculate_residue_bytes_in_vd(channel, current_desc, crla);
 }
 
 static enum dma_status rz_dmac_tx_status(struct dma_chan *chan,
-- 
2.43.0



* [PATCH v5 07/17] dmaengine: sh: rz-dmac: Add helper to check if the channel is enabled
  2026-05-12 12:12 [PATCH v5 00/17] Renesas: dmaengine and ASoC fixes Claudiu Beznea
                   ` (5 preceding siblings ...)
  2026-05-12 12:12 ` [PATCH v5 06/17] dmaengine: sh: rz-dmac: Save the start LM descriptor Claudiu Beznea
@ 2026-05-12 12:12 ` Claudiu Beznea
  2026-05-12 20:49   ` Frank Li
  2026-05-13 23:59   ` sashiko-bot
  2026-05-12 12:12 ` [PATCH v5 08/17] dmaengine: sh: rz-dmac: Add helper to check if the channel is paused Claudiu Beznea
                   ` (9 subsequent siblings)
  16 siblings, 2 replies; 52+ messages in thread
From: Claudiu Beznea @ 2026-05-12 12:12 UTC (permalink / raw)
  To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur
  Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc, Claudiu Beznea

Add a helper to check if the channel is enabled. This will be reused in
subsequent patches.

Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---

Changes in v5:
- none

Changes in v4:
- none

Changes in v3:
- none, this patch is new

 drivers/dma/sh/rz-dmac.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index d3926ecd63ac..c7337cf27136 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -279,6 +279,13 @@ static void rz_dmac_lmdesc_recycle(struct rz_dmac_chan *channel)
 	channel->lmdesc.head = lmdesc;
 }
 
+static bool rz_dmac_chan_is_enabled(struct rz_dmac_chan *chan)
+{
+	u32 val = rz_dmac_ch_readl(chan, CHSTAT, 1);
+
+	return !!(val & CHSTAT_EN);
+}
+
 static void rz_dmac_enable_hw(struct rz_dmac_chan *channel)
 {
 	struct dma_chan *chan = &channel->vc.chan;
@@ -840,8 +847,7 @@ static int rz_dmac_device_pause(struct dma_chan *chan)
 
 	guard(spinlock_irqsave)(&channel->vc.lock);
 
-	val = rz_dmac_ch_readl(channel, CHSTAT, 1);
-	if (!(val & CHSTAT_EN))
+	if (!rz_dmac_chan_is_enabled(channel))
 		return 0;
 
 	rz_dmac_ch_writel(channel, CHCTRL_SETSUS, CHCTRL, 1);
-- 
2.43.0



* [PATCH v5 08/17] dmaengine: sh: rz-dmac: Add helper to check if the channel is paused
  2026-05-12 12:12 [PATCH v5 00/17] Renesas: dmaengine and ASoC fixes Claudiu Beznea
                   ` (6 preceding siblings ...)
  2026-05-12 12:12 ` [PATCH v5 07/17] dmaengine: sh: rz-dmac: Add helper to check if the channel is enabled Claudiu Beznea
@ 2026-05-12 12:12 ` Claudiu Beznea
  2026-05-12 20:57   ` Frank Li
  2026-05-12 12:12 ` [PATCH v5 09/17] dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor processing Claudiu Beznea
                   ` (8 subsequent siblings)
  16 siblings, 1 reply; 52+ messages in thread
From: Claudiu Beznea @ 2026-05-12 12:12 UTC (permalink / raw)
  To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur
  Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc, Claudiu Beznea

Add a helper to check if the channel is paused. This will be reused in
subsequent patches.

Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---

Changes in v5:
- none

Changes in v4:
- none

Changes in v3:
- none, this patch is new

 drivers/dma/sh/rz-dmac.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index c7337cf27136..042f85e58a79 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -286,6 +286,13 @@ static bool rz_dmac_chan_is_enabled(struct rz_dmac_chan *chan)
 	return !!(val & CHSTAT_EN);
 }
 
+static bool rz_dmac_chan_is_paused(struct rz_dmac_chan *chan)
+{
+	u32 val = rz_dmac_ch_readl(chan, CHSTAT, 1);
+
+	return !!(val & CHSTAT_SUS);
+}
+
 static void rz_dmac_enable_hw(struct rz_dmac_chan *channel)
 {
 	struct dma_chan *chan = &channel->vc.chan;
@@ -822,12 +829,9 @@ static enum dma_status rz_dmac_tx_status(struct dma_chan *chan,
 		return status;
 
 	scoped_guard(spinlock_irqsave, &channel->vc.lock) {
-		u32 val;
-
 		residue = rz_dmac_chan_get_residue(channel, cookie);
 
-		val = rz_dmac_ch_readl(channel, CHSTAT, 1);
-		if (val & CHSTAT_SUS)
+		if (rz_dmac_chan_is_paused(channel))
 			status = DMA_PAUSED;
 	}
 
-- 
2.43.0



* [PATCH v5 09/17] dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor processing
  2026-05-12 12:12 [PATCH v5 00/17] Renesas: dmaengine and ASoC fixes Claudiu Beznea
                   ` (7 preceding siblings ...)
  2026-05-12 12:12 ` [PATCH v5 08/17] dmaengine: sh: rz-dmac: Add helper to check if the channel is paused Claudiu Beznea
@ 2026-05-12 12:12 ` Claudiu Beznea
  2026-05-12 21:38   ` Frank Li
  2026-05-14  0:42   ` sashiko-bot
  2026-05-12 12:12 ` [PATCH v5 10/17] dmaengine: sh: rz-dmac: Refactor pause/resume code Claudiu Beznea
                   ` (7 subsequent siblings)
  16 siblings, 2 replies; 52+ messages in thread
From: Claudiu Beznea @ 2026-05-12 12:12 UTC (permalink / raw)
  To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur
  Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc, Claudiu Beznea

The driver used a mix of virt-dma APIs and driver specific logic to
process descriptors. It maintained three internal queues: ld_free,
ld_queue, and ld_active as follows:
- ld_free: stores the descriptors pre-allocated at probe time
- ld_queue: stores descriptors after they are taken from ld_free and
  prepared. At the same time, vchan_tx_prep() queues them to
  vc->desc_allocated. The vc->desc_allocated list is then checked in
  rz_dmac_issue_pending() and rz_dmac_irq_handler_thread() before
  starting a new transfer via rz_dmac_xfer_desc(). In turn,
  rz_dmac_xfer_desc() grabs the next descriptor from vc->desc_issued and
  submits it for transfer
- ld_active: stores the descriptors currently being transferred

The interrupt handler moved a completed descriptor to ld_free before
invoking its completion callback. Once returned to ld_free, the
descriptor could be reused to prepare a new transfer. In theory, this
means the descriptor could be re-prepared before its completion
callback was called.

Fully back the driver with the virt-dma APIs. With this, only ld_free
needs to be kept, to track how many free descriptors are available. This
is now done as follows:
- the prepare stage removes the first descriptor from the ld_free and
  prepares it
- on completion, vc->desc_free() (rz_dmac_virt_desc_free()) is called
  for it, which re-adds the descriptor at the end of ld_free

With this, the critical sections in the prepare callbacks were reduced
to just taking the descriptor from the ld_free list.

This change introduces struct rz_dmac_chan::desc to keep track of the
currently transferred descriptor. It is cleared in
rz_dmac_terminate_all(), referenced from rz_dmac_issue_pending() to
determine whether a new transfer can be started, and from
rz_dmac_irq_handler_thread() once a descriptor has completed. Finally,
rz_dmac_device_synchronize() was updated with a vchan_synchronize() call
to ensure the terminated descriptor is freed and the tasklet is killed.

With this, residue computation is also simplified, as it can now be
handled entirely through the virt-dma APIs.

The spin_lock/unlock operations in rz_dmac_irq_handler_thread() were
replaced with guard(), as the resulting code after the rework is simpler
this way.

As subsequent commits will set the Link End bit on the last descriptor
of a transfer, rz_dmac_enable_hw() is also adjusted as part of the full
conversion to virt-dma APIs. It no longer checks the channel enable
status itself; instead, its callers verify whether the channel is
enabled and whether the previous transfer has completed before starting
a new one.

Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---

Changes in v5:
- none

Changes in v4:
- in rz_dmac_tx_status(): return DMA_PAUSED if the channel is paused;
  call rz_dmac_chan_get_residue() only if status is not complete

Changes in v3:
- none, this patch is new

 drivers/dma/sh/rz-dmac.c | 233 +++++++++++++++------------------------
 1 file changed, 86 insertions(+), 147 deletions(-)

diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 042f85e58a79..53ee9fe65261 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -79,8 +79,6 @@ struct rz_dmac_chan {
 	int mid_rid;
 
 	struct list_head ld_free;
-	struct list_head ld_queue;
-	struct list_head ld_active;
 
 	struct {
 		struct rz_lmdesc *base;
@@ -299,7 +297,6 @@ static void rz_dmac_enable_hw(struct rz_dmac_chan *channel)
 	struct rz_dmac *dmac = to_rz_dmac(chan->device);
 	u32 nxla;
 	u32 chctrl;
-	u32 chstat;
 
 	dev_dbg(dmac->dev, "%s channel %d\n", __func__, channel->index);
 
@@ -307,14 +304,11 @@ static void rz_dmac_enable_hw(struct rz_dmac_chan *channel)
 
 	nxla = rz_dmac_lmdesc_addr(channel, channel->lmdesc.head);
 
-	chstat = rz_dmac_ch_readl(channel, CHSTAT, 1);
-	if (!(chstat & CHSTAT_EN)) {
-		chctrl = (channel->chctrl | CHCTRL_SETEN);
-		rz_dmac_ch_writel(channel, nxla, NXLA, 1);
-		rz_dmac_ch_writel(channel, channel->chcfg, CHCFG, 1);
-		rz_dmac_ch_writel(channel, CHCTRL_SWRST, CHCTRL, 1);
-		rz_dmac_ch_writel(channel, chctrl, CHCTRL, 1);
-	}
+	chctrl = (channel->chctrl | CHCTRL_SETEN);
+	rz_dmac_ch_writel(channel, nxla, NXLA, 1);
+	rz_dmac_ch_writel(channel, channel->chcfg, CHCFG, 1);
+	rz_dmac_ch_writel(channel, CHCTRL_SWRST, CHCTRL, 1);
+	rz_dmac_ch_writel(channel, chctrl, CHCTRL, 1);
 }
 
 static void rz_dmac_disable_hw(struct rz_dmac_chan *channel)
@@ -426,18 +420,20 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
 	channel->chctrl = CHCTRL_SETEN;
 }
 
-static int rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
+static void rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
 {
-	struct rz_dmac_desc *d = chan->desc;
 	struct virt_dma_desc *vd;
 
 	vd = vchan_next_desc(&chan->vc);
-	if (!vd)
-		return 0;
+	if (!vd) {
+		chan->desc = NULL;
+		return;
+	}
 
 	list_del(&vd->node);
+	chan->desc = to_rz_dmac_desc(vd);
 
-	switch (d->type) {
+	switch (chan->desc->type) {
 	case RZ_DMAC_DESC_MEMCPY:
 		rz_dmac_prepare_desc_for_memcpy(chan);
 		break;
@@ -445,14 +441,9 @@ static int rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
 	case RZ_DMAC_DESC_SLAVE_SG:
 		rz_dmac_prepare_descs_for_slave_sg(chan);
 		break;
-
-	default:
-		return -EINVAL;
 	}
 
 	rz_dmac_enable_hw(chan);
-
-	return 0;
 }
 
 /*
@@ -494,8 +485,6 @@ static void rz_dmac_free_chan_resources(struct dma_chan *chan)
 	rz_lmdesc_setup(channel, channel->lmdesc.base);
 
 	rz_dmac_disable_hw(channel);
-	list_splice_tail_init(&channel->ld_active, &channel->ld_free);
-	list_splice_tail_init(&channel->ld_queue, &channel->ld_free);
 
 	if (channel->mid_rid >= 0) {
 		clear_bit(channel->mid_rid, dmac->modules);
@@ -504,13 +493,19 @@ static void rz_dmac_free_chan_resources(struct dma_chan *chan)
 
 	spin_unlock_irqrestore(&channel->vc.lock, flags);
 
+	vchan_free_chan_resources(&channel->vc);
+
+	spin_lock_irqsave(&channel->vc.lock, flags);
+
 	list_for_each_entry_safe(desc, _desc, &channel->ld_free, node) {
+		list_del(&desc->node);
 		kfree(desc);
 		channel->descs_allocated--;
 	}
 
 	INIT_LIST_HEAD(&channel->ld_free);
-	vchan_free_chan_resources(&channel->vc);
+
+	spin_unlock_irqrestore(&channel->vc.lock, flags);
 }
 
 static struct dma_async_tx_descriptor *
@@ -529,15 +524,15 @@ rz_dmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
 		if (!desc)
 			return NULL;
 
-		desc->type = RZ_DMAC_DESC_MEMCPY;
-		desc->src = src;
-		desc->dest = dest;
-		desc->len = len;
-		desc->direction = DMA_MEM_TO_MEM;
-
-		list_move_tail(channel->ld_free.next, &channel->ld_queue);
+		list_del(&desc->node);
 	}
 
+	desc->type = RZ_DMAC_DESC_MEMCPY;
+	desc->src = src;
+	desc->dest = dest;
+	desc->len = len;
+	desc->direction = DMA_MEM_TO_MEM;
+
 	return vchan_tx_prep(&channel->vc, &desc->vd, flags);
 }
 
@@ -558,22 +553,22 @@ rz_dmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
 		if (!desc)
 			return NULL;
 
-		for_each_sg(sgl, sg, sg_len, i)
-			dma_length += sg_dma_len(sg);
+		list_del(&desc->node);
+	}
 
-		desc->type = RZ_DMAC_DESC_SLAVE_SG;
-		desc->sg = sgl;
-		desc->sgcount = sg_len;
-		desc->len = dma_length;
-		desc->direction = direction;
+	for_each_sg(sgl, sg, sg_len, i)
+		dma_length += sg_dma_len(sg);
 
-		if (direction == DMA_DEV_TO_MEM)
-			desc->src = channel->src_per_address;
-		else
-			desc->dest = channel->dst_per_address;
+	desc->type = RZ_DMAC_DESC_SLAVE_SG;
+	desc->sg = sgl;
+	desc->sgcount = sg_len;
+	desc->len = dma_length;
+	desc->direction = direction;
 
-		list_move_tail(channel->ld_free.next, &channel->ld_queue);
-	}
+	if (direction == DMA_DEV_TO_MEM)
+		desc->src = channel->src_per_address;
+	else
+		desc->dest = channel->dst_per_address;
 
 	return vchan_tx_prep(&channel->vc, &desc->vd, flags);
 }
@@ -588,8 +583,11 @@ static int rz_dmac_terminate_all(struct dma_chan *chan)
 	rz_dmac_disable_hw(channel);
 	rz_lmdesc_setup(channel, channel->lmdesc.base);
 
-	list_splice_tail_init(&channel->ld_active, &channel->ld_free);
-	list_splice_tail_init(&channel->ld_queue, &channel->ld_free);
+	if (channel->desc) {
+		vchan_terminate_vdesc(&channel->desc->vd);
+		channel->desc = NULL;
+	}
+
 	vchan_get_all_descriptors(&channel->vc, &head);
 	spin_unlock_irqrestore(&channel->vc.lock, flags);
 	vchan_dma_desc_free_list(&channel->vc, &head);
@@ -600,25 +598,16 @@ static int rz_dmac_terminate_all(struct dma_chan *chan)
 static void rz_dmac_issue_pending(struct dma_chan *chan)
 {
 	struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
-	struct rz_dmac *dmac = to_rz_dmac(chan->device);
-	struct rz_dmac_desc *desc;
 	unsigned long flags;
 
 	spin_lock_irqsave(&channel->vc.lock, flags);
 
-	if (!list_empty(&channel->ld_queue)) {
-		desc = list_first_entry(&channel->ld_queue,
-					struct rz_dmac_desc, node);
-		channel->desc = desc;
-		if (vchan_issue_pending(&channel->vc)) {
-			if (rz_dmac_xfer_desc(channel) < 0)
-				dev_warn(dmac->dev, "ch: %d couldn't issue DMA xfer\n",
-					 channel->index);
-			else
-				list_move_tail(channel->ld_queue.next,
-					       &channel->ld_active);
-		}
-	}
+	/*
+	 * Issue the descriptor. If another transfer is already in progress, the
+	 * issued descriptor will be handled after the current transfer finishes.
+	 */
+	if (vchan_issue_pending(&channel->vc) && !channel->desc)
+		rz_dmac_xfer_desc(channel);
 
 	spin_unlock_irqrestore(&channel->vc.lock, flags);
 }
@@ -676,13 +665,13 @@ static int rz_dmac_config(struct dma_chan *chan,
 
 static void rz_dmac_virt_desc_free(struct virt_dma_desc *vd)
 {
-	/*
-	 * Place holder
-	 * Descriptor allocation is done during alloc_chan_resources and
-	 * get freed during free_chan_resources.
-	 * list is used to manage the descriptors and avoid any memory
-	 * allocation/free during DMA read/write.
-	 */
+	struct rz_dmac_chan *channel = to_rz_dmac_chan(vd->tx.chan);
+	struct virt_dma_chan *vc = to_virt_chan(vd->tx.chan);
+	struct rz_dmac_desc *desc = to_rz_dmac_desc(vd);
+
+	guard(spinlock_irqsave)(&vc->lock);
+
+	list_add_tail(&desc->node, &channel->ld_free);
 }
 
 static void rz_dmac_device_synchronize(struct dma_chan *chan)
@@ -692,6 +681,8 @@ static void rz_dmac_device_synchronize(struct dma_chan *chan)
 	u32 chstat;
 	int ret;
 
+	vchan_synchronize(&channel->vc);
+
 	ret = read_poll_timeout(rz_dmac_ch_readl, chstat, !(chstat & CHSTAT_EN),
 				100, 100000, false, channel, CHSTAT, 1);
 	if (ret < 0)
@@ -739,58 +730,22 @@ static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel,
 static u32 rz_dmac_chan_get_residue(struct rz_dmac_chan *channel,
 				    dma_cookie_t cookie)
 {
-	struct rz_dmac_desc *current_desc, *desc;
-	enum dma_status status;
+	struct rz_dmac_desc *desc = NULL;
+	struct virt_dma_desc *vd;
 	u32 crla, crtb, i;
 
-	/* Get current processing virtual descriptor */
-	current_desc = list_first_entry_or_null(&channel->ld_active,
-						struct rz_dmac_desc, node);
-	if (!current_desc)
-		return 0;
-
-	/*
-	 * If the cookie corresponds to a descriptor that has been completed
-	 * there is no residue. The same check has already been performed by the
-	 * caller but without holding the channel lock, so the descriptor could
-	 * now be complete.
-	 */
-	status = dma_cookie_status(&channel->vc.chan, cookie, NULL);
-	if (status == DMA_COMPLETE)
-		return 0;
-
-	/*
-	 * If the cookie doesn't correspond to the currently processing virtual
-	 * descriptor then the descriptor hasn't been processed yet, and the
-	 * residue is equal to the full descriptor size. Also, a client driver
-	 * is possible to call this function before rz_dmac_irq_handler_thread()
-	 * runs. In this case, the running descriptor will be the next
-	 * descriptor, and will appear in the done list. So, if the argument
-	 * cookie matches the done list's cookie, we can assume the residue is
-	 * zero.
-	 */
-	if (cookie != current_desc->vd.tx.cookie) {
-		list_for_each_entry(desc, &channel->ld_free, node) {
-			if (cookie == desc->vd.tx.cookie)
-				return 0;
-		}
-
-		list_for_each_entry(desc, &channel->ld_queue, node) {
-			if (cookie == desc->vd.tx.cookie)
-				return desc->len;
-		}
-
-		list_for_each_entry(desc, &channel->ld_active, node) {
-			if (cookie == desc->vd.tx.cookie)
-				return desc->len;
-		}
+	vd = vchan_find_desc(&channel->vc, cookie);
+	if (vd) {
+		/* Descriptor has been issued but not yet processed. */
+		desc = to_rz_dmac_desc(vd);
+		return desc->len;
+	} else if (channel->desc && channel->desc->vd.tx.cookie == cookie) {
+		/* Descriptor is currently processed. */
+		desc = channel->desc;
+	}
 
-		/*
-		 * No descriptor found for the cookie, there's thus no residue.
-		 * This shouldn't happen if the calling driver passes a correct
-		 * cookie value.
-		 */
-		WARN(1, "No descriptor for cookie!");
+	if (!desc) {
+		/* Descriptor was not found. May be already completed by now. */
 		return 0;
 	}
 
@@ -813,7 +768,7 @@ static u32 rz_dmac_chan_get_residue(struct rz_dmac_chan *channel,
 	 * Calculate number of bytes transferred in processing virtual descriptor.
 	 * One virtual descriptor can have many lmdesc.
 	 */
-	return crtb + rz_dmac_calculate_residue_bytes_in_vd(channel, current_desc, crla);
+	return crtb + rz_dmac_calculate_residue_bytes_in_vd(channel, desc, crla);
 }
 
 static enum dma_status rz_dmac_tx_status(struct dma_chan *chan,
@@ -824,21 +779,17 @@ static enum dma_status rz_dmac_tx_status(struct dma_chan *chan,
 	enum dma_status status;
 	u32 residue;
 
-	status = dma_cookie_status(chan, cookie, txstate);
-	if (status == DMA_COMPLETE || !txstate)
-		return status;
-
 	scoped_guard(spinlock_irqsave, &channel->vc.lock) {
+		status = dma_cookie_status(chan, cookie, txstate);
+		if (status == DMA_COMPLETE || !txstate)
+			return status;
+
 		residue = rz_dmac_chan_get_residue(channel, cookie);
 
-		if (rz_dmac_chan_is_paused(channel))
+		if (status == DMA_IN_PROGRESS && rz_dmac_chan_is_paused(channel))
 			status = DMA_PAUSED;
 	}
 
-	/* if there's no residue and no paused, the cookie is complete */
-	if (!residue && status != DMA_PAUSED)
-		return DMA_COMPLETE;
-
 	dma_set_residue(txstate, residue);
 
 	return status;
@@ -918,28 +869,18 @@ static irqreturn_t rz_dmac_irq_handler(int irq, void *dev_id)
 static irqreturn_t rz_dmac_irq_handler_thread(int irq, void *dev_id)
 {
 	struct rz_dmac_chan *channel = dev_id;
-	struct rz_dmac_desc *desc = NULL;
-	unsigned long flags;
+	struct rz_dmac_desc *desc;
 
-	spin_lock_irqsave(&channel->vc.lock, flags);
+	guard(spinlock_irqsave)(&channel->vc.lock);
 
-	if (list_empty(&channel->ld_active)) {
-		/* Someone might have called terminate all */
-		goto out;
-	}
+	desc = channel->desc;
+	if (!desc)
+		return IRQ_HANDLED;
 
-	desc = list_first_entry(&channel->ld_active, struct rz_dmac_desc, node);
 	vchan_cookie_complete(&desc->vd);
-	list_move_tail(channel->ld_active.next, &channel->ld_free);
-	if (!list_empty(&channel->ld_queue)) {
-		desc = list_first_entry(&channel->ld_queue, struct rz_dmac_desc,
-					node);
-		channel->desc = desc;
-		if (rz_dmac_xfer_desc(channel) == 0)
-			list_move_tail(channel->ld_queue.next, &channel->ld_active);
-	}
-out:
-	spin_unlock_irqrestore(&channel->vc.lock, flags);
+	channel->desc = NULL;
+
+	rz_dmac_xfer_desc(channel);
 
 	return IRQ_HANDLED;
 }
@@ -1021,9 +962,7 @@ static int rz_dmac_chan_probe(struct rz_dmac *dmac,
 
 	channel->vc.desc_free = rz_dmac_virt_desc_free;
 	vchan_init(&channel->vc, &dmac->engine);
-	INIT_LIST_HEAD(&channel->ld_queue);
 	INIT_LIST_HEAD(&channel->ld_free);
-	INIT_LIST_HEAD(&channel->ld_active);
 
 	/* Initialize register for each channel */
 	rz_dmac_disable_hw(channel);
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v5 10/17] dmaengine: sh: rz-dmac: Refactor pause/resume code
  2026-05-12 12:12 [PATCH v5 00/17] Renesas: dmaengine and ASoC fixes Claudiu Beznea
                   ` (8 preceding siblings ...)
  2026-05-12 12:12 ` [PATCH v5 09/17] dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor processing Claudiu Beznea
@ 2026-05-12 12:12 ` Claudiu Beznea
  2026-05-12 21:43   ` Frank Li
  2026-05-14  0:57   ` sashiko-bot
  2026-05-12 12:12 ` [PATCH v5 11/17] dmaengine: sh: rz-dmac: Drop the update of channel->chctrl with CHCTRL_SETEN Claudiu Beznea
                   ` (6 subsequent siblings)
  16 siblings, 2 replies; 52+ messages in thread
From: Claudiu Beznea @ 2026-05-12 12:12 UTC (permalink / raw)
  To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur
  Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc, Claudiu Beznea

Subsequent patches will add suspend/resume and cyclic DMA support to the
rz-dmac driver. This support needs to work on SoCs where power to most
components (including DMA) is turned off during system suspend. For this,
some channels (for example cyclic ones) may need to be paused and resumed
manually by the DMA driver during system suspend/resume.

Refactor the pause/resume support so the same code can be reused in the
system suspend/resume path.

Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---

Changes in v5:
- none

Changes in v4:
- reset channel->status in rz_dmac_free_chan_resources() and
  rz_dmac_terminate_all()

Changes in v3:
- none, this patch is new

 drivers/dma/sh/rz-dmac.c | 73 ++++++++++++++++++++++++++++++++++------
 1 file changed, 62 insertions(+), 11 deletions(-)

diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 53ee9fe65261..2bf796dcc5f6 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -18,6 +18,7 @@
 #include <linux/irqchip/irq-renesas-rzv2h.h>
 #include <linux/irqchip/irq-renesas-rzt2h.h>
 #include <linux/list.h>
+#include <linux/lockdep.h>
 #include <linux/module.h>
 #include <linux/of.h>
 #include <linux/of_dma.h>
@@ -63,6 +64,14 @@ struct rz_dmac_desc {
 
 #define to_rz_dmac_desc(d)	container_of(d, struct rz_dmac_desc, vd)
 
+/**
+ * enum rz_dmac_chan_status: RZ DMAC channel status
+ * @RZ_DMAC_CHAN_STATUS_PAUSED: Channel is paused through DMA engine callbacks
+ */
+enum rz_dmac_chan_status {
+	RZ_DMAC_CHAN_STATUS_PAUSED,
+};
+
 struct rz_dmac_chan {
 	struct virt_dma_chan vc;
 	void __iomem *ch_base;
@@ -74,6 +83,8 @@ struct rz_dmac_chan {
 	dma_addr_t src_per_address;
 	dma_addr_t dst_per_address;
 
+	unsigned long status;
+
 	u32 chcfg;
 	u32 chctrl;
 	int mid_rid;
@@ -491,6 +502,8 @@ static void rz_dmac_free_chan_resources(struct dma_chan *chan)
 		channel->mid_rid = -EINVAL;
 	}
 
+	channel->status = 0;
+
 	spin_unlock_irqrestore(&channel->vc.lock, flags);
 
 	vchan_free_chan_resources(&channel->vc);
@@ -589,6 +602,9 @@ static int rz_dmac_terminate_all(struct dma_chan *chan)
 	}
 
 	vchan_get_all_descriptors(&channel->vc, &head);
+
+	channel->status = 0;
+
 	spin_unlock_irqrestore(&channel->vc.lock, flags);
 	vchan_dma_desc_free_list(&channel->vc, &head);
 
@@ -795,35 +811,70 @@ static enum dma_status rz_dmac_tx_status(struct dma_chan *chan,
 	return status;
 }
 
-static int rz_dmac_device_pause(struct dma_chan *chan)
+static int rz_dmac_device_pause_set(struct rz_dmac_chan *channel,
+				    unsigned long set_bitmask)
 {
-	struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
+	int ret = 0;
 	u32 val;
 
-	guard(spinlock_irqsave)(&channel->vc.lock);
+	lockdep_assert_held(&channel->vc.lock);
 
 	if (!rz_dmac_chan_is_enabled(channel))
 		return 0;
 
+	if (rz_dmac_chan_is_paused(channel))
+		goto set_bit;
+
 	rz_dmac_ch_writel(channel, CHCTRL_SETSUS, CHCTRL, 1);
-	return read_poll_timeout_atomic(rz_dmac_ch_readl, val,
-					(val & CHSTAT_SUS), 1, 1024,
-					false, channel, CHSTAT, 1);
+	ret = read_poll_timeout_atomic(rz_dmac_ch_readl, val,
+				       (val & CHSTAT_SUS), 1, 1024, false,
+				       channel, CHSTAT, 1);
+
+set_bit:
+	channel->status |= set_bitmask;
+
+	return ret;
 }
 
-static int rz_dmac_device_resume(struct dma_chan *chan)
+static int rz_dmac_device_pause(struct dma_chan *chan)
 {
 	struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
-	u32 val;
 
 	guard(spinlock_irqsave)(&channel->vc.lock);
 
+	return rz_dmac_device_pause_set(channel, BIT(RZ_DMAC_CHAN_STATUS_PAUSED));
+}
+
+static int rz_dmac_device_resume_set(struct rz_dmac_chan *channel,
+				     unsigned long clear_bitmask)
+{
+	int ret = 0;
+	u32 val;
+
+	lockdep_assert_held(&channel->vc.lock);
+
 	/* Do not check CHSTAT_SUS but rely on HW capabilities. */
 
 	rz_dmac_ch_writel(channel, CHCTRL_CLRSUS, CHCTRL, 1);
-	return read_poll_timeout_atomic(rz_dmac_ch_readl, val,
-					!(val & CHSTAT_SUS), 1, 1024,
-					false, channel, CHSTAT, 1);
+	ret = read_poll_timeout_atomic(rz_dmac_ch_readl, val,
+				       !(val & CHSTAT_SUS), 1, 1024, false,
+				       channel, CHSTAT, 1);
+
+	channel->status &= ~clear_bitmask;
+
+	return ret;
+}
+
+static int rz_dmac_device_resume(struct dma_chan *chan)
+{
+	struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
+
+	guard(spinlock_irqsave)(&channel->vc.lock);
+
+	if (!(channel->status & BIT(RZ_DMAC_CHAN_STATUS_PAUSED)))
+		return 0;
+
+	return rz_dmac_device_resume_set(channel, BIT(RZ_DMAC_CHAN_STATUS_PAUSED));
 }
 
 /*
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v5 11/17] dmaengine: sh: rz-dmac: Drop the update of channel->chctrl with CHCTRL_SETEN
  2026-05-12 12:12 [PATCH v5 00/17] Renesas: dmaengine and ASoC fixes Claudiu Beznea
                   ` (9 preceding siblings ...)
  2026-05-12 12:12 ` [PATCH v5 10/17] dmaengine: sh: rz-dmac: Refactor pause/resume code Claudiu Beznea
@ 2026-05-12 12:12 ` Claudiu Beznea
  2026-05-12 21:55   ` Frank Li
  2026-05-12 12:12 ` [PATCH v5 12/17] dmaengine: sh: rz-dmac: Add cyclic DMA support Claudiu Beznea
                   ` (5 subsequent siblings)
  16 siblings, 1 reply; 52+ messages in thread
From: Claudiu Beznea @ 2026-05-12 12:12 UTC (permalink / raw)
  To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur
  Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc, Claudiu Beznea

The CHCTRL_SETEN bit is explicitly set in rz_dmac_enable_hw(). Updating
struct rz_dmac_chan::chctrl with this bit in
rz_dmac_prepare_desc_for_memcpy() and rz_dmac_prepare_descs_for_slave_sg()
is unnecessary in the current code base. Moreover, it conflicts with the
configuration sequence that will be used for cyclic DMA channels during
suspend to RAM. Cyclic DMA support will be introduced in subsequent
commits.

This is a preparatory commit for cyclic DMA suspend to RAM support.

Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---

Changes in v5:
- none

Changes in v4:
- set channel->chctrl = 0 in rz_dmac_prepare_descs_for_slave_sg()

Changes in v3:
- none

Changes in v2:
- fixed typos in patch title and patch description

 drivers/dma/sh/rz-dmac.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 2bf796dcc5f6..2de519b581b6 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -377,7 +377,7 @@ static void rz_dmac_prepare_desc_for_memcpy(struct rz_dmac_chan *channel)
 	rz_dmac_set_dma_req_no(dmac, channel->index, dmac->info->default_dma_req_no);
 
 	channel->chcfg = chcfg;
-	channel->chctrl = CHCTRL_STG | CHCTRL_SETEN;
+	channel->chctrl = CHCTRL_STG;
 }
 
 static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
@@ -428,7 +428,7 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
 
 	rz_dmac_set_dma_req_no(dmac, channel->index, channel->mid_rid);
 
-	channel->chctrl = CHCTRL_SETEN;
+	channel->chctrl = 0;
 }
 
 static void rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v5 12/17] dmaengine: sh: rz-dmac: Add cyclic DMA support
  2026-05-12 12:12 [PATCH v5 00/17] Renesas: dmaengine and ASoC fixes Claudiu Beznea
                   ` (10 preceding siblings ...)
  2026-05-12 12:12 ` [PATCH v5 11/17] dmaengine: sh: rz-dmac: Drop the update of channel->chctrl with CHCTRL_SETEN Claudiu Beznea
@ 2026-05-12 12:12 ` Claudiu Beznea
  2026-05-12 22:00   ` Frank Li
  2026-05-14  1:43   ` sashiko-bot
  2026-05-12 12:12 ` [PATCH v5 13/17] dmaengine: sh: rz-dmac: Add runtime PM support Claudiu Beznea
                   ` (4 subsequent siblings)
  16 siblings, 2 replies; 52+ messages in thread
From: Claudiu Beznea @ 2026-05-12 12:12 UTC (permalink / raw)
  To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur
  Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc, Claudiu Beznea

Add cyclic DMA support to the RZ DMAC driver. A per-channel status bit is
introduced to mark cyclic channels and is set during the DMA prepare
callback. The IRQ handler checks this status bit and calls
vchan_cyclic_callback() accordingly.

Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---

Changes in v5:
- none

Changes in v4:
- drop the nxla update logic in rz_dmac_lmdesc_recycle() as this is
  not needed for any kind of transfers
- drop the update of channel->status = 0 from rz_dmac_free_chan_resources()
  and rz_dmac_terminate_all() as this was moved in patch 09/17

Changes in v3:
- updated rz_dmac_lmdesc_recycle() to restore the lmdesc->nxla
- in rz_dmac_prepare_descs_for_cyclic() update directly the
  desc->start_lmdesc with the descriptor pointer instead of the
  descriptor address
- used rz_dmac_lmdesc_addr() to compute the descriptor address
- set channel->status = 0 in rz_dmac_free_chan_resources()
- in rz_dmac_prep_dma_cyclic() check for invalid periods or buffer len
  and limit the critical area protected by spinlock
- set channel->status = 0 in rz_dmac_terminate_all()
- updated rz_dmac_calculate_residue_bytes_in_vd() to use 
  rz_dmac_lmdesc_addr()
- dropped goto in rz_dmac_irq_handler_thread() as it is not needed
  anymore; dropped also the local variable desc

Changes in v2:
- none

 drivers/dma/sh/rz-dmac.c | 136 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 130 insertions(+), 6 deletions(-)

diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 2de519b581b6..d6ad070be705 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -35,6 +35,7 @@
 enum  rz_dmac_prep_type {
 	RZ_DMAC_DESC_MEMCPY,
 	RZ_DMAC_DESC_SLAVE_SG,
+	RZ_DMAC_DESC_CYCLIC,
 };
 
 struct rz_lmdesc {
@@ -67,9 +68,11 @@ struct rz_dmac_desc {
 /**
  * enum rz_dmac_chan_status: RZ DMAC channel status
 * @RZ_DMAC_CHAN_STATUS_PAUSED: Channel is paused through DMA engine callbacks
+ * @RZ_DMAC_CHAN_STATUS_CYCLIC: Channel is cyclic
  */
 enum rz_dmac_chan_status {
 	RZ_DMAC_CHAN_STATUS_PAUSED,
+	RZ_DMAC_CHAN_STATUS_CYCLIC,
 };
 
 struct rz_dmac_chan {
@@ -191,6 +194,7 @@ struct rz_dmac {
 
 /* LINK MODE DESCRIPTOR */
 #define HEADER_LV			BIT(0)
+#define HEADER_WBD			BIT(2)
 
 #define RZ_DMAC_MAX_CHAN_DESCRIPTORS	16
 #define RZ_DMAC_MAX_CHANNELS		16
@@ -431,6 +435,57 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
 	channel->chctrl = 0;
 }
 
+static void rz_dmac_prepare_descs_for_cyclic(struct rz_dmac_chan *channel)
+{
+	struct dma_chan *chan = &channel->vc.chan;
+	struct rz_dmac *dmac = to_rz_dmac(chan->device);
+	struct rz_dmac_desc *d = channel->desc;
+	size_t period_len = d->sgcount;
+	struct rz_lmdesc *lmdesc;
+	size_t buf_len = d->len;
+	size_t periods = buf_len / period_len;
+
+	lockdep_assert_held(&channel->vc.lock);
+
+	channel->chcfg |= CHCFG_SEL(channel->index) | CHCFG_DMS;
+
+	if (d->direction == DMA_DEV_TO_MEM) {
+		channel->chcfg |= CHCFG_SAD;
+		channel->chcfg &= ~CHCFG_REQD;
+	} else {
+		channel->chcfg |= CHCFG_DAD | CHCFG_REQD;
+	}
+
+	lmdesc = channel->lmdesc.tail;
+	d->start_lmdesc = lmdesc;
+
+	for (size_t i = 0; i < periods; i++) {
+		if (d->direction == DMA_DEV_TO_MEM) {
+			lmdesc->sa = d->src;
+			lmdesc->da = d->dest + (i * period_len);
+		} else {
+			lmdesc->sa = d->src + (i * period_len);
+			lmdesc->da = d->dest;
+		}
+
+		lmdesc->tb = period_len;
+		lmdesc->chitvl = 0;
+		lmdesc->chext = 0;
+		lmdesc->chcfg = channel->chcfg;
+		lmdesc->header = HEADER_LV | HEADER_WBD;
+
+		if (i == periods - 1)
+			lmdesc->nxla = rz_dmac_lmdesc_addr(channel, d->start_lmdesc);
+
+		if (++lmdesc >= (channel->lmdesc.base + DMAC_NR_LMDESC))
+			lmdesc = channel->lmdesc.base;
+	}
+
+	channel->lmdesc.tail = lmdesc;
+
+	rz_dmac_set_dma_req_no(dmac, channel->index, channel->mid_rid);
+}
+
 static void rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
 {
 	struct virt_dma_desc *vd;
@@ -452,6 +507,10 @@ static void rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
 	case RZ_DMAC_DESC_SLAVE_SG:
 		rz_dmac_prepare_descs_for_slave_sg(chan);
 		break;
+
+	case RZ_DMAC_DESC_CYCLIC:
+		rz_dmac_prepare_descs_for_cyclic(chan);
+		break;
 	}
 
 	rz_dmac_enable_hw(chan);
@@ -586,6 +645,55 @@ rz_dmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
 	return vchan_tx_prep(&channel->vc, &desc->vd, flags);
 }
 
+static struct dma_async_tx_descriptor *
+rz_dmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
+			size_t buf_len, size_t period_len,
+			enum dma_transfer_direction direction,
+			unsigned long flags)
+{
+	struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
+	struct rz_dmac_desc *desc;
+	size_t periods;
+
+	if (!is_slave_direction(direction))
+		return NULL;
+
+	if (!period_len || !buf_len)
+		return NULL;
+
+	periods = buf_len / period_len;
+	if (!periods || periods > DMAC_NR_LMDESC)
+		return NULL;
+
+	scoped_guard(spinlock_irqsave, &channel->vc.lock) {
+		if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC))
+			return NULL;
+
+		desc = list_first_entry_or_null(&channel->ld_free, struct rz_dmac_desc, node);
+		if (!desc)
+			return NULL;
+
+		list_del(&desc->node);
+
+		channel->status |= BIT(RZ_DMAC_CHAN_STATUS_CYCLIC);
+	}
+
+	desc->type = RZ_DMAC_DESC_CYCLIC;
+	desc->sgcount = period_len;
+	desc->len = buf_len;
+	desc->direction = direction;
+
+	if (direction == DMA_DEV_TO_MEM) {
+		desc->src = channel->src_per_address;
+		desc->dest = buf_addr;
+	} else {
+		desc->src = buf_addr;
+		desc->dest = channel->dst_per_address;
+	}
+
+	return vchan_tx_prep(&channel->vc, &desc->vd, flags);
+}
+
 static int rz_dmac_terminate_all(struct dma_chan *chan)
 {
 	struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
@@ -733,9 +841,18 @@ static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel,
 	}
 
 	/* Calculate residue from next lmdesc to end of virtual desc */
-	while (lmdesc->chcfg & CHCFG_DEM) {
-		residue += lmdesc->tb;
-		lmdesc = rz_dmac_get_next_lmdesc(channel->lmdesc.base, lmdesc);
+	if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)) {
+		u32 start_lmdesc_addr = rz_dmac_lmdesc_addr(channel, desc->start_lmdesc);
+
+		while (lmdesc->nxla != start_lmdesc_addr) {
+			residue += lmdesc->tb;
+			lmdesc = rz_dmac_get_next_lmdesc(channel->lmdesc.base, lmdesc);
+		}
+	} else {
+		while (lmdesc->chcfg & CHCFG_DEM) {
+			residue += lmdesc->tb;
+			lmdesc = rz_dmac_get_next_lmdesc(channel->lmdesc.base, lmdesc);
+		}
 	}
 
 	dev_dbg(dmac->dev, "%s: VD residue is %u\n", __func__, residue);
@@ -928,10 +1045,14 @@ static irqreturn_t rz_dmac_irq_handler_thread(int irq, void *dev_id)
 	if (!desc)
 		return IRQ_HANDLED;
 
-	vchan_cookie_complete(&desc->vd);
-	channel->desc = NULL;
+	if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)) {
+		vchan_cyclic_callback(&desc->vd);
+	} else {
+		vchan_cookie_complete(&desc->vd);
+		channel->desc = NULL;
 
-	rz_dmac_xfer_desc(channel);
+		rz_dmac_xfer_desc(channel);
+	}
 
 	return IRQ_HANDLED;
 }
@@ -1183,6 +1304,8 @@ static int rz_dmac_probe(struct platform_device *pdev)
 	engine = &dmac->engine;
 	dma_cap_set(DMA_SLAVE, engine->cap_mask);
 	dma_cap_set(DMA_MEMCPY, engine->cap_mask);
+	dma_cap_set(DMA_CYCLIC, engine->cap_mask);
+	engine->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
 	engine->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
 	rz_dmac_writel(dmac, DCTRL_DEFAULT, CHANNEL_0_7_COMMON_BASE + DCTRL);
 	rz_dmac_writel(dmac, DCTRL_DEFAULT, CHANNEL_8_15_COMMON_BASE + DCTRL);
@@ -1194,6 +1317,7 @@ static int rz_dmac_probe(struct platform_device *pdev)
 	engine->device_tx_status = rz_dmac_tx_status;
 	engine->device_prep_slave_sg = rz_dmac_prep_slave_sg;
 	engine->device_prep_dma_memcpy = rz_dmac_prep_dma_memcpy;
+	engine->device_prep_dma_cyclic = rz_dmac_prep_dma_cyclic;
 	engine->device_config = rz_dmac_config;
 	engine->device_terminate_all = rz_dmac_terminate_all;
 	engine->device_issue_pending = rz_dmac_issue_pending;
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v5 13/17] dmaengine: sh: rz-dmac: Add runtime PM support
  2026-05-12 12:12 [PATCH v5 00/17] Renesas: dmaengine and ASoC fixes Claudiu Beznea
                   ` (11 preceding siblings ...)
  2026-05-12 12:12 ` [PATCH v5 12/17] dmaengine: sh: rz-dmac: Add cyclic DMA support Claudiu Beznea
@ 2026-05-12 12:12 ` Claudiu Beznea
  2026-05-12 22:03   ` Frank Li
  2026-05-14  2:08   ` sashiko-bot
  2026-05-12 12:12 ` [PATCH v5 14/17] dmaengine: sh: rz-dmac: Add suspend to RAM support Claudiu Beznea
                   ` (3 subsequent siblings)
  16 siblings, 2 replies; 52+ messages in thread
From: Claudiu Beznea @ 2026-05-12 12:12 UTC (permalink / raw)
  To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur
  Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc, Claudiu Beznea

Protect the driver-exposed APIs with runtime PM suspend/resume calls
before accessing HW registers. As the current driver leaves runtime PM
enabled in probe, the purpose of the changes in this patch is to avoid
accessing HW registers after a failed system suspend leaves the runtime
PM state of the device improperly reinitialized.

In that case, the driver remains bound to the device, the APIs are still
exposed, and any access to HW registers without runtime resuming the
device may lead to synchronous aborts.

This patch prepares the driver for suspend-to-RAM support.
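
The acquire/check/release pattern described above can be modeled in
plain C. This is an illustrative user-space sketch, not the driver's
code: the kernel patch uses the PM_RUNTIME_ACQUIRE_IF_ENABLED() and
PM_RUNTIME_ACQUIRE_ERR() helpers, while the names and error value below
are assumptions made for the model.

```c
#include <assert.h>

/* Minimal model of a device's runtime-PM state. */
struct dev_model {
	int rpm_enabled;	/* runtime PM enabled for the device */
	int usage_count;	/* models the runtime-PM usage counter */
	int resume_fails;	/* force a resume error, for testing */
};

/* Acquire: resume the device and bump the usage counter, if runtime PM
 * is enabled. Returns 0 on success or a negative error code. */
static int pm_acquire(struct dev_model *d)
{
	if (!d->rpm_enabled)
		return 0;	/* nothing to do, treated as success */
	if (d->resume_fails)
		return -5;	/* models pm_runtime_resume_and_get() failing */
	d->usage_count++;
	return 0;
}

/* Release: drop the usage counter taken by pm_acquire(). */
static void pm_release(struct dev_model *d)
{
	if (d->rpm_enabled && d->usage_count > 0)
		d->usage_count--;
}

/* A driver callback guarded by the acquire/err-check/release sequence:
 * on acquire failure the HW registers are never touched. */
static int guarded_hw_access(struct dev_model *d)
{
	int ret = pm_acquire(d);

	if (ret)
		return ret;
	/* ... HW register access would go here ... */
	pm_release(d);
	return 0;
}
```

The point of the sketch is the early return: after a failed system
suspend the resume step can fail, and bailing out before any register
access is what avoids the synchronous aborts mentioned above.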

Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---

Changes in v5:
- none, this patch is new

 drivers/dma/sh/rz-dmac.c | 48 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index d6ad070be705..df91657fd5e3 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -488,7 +488,15 @@ static void rz_dmac_prepare_descs_for_cyclic(struct rz_dmac_chan *channel)
 
 static void rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
 {
+	struct dma_chan *ch = &chan->vc.chan;
+	struct rz_dmac *dmac = to_rz_dmac(ch->device);
 	struct virt_dma_desc *vd;
+	int ret;
+
+	PM_RUNTIME_ACQUIRE_IF_ENABLED(dmac->dev, pm);
+	ret = PM_RUNTIME_ACQUIRE_ERR(&pm);
+	if (ret)
+		return;
 
 	vd = vchan_next_desc(&chan->vc);
 	if (!vd) {
@@ -549,6 +557,12 @@ static void rz_dmac_free_chan_resources(struct dma_chan *chan)
 	struct rz_dmac *dmac = to_rz_dmac(chan->device);
 	struct rz_dmac_desc *desc, *_desc;
 	unsigned long flags;
+	int ret;
+
+	PM_RUNTIME_ACQUIRE_IF_ENABLED(dmac->dev, pm);
+	ret = PM_RUNTIME_ACQUIRE_ERR(&pm);
+	if (ret)
+		return;
 
 	spin_lock_irqsave(&channel->vc.lock, flags);
 
@@ -697,8 +711,15 @@ rz_dmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
 static int rz_dmac_terminate_all(struct dma_chan *chan)
 {
 	struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
+	struct rz_dmac *dmac = to_rz_dmac(chan->device);
 	unsigned long flags;
 	LIST_HEAD(head);
+	int ret;
+
+	PM_RUNTIME_ACQUIRE_IF_ENABLED(dmac->dev, pm);
+	ret = PM_RUNTIME_ACQUIRE_ERR(&pm);
+	if (ret)
+		return ret;
 
 	spin_lock_irqsave(&channel->vc.lock, flags);
 	rz_dmac_disable_hw(channel);
@@ -807,6 +828,11 @@ static void rz_dmac_device_synchronize(struct dma_chan *chan)
 
 	vchan_synchronize(&channel->vc);
 
+	PM_RUNTIME_ACQUIRE_IF_ENABLED(dmac->dev, pm);
+	ret = PM_RUNTIME_ACQUIRE_ERR(&pm);
+	if (ret)
+		return;
+
 	ret = read_poll_timeout(rz_dmac_ch_readl, chstat, !(chstat & CHSTAT_EN),
 				100, 100000, false, channel, CHSTAT, 1);
 	if (ret < 0)
@@ -909,8 +935,15 @@ static enum dma_status rz_dmac_tx_status(struct dma_chan *chan,
 					 struct dma_tx_state *txstate)
 {
 	struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
+	struct rz_dmac *dmac = to_rz_dmac(chan->device);
 	enum dma_status status;
 	u32 residue;
+	int ret;
+
+	PM_RUNTIME_ACQUIRE_IF_ENABLED(dmac->dev, pm);
+	ret = PM_RUNTIME_ACQUIRE_ERR(&pm);
+	if (ret)
+		return ret;
 
 	scoped_guard(spinlock_irqsave, &channel->vc.lock) {
 		status = dma_cookie_status(chan, cookie, txstate);
@@ -956,6 +989,13 @@ static int rz_dmac_device_pause_set(struct rz_dmac_chan *channel,
 static int rz_dmac_device_pause(struct dma_chan *chan)
 {
 	struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
+	struct rz_dmac *dmac = to_rz_dmac(chan->device);
+	int ret;
+
+	PM_RUNTIME_ACQUIRE_IF_ENABLED(dmac->dev, pm);
+	ret = PM_RUNTIME_ACQUIRE_ERR(&pm);
+	if (ret)
+		return ret;
 
 	guard(spinlock_irqsave)(&channel->vc.lock);
 
@@ -985,6 +1025,13 @@ static int rz_dmac_device_resume_set(struct rz_dmac_chan *channel,
 static int rz_dmac_device_resume(struct dma_chan *chan)
 {
 	struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
+	struct rz_dmac *dmac = to_rz_dmac(chan->device);
+	int ret;
+
+	PM_RUNTIME_ACQUIRE_IF_ENABLED(dmac->dev, pm);
+	ret = PM_RUNTIME_ACQUIRE_ERR(&pm);
+	if (ret)
+		return ret;
 
 	guard(spinlock_irqsave)(&channel->vc.lock);
 
@@ -1265,6 +1312,7 @@ static int rz_dmac_probe(struct platform_device *pdev)
 		return dev_err_probe(&pdev->dev, PTR_ERR(dmac->rstc),
 				     "failed to get resets\n");
 
+	pm_runtime_irq_safe(&pdev->dev);
 	pm_runtime_enable(&pdev->dev);
 	ret = pm_runtime_resume_and_get(&pdev->dev);
 	if (ret < 0) {
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v5 14/17] dmaengine: sh: rz-dmac: Add suspend to RAM support
  2026-05-12 12:12 [PATCH v5 00/17] Renesas: dmaengine and ASoC fixes Claudiu Beznea
                   ` (12 preceding siblings ...)
  2026-05-12 12:12 ` [PATCH v5 13/17] dmaengine: sh: rz-dmac: Add runtime PM support Claudiu Beznea
@ 2026-05-12 12:12 ` Claudiu Beznea
  2026-05-14  3:04   ` sashiko-bot
  2026-05-12 12:12 ` [PATCH v5 15/17] ASoC: renesas: rz-ssi: Add pause support Claudiu Beznea
                   ` (2 subsequent siblings)
  16 siblings, 1 reply; 52+ messages in thread
From: Claudiu Beznea @ 2026-05-12 12:12 UTC (permalink / raw)
  To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur
  Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc, Claudiu Beznea

The Renesas RZ/G3S SoC supports a power saving mode in which power to most
of the SoC components is turned off, including the DMA IP. Add suspend to
RAM support to save and restore the DMA IP registers.

Cyclic DMA channels require special handling. Since they can be paused and
resumed during system suspend/resume, the driver restores additional
registers for these channels during the system resume phase. If a channel
was not explicitly paused during suspend, the driver ensures that it is
paused and resumed as part of the system suspend/resume flow. This can
be the case, for example, for a serial device used with
no_console_suspend.

For non-cyclic channels, the dev_pm_ops::prepare callback waits for all
the ongoing transfers to complete before allowing suspend-to-RAM to
proceed.
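
The suspend policy described above can be sketched as a small
user-space model. This is illustrative only: the real driver implements
it in rz_dmac_suspend_prepare() and rz_dmac_suspend() using channel
status bits and the virt-dma channel lock, and the field names below
are assumptions made for the model.

```c
#include <assert.h>
#include <errno.h>

/* Model of one DMA channel's state during system suspend. */
struct chan_model {
	int cyclic;		/* cyclic (e.g. audio) channel */
	int hw_enabled;		/* transfer currently running in HW */
	int consumer_paused;	/* paused via dmaengine_pause() */
	int internal_paused;	/* paused by the driver for suspend */
};

/* prepare phase: non-cyclic channels must be idle; otherwise abort the
 * suspend so the in-flight transfer can complete first. */
static int suspend_prepare(struct chan_model *ch, int n)
{
	for (int i = 0; i < n; i++) {
		if (ch[i].cyclic)
			continue;	/* handled in the suspend phase */
		if (ch[i].hw_enabled)
			return -EAGAIN;	/* still transferring */
	}
	return 0;
}

/* suspend phase: internally pause running cyclic channels that the
 * consumer did not already pause. */
static void suspend_cyclic(struct chan_model *ch, int n)
{
	for (int i = 0; i < n; i++) {
		if (!ch[i].cyclic)
			continue;
		if (ch[i].consumer_paused || !ch[i].hw_enabled)
			continue;	/* nothing to do */
		ch[i].internal_paused = 1;
		ch[i].hw_enabled = 0;
	}
}
```

On resume, only channels flagged as internally paused are resumed by
the driver; channels paused by their consumer stay paused until the
consumer resumes them.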

Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---

Changes in v5:
- runtime PM enable in rz_dmac_suspend_prepare() and rz_dmac_suspend_recover()
- initialize ret in rz_dmac_suspend()
- in suspend/resume APIs changed the order b/w runtime PM and reset calls
  to follow the sequence present in remove and probe
- in rz_dmac_suspend(): take into account the error code returned by
  pm_runtime_put_sync()
- in rz_dmac_resume(): use "return errors ? : 0;" instead of
  "return errors ? : ret;"

Changes in v4:
- in rz_dmac_device_synchronize() kept the read_poll_timeout() as
  this doesn't fail anymore with the proper status return from
  ->device_tx_status() API in case the channel is paused; with it
  the patch description was updated
- keep the cleanup path in rz_dmac_suspend() simpler to avoid
  confusion when using guard()
- used SYSTEM_SLEEP_PM_OPS() as there is no need for having the
  suspend/resume callbacks being called in NOIRQ phase

Changes in v3:
- dropped RZ_DMAC_CHAN_STATUS_SYS_SUSPENDED
- dropped read_poll_timeout() from rz_dmac_device_synchronize() as
  with audio drivers this times out all the time on suspend because
  the audio DMA is already paused when the rz_dmac_device_synchronize()
  is called; updated the commit description to describe this change
- call rz_dmac_device_pause_internal() only if RZ_DMAC_CHAN_STATUS_PAUSED
  bit is not set or the device is enabled in HW
- updated rz_dmac_device_resume_set() to have it simpler and cover
  the cases when it is called with the channel enabled or paused;
  updated the comment describing the covered use cases
- call rz_dmac_device_resume_internal() only if
  RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL bit is set
- in rz_dmac_chan_is_enabled() return -EAGAIN only if the channel is
  enabled in HW
- in rz_dmac_suspend_recover() drop the update of
  RZ_DMAC_CHAN_STATUS_SYS_SUSPENDED as this is not available anymore
- in rz_dmac_suspend() call rz_dmac_device_pause_internal() unconditionally
  as the logic is now handled inside the called function; also, do not
  ignore anymore the failure of internal suspend and abort the suspend
  instead
- report channel internal resume failures in rz_dmac_resume()
- use rz_dmac_disable_hw() instead of open coding it in rz_dmac_resume()
- call rz_dmac_device_resume_internal() unconditionally as the skip
  logic is now handled in the function itself
- use NOIRQ_SYSTEM_SLEEP_PM_OPS()
- didn't collect Tommaso's Tb tag as the series was changed a lot since
  v2

Changes in v2:
- fixed typos in patch description
- in rz_dmac_suspend_prepare(): return -EAGAIN based on the value returned
  by vchan_issue_pending()
- in rz_dmac_suspend_recover(): clear RZ_DMAC_CHAN_STATUS_SYS_SUSPENDED for
  non cyclic channels
- in rz_dmac_resume(): call rz_dmac_set_dma_req_no() only for cyclic channels


 drivers/dma/sh/rz-dmac.c | 209 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 204 insertions(+), 5 deletions(-)

diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index df91657fd5e3..b0a8590f4a9d 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -69,10 +69,12 @@ struct rz_dmac_desc {
  * enum rz_dmac_chan_status: RZ DMAC channel status
  * @RZ_DMAC_CHAN_STATUS_PAUSED: Channel is paused though DMA engine callbacks
  * @RZ_DMAC_CHAN_STATUS_CYCLIC: Channel is cyclic
+ * @RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL: Channel is paused through driver internal logic
  */
 enum rz_dmac_chan_status {
 	RZ_DMAC_CHAN_STATUS_PAUSED,
 	RZ_DMAC_CHAN_STATUS_CYCLIC,
+	RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL,
 };
 
 struct rz_dmac_chan {
@@ -92,6 +94,10 @@ struct rz_dmac_chan {
 	u32 chctrl;
 	int mid_rid;
 
+	struct {
+		u32 nxla;
+	} pm_state;
+
 	struct list_head ld_free;
 
 	struct {
@@ -1002,20 +1008,57 @@ static int rz_dmac_device_pause(struct dma_chan *chan)
 	return rz_dmac_device_pause_set(channel, BIT(RZ_DMAC_CHAN_STATUS_PAUSED));
 }
 
+static int rz_dmac_device_pause_internal(struct rz_dmac_chan *channel)
+{
+	lockdep_assert_held(&channel->vc.lock);
+
+	/* Skip channels explicitly paused by consumers or already disabled. */
+	if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_PAUSED) ||
+	    !rz_dmac_chan_is_enabled(channel))
+		return 0;
+
+	return rz_dmac_device_pause_set(channel, BIT(RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL));
+}
+
 static int rz_dmac_device_resume_set(struct rz_dmac_chan *channel,
 				     unsigned long clear_bitmask)
 {
-	int ret = 0;
 	u32 val;
+	int ret;
 
 	lockdep_assert_held(&channel->vc.lock);
 
-	/* Do not check CHSTAT_SUS but rely on HW capabilities. */
+	/*
+	 * We can be:
+	 *
+	 * 1/ after the channel was paused by a consumer and now it
+	 *    needs to be resumed
+	 * 2/ after the channel was paused internally (as a result of
+	 *    a system suspend, with or without power loss)
+	 * 3/ after the channel was paused by a consumer, the system
+	 *    went through a system suspend (with or without power loss)
+	 *    and the consumer wants to resume the channel
+	 *
+	 * To cover all the above cases we set both CLRSUS and SETEN.
+	 *
+	 * In case 1/ setting SETEN while the channel is still enabled
+	 * is harmless for the controller.
+	 *
+	 * In case 2/ the channel is disabled when calling this function
+	 * and setting CLRSUS is harmless for the controller as the
+	 * channel is disabled anyway.
+	 *
+	 * In case 3/ the channel is disabled/enabled depending on
+	 * whether the system suspend involved power loss, and setting
+	 * CLRSUS/SETEN is harmless for the controller as the channel
+	 * is enabled/disabled anyway.
+	 */
+
+	rz_dmac_ch_writel(channel, CHCTRL_CLRSUS | CHCTRL_SETEN, CHCTRL, 1);
 
-	rz_dmac_ch_writel(channel, CHCTRL_CLRSUS, CHCTRL, 1);
 	ret = read_poll_timeout_atomic(rz_dmac_ch_readl, val,
-				       !(val & CHSTAT_SUS), 1, 1024, false,
-				       channel, CHSTAT, 1);
+				       ((val & (CHSTAT_SUS | CHSTAT_EN)) == CHSTAT_EN),
+				       1, 1024, false, channel, CHSTAT, 1);
 
 	channel->status &= ~clear_bitmask;
 
@@ -1041,6 +1084,16 @@ static int rz_dmac_device_resume(struct dma_chan *chan)
 	return rz_dmac_device_resume_set(channel, BIT(RZ_DMAC_CHAN_STATUS_PAUSED));
 }
 
+static int rz_dmac_device_resume_internal(struct rz_dmac_chan *channel)
+{
+	lockdep_assert_held(&channel->vc.lock);
+
+	if (!(channel->status & BIT(RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL)))
+		return 0;
+
+	return rz_dmac_device_resume_set(channel, BIT(RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL));
+}
+
 /*
  * -----------------------------------------------------------------------------
  * IRQ handling
@@ -1406,6 +1459,151 @@ static void rz_dmac_remove(struct platform_device *pdev)
 	pm_runtime_disable(&pdev->dev);
 }
 
+static int rz_dmac_suspend_prepare(struct device *dev)
+{
+	struct rz_dmac *dmac = dev_get_drvdata(dev);
+	int ret;
+
+	PM_RUNTIME_ACQUIRE_IF_ENABLED(dmac->dev, pm);
+	ret = PM_RUNTIME_ACQUIRE_ERR(&pm);
+	if (ret)
+		return ret;
+
+	for (unsigned int i = 0; i < dmac->n_channels; i++) {
+		struct rz_dmac_chan *channel = &dmac->channels[i];
+
+		guard(spinlock_irqsave)(&channel->vc.lock);
+
+		/* Wait for transfer completion, except in cyclic case. */
+		if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC))
+			continue;
+
+		if (rz_dmac_chan_is_enabled(channel))
+			return -EAGAIN;
+	}
+
+	return 0;
+}
+
+static void rz_dmac_suspend_recover(struct rz_dmac *dmac)
+{
+	int ret;
+
+	PM_RUNTIME_ACQUIRE_IF_ENABLED(dmac->dev, pm);
+	ret = PM_RUNTIME_ACQUIRE_ERR(&pm);
+	if (ret)
+		return;
+
+	for (unsigned int i = 0; i < dmac->n_channels; i++) {
+		struct rz_dmac_chan *channel = &dmac->channels[i];
+
+		guard(spinlock_irqsave)(&channel->vc.lock);
+
+		if (!(channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)))
+			continue;
+
+		rz_dmac_device_resume_internal(channel);
+	}
+}
+
+static int rz_dmac_suspend(struct device *dev)
+{
+	struct rz_dmac *dmac = dev_get_drvdata(dev);
+	int ret = 0;
+
+	for (unsigned int i = 0; i < dmac->n_channels; i++) {
+		struct rz_dmac_chan *channel = &dmac->channels[i];
+
+		guard(spinlock_irqsave)(&channel->vc.lock);
+
+		if (!(channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)))
+			continue;
+
+		ret = rz_dmac_device_pause_internal(channel);
+		if (ret) {
+			dev_err(dev, "Failed to suspend channel %s\n",
+				dma_chan_name(&channel->vc.chan));
+			break;
+		}
+
+		channel->pm_state.nxla = rz_dmac_ch_readl(channel, NXLA, 1);
+	}
+
+	if (ret)
+		goto suspend_recover;
+
+	ret = reset_control_assert(dmac->rstc);
+	if (ret)
+		goto suspend_recover;
+
+	ret = pm_runtime_put_sync(dev);
+	if (ret < 0)
+		goto reset_deassert;
+
+	return 0;
+
+reset_deassert:
+	reset_control_deassert(dmac->rstc);
+suspend_recover:
+	rz_dmac_suspend_recover(dmac);
+	return ret;
+}
+
+static int rz_dmac_resume(struct device *dev)
+{
+	struct rz_dmac *dmac = dev_get_drvdata(dev);
+	int errors = 0, ret;
+
+	ret = pm_runtime_resume_and_get(dev);
+	if (ret)
+		return ret;
+
+	ret = reset_control_deassert(dmac->rstc);
+	if (ret) {
+		/*
+		 * Do not put runtime PM here and keep the same state as in
+		 * probe. As subsequent suspend/resume cycles may follow, leave
+		 * the runtime PM as is, here, to avoid imbalances.
+		 */
+		return ret;
+	}
+
+	rz_dmac_writel(dmac, DCTRL_DEFAULT, CHANNEL_0_7_COMMON_BASE + DCTRL);
+	rz_dmac_writel(dmac, DCTRL_DEFAULT, CHANNEL_8_15_COMMON_BASE + DCTRL);
+
+	for (unsigned int i = 0; i < dmac->n_channels; i++) {
+		struct rz_dmac_chan *channel = &dmac->channels[i];
+
+		guard(spinlock_irqsave)(&channel->vc.lock);
+
+		rz_dmac_disable_hw(&dmac->channels[i]);
+
+		if (!(channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)))
+			continue;
+
+		rz_dmac_set_dma_req_no(dmac, channel->index, channel->mid_rid);
+
+		rz_dmac_ch_writel(channel, channel->pm_state.nxla, NXLA, 1);
+		rz_dmac_ch_writel(channel, channel->chcfg, CHCFG, 1);
+		rz_dmac_ch_writel(channel, CHCTRL_SWRST, CHCTRL, 1);
+		rz_dmac_ch_writel(channel, channel->chctrl, CHCTRL, 1);
+
+		ret = rz_dmac_device_resume_internal(channel);
+		if (ret) {
+			errors = ret;
+			dev_err(dev, "Failed to resume channel %s, ret=%d\n",
+				dma_chan_name(&channel->vc.chan), ret);
+		}
+	}
+
+	return errors ? : 0;
+}
+
+static const struct dev_pm_ops rz_dmac_pm_ops = {
+	.prepare = rz_dmac_suspend_prepare,
+	SYSTEM_SLEEP_PM_OPS(rz_dmac_suspend, rz_dmac_resume)
+};
+
 static const struct rz_dmac_info rz_dmac_v2h_info = {
 	.icu_register_dma_req = rzv2h_icu_register_dma_req,
 	.default_dma_req_no = RZV2H_ICU_DMAC_REQ_NO_DEFAULT,
@@ -1432,6 +1630,7 @@ static struct platform_driver rz_dmac_driver = {
 	.driver		= {
 		.name	= "rz-dmac",
 		.of_match_table = of_rz_dmac_match,
+		.pm	= pm_sleep_ptr(&rz_dmac_pm_ops),
 	},
 	.probe		= rz_dmac_probe,
 	.remove		= rz_dmac_remove,
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v5 15/17] ASoC: renesas: rz-ssi: Add pause support
  2026-05-12 12:12 [PATCH v5 00/17] Renesas: dmaengine and ASoC fixes Claudiu Beznea
                   ` (13 preceding siblings ...)
  2026-05-12 12:12 ` [PATCH v5 14/17] dmaengine: sh: rz-dmac: Add suspend to RAM support Claudiu Beznea
@ 2026-05-12 12:12 ` Claudiu Beznea
  2026-05-14  3:54   ` sashiko-bot
  2026-05-12 12:12 ` [PATCH v5 16/17] ASoC: renesas: rz-ssi: Use generic PCM dmaengine APIs Claudiu Beznea
  2026-05-12 12:12 ` [PATCH v5 17/17] dmaengine: sh: rz-dmac: Set the Link End (LE) bit on the last descriptor Claudiu Beznea
  16 siblings, 1 reply; 52+ messages in thread
From: Claudiu Beznea @ 2026-05-12 12:12 UTC (permalink / raw)
  To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur
  Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc, Claudiu Beznea

Add pause support as a preparatory step to switch to PCM dmaengine APIs.

Acked-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---

Changes in v5:
- none

Changes in v4:
- collected tags

Changes in v3:
- none, this patch is new

 sound/soc/renesas/rz-ssi.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/sound/soc/renesas/rz-ssi.c b/sound/soc/renesas/rz-ssi.c
index 71e434cfe07b..d4e1dded3a9c 100644
--- a/sound/soc/renesas/rz-ssi.c
+++ b/sound/soc/renesas/rz-ssi.c
@@ -847,6 +847,7 @@ static int rz_ssi_dai_trigger(struct snd_pcm_substream *substream, int cmd,
 
 	switch (cmd) {
 	case SNDRV_PCM_TRIGGER_RESUME:
+	case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
 		ret = rz_ssi_trigger_resume(ssi, strm);
 		if (ret)
 			return ret;
@@ -888,6 +889,7 @@ static int rz_ssi_dai_trigger(struct snd_pcm_substream *substream, int cmd,
 		break;
 
 	case SNDRV_PCM_TRIGGER_SUSPEND:
+	case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
 		rz_ssi_stop(ssi, strm);
 		break;
 
@@ -1054,7 +1056,8 @@ static const struct snd_pcm_hardware rz_ssi_pcm_hardware = {
 	.info			= SNDRV_PCM_INFO_INTERLEAVED	|
 				  SNDRV_PCM_INFO_MMAP		|
 				  SNDRV_PCM_INFO_MMAP_VALID	|
-				  SNDRV_PCM_INFO_RESUME,
+				  SNDRV_PCM_INFO_RESUME		|
+				  SNDRV_PCM_INFO_PAUSE,
 	.buffer_bytes_max	= PREALLOC_BUFFER,
 	.period_bytes_min	= 32,
 	.period_bytes_max	= 8192,
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v5 16/17] ASoC: renesas: rz-ssi: Use generic PCM dmaengine APIs
  2026-05-12 12:12 [PATCH v5 00/17] Renesas: dmaengine and ASoC fixes Claudiu Beznea
                   ` (14 preceding siblings ...)
  2026-05-12 12:12 ` [PATCH v5 15/17] ASoC: renesas: rz-ssi: Add pause support Claudiu Beznea
@ 2026-05-12 12:12 ` Claudiu Beznea
  2026-05-14  4:52   ` sashiko-bot
  2026-05-12 12:12 ` [PATCH v5 17/17] dmaengine: sh: rz-dmac: Set the Link End (LE) bit on the last descriptor Claudiu Beznea
  16 siblings, 1 reply; 52+ messages in thread
From: Claudiu Beznea @ 2026-05-12 12:12 UTC (permalink / raw)
  To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur
  Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc, Claudiu Beznea

On Renesas RZ/G2L and RZ/G3S SoCs (where this was tested), captured audio
files occasionally contained random spikes when viewed with a tool such
as Audacity. These spikes were also audible as popping noises.

Using cyclic DMA resolves this issue. The driver was reworked to use the
existing support provided by the generic PCM dmaengine APIs. In addition
to eliminating the random spikes, the following issues were addressed:
- blank periods at the beginning of recorded files, which occurred
  intermittently, are no longer present
- no overruns or underruns were observed when continuously recording
  short audio files (e.g. 5 seconds long) in a loop
- concurrency issues in the SSI driver when enqueuing DMA requests were
  eliminated; previously, DMA requests could be prepared and submitted
  both from the DMA completion callback and the interrupt handler, which
  led to crashes after several hours of testing
- the SSI driver logic is simplified
- the number of generated interrupts is reduced by a factor of
  approximately 2.5

In the SSI platform driver probe function, the following changes were
made:
- the driver-specific DMA configuration was removed in favor of the
  generic PCM dmaengine APIs. As a result, explicit cleanup goto labels
  are no longer required and the driver remove callback was dropped,
  since resource management is now handled via devres helpers
- special handling was added for IP variants operating in half-duplex
  mode, where the DMA channel name in the device tree is "rt"; this DMA
  channel name is taken into account and passed to the generic PCM
  dmaengine configuration data

All code previously responsible for preparing and completing DMA
transfers was removed, as this functionality is now handled entirely by
the generic PCM dmaengine APIs.

Since DMA channels must be paused and resumed during recovery paths
(overruns and underruns reported by the hardware), the DMA channel
references are stored in rz_ssi_hw_params().

The logic in rz_ssi_is_dma_enabled() was updated to reflect that the
driver no longer manages DMA transfers directly.

To avoid software reported underruns (e.g. when running aplay during
consecutive suspend/resume cycles, or when the CPU is nearly 100%
loaded), rz_ssi_pcm_hardware.buffer_bytes_max was increased to 192K.
At the same time, rz_ssi_pcm_hardware.period_bytes_max was set to 48K
to reduce interrupt overhead.

Finally, rz_ssi_stream_is_play() was removed, as it had only a single
remaining user after this rework, and its logic was inlined at the call
site.
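
The over/underrun recovery path mentioned above must pause each DMA
channel exactly once, even on half-duplex IP variants where playback
and capture share a single "rt" channel. A small user-space model of
that deduplication check (illustrative names only, not the driver's
types):

```c
#include <assert.h>

/* Stand-in for a struct dma_chan; counts how often it was paused. */
struct dma_chan_model {
	int pause_count;
};

static void pause_chan(struct dma_chan_model *ch)
{
	if (ch)
		ch->pause_count++;
}

/* dmas[0] = playback channel, dmas[1] = capture channel; the two
 * pointers alias on half-duplex hardware. */
static void recover_pause(struct dma_chan_model *dmas[2])
{
	pause_chan(dmas[0]);
	/* Skip capture when it aliases playback (half duplex). */
	if (dmas[1] && dmas[1] != dmas[0])
		pause_chan(dmas[1]);
}
```

This mirrors the v5 change in rz_ssi_interrupt(): comparing the two
channel pointers before the second dmaengine_pause() call avoids
pausing the shared channel twice.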

Acked-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---

Changes in v5:
- in rz_ssi_interrupt(): check if playback and capture DMAs are the same
  to avoid calling dmaengine_pause() twice on the same DMA channel
- in rz_ssi_shutdown(): set ssi->dmas[i] = NULL
- still preserved Mark's Ack

Changes in v4:
- collected tags
- in rz_ssi_interrupt() checked the dma channel is valid before
  calling dmaengine_pause(); at the same time initialized the
  rz_ssi->dmas[] with NULL in case the DMA is not available in
  rz_ssi_dai_hw_params()
- set rz_ssi_dmaengine_pcm_conf.prealloc_buffer_size
- dynamically allocate the object of type snd_dmaengine_pcm_config passed
  to devm_snd_dmaengine_pcm_register() to avoid issues when the driver
  is instantiated for more than one HW instance
- I considered that keeping the ack was still OK; Mark, please let me
  know if you consider otherwise

Changes in v3:
- s/CONFIG_SND_SOC_GENERIC_DMAENGINE_PCM/SND_SOC_GENERIC_DMAENGINE_PCM
  in Kconfig
- in rz_ssi_clk_setup(): drop the update of dma_dai->maxburst
- in rz_ssi_interrupt(): pause the DMA channels in case of HW over/underruns
- add different open APIs for rz_ssi_soc_component_pio and
  rz_ssi_soc_component_dma 
- set rz_ssi_pcm_hardware to rz_ssi_dmaengine_pcm_conf.pcm_hardware
  and updated the buffer_bytes_max to avoid underruns detected by
  applications just before suspending; along with it updated
  period_bytes_max for lower interrupt overhead; updated the patch
  description for this; with it updated the snd_pcm_set_managed_buffer_all()
  arguments to use the rz_ssi_pcm_hardware
- added back rz_ssi_soc_component_pio.pcm_new instantiation as the
  PIO mode was broken w/o it
- use specific rz_ssi_soc_component_dma.open implementation for DMA
- updated rz_ssi_dmaengine_pcm_conf.chan_names[].{tx, rx} depending on
  whether the instantiation is full or half duplex, and moved the flags
  variable into the code block that uses it
- check devm_snd_dmaengine_pcm_register() for defer errors

Changes in v2:
- fixed typos in patch description
- select CONFIG_SND_SOC_GENERIC_DMAENGINE_PCM for rz-ssi driver
- in rz_ssi_dai_hw_params() check if DMA is enabled before calling
  snd_dmaengine_pcm_get_chan() to avoid failures for PIO mode
- do not drop rz_ssi_pcm_pointer() and rz_ssi_pcm_new() as these
  are necessary for PIO mode
- added 2 struct snd_soc_component_driver, one for PIO mode, one for
  DMA and updated probe() to register the proper
  snd_soc_component_driver based on the working mode

 sound/soc/renesas/Kconfig  |   1 +
 sound/soc/renesas/rz-ssi.c | 388 ++++++++++++-------------------------
 2 files changed, 130 insertions(+), 259 deletions(-)

diff --git a/sound/soc/renesas/Kconfig b/sound/soc/renesas/Kconfig
index 11c2027c88a7..6520217e7407 100644
--- a/sound/soc/renesas/Kconfig
+++ b/sound/soc/renesas/Kconfig
@@ -56,6 +56,7 @@ config SND_SOC_MSIOF
 config SND_SOC_RZ
 	tristate "RZ/G2L series SSIF-2 support"
 	depends on ARCH_RZG2L || COMPILE_TEST
+	select SND_SOC_GENERIC_DMAENGINE_PCM
 	help
 	  This option enables RZ/G2L SSIF-2 sound support.
 
diff --git a/sound/soc/renesas/rz-ssi.c b/sound/soc/renesas/rz-ssi.c
index d4e1dded3a9c..5406f9a3547f 100644
--- a/sound/soc/renesas/rz-ssi.c
+++ b/sound/soc/renesas/rz-ssi.c
@@ -13,6 +13,8 @@
 #include <linux/module.h>
 #include <linux/pm_runtime.h>
 #include <linux/reset.h>
+#include <sound/dmaengine_pcm.h>
+#include <sound/pcm.h>
 #include <sound/pcm_params.h>
 #include <sound/soc.h>
 
@@ -87,8 +89,6 @@ struct rz_ssi_stream {
 	struct rz_ssi_priv *priv;
 	struct snd_pcm_substream *substream;
 	int fifo_sample_size;	/* sample capacity of SSI FIFO */
-	int dma_buffer_pos;	/* The address for the next DMA descriptor */
-	int completed_dma_buf_pos; /* The address of the last completed DMA descriptor. */
 	int period_counter;	/* for keeping track of periods transferred */
 	int buffer_pos;		/* current frame position in the buffer */
 	int running;		/* 0=stopped, 1=running */
@@ -96,8 +96,6 @@ struct rz_ssi_stream {
 	int uerr_num;
 	int oerr_num;
 
-	struct dma_chan *dma_ch;
-
 	int (*transfer)(struct rz_ssi_priv *ssi, struct rz_ssi_stream *strm);
 };
 
@@ -108,7 +106,6 @@ struct rz_ssi_priv {
 	struct clk *sfr_clk;
 	struct clk *clk;
 
-	phys_addr_t phys;
 	int irq_int;
 	int irq_tx;
 	int irq_rx;
@@ -148,9 +145,10 @@ struct rz_ssi_priv {
 		unsigned int sample_width;
 		unsigned int sample_bits;
 	} hw_params_cache;
-};
 
-static void rz_ssi_dma_complete(void *data);
+	struct snd_dmaengine_dai_dma_data dma_dais[SNDRV_PCM_STREAM_LAST + 1];
+	struct dma_chan *dmas[SNDRV_PCM_STREAM_LAST + 1];
+};
 
 static void rz_ssi_reg_writel(struct rz_ssi_priv *priv, uint reg, u32 data)
 {
@@ -172,11 +170,6 @@ static void rz_ssi_reg_mask_setl(struct rz_ssi_priv *priv, uint reg,
 	writel(val, (priv->base + reg));
 }
 
-static inline bool rz_ssi_stream_is_play(struct snd_pcm_substream *substream)
-{
-	return substream->stream == SNDRV_PCM_STREAM_PLAYBACK;
-}
-
 static inline struct rz_ssi_stream *
 rz_ssi_stream_get(struct rz_ssi_priv *ssi, struct snd_pcm_substream *substream)
 {
@@ -185,7 +178,7 @@ rz_ssi_stream_get(struct rz_ssi_priv *ssi, struct snd_pcm_substream *substream)
 
 static inline bool rz_ssi_is_dma_enabled(struct rz_ssi_priv *ssi)
 {
-	return (ssi->playback.dma_ch && (ssi->dma_rt || ssi->capture.dma_ch));
+	return !ssi->playback.transfer && !ssi->capture.transfer;
 }
 
 static void rz_ssi_set_substream(struct rz_ssi_stream *strm,
@@ -215,8 +208,6 @@ static void rz_ssi_stream_init(struct rz_ssi_stream *strm,
 			       struct snd_pcm_substream *substream)
 {
 	rz_ssi_set_substream(strm, substream);
-	strm->dma_buffer_pos = 0;
-	strm->completed_dma_buf_pos = 0;
 	strm->period_counter = 0;
 	strm->buffer_pos = 0;
 
@@ -242,12 +233,13 @@ static void rz_ssi_stream_quit(struct rz_ssi_priv *ssi,
 		dev_info(dev, "underrun = %d\n", strm->uerr_num);
 }
 
-static int rz_ssi_clk_setup(struct rz_ssi_priv *ssi, unsigned int rate,
-			    unsigned int channels)
+static int rz_ssi_clk_setup(struct rz_ssi_priv *ssi, struct snd_pcm_substream *substream,
+			    unsigned int rate, unsigned int channels)
 {
 	static u8 ckdv[] = { 1,  2,  4,  8, 16, 32, 64, 128, 6, 12, 24, 48, 96 };
 	unsigned int channel_bits = 32;	/* System Word Length */
 	unsigned long bclk_rate = rate * channels * channel_bits;
+	struct snd_dmaengine_dai_dma_data *dma_dai;
 	unsigned int div;
 	unsigned int i;
 	u32 ssicr = 0;
@@ -290,6 +282,8 @@ static int rz_ssi_clk_setup(struct rz_ssi_priv *ssi, unsigned int rate,
 		return -EINVAL;
 	}
 
+	dma_dai = &ssi->dma_dais[substream->stream];
+
 	/*
 	 * DWL: Data Word Length = {16, 24, 32} bits
 	 * SWL: System Word Length = 32 bits
@@ -298,12 +292,15 @@ static int rz_ssi_clk_setup(struct rz_ssi_priv *ssi, unsigned int rate,
 	switch (ssi->hw_params_cache.sample_width) {
 	case 16:
 		ssicr |= SSICR_DWL(1);
+		dma_dai->addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES;
 		break;
 	case 24:
 		ssicr |= SSICR_DWL(5) | SSICR_PDTA;
+		dma_dai->addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
 		break;
 	case 32:
 		ssicr |= SSICR_DWL(6);
+		dma_dai->addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
 		break;
 	default:
 		dev_err(ssi->dev, "Not support %u data width",
@@ -344,7 +341,7 @@ static void rz_ssi_set_idle(struct rz_ssi_priv *ssi)
 
 static int rz_ssi_start(struct rz_ssi_priv *ssi, struct rz_ssi_stream *strm)
 {
-	bool is_play = rz_ssi_stream_is_play(strm->substream);
+	bool is_play = strm->substream->stream == SNDRV_PCM_STREAM_PLAYBACK;
 	bool is_full_duplex;
 	u32 ssicr, ssifcr;
 
@@ -423,14 +420,6 @@ static int rz_ssi_stop(struct rz_ssi_priv *ssi, struct rz_ssi_stream *strm)
 	/* Disable TX/RX */
 	rz_ssi_reg_mask_setl(ssi, SSICR, SSICR_TEN | SSICR_REN, 0);
 
-	/* Cancel all remaining DMA transactions */
-	if (rz_ssi_is_dma_enabled(ssi)) {
-		if (ssi->playback.dma_ch)
-			dmaengine_terminate_async(ssi->playback.dma_ch);
-		if (ssi->capture.dma_ch)
-			dmaengine_terminate_async(ssi->capture.dma_ch);
-	}
-
 	rz_ssi_set_idle(ssi);
 
 	return 0;
@@ -458,10 +447,6 @@ static void rz_ssi_pointer_update(struct rz_ssi_stream *strm, int frames)
 		snd_pcm_period_elapsed(strm->substream);
 		strm->period_counter = current_period;
 	}
-
-	strm->completed_dma_buf_pos += runtime->period_size;
-	if (strm->completed_dma_buf_pos >= runtime->buffer_size)
-		strm->completed_dma_buf_pos = 0;
 }
 
 static int rz_ssi_pio_recv(struct rz_ssi_priv *ssi, struct rz_ssi_stream *strm)
@@ -606,12 +591,6 @@ static irqreturn_t rz_ssi_interrupt(int irq, void *data)
 	if (irq == ssi->irq_int) { /* error or idle */
 		bool is_stopped = !!(ssisr & (SSISR_RUIRQ | SSISR_ROIRQ |
 					      SSISR_TUIRQ | SSISR_TOIRQ));
-		int i, count;
-
-		if (rz_ssi_is_dma_enabled(ssi))
-			count = 4;
-		else
-			count = 1;
 
 		if (ssi->capture.substream && is_stopped) {
 			if (ssisr & SSISR_RUIRQ)
@@ -631,18 +610,34 @@ static irqreturn_t rz_ssi_interrupt(int irq, void *data)
 			rz_ssi_stop(ssi, strm_playback);
 		}
 
+		if (!rz_ssi_is_stream_running(&ssi->playback) &&
+		    !rz_ssi_is_stream_running(&ssi->capture) &&
+		    rz_ssi_is_dma_enabled(ssi)) {
+			if (ssi->dmas[SNDRV_PCM_STREAM_PLAYBACK])
+				dmaengine_pause(ssi->dmas[SNDRV_PCM_STREAM_PLAYBACK]);
+			if (ssi->dmas[SNDRV_PCM_STREAM_CAPTURE] &&
+			    /* Avoid calling pause twice in case of half duplex. */
+			    ssi->dmas[SNDRV_PCM_STREAM_PLAYBACK] !=
+			    ssi->dmas[SNDRV_PCM_STREAM_CAPTURE])
+				dmaengine_pause(ssi->dmas[SNDRV_PCM_STREAM_CAPTURE]);
+		}
+
 		/* Clear all flags */
 		rz_ssi_reg_mask_setl(ssi, SSISR, SSISR_TOIRQ | SSISR_TUIRQ |
 				     SSISR_ROIRQ | SSISR_RUIRQ, 0);
 
 		/* Add/remove more data */
 		if (ssi->capture.substream && is_stopped) {
-			for (i = 0; i < count; i++)
+			if (rz_ssi_is_dma_enabled(ssi))
+				dmaengine_resume(ssi->dmas[SNDRV_PCM_STREAM_CAPTURE]);
+			else
 				strm_capture->transfer(ssi, strm_capture);
 		}
 
 		if (ssi->playback.substream && is_stopped) {
-			for (i = 0; i < count; i++)
+			if (rz_ssi_is_dma_enabled(ssi))
+				dmaengine_resume(ssi->dmas[SNDRV_PCM_STREAM_PLAYBACK]);
+			else
 				strm_playback->transfer(ssi, strm_playback);
 		}
 
@@ -679,153 +674,11 @@ static irqreturn_t rz_ssi_interrupt(int irq, void *data)
 	return IRQ_HANDLED;
 }
 
-static int rz_ssi_dma_slave_config(struct rz_ssi_priv *ssi,
-				   struct dma_chan *dma_ch, bool is_play)
-{
-	struct dma_slave_config cfg;
-
-	memset(&cfg, 0, sizeof(cfg));
-
-	cfg.direction = is_play ? DMA_MEM_TO_DEV : DMA_DEV_TO_MEM;
-	cfg.dst_addr = ssi->phys + SSIFTDR;
-	cfg.src_addr = ssi->phys + SSIFRDR;
-	if (ssi->hw_params_cache.sample_width == 16) {
-		cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES;
-		cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES;
-	} else {
-		cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
-		cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
-	}
-
-	return dmaengine_slave_config(dma_ch, &cfg);
-}
-
-static int rz_ssi_dma_transfer(struct rz_ssi_priv *ssi,
-			       struct rz_ssi_stream *strm)
-{
-	struct snd_pcm_substream *substream = strm->substream;
-	struct dma_async_tx_descriptor *desc;
-	struct snd_pcm_runtime *runtime;
-	enum dma_transfer_direction dir;
-	u32 dma_paddr, dma_size;
-	int amount;
-
-	if (!rz_ssi_stream_is_valid(ssi, strm))
-		return -EINVAL;
-
-	runtime = substream->runtime;
-	if (runtime->state == SNDRV_PCM_STATE_DRAINING)
-		/*
-		 * Stream is ending, so do not queue up any more DMA
-		 * transfers otherwise we play partial sound clips
-		 * because we can't shut off the DMA quick enough.
-		 */
-		return 0;
-
-	dir = rz_ssi_stream_is_play(substream) ? DMA_MEM_TO_DEV : DMA_DEV_TO_MEM;
-
-	/* Always transfer 1 period */
-	amount = runtime->period_size;
-
-	/* DMA physical address and size */
-	dma_paddr = runtime->dma_addr + frames_to_bytes(runtime,
-							strm->dma_buffer_pos);
-	dma_size = frames_to_bytes(runtime, amount);
-	desc = dmaengine_prep_slave_single(strm->dma_ch, dma_paddr, dma_size,
-					   dir,
-					   DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
-	if (!desc) {
-		dev_err(ssi->dev, "dmaengine_prep_slave_single() fail\n");
-		return -ENOMEM;
-	}
-
-	desc->callback = rz_ssi_dma_complete;
-	desc->callback_param = strm;
-
-	if (dmaengine_submit(desc) < 0) {
-		dev_err(ssi->dev, "dmaengine_submit() fail\n");
-		return -EIO;
-	}
-
-	/* Update DMA pointer */
-	strm->dma_buffer_pos += amount;
-	if (strm->dma_buffer_pos >= runtime->buffer_size)
-		strm->dma_buffer_pos = 0;
-
-	/* Start DMA */
-	dma_async_issue_pending(strm->dma_ch);
-
-	return 0;
-}
-
-static void rz_ssi_dma_complete(void *data)
-{
-	struct rz_ssi_stream *strm = (struct rz_ssi_stream *)data;
-
-	if (!strm->running || !strm->substream || !strm->substream->runtime)
-		return;
-
-	/* Note that next DMA transaction has probably already started */
-	rz_ssi_pointer_update(strm, strm->substream->runtime->period_size);
-
-	/* Queue up another DMA transaction */
-	rz_ssi_dma_transfer(strm->priv, strm);
-}
-
-static void rz_ssi_release_dma_channels(struct rz_ssi_priv *ssi)
-{
-	if (ssi->playback.dma_ch) {
-		dma_release_channel(ssi->playback.dma_ch);
-		ssi->playback.dma_ch = NULL;
-		if (ssi->dma_rt)
-			ssi->dma_rt = false;
-	}
-
-	if (ssi->capture.dma_ch) {
-		dma_release_channel(ssi->capture.dma_ch);
-		ssi->capture.dma_ch = NULL;
-	}
-}
-
-static int rz_ssi_dma_request(struct rz_ssi_priv *ssi, struct device *dev)
-{
-	ssi->playback.dma_ch = dma_request_chan(dev, "tx");
-	if (IS_ERR(ssi->playback.dma_ch))
-		ssi->playback.dma_ch = NULL;
-
-	ssi->capture.dma_ch = dma_request_chan(dev, "rx");
-	if (IS_ERR(ssi->capture.dma_ch))
-		ssi->capture.dma_ch = NULL;
-
-	if (!ssi->playback.dma_ch && !ssi->capture.dma_ch) {
-		ssi->playback.dma_ch = dma_request_chan(dev, "rt");
-		if (IS_ERR(ssi->playback.dma_ch)) {
-			ssi->playback.dma_ch = NULL;
-			goto no_dma;
-		}
-
-		ssi->dma_rt = true;
-	}
-
-	if (!rz_ssi_is_dma_enabled(ssi))
-		goto no_dma;
-
-	return 0;
-
-no_dma:
-	rz_ssi_release_dma_channels(ssi);
-
-	return -ENODEV;
-}
-
 static int rz_ssi_trigger_resume(struct rz_ssi_priv *ssi, struct rz_ssi_stream *strm)
 {
 	struct snd_pcm_substream *substream = strm->substream;
-	struct snd_pcm_runtime *runtime = substream->runtime;
 	int ret;
 
-	strm->dma_buffer_pos = strm->completed_dma_buf_pos + runtime->period_size;
-
 	if (rz_ssi_is_stream_running(&ssi->playback) ||
 	    rz_ssi_is_stream_running(&ssi->capture))
 		return 0;
@@ -834,7 +687,7 @@ static int rz_ssi_trigger_resume(struct rz_ssi_priv *ssi, struct rz_ssi_stream *
 	if (ret)
 		return ret;
 
-	return rz_ssi_clk_setup(ssi, ssi->hw_params_cache.rate,
+	return rz_ssi_clk_setup(ssi, substream, ssi->hw_params_cache.rate,
 				ssi->hw_params_cache.channels);
 }
 
@@ -843,7 +696,7 @@ static int rz_ssi_dai_trigger(struct snd_pcm_substream *substream, int cmd,
 {
 	struct rz_ssi_priv *ssi = snd_soc_dai_get_drvdata(dai);
 	struct rz_ssi_stream *strm = rz_ssi_stream_get(ssi, substream);
-	int ret = 0, i, num_transfer = 1;
+	int ret = 0;
 
 	switch (cmd) {
 	case SNDRV_PCM_TRIGGER_RESUME:
@@ -858,28 +711,7 @@ static int rz_ssi_dai_trigger(struct snd_pcm_substream *substream, int cmd,
 		if (cmd == SNDRV_PCM_TRIGGER_START)
 			rz_ssi_stream_init(strm, substream);
 
-		if (rz_ssi_is_dma_enabled(ssi)) {
-			bool is_playback = rz_ssi_stream_is_play(substream);
-
-			if (ssi->dma_rt)
-				ret = rz_ssi_dma_slave_config(ssi, ssi->playback.dma_ch,
-							      is_playback);
-			else
-				ret = rz_ssi_dma_slave_config(ssi, strm->dma_ch,
-							      is_playback);
-
-			/* Fallback to pio */
-			if (ret < 0) {
-				ssi->playback.transfer = rz_ssi_pio_send;
-				ssi->capture.transfer = rz_ssi_pio_recv;
-				rz_ssi_release_dma_channels(ssi);
-			} else {
-				/* For DMA, queue up multiple DMA descriptors */
-				num_transfer = 4;
-			}
-		}
-
-		for (i = 0; i < num_transfer; i++) {
+		if (!rz_ssi_is_dma_enabled(ssi)) {
 			ret = strm->transfer(ssi, strm);
 			if (ret)
 				return ret;
@@ -975,6 +807,8 @@ static void rz_ssi_shutdown(struct snd_pcm_substream *substream,
 		ssi->dup.tx_active = false;
 	else
 		ssi->dup.rx_active = false;
+
+	ssi->dmas[substream->stream] = NULL;
 }
 
 static bool rz_ssi_is_valid_hw_params(struct rz_ssi_priv *ssi, unsigned int rate,
@@ -1026,6 +860,12 @@ static int rz_ssi_dai_hw_params(struct snd_pcm_substream *substream,
 		return -EINVAL;
 	}
 
+	/* Save the DMA channels for recovery. */
+	if (rz_ssi_is_dma_enabled(ssi))
+		ssi->dmas[substream->stream] = snd_dmaengine_pcm_get_chan(substream);
+	else
+		ssi->dmas[substream->stream] = NULL;
+
 	if (rz_ssi_is_stream_running(&ssi->playback) ||
 	    rz_ssi_is_stream_running(&ssi->capture)) {
 		if (rz_ssi_is_valid_hw_params(ssi, rate, channels, sample_width, sample_bits))
@@ -1041,10 +881,21 @@ static int rz_ssi_dai_hw_params(struct snd_pcm_substream *substream,
 	if (ret)
 		return ret;
 
-	return rz_ssi_clk_setup(ssi, rate, channels);
+	return rz_ssi_clk_setup(ssi, substream, rate, channels);
+}
+
+static int rz_ssi_dai_probe(struct snd_soc_dai *dai)
+{
+	struct rz_ssi_priv *ssi = snd_soc_dai_get_drvdata(dai);
+
+	snd_soc_dai_init_dma_data(dai, &ssi->dma_dais[SNDRV_PCM_STREAM_PLAYBACK],
+				  &ssi->dma_dais[SNDRV_PCM_STREAM_CAPTURE]);
+
+	return 0;
 }
 
 static const struct snd_soc_dai_ops rz_ssi_dai_ops = {
+	.probe		= rz_ssi_dai_probe,
 	.startup	= rz_ssi_startup,
 	.shutdown	= rz_ssi_shutdown,
 	.trigger	= rz_ssi_dai_trigger,
@@ -1058,9 +909,9 @@ static const struct snd_pcm_hardware rz_ssi_pcm_hardware = {
 				  SNDRV_PCM_INFO_MMAP_VALID	|
 				  SNDRV_PCM_INFO_RESUME		|
 				  SNDRV_PCM_INFO_PAUSE,
-	.buffer_bytes_max	= PREALLOC_BUFFER,
+	.buffer_bytes_max	= 192 * 1024,
 	.period_bytes_min	= 32,
-	.period_bytes_max	= 8192,
+	.period_bytes_max	= 48 * 1024,
 	.channels_min		= SSI_CHAN_MIN,
 	.channels_max		= SSI_CHAN_MAX,
 	.periods_min		= 1,
@@ -1068,8 +919,8 @@ static const struct snd_pcm_hardware rz_ssi_pcm_hardware = {
 	.fifo_size		= 32 * 2,
 };
 
-static int rz_ssi_pcm_open(struct snd_soc_component *component,
-			   struct snd_pcm_substream *substream)
+static int rz_ssi_pcm_open_pio(struct snd_soc_component *component,
+			       struct snd_pcm_substream *substream)
 {
 	snd_soc_set_runtime_hwparams(substream, &rz_ssi_pcm_hardware);
 
@@ -1077,6 +928,13 @@ static int rz_ssi_pcm_open(struct snd_soc_component *component,
 					    SNDRV_PCM_HW_PARAM_PERIODS);
 }
 
+static int rz_ssi_pcm_open_dma(struct snd_soc_component *component,
+			       struct snd_pcm_substream *substream)
+{
+	return snd_pcm_hw_constraint_integer(substream->runtime,
+					     SNDRV_PCM_HW_PARAM_PERIODS);
+}
+
 static snd_pcm_uframes_t rz_ssi_pcm_pointer(struct snd_soc_component *component,
 					    struct snd_pcm_substream *substream)
 {
@@ -1093,7 +951,8 @@ static int rz_ssi_pcm_new(struct snd_soc_component *component,
 {
 	snd_pcm_set_managed_buffer_all(rtd->pcm, SNDRV_DMA_TYPE_DEV,
 				       rtd->card->snd_card->dev,
-				       PREALLOC_BUFFER, PREALLOC_BUFFER_MAX);
+				       rz_ssi_pcm_hardware.buffer_bytes_max,
+				       rz_ssi_pcm_hardware.buffer_bytes_max);
 	return 0;
 }
 
@@ -1116,16 +975,30 @@ static struct snd_soc_dai_driver rz_ssi_soc_dai[] = {
 	},
 };
 
-static const struct snd_soc_component_driver rz_ssi_soc_component = {
+static const struct snd_soc_component_driver rz_ssi_soc_component_pio = {
 	.name			= "rz-ssi",
-	.open			= rz_ssi_pcm_open,
+	.open			= rz_ssi_pcm_open_pio,
 	.pointer		= rz_ssi_pcm_pointer,
 	.pcm_new		= rz_ssi_pcm_new,
 	.legacy_dai_naming	= 1,
 };
 
+static const struct snd_soc_component_driver rz_ssi_soc_component_dma = {
+	.name			= "rz-ssi",
+	.open			= rz_ssi_pcm_open_dma,
+	.legacy_dai_naming	= 1,
+};
+
+static const struct snd_dmaengine_pcm_config rz_ssi_dmaengine_pcm_conf = {
+	.pcm_hardware		= &rz_ssi_pcm_hardware,
+	.prealloc_buffer_size	= 192 * 1024,
+	.prepare_slave_config	= snd_dmaengine_pcm_prepare_slave_config,
+};
+
 static int rz_ssi_probe(struct platform_device *pdev)
 {
+	const struct snd_soc_component_driver *component_driver;
+	struct device_node *np = pdev->dev.of_node;
 	struct device *dev = &pdev->dev;
 	struct rz_ssi_priv *ssi;
 	struct clk *audio_clk;
@@ -1141,7 +1014,6 @@ static int rz_ssi_probe(struct platform_device *pdev)
 	if (IS_ERR(ssi->base))
 		return PTR_ERR(ssi->base);
 
-	ssi->phys = res->start;
 	ssi->clk = devm_clk_get(dev, "ssi");
 	if (IS_ERR(ssi->clk))
 		return PTR_ERR(ssi->clk);
@@ -1165,16 +1037,43 @@ static int rz_ssi_probe(struct platform_device *pdev)
 
 	ssi->audio_mck = ssi->audio_clk_1 ? ssi->audio_clk_1 : ssi->audio_clk_2;
 
-	/* Detect DMA support */
-	ret = rz_ssi_dma_request(ssi, dev);
-	if (ret < 0) {
+	ssi->dma_dais[SNDRV_PCM_STREAM_PLAYBACK].addr = (dma_addr_t)res->start + SSIFTDR;
+	ssi->dma_dais[SNDRV_PCM_STREAM_CAPTURE].addr =  (dma_addr_t)res->start + SSIFRDR;
+
+	if (of_property_present(np, "dma-names")) {
+		struct snd_dmaengine_pcm_config *config;
+		unsigned int flags = 0;
+
+		config = devm_kzalloc(dev, sizeof(*config), GFP_KERNEL);
+		if (!config)
+			return -ENOMEM;
+
+		config->pcm_hardware = rz_ssi_dmaengine_pcm_conf.pcm_hardware;
+		config->prealloc_buffer_size = rz_ssi_dmaengine_pcm_conf.prealloc_buffer_size;
+		config->prepare_slave_config = rz_ssi_dmaengine_pcm_conf.prepare_slave_config;
+
+		if (of_property_match_string(np, "dma-names", "rt") == 0) {
+			flags = SND_DMAENGINE_PCM_FLAG_HALF_DUPLEX;
+			config->chan_names[SNDRV_PCM_STREAM_PLAYBACK] = "rt";
+		} else {
+			config->chan_names[SNDRV_PCM_STREAM_PLAYBACK] = "tx";
+			config->chan_names[SNDRV_PCM_STREAM_CAPTURE] = "rx";
+		}
+		ret = devm_snd_dmaengine_pcm_register(&pdev->dev, config, flags);
+	} else {
+		ret = -ENODEV;
+	}
+
+	if (ret == -EPROBE_DEFER) {
+		return ret;
+	} else if (ret) {
 		dev_warn(dev, "DMA not available, using PIO\n");
 		ssi->playback.transfer = rz_ssi_pio_send;
 		ssi->capture.transfer = rz_ssi_pio_recv;
+		component_driver = &rz_ssi_soc_component_pio;
 	} else {
-		dev_info(dev, "DMA enabled");
-		ssi->playback.transfer = rz_ssi_dma_transfer;
-		ssi->capture.transfer = rz_ssi_dma_transfer;
+		dev_info(dev, "DMA enabled\n");
+		component_driver = &rz_ssi_soc_component_dma;
 	}
 
 	ssi->playback.priv = ssi;
@@ -1185,17 +1084,13 @@ static int rz_ssi_probe(struct platform_device *pdev)
 
 	/* Error Interrupt */
 	ssi->irq_int = platform_get_irq_byname(pdev, "int_req");
-	if (ssi->irq_int < 0) {
-		ret = ssi->irq_int;
-		goto err_release_dma_chs;
-	}
+	if (ssi->irq_int < 0)
+		return ssi->irq_int;
 
 	ret = devm_request_irq(dev, ssi->irq_int, rz_ssi_interrupt,
 			       0, dev_name(dev), ssi);
-	if (ret < 0) {
-		dev_err_probe(dev, ret, "irq request error (int_req)\n");
-		goto err_release_dma_chs;
-	}
+	if (ret < 0)
+		return dev_err_probe(dev, ret, "irq request error (int_req)\n");
 
 	if (!rz_ssi_is_dma_enabled(ssi)) {
 		/* Tx and Rx interrupts (pio only) */
@@ -1236,43 +1131,19 @@ static int rz_ssi_probe(struct platform_device *pdev)
 	}
 
 	ssi->rstc = devm_reset_control_get_exclusive(dev, NULL);
-	if (IS_ERR(ssi->rstc)) {
-		ret = PTR_ERR(ssi->rstc);
-		goto err_release_dma_chs;
-	}
+	if (IS_ERR(ssi->rstc))
+		return dev_err_probe(dev, PTR_ERR(ssi->rstc), "Failed to get reset\n");
 
 	/* Default 0 for power saving. Can be overridden via sysfs. */
 	pm_runtime_set_autosuspend_delay(dev, 0);
 	pm_runtime_use_autosuspend(dev);
 	ret = devm_pm_runtime_enable(dev);
-	if (ret < 0) {
-		dev_err(dev, "Failed to enable runtime PM!\n");
-		goto err_release_dma_chs;
-	}
-
-	ret = devm_snd_soc_register_component(dev, &rz_ssi_soc_component,
-					      rz_ssi_soc_dai,
-					      ARRAY_SIZE(rz_ssi_soc_dai));
-	if (ret < 0) {
-		dev_err(dev, "failed to register snd component\n");
-		goto err_release_dma_chs;
-	}
-
-	return 0;
-
-err_release_dma_chs:
-	rz_ssi_release_dma_channels(ssi);
-
-	return ret;
-}
-
-static void rz_ssi_remove(struct platform_device *pdev)
-{
-	struct rz_ssi_priv *ssi = dev_get_drvdata(&pdev->dev);
-
-	rz_ssi_release_dma_channels(ssi);
+	if (ret < 0)
+		return dev_err_probe(dev, ret, "Failed to enable runtime PM!\n");
 
-	reset_control_assert(ssi->rstc);
+	return devm_snd_soc_register_component(dev, component_driver,
+					       rz_ssi_soc_dai,
+					       ARRAY_SIZE(rz_ssi_soc_dai));
 }
 
 static const struct of_device_id rz_ssi_of_match[] = {
@@ -1307,7 +1178,6 @@ static struct platform_driver rz_ssi_driver = {
 		.pm = pm_ptr(&rz_ssi_pm_ops),
 	},
 	.probe		= rz_ssi_probe,
-	.remove		= rz_ssi_remove,
 };
 
 module_platform_driver(rz_ssi_driver);
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v5 17/17] dmaengine: sh: rz-dmac: Set the Link End (LE) bit on the last descriptor
  2026-05-12 12:12 [PATCH v5 00/17] Renesas: dmaengine and ASoC fixes Claudiu Beznea
                   ` (15 preceding siblings ...)
  2026-05-12 12:12 ` [PATCH v5 16/17] ASoC: renesas: rz-ssi: Use generic PCM dmaengine APIs Claudiu Beznea
@ 2026-05-12 12:12 ` Claudiu Beznea
  2026-05-14  5:22   ` sashiko-bot
  16 siblings, 1 reply; 52+ messages in thread
From: Claudiu Beznea @ 2026-05-12 12:12 UTC (permalink / raw)
  To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur
  Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc, Claudiu Beznea

On an RZ/G2L-based system, it has been observed that when the DMA channels
for all enabled IPs are active (TX and RX for one serial IP, TX and RX for
one audio IP, and TX and RX for one SPI IP), shortly after all of them are
started, the system can become irrecoverably blocked. In one debug session
the system did not block, and the DMA HW registers were inspected. It was
found that the DER (Descriptor Error) bit in the CHSTAT register for one of
the SPI DMA channels was set.

According to the RZ/G2L HW Manual, Rev. 1.30, chapter 14.4.7 Channel
Status Register n/nS (CHSTAT_n/nS), description of the DER bit, the DER
bit is set when the LV (Link Valid) value loaded with a descriptor in link
mode is 0. This means that the DMA engine has loaded an invalid
descriptor (as defined in Table 14.14, Header Area, of the same manual).

The same chapter states that when a descriptor error occurs, the transfer
is stopped, but no DMA error interrupt is generated.

Set the LE bit on the last descriptor of a transfer. This informs the DMA
engine that this is the final descriptor for the transfer.

Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---

Changes in v5:
- none

Changes in v4:
- none

Changes in v3:
- none

Changes in v2:
- none

 drivers/dma/sh/rz-dmac.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index b0a8590f4a9d..19095a5492bc 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -200,6 +200,7 @@ struct rz_dmac {
 
 /* LINK MODE DESCRIPTOR */
 #define HEADER_LV			BIT(0)
+#define HEADER_LE			BIT(1)
 #define HEADER_WBD			BIT(2)
 
 #define RZ_DMAC_MAX_CHAN_DESCRIPTORS	16
@@ -382,7 +383,7 @@ static void rz_dmac_prepare_desc_for_memcpy(struct rz_dmac_chan *channel)
 	lmdesc->chcfg = chcfg;
 	lmdesc->chitvl = 0;
 	lmdesc->chext = 0;
-	lmdesc->header = HEADER_LV;
+	lmdesc->header = HEADER_LV | HEADER_LE;
 
 	rz_dmac_set_dma_req_no(dmac, channel->index, dmac->info->default_dma_req_no);
 
@@ -425,7 +426,7 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
 		lmdesc->chext = 0;
 		if (i == (sg_len - 1)) {
 			lmdesc->chcfg = (channel->chcfg & ~CHCFG_DEM);
-			lmdesc->header = HEADER_LV;
+			lmdesc->header = HEADER_LV | HEADER_LE;
 		} else {
 			lmdesc->chcfg = channel->chcfg;
 			lmdesc->header = HEADER_LV;
-- 
2.43.0



* Re: [PATCH v5 01/17] dmaengine: sh: rz-dmac: Move interrupt request after everything is set up
  2026-05-12 12:12 ` [PATCH v5 01/17] dmaengine: sh: rz-dmac: Move interrupt request after everything is set up Claudiu Beznea
@ 2026-05-12 20:28   ` Frank Li
  2026-05-13 21:44   ` sashiko-bot
  1 sibling, 0 replies; 52+ messages in thread
From: Frank Li @ 2026-05-12 20:28 UTC (permalink / raw)
  To: Claudiu Beznea
  Cc: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur,
	claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc, stable

On Tue, May 12, 2026 at 03:12:02PM +0300, Claudiu Beznea wrote:
> Once the interrupt is requested, the interrupt handler may run immediately.
> Since the IRQ handler can access channel->ch_base, which is initialized
> only after requesting the IRQ, this may lead to invalid memory access.
> Likewise, the IRQ thread may access uninitialized data (the ld_free,
> ld_queue, and ld_active lists), which may also lead to issues.
>
> Request the interrupts only after everything is set up. To keep the error
> path simpler, use dmam_alloc_coherent() instead of dma_alloc_coherent().
>
> Fixes: 5000d37042a6 ("dmaengine: sh: Add DMAC driver for RZ/G2L SoC")
> Cc: stable@vger.kernel.org
> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> ---

Reviewed-by: Frank Li <Frank.Li@nxp.com>

>
> Changes in v5:
> - none
>
> Changes in v4:
> - none, this patch is new
>
>  drivers/dma/sh/rz-dmac.c | 88 +++++++++++++++-------------------------
>  1 file changed, 33 insertions(+), 55 deletions(-)
>
> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> index 625ff29024de..9f206a33dcc6 100644
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c
> @@ -981,25 +981,6 @@ static int rz_dmac_chan_probe(struct rz_dmac *dmac,
>  	channel->index = index;
>  	channel->mid_rid = -EINVAL;
>
> -	/* Request the channel interrupt. */
> -	scnprintf(pdev_irqname, sizeof(pdev_irqname), "ch%u", index);
> -	irq = platform_get_irq_byname(pdev, pdev_irqname);
> -	if (irq < 0)
> -		return irq;
> -
> -	irqname = devm_kasprintf(dmac->dev, GFP_KERNEL, "%s:%u",
> -				 dev_name(dmac->dev), index);
> -	if (!irqname)
> -		return -ENOMEM;
> -
> -	ret = devm_request_threaded_irq(dmac->dev, irq, rz_dmac_irq_handler,
> -					rz_dmac_irq_handler_thread, 0,
> -					irqname, channel);
> -	if (ret) {
> -		dev_err(dmac->dev, "failed to request IRQ %u (%d)\n", irq, ret);
> -		return ret;
> -	}
> -
>  	/* Set io base address for each channel */
>  	if (index < 8) {
>  		channel->ch_base = dmac->base + CHANNEL_0_7_OFFSET +
> @@ -1012,9 +993,9 @@ static int rz_dmac_chan_probe(struct rz_dmac *dmac,
>  	}
>
>  	/* Allocate descriptors */
> -	lmdesc = dma_alloc_coherent(&pdev->dev,
> -				    sizeof(struct rz_lmdesc) * DMAC_NR_LMDESC,
> -				    &channel->lmdesc.base_dma, GFP_KERNEL);
> +	lmdesc = dmam_alloc_coherent(&pdev->dev,
> +				     sizeof(struct rz_lmdesc) * DMAC_NR_LMDESC,
> +				     &channel->lmdesc.base_dma, GFP_KERNEL);
>  	if (!lmdesc) {
>  		dev_err(&pdev->dev, "Can't allocate memory (lmdesc)\n");
>  		return -ENOMEM;
> @@ -1030,7 +1011,24 @@ static int rz_dmac_chan_probe(struct rz_dmac *dmac,
>  	INIT_LIST_HEAD(&channel->ld_free);
>  	INIT_LIST_HEAD(&channel->ld_active);
>
> -	return 0;
> +	/* Request the channel interrupt. */
> +	scnprintf(pdev_irqname, sizeof(pdev_irqname), "ch%u", index);
> +	irq = platform_get_irq_byname(pdev, pdev_irqname);
> +	if (irq < 0)
> +		return irq;
> +
> +	irqname = devm_kasprintf(dmac->dev, GFP_KERNEL, "%s:%u",
> +				 dev_name(dmac->dev), index);
> +	if (!irqname)
> +		return -ENOMEM;
> +
> +	ret = devm_request_threaded_irq(dmac->dev, irq, rz_dmac_irq_handler,
> +					rz_dmac_irq_handler_thread, 0,
> +					irqname, channel);
> +	if (ret)
> +		dev_err(dmac->dev, "failed to request IRQ %u (%d)\n", irq, ret);
> +
> +	return ret;
>  }
>
>  static void rz_dmac_put_device(void *_dev)
> @@ -1099,7 +1097,6 @@ static int rz_dmac_probe(struct platform_device *pdev)
>  	const char *irqname = "error";
>  	struct dma_device *engine;
>  	struct rz_dmac *dmac;
> -	int channel_num;
>  	int ret;
>  	int irq;
>  	u8 i;
> @@ -1132,18 +1129,6 @@ static int rz_dmac_probe(struct platform_device *pdev)
>  			return PTR_ERR(dmac->ext_base);
>  	}
>
> -	/* Register interrupt handler for error */
> -	irq = platform_get_irq_byname_optional(pdev, irqname);
> -	if (irq > 0) {
> -		ret = devm_request_irq(&pdev->dev, irq, rz_dmac_irq_handler, 0,
> -				       irqname, NULL);
> -		if (ret) {
> -			dev_err(&pdev->dev, "failed to request IRQ %u (%d)\n",
> -				irq, ret);
> -			return ret;
> -		}
> -	}
> -
>  	/* Initialize the channels. */
>  	INIT_LIST_HEAD(&dmac->engine.channels);
>
> @@ -1169,6 +1154,18 @@ static int rz_dmac_probe(struct platform_device *pdev)
>  			goto err;
>  	}
>
> +	/* Register interrupt handler for error */
> +	irq = platform_get_irq_byname_optional(pdev, irqname);
> +	if (irq > 0) {
> +		ret = devm_request_irq(&pdev->dev, irq, rz_dmac_irq_handler, 0,
> +				       irqname, NULL);
> +		if (ret) {
> +			dev_err(&pdev->dev, "failed to request IRQ %u (%d)\n",
> +				irq, ret);
> +			goto err;
> +		}
> +	}
> +
>  	/* Register the DMAC as a DMA provider for DT. */
>  	ret = of_dma_controller_register(pdev->dev.of_node, rz_dmac_of_xlate,
>  					 NULL);
> @@ -1210,16 +1207,6 @@ static int rz_dmac_probe(struct platform_device *pdev)
>  dma_register_err:
>  	of_dma_controller_free(pdev->dev.of_node);
>  err:
> -	channel_num = i ? i - 1 : 0;
> -	for (i = 0; i < channel_num; i++) {
> -		struct rz_dmac_chan *channel = &dmac->channels[i];
> -
> -		dma_free_coherent(&pdev->dev,
> -				  sizeof(struct rz_lmdesc) * DMAC_NR_LMDESC,
> -				  channel->lmdesc.base,
> -				  channel->lmdesc.base_dma);
> -	}
> -
>  	reset_control_assert(dmac->rstc);
>  err_pm_runtime_put:
>  	pm_runtime_put(&pdev->dev);
> @@ -1232,18 +1219,9 @@ static int rz_dmac_probe(struct platform_device *pdev)
>  static void rz_dmac_remove(struct platform_device *pdev)
>  {
>  	struct rz_dmac *dmac = platform_get_drvdata(pdev);
> -	unsigned int i;
>
>  	dma_async_device_unregister(&dmac->engine);
>  	of_dma_controller_free(pdev->dev.of_node);
> -	for (i = 0; i < dmac->n_channels; i++) {
> -		struct rz_dmac_chan *channel = &dmac->channels[i];
> -
> -		dma_free_coherent(&pdev->dev,
> -				  sizeof(struct rz_lmdesc) * DMAC_NR_LMDESC,
> -				  channel->lmdesc.base,
> -				  channel->lmdesc.base_dma);
> -	}
>  	reset_control_assert(dmac->rstc);
>  	pm_runtime_put(&pdev->dev);
>  	pm_runtime_disable(&pdev->dev);
> --
> 2.43.0
>


* Re: [PATCH v5 02/17] dmaengine: sh: rz-dmac: Fix incorrect NULL check on list_first_entry()
  2026-05-12 12:12 ` [PATCH v5 02/17] dmaengine: sh: rz-dmac: Fix incorrect NULL check on list_first_entry() Claudiu Beznea
@ 2026-05-12 20:35   ` Frank Li
  2026-05-13 13:31     ` Claudiu Beznea
  2026-05-13 22:00   ` sashiko-bot
  1 sibling, 1 reply; 52+ messages in thread
From: Frank Li @ 2026-05-12 20:35 UTC (permalink / raw)
  To: Claudiu Beznea
  Cc: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur,
	claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc, stable

On Tue, May 12, 2026 at 03:12:03PM +0300, Claudiu Beznea wrote:
> The list passed as argument to list_first_entry() is expected to be not
> empty.

Little confused,

#define list_first_entry_or_null(ptr, type, member) ({ \
	struct list_head *head__ = (ptr); \
	struct list_head *pos__ = READ_ONCE(head__->next); \
	pos__ != head__ ? list_entry(pos__, type, member) : NULL; \
})


The list passed to both list_first_entry() and list_first_entry_or_null() must
not be NULL.

The return value is different.

Fix incorrect NULL check for list_first_entry()

list_first_entry() does not return NULL when the list is empty,
making the existing NULL check invalid. Use list_first_entry_or_null()
instead.

Frank

> Use list_first_entry_or_null() to avoid dereferencing invalid
> memory.
>
> Fixes: 21323b118c16 ("dmaengine: sh: rz-dmac: Add device_tx_status() callback")
> Cc: stable@vger.kernel.org
> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> ---

>
> Changes in v5:
> - none
>
> Changes in v4:
> - none, this patch is new
>
>  drivers/dma/sh/rz-dmac.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> index 9f206a33dcc6..6d80cb668957 100644
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c
> @@ -723,8 +723,8 @@ static u32 rz_dmac_chan_get_residue(struct rz_dmac_chan *channel,
>  	u32 crla, crtb, i;
>
>  	/* Get current processing virtual descriptor */
> -	current_desc = list_first_entry(&channel->ld_active,
> -					struct rz_dmac_desc, node);
> +	current_desc = list_first_entry_or_null(&channel->ld_active,
> +						struct rz_dmac_desc, node);
>  	if (!current_desc)
>  		return 0;
>
> --
> 2.43.0
>


* Re: [PATCH v5 03/17] dmaengine: sh: rz-dmac: Use list_first_entry_or_null()
  2026-05-12 12:12 ` [PATCH v5 03/17] dmaengine: sh: rz-dmac: Use list_first_entry_or_null() Claudiu Beznea
@ 2026-05-12 20:38   ` Frank Li
  2026-05-13 22:18   ` sashiko-bot
  1 sibling, 0 replies; 52+ messages in thread
From: Frank Li @ 2026-05-12 20:38 UTC (permalink / raw)
  To: Claudiu Beznea
  Cc: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur,
	claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc

On Tue, May 12, 2026 at 03:12:04PM +0300, Claudiu Beznea wrote:
> Use list_first_entry_or_null() instead of open-coding it with a
> list_empty() check and list_first_entry(). This simplifies the code.
>
> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> ---

Reviewed-by: Frank Li <Frank.Li@nxp.com>

>
> Changes in v5:
> - none
>
> Changes in v4:
> - none
>
> Changes in v3:
> - none, this patch is new
>
>  drivers/dma/sh/rz-dmac.c | 10 ++++------
>  1 file changed, 4 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> index 6d80cb668957..1717b407ab9e 100644
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c
> @@ -503,11 +503,10 @@ rz_dmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
>  		__func__, channel->index, &src, &dest, len);
>
>  	scoped_guard(spinlock_irqsave, &channel->vc.lock) {
> -		if (list_empty(&channel->ld_free))
> +		desc = list_first_entry_or_null(&channel->ld_free, struct rz_dmac_desc, node);
> +		if (!desc)
>  			return NULL;
>
> -		desc = list_first_entry(&channel->ld_free, struct rz_dmac_desc, node);
> -
>  		desc->type = RZ_DMAC_DESC_MEMCPY;
>  		desc->src = src;
>  		desc->dest = dest;
> @@ -533,11 +532,10 @@ rz_dmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
>  	int i = 0;
>
>  	scoped_guard(spinlock_irqsave, &channel->vc.lock) {
> -		if (list_empty(&channel->ld_free))
> +		desc = list_first_entry_or_null(&channel->ld_free, struct rz_dmac_desc, node);
> +		if (!desc)
>  			return NULL;
>
> -		desc = list_first_entry(&channel->ld_free, struct rz_dmac_desc, node);
> -
>  		for_each_sg(sgl, sg, sg_len, i)
>  			dma_length += sg_dma_len(sg);
>
> --
> 2.43.0
>


* Re: [PATCH v5 04/17] dmaengine: sh: rz-dmac: Use rz_dmac_disable_hw()
  2026-05-12 12:12 ` [PATCH v5 04/17] dmaengine: sh: rz-dmac: Use rz_dmac_disable_hw() Claudiu Beznea
@ 2026-05-12 20:42   ` Frank Li
  0 siblings, 0 replies; 52+ messages in thread
From: Frank Li @ 2026-05-12 20:42 UTC (permalink / raw)
  To: Claudiu Beznea
  Cc: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur,
	claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc

On Tue, May 12, 2026 at 03:12:05PM +0300, Claudiu Beznea wrote:
> Use rz_dmac_disable_hw() instead of open codding it. This unifies the

Nit: typo: "codding" should be "coding"

> code and prepares it for the addition of suspend to RAM and cyclic DMA.
>
> The rz_dmac_disable_hw() from rz_dmac_chan_probe() was moved after
> vchan_init() as it initializes the channel->vc.chan.device used in
> rz_dmac_disable_hw().
>
> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> ---

Reviewed-by: Frank Li <Frank.Li@nxp.com>
>
> Changes in v5:
> - none
>
> Changes in v4:
> - in rz_dmac_chan_probe(): moved rz_dmac_disable_hw() after the
>   vchan_init(&channel->vc, &dmac->engine) call as this is the one which
>   initializes data structures used by the debug code from
>   rz_dmac_disable_hw(); updated the patch description to reflect this
>
> Changes in v3:
> - none, this patch is new
>
>  drivers/dma/sh/rz-dmac.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> index 1717b407ab9e..40ddf534c094 100644
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c
> @@ -873,7 +873,7 @@ static void rz_dmac_irq_handle_channel(struct rz_dmac_chan *channel)
>  			channel->index, chstat);
>
>  		scoped_guard(spinlock_irqsave, &channel->vc.lock)
> -			rz_dmac_ch_writel(channel, CHCTRL_DEFAULT, CHCTRL, 1);
> +			rz_dmac_disable_hw(channel);
>  		return;
>  	}
>
> @@ -1000,15 +1000,15 @@ static int rz_dmac_chan_probe(struct rz_dmac *dmac,
>  	}
>  	rz_lmdesc_setup(channel, lmdesc);
>
> -	/* Initialize register for each channel */
> -	rz_dmac_ch_writel(channel, CHCTRL_DEFAULT, CHCTRL, 1);
> -
>  	channel->vc.desc_free = rz_dmac_virt_desc_free;
>  	vchan_init(&channel->vc, &dmac->engine);
>  	INIT_LIST_HEAD(&channel->ld_queue);
>  	INIT_LIST_HEAD(&channel->ld_free);
>  	INIT_LIST_HEAD(&channel->ld_active);
>
> +	/* Initialize register for each channel */
> +	rz_dmac_disable_hw(channel);
> +
>  	/* Request the channel interrupt. */
>  	scnprintf(pdev_irqname, sizeof(pdev_irqname), "ch%u", index);
>  	irq = platform_get_irq_byname(pdev, pdev_irqname);
> --
> 2.43.0
>

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 05/17] dmaengine: sh: rz-dmac: Add helper to compute the lmdesc address
  2026-05-12 12:12 ` [PATCH v5 05/17] dmaengine: sh: rz-dmac: Add helper to compute the lmdesc address Claudiu Beznea
@ 2026-05-12 20:44   ` Frank Li
  0 siblings, 0 replies; 52+ messages in thread
From: Frank Li @ 2026-05-12 20:44 UTC (permalink / raw)
  To: Claudiu Beznea
  Cc: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur,
	claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc

On Tue, May 12, 2026 at 03:12:06PM +0300, Claudiu Beznea wrote:
> Add a helper function to compute the lmdesc address. This makes the
> code easier to understand, and the helper will be used in subsequent
> patches.


Add a helper function rz_dmac_lmdesc_addr() to compute the lmdesc address
to make the code easier to understand, and ...

Reviewed-by: Frank Li <Frank.Li@nxp.com>

>
> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> ---
>
> Changes in v5:
> - none
>
> Changes in v4:
> - none
>
> Changes in v3:
> - none, this patch is new
>
>  drivers/dma/sh/rz-dmac.c | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> index 40ddf534c094..c48858b68dee 100644
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c
> @@ -259,6 +259,12 @@ static void rz_lmdesc_setup(struct rz_dmac_chan *channel,
>   * Descriptors preparation
>   */
>
> +static u32 rz_dmac_lmdesc_addr(struct rz_dmac_chan *channel, struct rz_lmdesc *lmdesc)
> +{
> +	return channel->lmdesc.base_dma +
> +	       (sizeof(struct rz_lmdesc) * (lmdesc - channel->lmdesc.base));
> +}
> +
>  static void rz_dmac_lmdesc_recycle(struct rz_dmac_chan *channel)
>  {
>  	struct rz_lmdesc *lmdesc = channel->lmdesc.head;
> @@ -284,9 +290,7 @@ static void rz_dmac_enable_hw(struct rz_dmac_chan *channel)
>
>  	rz_dmac_lmdesc_recycle(channel);
>
> -	nxla = channel->lmdesc.base_dma +
> -		(sizeof(struct rz_lmdesc) * (channel->lmdesc.head -
> -					     channel->lmdesc.base));
> +	nxla = rz_dmac_lmdesc_addr(channel, channel->lmdesc.head);
>
>  	chstat = rz_dmac_ch_readl(channel, CHSTAT, 1);
>  	if (!(chstat & CHSTAT_EN)) {
> --
> 2.43.0
>

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 06/17] dmaengine: sh: rz-dmac: Save the start LM descriptor
  2026-05-12 12:12 ` [PATCH v5 06/17] dmaengine: sh: rz-dmac: Save the start LM descriptor Claudiu Beznea
@ 2026-05-12 20:48   ` Frank Li
  2026-05-13 13:33     ` Claudiu Beznea
  2026-05-13 23:52   ` sashiko-bot
  1 sibling, 1 reply; 52+ messages in thread
From: Frank Li @ 2026-05-12 20:48 UTC (permalink / raw)
  To: Claudiu Beznea
  Cc: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur,
	claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc

On Tue, May 12, 2026 at 03:12:07PM +0300, Claudiu Beznea wrote:
> Save the start LM descriptor to avoid looping through the entire

Where is the looping you are trying to avoid?

Frank

> channel's LM descriptor list when computing the residue. This avoids
> unnecessary iterations.
>
> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> ---
>
> Changes in v5:
> - none
>
> Changes in v4:
> - none
>
> Changes in v3:
> - none, this patch is new
>
>  drivers/dma/sh/rz-dmac.c | 11 ++++++++---
>  1 file changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> index c48858b68dee..d3926ecd63ac 100644
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c
> @@ -58,6 +58,7 @@ struct rz_dmac_desc {
>  	/* For slave sg */
>  	struct scatterlist *sg;
>  	unsigned int sgcount;
> +	struct rz_lmdesc *start_lmdesc;
>  };
>
>  #define to_rz_dmac_desc(d)	container_of(d, struct rz_dmac_desc, vd)
> @@ -343,6 +344,8 @@ static void rz_dmac_prepare_desc_for_memcpy(struct rz_dmac_chan *channel)
>  	struct rz_dmac_desc *d = channel->desc;
>  	u32 chcfg = CHCFG_MEM_COPY;
>
> +	d->start_lmdesc = lmdesc;
> +
>  	/* prepare descriptor */
>  	lmdesc->sa = d->src;
>  	lmdesc->da = d->dest;
> @@ -377,6 +380,7 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
>  	}
>
>  	lmdesc = channel->lmdesc.tail;
> +	d->start_lmdesc = lmdesc;
>
>  	for (i = 0, sg = sgl; i < sg_len; i++, sg = sg_next(sg)) {
>  		if (d->direction == DMA_DEV_TO_MEM) {
> @@ -693,9 +697,10 @@ rz_dmac_get_next_lmdesc(struct rz_lmdesc *base, struct rz_lmdesc *lmdesc)
>  	return next;
>  }
>
> -static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel, u32 crla)
> +static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel,
> +						 struct rz_dmac_desc *desc, u32 crla)
>  {
> -	struct rz_lmdesc *lmdesc = channel->lmdesc.head;
> +	struct rz_lmdesc *lmdesc = desc->start_lmdesc;
>  	struct dma_chan *chan = &channel->vc.chan;
>  	struct rz_dmac *dmac = to_rz_dmac(chan->device);
>  	u32 residue = 0, i = 0;
> @@ -794,7 +799,7 @@ static u32 rz_dmac_chan_get_residue(struct rz_dmac_chan *channel,
>  	 * Calculate number of bytes transferred in processing virtual descriptor.
>  	 * One virtual descriptor can have many lmdesc.
>  	 */
> -	return crtb + rz_dmac_calculate_residue_bytes_in_vd(channel, crla);
> +	return crtb + rz_dmac_calculate_residue_bytes_in_vd(channel, current_desc, crla);
>  }
>
>  static enum dma_status rz_dmac_tx_status(struct dma_chan *chan,
> --
> 2.43.0
>

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 07/17] dmaengine: sh: rz-dmac: Add helper to check if the channel is enabled
  2026-05-12 12:12 ` [PATCH v5 07/17] dmaengine: sh: rz-dmac: Add helper to check if the channel is enabled Claudiu Beznea
@ 2026-05-12 20:49   ` Frank Li
  2026-05-13 23:59   ` sashiko-bot
  1 sibling, 0 replies; 52+ messages in thread
From: Frank Li @ 2026-05-12 20:49 UTC (permalink / raw)
  To: Claudiu Beznea
  Cc: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur,
	claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc

On Tue, May 12, 2026 at 03:12:08PM +0300, Claudiu Beznea wrote:
> Add a helper to check if the channel is enabled. This will be reused in

Nit: Add a helper rz_dmac_chan_is_enabled() ...

Reviewed-by: Frank Li <Frank.Li@nxp.com>
> subsequent patches.
>
> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> ---
>
> Changes in v5:
> - none
>
> Changes in v4:
> - none
>
> Changes in v3:
> - none, this patch is new
>
>  drivers/dma/sh/rz-dmac.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> index d3926ecd63ac..c7337cf27136 100644
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c
> @@ -279,6 +279,13 @@ static void rz_dmac_lmdesc_recycle(struct rz_dmac_chan *channel)
>  	channel->lmdesc.head = lmdesc;
>  }
>
> +static bool rz_dmac_chan_is_enabled(struct rz_dmac_chan *chan)
> +{
> +	u32 val = rz_dmac_ch_readl(chan, CHSTAT, 1);
> +
> +	return !!(val & CHSTAT_EN);
> +}
> +
>  static void rz_dmac_enable_hw(struct rz_dmac_chan *channel)
>  {
>  	struct dma_chan *chan = &channel->vc.chan;
> @@ -840,8 +847,7 @@ static int rz_dmac_device_pause(struct dma_chan *chan)
>
>  	guard(spinlock_irqsave)(&channel->vc.lock);
>
> -	val = rz_dmac_ch_readl(channel, CHSTAT, 1);
> -	if (!(val & CHSTAT_EN))
> +	if (!rz_dmac_chan_is_enabled(channel))
>  		return 0;
>
>  	rz_dmac_ch_writel(channel, CHCTRL_SETSUS, CHCTRL, 1);
> --
> 2.43.0
>

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 08/17] dmaengine: sh: rz-dmac: Add helper to check if the channel is paused
  2026-05-12 12:12 ` [PATCH v5 08/17] dmaengine: sh: rz-dmac: Add helper to check if the channel is paused Claudiu Beznea
@ 2026-05-12 20:57   ` Frank Li
  0 siblings, 0 replies; 52+ messages in thread
From: Frank Li @ 2026-05-12 20:57 UTC (permalink / raw)
  To: Claudiu Beznea
  Cc: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur,
	claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc

On Tue, May 12, 2026 at 03:12:09PM +0300, Claudiu Beznea wrote:
> Add a helper to check if the channel is paused. This will be reused in

Add a helper rz_dmac_chan_is_paused() ...

Reviewed-by: Frank Li <Frank.Li@nxp.com>

> subsequent patches.
>
> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> ---
>
> Changes in v5:
> - none
>
> Changes in v4:
> - none
>
> Changes in v3:
> - none, this patch is new
>
>  drivers/dma/sh/rz-dmac.c | 12 ++++++++----
>  1 file changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> index c7337cf27136..042f85e58a79 100644
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c
> @@ -286,6 +286,13 @@ static bool rz_dmac_chan_is_enabled(struct rz_dmac_chan *chan)
>  	return !!(val & CHSTAT_EN);
>  }
>
> +static bool rz_dmac_chan_is_paused(struct rz_dmac_chan *chan)
> +{
> +	u32 val = rz_dmac_ch_readl(chan, CHSTAT, 1);
> +
> +	return !!(val & CHSTAT_SUS);
> +}
> +
>  static void rz_dmac_enable_hw(struct rz_dmac_chan *channel)
>  {
>  	struct dma_chan *chan = &channel->vc.chan;
> @@ -822,12 +829,9 @@ static enum dma_status rz_dmac_tx_status(struct dma_chan *chan,
>  		return status;
>
>  	scoped_guard(spinlock_irqsave, &channel->vc.lock) {
> -		u32 val;
> -
>  		residue = rz_dmac_chan_get_residue(channel, cookie);
>
> -		val = rz_dmac_ch_readl(channel, CHSTAT, 1);
> -		if (val & CHSTAT_SUS)
> +		if (rz_dmac_chan_is_paused(channel))
>  			status = DMA_PAUSED;
>  	}
>
> --
> 2.43.0
>

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 09/17] dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor processing
  2026-05-12 12:12 ` [PATCH v5 09/17] dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor processing Claudiu Beznea
@ 2026-05-12 21:38   ` Frank Li
  2026-05-13 13:34     ` Claudiu Beznea
  2026-05-14  0:42   ` sashiko-bot
  1 sibling, 1 reply; 52+ messages in thread
From: Frank Li @ 2026-05-12 21:38 UTC (permalink / raw)
  To: Claudiu Beznea
  Cc: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur,
	claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc

On Tue, May 12, 2026 at 03:12:10PM +0300, Claudiu Beznea wrote:
> The driver used a mix of virt-dma APIs and driver specific logic to
> process descriptors. It maintained three internal queues: ld_free,
> ld_queue, and ld_active as follows:
> - ld_free: stores the descriptors pre-allocated at probe time
> - ld_queue: stores descriptors after they are taken from ld_free and
>   prepared. At the same time, vchan_tx_prep() queues them to
>   vc->desc_allocated. The vc->desc_allocated list is then checked in
>   rz_dmac_issue_pending() and rz_dmac_irq_handler_thread() before
>   starting a new transfer via rz_dmac_xfer_desc(). In turn,
>   rz_dmac_xfer_desc() grabs the next descriptor from vc->desc_issued and
>   submits it for transfer
> - ld_active: stores the descriptors currently being transferred
>
> The interrupt handler moved a completed descriptor to ld_free before
> invoking its completion callback. Once returned to ld_free, the
> descriptor can be reused to prepare a new transfer. In theory, this
> means the descriptor could be re-prepared before its completion
> callback is called.
>
> Fully back the driver with the virt-dma APIs. With this, only ld_free
> needs to be kept to track how many free descriptors are available. This
> is now done as follows:
> - the prepare stage removes the first descriptor from the ld_free and
>   prepares it
> - the completion calls for it vc->desc_free() (rz_dmac_virt_desc_free())
>   which re-adds the descriptor at the end of ld_free
>
> With this, the critical areas in prepare callbacks were minimized to only
> getting the descriptor from the ld_free list.

Do you plan to remove ld_free as well?

>
> This change introduces struct rz_dmac_chan::desc to keep track of the

Remove "this change", just Introduce ...

Frank
> currently transferred descriptor. It is cleared in
> rz_dmac_terminate_all(), referenced from rz_dmac_issue_pending() to
> determine whether a new transfer can be started, and from
> rz_dmac_irq_handler_thread() once a descriptor has completed. Finally,
> rz_dmac_device_synchronize() was updated with a vchan_synchronize()
> call to ensure the terminated descriptor is freed and the tasklet is
> killed.
>
> With this, residue computation is also simplified, as it can now be
> handled entirely through the virt-dma APIs.
>
> The spin_lock/unlock operations from rz_dmac_irq_handler_thread() were
> replaced by guard as the final code after rework is simpler this way.
>
> As subsequent commits will set the Link End bit on the last descriptor
> of a transfer, rz_dmac_enable_hw() is also adjusted as part of the full
> conversion to virt-dma APIs. It no longer checks the channel enable
> status itself; instead, its callers verify whether the channel is
> enabled and whether the previous transfer has completed before starting
> a new one.
>
> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> ---
>

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 10/17] dmaengine: sh: rz-dmac: Refactor pause/resume code
  2026-05-12 12:12 ` [PATCH v5 10/17] dmaengine: sh: rz-dmac: Refactor pause/resume code Claudiu Beznea
@ 2026-05-12 21:43   ` Frank Li
  2026-05-13 13:35     ` Claudiu Beznea
  2026-05-14  0:57   ` sashiko-bot
  1 sibling, 1 reply; 52+ messages in thread
From: Frank Li @ 2026-05-12 21:43 UTC (permalink / raw)
  To: Claudiu Beznea
  Cc: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur,
	claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc

On Tue, May 12, 2026 at 03:12:11PM +0300, Claudiu Beznea wrote:
> Subsequent patches will add suspend/resume and cyclic DMA support to the
> rz-dmac driver. This support needs to work on SoCs where power to most
> components (including DMA) is turned off during system suspend. For this,
> some channels (for example cyclic ones) may need to be paused and resumed
> manually by the DMA driver during system suspend/resume.
>
> Refactor the pause/resume support so the same code can be reused in the
> system suspend/resume path.
>
> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> ---
>
> Changes in v5:
> - none
>
> Changes in v4:
> - reset channel->status in rz_dmac_free_chan_resources() and
>   rz_dmac_terminate_all()
>
> Changes in v3:
> - none, this patch is new
>
>  drivers/dma/sh/rz-dmac.c | 73 ++++++++++++++++++++++++++++++++++------
>  1 file changed, 62 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> index 53ee9fe65261..2bf796dcc5f6 100644
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c
> @@ -18,6 +18,7 @@
>  #include <linux/irqchip/irq-renesas-rzv2h.h>
>  #include <linux/irqchip/irq-renesas-rzt2h.h>
>  #include <linux/list.h>
> +#include <linux/lockdep.h>
>  #include <linux/module.h>
>  #include <linux/of.h>
>  #include <linux/of_dma.h>
> @@ -63,6 +64,14 @@ struct rz_dmac_desc {
>
>  #define to_rz_dmac_desc(d)	container_of(d, struct rz_dmac_desc, vd)
>
> +/**
> + * enum rz_dmac_chan_status: RZ DMAC channel status
> + * @RZ_DMAC_CHAN_STATUS_PAUSED: Channel is paused through DMA engine callbacks
> + */
> +enum rz_dmac_chan_status {
> +	RZ_DMAC_CHAN_STATUS_PAUSED,
> +};
> +

Not sure why BIT() is used for each status; I'd assume only one state can be active at a time.

Frank
>  struct rz_dmac_chan {
>  	struct virt_dma_chan vc;
>  	void __iomem *ch_base;
> @@ -74,6 +83,8 @@ struct rz_dmac_chan {
>  	dma_addr_t src_per_address;
>  	dma_addr_t dst_per_address;
>
> +	unsigned long status;
> +
>  	u32 chcfg;
>  	u32 chctrl;
>  	int mid_rid;
> @@ -491,6 +502,8 @@ static void rz_dmac_free_chan_resources(struct dma_chan *chan)
>  		channel->mid_rid = -EINVAL;
>  	}
>
> +	channel->status = 0;
> +
>  	spin_unlock_irqrestore(&channel->vc.lock, flags);
>
>  	vchan_free_chan_resources(&channel->vc);
> @@ -589,6 +602,9 @@ static int rz_dmac_terminate_all(struct dma_chan *chan)
>  	}
>
>  	vchan_get_all_descriptors(&channel->vc, &head);
> +
> +	channel->status = 0;
> +
>  	spin_unlock_irqrestore(&channel->vc.lock, flags);
>  	vchan_dma_desc_free_list(&channel->vc, &head);
>
> @@ -795,35 +811,70 @@ static enum dma_status rz_dmac_tx_status(struct dma_chan *chan,
>  	return status;
>  }
>
> -static int rz_dmac_device_pause(struct dma_chan *chan)
> +static int rz_dmac_device_pause_set(struct rz_dmac_chan *channel,
> +				    unsigned long set_bitmask)
>  {
> -	struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
> +	int ret = 0;
>  	u32 val;
>
> -	guard(spinlock_irqsave)(&channel->vc.lock);
> +	lockdep_assert_held(&channel->vc.lock);
>
>  	if (!rz_dmac_chan_is_enabled(channel))
>  		return 0;
>
> +	if (rz_dmac_chan_is_paused(channel))
> +		goto set_bit;
> +
>  	rz_dmac_ch_writel(channel, CHCTRL_SETSUS, CHCTRL, 1);
> -	return read_poll_timeout_atomic(rz_dmac_ch_readl, val,
> -					(val & CHSTAT_SUS), 1, 1024,
> -					false, channel, CHSTAT, 1);
> +	ret = read_poll_timeout_atomic(rz_dmac_ch_readl, val,
> +				       (val & CHSTAT_SUS), 1, 1024, false,
> +				       channel, CHSTAT, 1);
> +
> +set_bit:
> +	channel->status |= set_bitmask;
> +
> +	return ret;
>  }
>
> -static int rz_dmac_device_resume(struct dma_chan *chan)
> +static int rz_dmac_device_pause(struct dma_chan *chan)
>  {
>  	struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
> -	u32 val;
>
>  	guard(spinlock_irqsave)(&channel->vc.lock);
>
> +	return rz_dmac_device_pause_set(channel, BIT(RZ_DMAC_CHAN_STATUS_PAUSED));
> +}
> +
> +static int rz_dmac_device_resume_set(struct rz_dmac_chan *channel,
> +				     unsigned long clear_bitmask)
> +{
> +	int ret = 0;
> +	u32 val;
> +
> +	lockdep_assert_held(&channel->vc.lock);
> +
>  	/* Do not check CHSTAT_SUS but rely on HW capabilities. */
>
>  	rz_dmac_ch_writel(channel, CHCTRL_CLRSUS, CHCTRL, 1);
> -	return read_poll_timeout_atomic(rz_dmac_ch_readl, val,
> -					!(val & CHSTAT_SUS), 1, 1024,
> -					false, channel, CHSTAT, 1);
> +	ret = read_poll_timeout_atomic(rz_dmac_ch_readl, val,
> +				       !(val & CHSTAT_SUS), 1, 1024, false,
> +				       channel, CHSTAT, 1);
> +
> +	channel->status &= ~clear_bitmask;
> +
> +	return ret;
> +}
> +
> +static int rz_dmac_device_resume(struct dma_chan *chan)
> +{
> +	struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
> +
> +	guard(spinlock_irqsave)(&channel->vc.lock);
> +
> +	if (!(channel->status & BIT(RZ_DMAC_CHAN_STATUS_PAUSED)))
> +		return 0;
> +
> +	return rz_dmac_device_resume_set(channel, BIT(RZ_DMAC_CHAN_STATUS_PAUSED));
>  }
>
>  /*
> --
> 2.43.0
>

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 11/17] dmaengine: sh: rz-dmac: Drop the update of channel->chctrl with CHCTRL_SETEN
  2026-05-12 12:12 ` [PATCH v5 11/17] dmaengine: sh: rz-dmac: Drop the update of channel->chctrl with CHCTRL_SETEN Claudiu Beznea
@ 2026-05-12 21:55   ` Frank Li
  0 siblings, 0 replies; 52+ messages in thread
From: Frank Li @ 2026-05-12 21:55 UTC (permalink / raw)
  To: Claudiu Beznea
  Cc: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur,
	claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc

On Tue, May 12, 2026 at 03:12:12PM +0300, Claudiu Beznea wrote:
> The CHCTRL_SETEN bit is explicitly set in rz_dmac_enable_hw(). Updating
> struct rz_dmac_chan::chctrl with this bit in
> rz_dmac_prepare_desc_for_memcpy() and rz_dmac_prepare_descs_for_slave_sg()
> is unnecessary in the current code base. Moreover, it conflicts with the
> configuration sequence that will be used for cyclic DMA channels during
> suspend to RAM. Cyclic DMA support will be introduced in subsequent
> commits.
>
> This is a preparatory commit for cyclic DMA suspend to RAM support.
>
> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> ---

Reviewed-by: Frank Li <Frank.Li@nxp.com>

>
> Changes in v5:
> - none
>
> Changes in v4:
> - set channel->chctrl = 0 in rz_dmac_prepare_descs_for_slave_sg()
>
> Changes in v3:
> - none
>
> Changes in v2:
> - fixed typos in patch title and patch description
>
>  drivers/dma/sh/rz-dmac.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> index 2bf796dcc5f6..2de519b581b6 100644
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c
> @@ -377,7 +377,7 @@ static void rz_dmac_prepare_desc_for_memcpy(struct rz_dmac_chan *channel)
>  	rz_dmac_set_dma_req_no(dmac, channel->index, dmac->info->default_dma_req_no);
>
>  	channel->chcfg = chcfg;
> -	channel->chctrl = CHCTRL_STG | CHCTRL_SETEN;
> +	channel->chctrl = CHCTRL_STG;
>  }
>
>  static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
> @@ -428,7 +428,7 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
>
>  	rz_dmac_set_dma_req_no(dmac, channel->index, channel->mid_rid);
>
> -	channel->chctrl = CHCTRL_SETEN;
> +	channel->chctrl = 0;
>  }
>
>  static void rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
> --
> 2.43.0
>

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 12/17] dmaengine: sh: rz-dmac: Add cyclic DMA support
  2026-05-12 12:12 ` [PATCH v5 12/17] dmaengine: sh: rz-dmac: Add cyclic DMA support Claudiu Beznea
@ 2026-05-12 22:00   ` Frank Li
  2026-05-13 13:38     ` Claudiu Beznea
  2026-05-14  1:43   ` sashiko-bot
  1 sibling, 1 reply; 52+ messages in thread
From: Frank Li @ 2026-05-12 22:00 UTC (permalink / raw)
  To: Claudiu Beznea
  Cc: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur,
	claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc

On Tue, May 12, 2026 at 03:12:13PM +0300, Claudiu Beznea wrote:
> Add cyclic DMA support to the RZ DMAC driver. A per-channel status bit is
> introduced to mark cyclic channels and is set during the DMA prepare
> callback. The IRQ handler checks this status bit and calls
> vchan_cyclic_callback() accordingly.
>
> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> ---
>
> Changes in v5:
> - none
>
> Changes in v4:
> - drop the nxla update logic in rz_dmac_lmdesc_recycle() as this is
>   not needed for any kind of transfers
> - drop the update of channel->status = 0 from rz_dmac_free_chan_resources()
>   and rz_dmac_terminate_all() as this was moved in patch 09/17
>
> Changes in v3:
> - updated rz_dmac_lmdesc_recycle() to restore the lmdesc->nxla
> - in rz_dmac_prepare_descs_for_cyclic() update directly the
>   desc->start_lmdesc with the descriptor pointer instead of the
>   descriptor address
> - used rz_dmac_lmdesc_addr() to compute the descriptor address
> - set channel->status = 0 in rz_dmac_free_chan_resources()
> - in rz_dmac_prep_dma_cyclic() check for invalid periods or buffer len
>   and limit the critical area protected by spinlock
> - set channel->status = 0 in rz_dmac_terminate_all()
> - updated rz_dmac_calculate_residue_bytes_in_vd() to use
>   rz_dmac_lmdesc_addr()
> - dropped goto in rz_dmac_irq_handler_thread() as it is not needed
>   anymore; dropped also the local variable desc
>
> Changes in v2:
> - none
>
>  drivers/dma/sh/rz-dmac.c | 136 +++++++++++++++++++++++++++++++++++++--
>  1 file changed, 130 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> index 2de519b581b6..d6ad070be705 100644
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c
> @@ -35,6 +35,7 @@
>  enum  rz_dmac_prep_type {
>  	RZ_DMAC_DESC_MEMCPY,
>  	RZ_DMAC_DESC_SLAVE_SG,
> +	RZ_DMAC_DESC_CYCLIC,
>  };
>
>  struct rz_lmdesc {
> @@ -67,9 +68,11 @@ struct rz_dmac_desc {
>  /**
>   * enum rz_dmac_chan_status: RZ DMAC channel status
>   * @RZ_DMAC_CHAN_STATUS_PAUSED: Channel is paused through DMA engine callbacks
> + * @RZ_DMAC_CHAN_STATUS_CYCLIC: Channel is cyclic
>   */
>  enum rz_dmac_chan_status {
>  	RZ_DMAC_CHAN_STATUS_PAUSED,
> +	RZ_DMAC_CHAN_STATUS_CYCLIC,

Suggest adding a new bool field, e.g. iscycle, in rz_dmac_chan.

>  };
>
>  struct rz_dmac_chan {
> @@ -191,6 +194,7 @@ struct rz_dmac {
>
>  /* LINK MODE DESCRIPTOR */
>  #define HEADER_LV			BIT(0)
> +#define HEADER_WBD			BIT(2)
>
>  #define RZ_DMAC_MAX_CHAN_DESCRIPTORS	16
>  #define RZ_DMAC_MAX_CHANNELS		16
> @@ -431,6 +435,57 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
>  	channel->chctrl = 0;
>  }
>
...
>
> +static struct dma_async_tx_descriptor *
> +rz_dmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
> +			size_t buf_len, size_t period_len,
> +			enum dma_transfer_direction direction,
> +			unsigned long flags)
> +{
> +	struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
> +	struct rz_dmac_desc *desc;
> +	size_t periods;
> +
> +	if (!is_slave_direction(direction))
> +		return NULL;
> +
> +	if (!period_len || !buf_len)
> +		return NULL;
> +
> +	periods = buf_len / period_len;
> +	if (!periods || periods > DMAC_NR_LMDESC)
> +		return NULL;
> +
> +	scoped_guard(spinlock_irqsave, &channel->vc.lock) {
> +		if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC))
> +			return NULL;
> +
> +		desc = list_first_entry_or_null(&channel->ld_free, struct rz_dmac_desc, node);

Suggest using dma_pool to manage the descriptors, so ld_free can be removed.

Frank

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 13/17] dmaengine: sh: rz-dmac: Add runtime PM support
  2026-05-12 12:12 ` [PATCH v5 13/17] dmaengine: sh: rz-dmac: Add runtime PM support Claudiu Beznea
@ 2026-05-12 22:03   ` Frank Li
  2026-05-13 13:39     ` Claudiu Beznea
  2026-05-14  2:08   ` sashiko-bot
  1 sibling, 1 reply; 52+ messages in thread
From: Frank Li @ 2026-05-12 22:03 UTC (permalink / raw)
  To: Claudiu Beznea
  Cc: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur,
	claudiu.beznea, dmaengine, linux-kernel, linux-sound,
	linux-renesas-soc

On Tue, May 12, 2026 at 03:12:14PM +0300, Claudiu Beznea wrote:
> Protect the driver exposed APIs with runtime PM suspend/resume calls
> before accessing HW registers. As the current driver leaves runtime PM
> enabled in probe, the purpose of the changes in this patch is to avoid
> accessing HW registers after a failed system suspend leaves the runtime
> PM state of the device improperly reinitialized.
>
> In that case, the driver remains bound to the device, the APIs are still
> exposed, and any access to HW registers without runtime resuming the
> device may lead to synchronous aborts.
>
> This patch prepares the driver for suspend-to-RAM support.
>
> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> ---
>
> Changes in v5:
> - none, this patch is new
>
>  drivers/dma/sh/rz-dmac.c | 48 ++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 48 insertions(+)
>
> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> index d6ad070be705..df91657fd5e3 100644
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c
> @@ -488,7 +488,15 @@ static void rz_dmac_prepare_descs_for_cyclic(struct rz_dmac_chan *channel)
>
>  static void rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
>  {
> +	struct dma_chan *ch = &chan->vc.chan;
> +	struct rz_dmac *dmac = to_rz_dmac(ch->device);
>  	struct virt_dma_desc *vd;
> +	int ret;
> +
> +	PM_RUNTIME_ACQUIRE_IF_ENABLED(dmac->dev, pm);
> +	ret = PM_RUNTIME_ACQUIRE_ERR(&pm);
> +	if (ret)
> +		return;

According to Vinod's comment, the *_prep() calls may run in atomic context
(e.g. from a completion callback), but runtime PM may sleep.

Frank

>

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 02/17] dmaengine: sh: rz-dmac: Fix incorrect NULL check on list_first_entry()
  2026-05-12 20:35   ` Frank Li
@ 2026-05-13 13:31     ` Claudiu Beznea
  0 siblings, 0 replies; 52+ messages in thread
From: Claudiu Beznea @ 2026-05-13 13:31 UTC (permalink / raw)
  To: Frank Li, Claudiu Beznea
  Cc: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur, dmaengine,
	linux-kernel, linux-sound, linux-renesas-soc, stable

Hi, Frank,

On 5/12/26 23:35, Frank Li wrote:
> On Tue, May 12, 2026 at 03:12:03PM +0300, Claudiu Beznea wrote:
>> The list passed as argument to list_first_entry() is expected to be not
>> empty.
> 
> Little confused,
> 
> #define list_first_entry_or_null(ptr, type, member) ({ \
> 	struct list_head *head__ = (ptr); \
> 	struct list_head *pos__ = READ_ONCE(head__->next); \
> 	pos__ != head__ ? list_entry(pos__, type, member) : NULL; \
> })
> 
> 
> both list passed to list_first_entry() or list_first_entry_or_null() must
> be not NULL.

The intention was to express that checking the pointer returned by
list_first_entry() for NULL may lead to problems. I'll adjust the description
to make it clearer.

-- 
Thank you,
Claudiu


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 06/17] dmaengine: sh: rz-dmac: Save the start LM descriptor
  2026-05-12 20:48   ` Frank Li
@ 2026-05-13 13:33     ` Claudiu Beznea
  0 siblings, 0 replies; 52+ messages in thread
From: Claudiu Beznea @ 2026-05-13 13:33 UTC (permalink / raw)
  To: Frank Li, Claudiu Beznea
  Cc: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur, dmaengine,
	linux-kernel, linux-sound, linux-renesas-soc

Hi, Frank,

On 5/12/26 23:48, Frank Li wrote:
> On Tue, May 12, 2026 at 03:12:07PM +0300, Claudiu Beznea wrote:
>> Save the start LM descriptor to avoid looping through the entire
> 
> where the looping you try to avoid?

Here: https://elixir.bootlin.com/linux/v7.1-rc3/source/drivers/dma/sh/rz-dmac.c#L701

The point was to avoid starting from the head pointer and to use the
descriptor of interest directly:

-static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel, 
u32 crla)
+static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel,
+						 struct rz_dmac_desc *desc, u32 crla)
  {
-	struct rz_lmdesc *lmdesc = channel->lmdesc.head;
+	struct rz_lmdesc *lmdesc = desc->start_lmdesc;

Maybe "loop" was not the right word to use in the patch description. I'll adjust it.

-- 
Thank you,
Claudiu


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 09/17] dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor processing
  2026-05-12 21:38   ` Frank Li
@ 2026-05-13 13:34     ` Claudiu Beznea
  0 siblings, 0 replies; 52+ messages in thread
From: Claudiu Beznea @ 2026-05-13 13:34 UTC (permalink / raw)
  To: Frank Li, Claudiu Beznea
  Cc: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur, dmaengine,
	linux-kernel, linux-sound, linux-renesas-soc

Hi, Frank,

On 5/13/26 00:38, Frank Li wrote:
> On Tue, May 12, 2026 at 03:12:10PM +0300, Claudiu Beznea wrote:
>> The driver used a mix of virt-dma APIs and driver specific logic to
>> process descriptors. It maintained three internal queues: ld_free,
>> ld_queue, and ld_active as follows:
>> - ld_free: stores the descriptors pre-allocated at probe time
>> - ld_queue: stores descriptors after they are taken from ld_free and
>>    prepared. At the same time, vchan_tx_prep() queues them to
>>    vc->desc_allocated. The vc->desc_allocated list is then checked in
>>    rz_dmac_issue_pending() and rz_dmac_irq_handler_thread() before
>>    starting a new transfer via rz_dmac_xfer_desc(). In turn,
>>    rz_dmac_xfer_desc() grabs the next descriptor from vc->desc_issued and
>>    submits it for transfer
>> - ld_active: stores the descriptors currently being transferred
>>
>> The interrupt handler moved a completed descriptor to ld_free before
>> invoking its completion callback. Once returned to ld_free, the
>> descriptor can be reused to prepare a new transfer. In theory, this
>> means the descriptor could be re-prepared before its completion
>> callback is called.
>>
>> Fully back the driver with the virt-dma APIs. With this, only ld_free
>> needs to be kept to track how many free descriptors are available. This
>> is now done as follows:
>> - the prepare stage removes the first descriptor from the ld_free and
>>    prepares it
>> - the completion path calls vc->desc_free() (rz_dmac_virt_desc_free())
>>    for it, which re-adds the descriptor at the end of ld_free
>>
>> With this, the critical areas in prepare callbacks were minimized to only
>> getting the descriptor from the ld_free list.
> 
> Do you plan remove ld_free also?

I thought about it, but I prefer to keep it out of this set as it is already 
big enough and I haven't noticed any issues with it.

> 
>>
>> This change introduces struct rz_dmac_chan::desc to keep track of the
> 
> Remove "this change", just Introduce ...

OK

-- 
Thank you,
Claudiu


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 10/17] dmaengine: sh: rz-dmac: Refactor pause/resume code
  2026-05-12 21:43   ` Frank Li
@ 2026-05-13 13:35     ` Claudiu Beznea
  0 siblings, 0 replies; 52+ messages in thread
From: Claudiu Beznea @ 2026-05-13 13:35 UTC (permalink / raw)
  To: Frank Li, Claudiu Beznea
  Cc: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur, dmaengine,
	linux-kernel, linux-sound, linux-renesas-soc

Hi, Frank,

On 5/13/26 00:43, Frank Li wrote:
> On Tue, May 12, 2026 at 03:12:11PM +0300, Claudiu Beznea wrote:
>> Subsequent patches will add suspend/resume and cyclic DMA support to the
>> rz-dmac driver. This support needs to work on SoCs where power to most
>> components (including DMA) is turned off during system suspend. For this,
>> some channels (for example cyclic ones) may need to be paused and resumed
>> manually by the DMA driver during system suspend/resume.
>>
>> Refactor the pause/resume support so the same code can be reused in the
>> system suspend/resume path.
>>
>> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
>> ---
>>
>> Changes in v5:
>> - none
>>
>> Changes in v4:
>> - reset channel->status in rz_dmac_free_chan_resources() and
>>    rz_dmac_terminate_all()
>>
>> Changes in v3:
>> - none, this patch is new
>>
>>   drivers/dma/sh/rz-dmac.c | 73 ++++++++++++++++++++++++++++++++++------
>>   1 file changed, 62 insertions(+), 11 deletions(-)
>>
>> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
>> index 53ee9fe65261..2bf796dcc5f6 100644
>> --- a/drivers/dma/sh/rz-dmac.c
>> +++ b/drivers/dma/sh/rz-dmac.c
>> @@ -18,6 +18,7 @@
>>   #include <linux/irqchip/irq-renesas-rzv2h.h>
>>   #include <linux/irqchip/irq-renesas-rzt2h.h>
>>   #include <linux/list.h>
>> +#include <linux/lockdep.h>
>>   #include <linux/module.h>
>>   #include <linux/of.h>
>>   #include <linux/of_dma.h>
>> @@ -63,6 +64,14 @@ struct rz_dmac_desc {
>>
>>   #define to_rz_dmac_desc(d)	container_of(d, struct rz_dmac_desc, vd)
>>
>> +/**
>> + * enum rz_dmac_chan_status: RZ DMAC channel status
>> + * @RZ_DMAC_CHAN_STATUS_PAUSED: Channel is paused through DMA engine callbacks
>> + */
>> +enum rz_dmac_chan_status {
>> +	RZ_DMAC_CHAN_STATUS_PAUSED,
>> +};
>> +
> 
> Not sure why use BIT() for each status? suppose only one certain state

Later (in the next patches), a channel can be paused (or paused internally) 
and cyclic at the same time. This way we can keep a single member in struct 
rz_dmac_chan for all of these states and clear them with a single store (e.g. 
in rz_dmac_free_chan_resources(), rz_dmac_terminate_all()).

I consider this more compact than having individual state variables for each 
of them.

-- 
Thank you,
Claudiu


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 12/17] dmaengine: sh: rz-dmac: Add cyclic DMA support
  2026-05-12 22:00   ` Frank Li
@ 2026-05-13 13:38     ` Claudiu Beznea
  0 siblings, 0 replies; 52+ messages in thread
From: Claudiu Beznea @ 2026-05-13 13:38 UTC (permalink / raw)
  To: Frank Li, Claudiu Beznea
  Cc: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur, dmaengine,
	linux-kernel, linux-sound, linux-renesas-soc

Hi, Frank,

On 5/13/26 01:00, Frank Li wrote:
> On Tue, May 12, 2026 at 03:12:13PM +0300, Claudiu Beznea wrote:
>> Add cyclic DMA support to the RZ DMAC driver. A per-channel status bit is
>> introduced to mark cyclic channels and is set during the DMA prepare
>> callback. The IRQ handler checks this status bit and calls
>> vchan_cyclic_callback() accordingly.
>>
>> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
>> ---
>>
>> Changes in v5:
>> - none
>>
>> Changes in v4:
>> - drop the nxla update logic in rz_dmac_lmdesc_recycle() as this is
>>    not needed for any kind of transfers
>> - drop the update of channel->status = 0 from rz_dmac_free_chan_resources()
>>    and rz_dmac_terminate_all() as this was moved in patch 09/17
>>
>> Changes in v3:
>> - updated rz_dmac_lmdesc_recycle() to restore the lmdesc->nxla
>> - in rz_dmac_prepare_descs_for_cyclic() update directly the
>>    desc->start_lmdesc with the descriptor pointer instead of the
>>    descriptor address
>> - used rz_dmac_lmdesc_addr() to compute the descriptor address
>> - set channel->status = 0 in rz_dmac_free_chan_resources()
>> - in rz_dmac_prep_dma_cyclic() check for invalid periods or buffer len
>>    and limit the critical area protected by spinlock
>> - set channel->status = 0 in rz_dmac_terminate_all()
>> - updated rz_dmac_calculate_residue_bytes_in_vd() to use
>>    rz_dmac_lmdesc_addr()
>> - dropped goto in rz_dmac_irq_handler_thread() as it is not needed
>>    anymore; dropped also the local variable desc
>>
>> Changes in v2:
>> - none
>>
>>   drivers/dma/sh/rz-dmac.c | 136 +++++++++++++++++++++++++++++++++++++--
>>   1 file changed, 130 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
>> index 2de519b581b6..d6ad070be705 100644
>> --- a/drivers/dma/sh/rz-dmac.c
>> +++ b/drivers/dma/sh/rz-dmac.c
>> @@ -35,6 +35,7 @@
>>   enum  rz_dmac_prep_type {
>>   	RZ_DMAC_DESC_MEMCPY,
>>   	RZ_DMAC_DESC_SLAVE_SG,
>> +	RZ_DMAC_DESC_CYCLIC,
>>   };
>>
>>   struct rz_lmdesc {
>> @@ -67,9 +68,11 @@ struct rz_dmac_desc {
>>   /**
>>    * enum rz_dmac_chan_status: RZ DMAC channel status
>>    * @RZ_DMAC_CHAN_STATUS_PAUSED: Channel is paused through DMA engine callbacks
>> + * @RZ_DMAC_CHAN_STATUS_CYCLIC: Channel is cyclic
>>    */
>>   enum rz_dmac_chan_status {
>>   	RZ_DMAC_CHAN_STATUS_PAUSED,
>> +	RZ_DMAC_CHAN_STATUS_CYCLIC,
> 
> Suggest adding a new bool field, iscycle, to rz_dmac_chan.

I would prefer to keep it as proposed in this patch, if everyone is fine with 
that. This way everything status related is packed into a single variable, 
struct rz_dmac_chan::status, and only a single cleanup operation is needed 
when the transactions are terminated.

> 
>>   };
>>
>>   struct rz_dmac_chan {
>> @@ -191,6 +194,7 @@ struct rz_dmac {
>>
>>   /* LINK MODE DESCRIPTOR */
>>   #define HEADER_LV			BIT(0)
>> +#define HEADER_WBD			BIT(2)
>>
>>   #define RZ_DMAC_MAX_CHAN_DESCRIPTORS	16
>>   #define RZ_DMAC_MAX_CHANNELS		16
>> @@ -431,6 +435,57 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
>>   	channel->chctrl = 0;
>>   }
>>
> ...
>>
>> +static struct dma_async_tx_descriptor *
>> +rz_dmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
>> +			size_t buf_len, size_t period_len,
>> +			enum dma_transfer_direction direction,
>> +			unsigned long flags)
>> +{
>> +	struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
>> +	struct rz_dmac_desc *desc;
>> +	size_t periods;
>> +
>> +	if (!is_slave_direction(direction))
>> +		return NULL;
>> +
>> +	if (!period_len || !buf_len)
>> +		return NULL;
>> +
>> +	periods = buf_len / period_len;
>> +	if (!periods || periods > DMAC_NR_LMDESC)
>> +		return NULL;
>> +
>> +	scoped_guard(spinlock_irqsave, &channel->vc.lock) {
>> +		if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC))
>> +			return NULL;
>> +
>> +		desc = list_first_entry_or_null(&channel->ld_free, struct rz_dmac_desc, node);
> 
> Suggest using dma_pool to manage the descriptors, so ld_free can be removed.

Sure, but I would like to keep it out of this set as it is already big enough 
and I haven't noticed any potential issues with it.

-- 
Thank you,
Claudiu


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 13/17] dmaengine: sh: rz-dmac: Add runtime PM support
  2026-05-12 22:03   ` Frank Li
@ 2026-05-13 13:39     ` Claudiu Beznea
  2026-05-13 19:56       ` Frank Li
  0 siblings, 1 reply; 52+ messages in thread
From: Claudiu Beznea @ 2026-05-13 13:39 UTC (permalink / raw)
  To: Frank Li, Claudiu Beznea
  Cc: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
	prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur, dmaengine,
	linux-kernel, linux-sound, linux-renesas-soc

Hi, Frank,

On 5/13/26 01:03, Frank Li wrote:
> On Tue, May 12, 2026 at 03:12:14PM +0300, Claudiu Beznea wrote:
>> Protect the driver exposed APIs with runtime PM suspend/resume calls
>> before accessing HW registers. As the current driver leaves runtime PM
>> enabled in probe, the purpose of the changes in this patch is to avoid
>> accessing HW registers after a failed system suspend leaves the runtime
>> PM state of the device improperly reinitialized.
>>
>> In that case, the driver remains bound to the device, the APIs are still
>> exposed, and any access to HW registers without runtime resuming the
>> device may lead to synchronous aborts.
>>
>> This patch prepares the driver for suspend-to-RAM support.
>>
>> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
>> ---
>>
>> Changes in v5:
>> - none, this patch is new
>>
>>   drivers/dma/sh/rz-dmac.c | 48 ++++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 48 insertions(+)
>>
>> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
>> index d6ad070be705..df91657fd5e3 100644
>> --- a/drivers/dma/sh/rz-dmac.c
>> +++ b/drivers/dma/sh/rz-dmac.c
>> @@ -488,7 +488,15 @@ static void rz_dmac_prepare_descs_for_cyclic(struct rz_dmac_chan *channel)
>>
>>   static void rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
>>   {
>> +	struct dma_chan *ch = &chan->vc.chan;
>> +	struct rz_dmac *dmac = to_rz_dmac(ch->device);
>>   	struct virt_dma_desc *vd;
>> +	int ret;
>> +
>> +	PM_RUNTIME_ACQUIRE_IF_ENABLED(dmac->dev, pm);
>> +	ret = PM_RUNTIME_ACQUIRE_ERR(&pm);
>> +	if (ret)
>> +		return;
> 
> According to Vinod's comment, the *_prep() calls may happen in atomic
> context (completion callback), but runtime PM may sleep.

That's why pm_runtime_irq_safe() was called in probe: to allow runtime PM to 
be used in atomic context.

The series was tested with CONFIG_LOCKDEP=y and CONFIG_DEBUG_ATOMIC_SLEEP=y 
and no issues were identified.

-- 
Thank you,
Claudiu


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 13/17] dmaengine: sh: rz-dmac: Add runtime PM support
  2026-05-13 13:39     ` Claudiu Beznea
@ 2026-05-13 19:56       ` Frank Li
  2026-05-14  9:20         ` Claudiu Beznea
  0 siblings, 1 reply; 52+ messages in thread
From: Frank Li @ 2026-05-13 19:56 UTC (permalink / raw)
  To: Claudiu Beznea
  Cc: Claudiu Beznea, vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai,
	biju.das.jz, prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur, dmaengine,
	linux-kernel, linux-sound, linux-renesas-soc

On Wed, May 13, 2026 at 04:39:12PM +0300, Claudiu Beznea wrote:
> Hi, Frank,
>
> On 5/13/26 01:03, Frank Li wrote:
> > On Tue, May 12, 2026 at 03:12:14PM +0300, Claudiu Beznea wrote:
> > > Protect the driver exposed APIs with runtime PM suspend/resume calls
> > > before accessing HW registers. As the current driver leaves runtime PM
> > > enabled in probe, the purpose of the changes in this patch is to avoid
> > > accessing HW registers after a failed system suspend leaves the runtime
> > > PM state of the device improperly reinitialized.
> > >
> > > In that case, the driver remains bound to the device, the APIs are still
> > > exposed, and any access to HW registers without runtime resuming the
> > > device may lead to synchronous aborts.
> > >
> > > This patch prepares the driver for suspend-to-RAM support.
> > >
> > > Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> > > ---
> > >
> > > Changes in v5:
> > > - none, this patch is new
> > >
> > >   drivers/dma/sh/rz-dmac.c | 48 ++++++++++++++++++++++++++++++++++++++++
> > >   1 file changed, 48 insertions(+)
> > >
> > > diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> > > index d6ad070be705..df91657fd5e3 100644
> > > --- a/drivers/dma/sh/rz-dmac.c
> > > +++ b/drivers/dma/sh/rz-dmac.c
> > > @@ -488,7 +488,15 @@ static void rz_dmac_prepare_descs_for_cyclic(struct rz_dmac_chan *channel)
> > >
> > >   static void rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
> > >   {
> > > +	struct dma_chan *ch = &chan->vc.chan;
> > > +	struct rz_dmac *dmac = to_rz_dmac(ch->device);
> > >   	struct virt_dma_desc *vd;
> > > +	int ret;
> > > +
> > > +	PM_RUNTIME_ACQUIRE_IF_ENABLED(dmac->dev, pm);
> > > +	ret = PM_RUNTIME_ACQUIRE_ERR(&pm);
> > > +	if (ret)
> > > +		return;
> >
> > According to Vinod's comment, the *_prep() calls may happen in atomic
> > context (completion callback), but runtime PM may sleep.
>
> That's why pm_runtime_irq_safe() was called in probe: to allow runtime PM
> to be used in atomic context.
>
> The series was tested with CONFIG_LOCKDEP=y and CONFIG_DEBUG_ATOMIC_SLEEP=y
> and no issues were identified.

I am not sure what magic makes pm_runtime_get_sync() work in atomic context.
Suppose the runtime PM callbacks involve clk_(un)prepare() and a power domain:
calling pm_runtime_irq_safe() in probe may keep all the parent resources
powered on from probe time. At least it should be deferred to channel
allocation.

Some platforms' dependencies are simple and may just use MMIO to gate the
clock.

Frank
>
> --
> Thank you,
> Claudiu
>

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 01/17] dmaengine: sh: rz-dmac: Move interrupt request after everything is set up
  2026-05-12 12:12 ` [PATCH v5 01/17] dmaengine: sh: rz-dmac: Move interrupt request after everything is set up Claudiu Beznea
  2026-05-12 20:28   ` Frank Li
@ 2026-05-13 21:44   ` sashiko-bot
  1 sibling, 0 replies; 52+ messages in thread
From: sashiko-bot @ 2026-05-13 21:44 UTC (permalink / raw)
  To: Claudiu Beznea; +Cc: vkoul, wsa+renesas, dmaengine, Frank.Li

Thank you for your contribution! Sashiko AI review found 3 potential issue(s) to consider:
- [High] Asynchronous pm_runtime_put() used immediately before pm_runtime_disable() leaves hardware in incorrect state.
- [High] Hardware is placed in reset and powered down while devm-managed interrupts are still active, risking a kernel crash.
- [High] Missing tasklet_kill() during probe error and removal paths leads to a Use-After-Free.
--

commit db01e5ee3e302bfa4e29d74e8df1953929f696eb
Author: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>

dmaengine: sh: rz-dmac: Move interrupt request after everything is set up

This patch fixes potential invalid memory access during probe by moving the
interrupt request to the end of the channel initialization, ensuring all
data structures are fully set up before the interrupt handler can run. It
also switches to dmam_alloc_coherent() to simplify the error and removal
paths.

> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> index 625ff29024de0..9f206a33dcc6b 100644
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c

[ ... ]

> @@ -1210,16 +1207,6 @@ static int rz_dmac_probe(struct platform_device *pdev)
>  dma_register_err:
>  	of_dma_controller_free(pdev->dev.of_node);
>  err:
>  	reset_control_assert(dmac->rstc);
>  err_pm_runtime_put:
>  	pm_runtime_put(&pdev->dev);
> @@ -1232,18 +1219,9 @@ static int rz_dmac_probe(struct platform_device *pdev)
>  static void rz_dmac_remove(struct platform_device *pdev)
>  {
>  	struct rz_dmac *dmac = platform_get_drvdata(pdev);
>  
>  	dma_async_device_unregister(&dmac->engine);
>  	of_dma_controller_free(pdev->dev.of_node);
>  	reset_control_assert(dmac->rstc);
>  	pm_runtime_put(&pdev->dev);
>  	pm_runtime_disable(&pdev->dev);
>  }

While reviewing these changes, I noticed a few pre-existing regressions that
remain after this patch.

Does calling pm_runtime_put() asynchronously right before pm_runtime_disable()
leave the hardware active?

pm_runtime_put() queues an idle request, but the subsequent
pm_runtime_disable() cancels pending async operations via
__pm_runtime_barrier(). Should this use pm_runtime_put_sync() instead to
ensure the device actually suspends and doesn't leak power resources?

Also, the hardware reset is asserted (reset_control_assert) and PM is
disabled before returning. Since the interrupts are registered with
devm_request_irq() and devm_request_threaded_irq(), they will be freed by
devres only after returning.

If a spurious or delayed interrupt fires during this window, could the IRQ
handler access hardware registers while the IP block is in reset and unclocked,
leading to a crash?

Finally, vchan_init() initializes a tasklet (channel->vc.task) for each
channel. Since the channels array is allocated with devm_kcalloc(), it is
automatically freed by devres on removal or probe failure.

However, there is no explicit tasklet_kill() during removal or probe error
handling. Although devres frees the IRQ, free_irq() only syncs the handler
and thread, not pending tasklets. If the tasklet is scheduled before the IRQ
is freed, could it remain on the system softirq queue and execute after
devres frees the channel memory, causing a use-after-free?

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260512121219.216159-1-claudiu.beznea.uj@bp.renesas.com?part=1

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 02/17] dmaengine: sh: rz-dmac: Fix incorrect NULL check on list_first_entry()
  2026-05-12 12:12 ` [PATCH v5 02/17] dmaengine: sh: rz-dmac: Fix incorrect NULL check on list_first_entry() Claudiu Beznea
  2026-05-12 20:35   ` Frank Li
@ 2026-05-13 22:00   ` sashiko-bot
  1 sibling, 0 replies; 52+ messages in thread
From: sashiko-bot @ 2026-05-13 22:00 UTC (permalink / raw)
  To: Claudiu Beznea; +Cc: vkoul, dmaengine, wsa+renesas, Frank.Li

Thank you for your contribution! Sashiko AI review found 1 potential issue(s) to consider:
- [High] Prematurely returning 0 when `ld_active` is empty skips searching `ld_queue`, falsely reporting queued descriptors as `DMA_COMPLETE`.
--

commit e2ab802dbc50a8c1bda80f9138ff1923d4e471e9
Author: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>

dmaengine: sh: rz-dmac: Fix incorrect NULL check on list_first_entry()

This commit replaces list_first_entry() with list_first_entry_or_null()
when retrieving the active descriptor to avoid dereferencing invalid
memory when the list is empty.

> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> index 9f206a33dcc6b..6d80cb668957f 100644
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c
> @@ -723,8 +723,8 @@ static u32 rz_dmac_chan_get_residue(struct rz_dmac_chan *channel,
>  	u32 crla, crtb, i;
>  
>  	/* Get current processing virtual descriptor */
> -	current_desc = list_first_entry(&channel->ld_active,
> -					struct rz_dmac_desc, node);
> +	current_desc = list_first_entry_or_null(&channel->ld_active,
> +						struct rz_dmac_desc, node);
>  	if (!current_desc)
>  		return 0;

If a client queries the status of a pending transfer before it starts,
channel->ld_active will be empty.

Does returning 0 here bypass the subsequent search of channel->ld_queue?

If the search of channel->ld_queue is skipped, will rz_dmac_tx_status()
erroneously override the status and report it as complete for an unstarted
transfer?

Could this regression cause clients to prematurely free memory buffers and
lead to a use-after-free when the hardware later executes the transfer?

Prior to this patch, the out-of-bounds pointer from list_first_entry()
bypassed the early return, resulting in a garbage cookie value that typically
didn't match the requested cookie.

The logic would then safely fall through, correctly search channel->ld_queue,
and return desc->len for pending transfers.

Commit 42ad33b3ab87bf57f49fbcb8d0351a26fc1ff69d ("dmaengine: sh: rz-dmac: Use
virt-dma APIs for channel descriptor processing") later refactors descriptor
processing and handles the residue calculation via vchan_find_desc(), which
resolves this issue.

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260512121219.216159-1-claudiu.beznea.uj@bp.renesas.com?part=2

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 03/17] dmaengine: sh: rz-dmac: Use list_first_entry_or_null()
  2026-05-12 12:12 ` [PATCH v5 03/17] dmaengine: sh: rz-dmac: Use list_first_entry_or_null() Claudiu Beznea
  2026-05-12 20:38   ` Frank Li
@ 2026-05-13 22:18   ` sashiko-bot
  1 sibling, 0 replies; 52+ messages in thread
From: sashiko-bot @ 2026-05-13 22:18 UTC (permalink / raw)
  To: Claudiu Beznea; +Cc: Frank.Li, dmaengine, wsa+renesas, vkoul

Thank you for your contribution! Sashiko AI review found 1 potential issue(s) to consider:
- [High] Pre-existing severe race condition exposing uninitialized descriptors to hardware execution queues, causing transfers to permanently hang.
--

commit 47f89a64a0228eb1a1e5b15d4e627cac741a15c7
Author: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>

dmaengine: sh: rz-dmac: Use list_first_entry_or_null()

This commit simplifies descriptor allocation by replacing an open-coded
list_empty() check and list_first_entry() call with a single call to
list_first_entry_or_null().

> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> index 6d80cb668957f..1717b407ab9e9 100644
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c
> @@ -503,11 +503,10 @@ rz_dmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
>  		__func__, channel->index, &src, &dest, len);
>  
>  	scoped_guard(spinlock_irqsave, &channel->vc.lock) {
> -		if (list_empty(&channel->ld_free))
> +		desc = list_first_entry_or_null(&channel->ld_free, struct rz_dmac_desc, node);
> +		if (!desc)
>  			return NULL;
>  
> -		desc = list_first_entry(&channel->ld_free, struct rz_dmac_desc, node);
> -
>  		desc->type = RZ_DMAC_DESC_MEMCPY;
>  		desc->src = src;
>  		desc->dest = dest;

This isn't a bug introduced by this commit, but there appears to be a
pre-existing race condition in the functionally modified block.

At the end of the scoped_guard block for channel->vc.lock, the descriptor is
moved from ld_free to ld_queue, and then the lock is dropped. Outside the
lock, vchan_tx_prep() is called.

If rz_dmac_irq_handler_thread() runs in the exact window between dropping the
lock and vchan_tx_prep(), could it prematurely pull the descriptor from
ld_queue?

It would then call rz_dmac_xfer_desc(), which checks vchan_next_desc().
Because the descriptor hasn't been fully initialized or submitted to virt-dma
yet, vchan_next_desc() would return NULL, causing rz_dmac_xfer_desc() to
return 0 without programming the hardware.

The IRQ handler would then erroneously move the descriptor to ld_active. Later,
when the client finally calls rz_dmac_issue_pending(), it would find ld_queue
empty and do nothing.

Would this cause the transfer to remain stuck in ld_active and never execute?

Note that this issue is fixed later in the patch series by commit "dmaengine:
sh: rz-dmac: Use virt-dma APIs for channel descriptor processing".

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260512121219.216159-1-claudiu.beznea.uj@bp.renesas.com?part=3

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 06/17] dmaengine: sh: rz-dmac: Save the start LM descriptor
  2026-05-12 12:12 ` [PATCH v5 06/17] dmaengine: sh: rz-dmac: Save the start LM descriptor Claudiu Beznea
  2026-05-12 20:48   ` Frank Li
@ 2026-05-13 23:52   ` sashiko-bot
  1 sibling, 0 replies; 52+ messages in thread
From: sashiko-bot @ 2026-05-13 23:52 UTC (permalink / raw)
  To: Claudiu Beznea; +Cc: Frank.Li, wsa+renesas, dmaengine, vkoul

Thank you for your contribution! Sashiko AI review found 4 potential issue(s) to consider:
- [High] Uninitialized pointer dereference causing a kernel panic in `rz_dmac_calculate_residue_bytes_in_vd`.
- [High] Pre-existing bug: `rz_dmac_prepare_desc_for_memcpy` does not advance the hardware ring buffer tail, causing active descriptor corruption.
- [High] Pre-existing bug: Fundamentally broken residue calculation in `rz_dmac_calculate_residue_bytes_in_vd()`.
- [Medium] The patch fails to prevent the unnecessary loop iterations it claims to fix (Placebo Fix).
--

commit 4d414b3c13389eee6de9985a5483dae47daf52b4
Author: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>

dmaengine: sh: rz-dmac: Save the start LM descriptor

Save the start LM descriptor to avoid looping through the entire
channel's LM descriptor list when computing the residue. This avoids
unnecessary iterations.

> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> index c48858b68dee6..d3926ecd63ac1 100644
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c

[ ... ]

> @@ -343,6 +344,8 @@ static void rz_dmac_prepare_desc_for_memcpy(struct rz_dmac_chan *channel)
>  	struct rz_dmac_desc *d = channel->desc;
>  	u32 chcfg = CHCFG_MEM_COPY;
>  
> +	d->start_lmdesc = lmdesc;
> +
>  	/* prepare descriptor */
>  	lmdesc->sa = d->src;
>  	lmdesc->da = d->dest;

This isn't a bug introduced by this commit, but I noticed
rz_dmac_prepare_desc_for_memcpy() does not appear to advance the hardware ring
buffer tail.

If multiple memcpy transfers are queued, will each new preparation overwrite
the exact same slot in the DMA hardware's ring buffer memory?

If a new memcpy transfer is prepared while a previous one is currently
executing from that slot, could this physically overwrite the active hardware
descriptor and lead to unpredictable hardware behavior and memory corruption?

[ ... ]

> @@ -693,9 +697,10 @@ rz_dmac_get_next_lmdesc(struct rz_lmdesc *base, struct rz_lmdesc *lmdesc)
>  	return next;
>  }
>  
> -static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel, u32 crla)
> +static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel,
> +						 struct rz_dmac_desc *desc, u32 crla)
>  {
> -	struct rz_lmdesc *lmdesc = channel->lmdesc.head;
> +	struct rz_lmdesc *lmdesc = desc->start_lmdesc;

Can desc->start_lmdesc be uninitialized here and cause a panic?

If a client driver prepares and submits a descriptor but delays calling
dma_async_issue_pending(), the descriptor enters ld_queue but not desc_issued.
If a prior transfer completes, rz_dmac_irq_handler_thread() attempts to
process ld_queue by calling rz_dmac_xfer_desc().

Because the new descriptor is not in desc_issued, vchan_next_desc() returns
NULL, and rz_dmac_xfer_desc() returns 0 without executing the prepare
functions, leaving start_lmdesc as NULL.

The IRQ handler then interprets the 0 return as success, moving this
unprepared descriptor to ld_active. A subsequent call to
dmaengine_tx_status() triggers rz_dmac_calculate_residue_bytes_in_vd(), which
would dereference the NULL start_lmdesc.

I see this is fixed later in the patch series by commit "dmaengine: sh:
rz-dmac: Use virt-dma APIs for channel descriptor processing" which correctly
handles vchan_next_desc() returning NULL, but could this cause bisectability
issues here?

>  	struct dma_chan *chan = &channel->vc.chan;
>  	struct rz_dmac *dmac = to_rz_dmac(chan->device);
>  	u32 residue = 0, i = 0;

Does the while (lmdesc->nxla != crla) loop below this still iterate through
the whole list, missing the intended optimization mentioned in the commit
message?

If the hardware is actively processing the first descriptor, crla is the
address of that first descriptor. Since lmdesc is initialized to
desc->start_lmdesc (which is also the first descriptor), lmdesc->nxla will
point to the second descriptor.

This will not match crla, so it appears the loop will still iterate through
all 64 elements of the ring buffer to find the descriptor preceding the
first one.

Also, a pre-existing issue unrelated to this patch: is the underlying residue
arithmetic correct for multi-segment transfers?

When processing a multi-segment scatter-gather list, the function attempts to
find the descriptor preceding the currently executing one. Once found, it
iterates forward using:

    while (lmdesc->chcfg & CHCFG_DEM)

Because it starts from the descriptor preceding the current one, if crla is
the second descriptor (L2), the loop starts at L1, adds L1->tb, and terminates
because L2's CHCFG_DEM is 0.

It then adds crtb (the bytes of the current segment). Does the final returned
value double-count the already transferred L1 and completely miss the sizes of
all future segments in the virtual transfer?

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260512121219.216159-1-claudiu.beznea.uj@bp.renesas.com?part=6

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 07/17] dmaengine: sh: rz-dmac: Add helper to check if the channel is enabled
  2026-05-12 12:12 ` [PATCH v5 07/17] dmaengine: sh: rz-dmac: Add helper to check if the channel is enabled Claudiu Beznea
  2026-05-12 20:49   ` Frank Li
@ 2026-05-13 23:59   ` sashiko-bot
  1 sibling, 0 replies; 52+ messages in thread
From: sashiko-bot @ 2026-05-13 23:59 UTC (permalink / raw)
  To: Claudiu Beznea; +Cc: dmaengine, wsa+renesas, vkoul, Frank.Li

Thank you for your contribution! Sashiko AI review found 3 potential issue(s) to consider:
- [Low] Incomplete refactoring: the `rz_dmac_chan_is_enabled` helper is not used in `rz_dmac_enable_hw()`.
- [Low] Parameter naming convention mismatch in `rz_dmac_chan_is_enabled()`.
- [Low] Redundant double-negation operator (`!!`) used with a `bool` return type.
--

commit c927e2bf558812607e323c4466ef5f913dc31b55
Author: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>

dmaengine: sh: rz-dmac: Add helper to check if the channel is enabled

This commit adds a helper function to check if a DMA channel is enabled.
The new helper is then used to refactor the device pause operation.

> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> index d3926ecd63ac1..c7337cf271369 100644
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c
> @@ -279,6 +279,13 @@ static void rz_dmac_lmdesc_recycle(struct rz_dmac_chan *channel)
>  	channel->lmdesc.head = lmdesc;
>  }
>  
> +static bool rz_dmac_chan_is_enabled(struct rz_dmac_chan *chan)

This isn't a bug, but could we use "channel" instead of "chan" for the
parameter name?

Throughout the rest of the driver, "chan" seems reserved for
struct dma_chan *, while "channel" is used for struct rz_dmac_chan *.

> +{
> +	u32 val = rz_dmac_ch_readl(chan, CHSTAT, 1);
> +
> +	return !!(val & CHSTAT_EN);

Is the double negation necessary here?

Since the function returns a bool, the compiler will automatically convert
any non-zero value to true, so returning val & CHSTAT_EN should be
sufficient.

> +}
> +
>  static void rz_dmac_enable_hw(struct rz_dmac_chan *channel)

Could the new helper be used here in rz_dmac_enable_hw() as well?

Looking at rz_dmac_enable_hw(), it has an identical open-coded check:

static void rz_dmac_enable_hw(struct rz_dmac_chan *channel)
{
        ...
        chstat = rz_dmac_ch_readl(channel, CHSTAT, 1);
        if (!(chstat & CHSTAT_EN)) {
        ...

Using the helper there would allow removing the local chstat variable and
improve consistency.

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260512121219.216159-1-claudiu.beznea.uj@bp.renesas.com?part=7

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 09/17] dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor processing
  2026-05-12 12:12 ` [PATCH v5 09/17] dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor processing Claudiu Beznea
  2026-05-12 21:38   ` Frank Li
@ 2026-05-14  0:42   ` sashiko-bot
  1 sibling, 0 replies; 52+ messages in thread
From: sashiko-bot @ 2026-05-14  0:42 UTC (permalink / raw)
  To: Claudiu Beznea; +Cc: vkoul, wsa+renesas, Frank.Li, dmaengine

Thank you for your contribution! Sashiko AI review found 1 potential issue(s) to consider:
- [Medium] Active descriptor (`channel->desc`) is leaked in `rz_dmac_free_chan_resources()` if the channel is not explicitly terminated.
--

commit 215fec8fbe90c070b8a33e790b492a28153d2d70
Author: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>

dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor processing

This commit refactors the rz-dmac driver to fully utilize virt-dma APIs for
descriptor management. It replaces the internal ld_queue and ld_active lists
with virt-dma equivalents, and introduces channel->desc to track the currently
processing descriptor.

> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> index 042f85e58a799..53ee9fe652614 100644
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c
[ ... ]
> @@ -494,8 +485,6 @@ static void rz_dmac_free_chan_resources(struct dma_chan *chan)
>  	rz_lmdesc_setup(channel, channel->lmdesc.base);
>  
>  	rz_dmac_disable_hw(channel);
> -	list_splice_tail_init(&channel->ld_active, &channel->ld_free);
> -	list_splice_tail_init(&channel->ld_queue, &channel->ld_free);

Does this leak the active descriptor if the channel is freed without being
explicitly terminated?

The original code moved the active descriptors back to ld_free for cleanup.
With the new code using channel->desc for the active transfer, if a client
releases the channel without calling dmaengine_terminate_sync(),
channel->desc could remain non-NULL.

Since channel->desc is detached from all virt-dma lists and the driver's
ld_free list, neither vchan_free_chan_resources() nor the subsequent kfree
loop over ld_free will free it, preventing channel->descs_allocated from
decrementing correctly.

>  
>  	if (channel->mid_rid >= 0) {
>  		clear_bit(channel->mid_rid, dmac->modules);

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260512121219.216159-1-claudiu.beznea.uj@bp.renesas.com?part=9

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 10/17] dmaengine: sh: rz-dmac: Refactor pause/resume code
  2026-05-12 12:12 ` [PATCH v5 10/17] dmaengine: sh: rz-dmac: Refactor pause/resume code Claudiu Beznea
  2026-05-12 21:43   ` Frank Li
@ 2026-05-14  0:57   ` sashiko-bot
  1 sibling, 0 replies; 52+ messages in thread
From: sashiko-bot @ 2026-05-14  0:57 UTC (permalink / raw)
  To: Claudiu Beznea; +Cc: dmaengine, Frank.Li, vkoul, wsa+renesas

Thank you for your contribution! Sashiko AI review found 2 potential issue(s) to consider:
- [High] Hardware is prematurely resumed when multiple pause sources are active.
- [High] Software status bitmask is incorrectly updated even if the hardware resume operation times out.
--

commit 6c32a478012b8f47e162dfe05b689182e89ba6c2
Author: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>

dmaengine: sh: rz-dmac: Refactor pause/resume code

This commit refactors the pause/resume functionality to introduce _set
variants using a channel->status bitmask. This prepares the driver for
suspend/resume and cyclic DMA support where multiple pause sources might
exist.

> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c
[ ... ]
> @@ -795,35 +811,70 @@ static enum dma_status rz_dmac_tx_status(struct dma_chan *chan,
[ ... ]
> +static int rz_dmac_device_resume_set(struct rz_dmac_chan *channel,
> +				     unsigned long clear_bitmask)
> +{
> +	int ret = 0;
> +	u32 val;
> +
> +	lockdep_assert_held(&channel->vc.lock);
> +
> +	/* Do not check CHSTAT_SUS but rely on HW capabilities. */
> +
> +	rz_dmac_ch_writel(channel, CHCTRL_CLRSUS, CHCTRL, 1);

If there are multiple active pause sources (for example, paused by the DMA
client and by system suspend), will writing CHCTRL_CLRSUS here resume the
hardware prematurely?

Since this function is designed to clear a specific bitmask from the channel
status, should the hardware resume operation only be performed if there are
no other pause bits remaining in channel->status?

> +	ret = read_poll_timeout_atomic(rz_dmac_ch_readl, val,
> +				       !(val & CHSTAT_SUS), 1, 1024, false,
> +				       channel, CHSTAT, 1);
> +
> +	channel->status &= ~clear_bitmask;

If the hardware resume operation times out and returns an error, is it
expected to clear the status bitmask anyway?

If a caller receives an error and attempts to retry the resume operation,
the channel->status bit will already be cleared. Does this prevent the retry
from succeeding, leaving the hardware stuck in a paused state?

> +
> +	return ret;
> +}

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260512121219.216159-1-claudiu.beznea.uj@bp.renesas.com?part=10

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 12/17] dmaengine: sh: rz-dmac: Add cyclic DMA support
  2026-05-12 12:12 ` [PATCH v5 12/17] dmaengine: sh: rz-dmac: Add cyclic DMA support Claudiu Beznea
  2026-05-12 22:00   ` Frank Li
@ 2026-05-14  1:43   ` sashiko-bot
  1 sibling, 0 replies; 52+ messages in thread
From: sashiko-bot @ 2026-05-14  1:43 UTC (permalink / raw)
  To: Claudiu Beznea; +Cc: dmaengine, Frank.Li, wsa+renesas, vkoul

Thank you for your contribution! Sashiko AI review found 3 potential issue(s) to consider:
- [High] Modifying global channel state during cyclic descriptor preparation causes race conditions and state corruption.
- [High] Cyclic residue calculation omits the last descriptor, resulting in under-reported residue.
- [High] Stale hardware configuration fields leak from previous transfers into cyclic transfers.
--

commit 206116040d966fa7859b2dc81e454bdbed7acd43
Author: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>

dmaengine: sh: rz-dmac: Add cyclic DMA support

This commit adds cyclic DMA support to the RZ DMAC driver. A per-channel
status bit is introduced to mark cyclic channels and is set during the DMA
prepare callback.

> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> index 2de519b581b60..d6ad070be705c 100644
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c

[ ... ]

> @@ -431,6 +435,57 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
>  	channel->chctrl = 0;
>  }
>  
> +static void rz_dmac_prepare_descs_for_cyclic(struct rz_dmac_chan *channel)
> +{
> +	struct dma_chan *chan = &channel->vc.chan;
> +	struct rz_dmac *dmac = to_rz_dmac(chan->device);
> +	struct rz_dmac_desc *d = channel->desc;
> +	size_t period_len = d->sgcount;
> +	struct rz_lmdesc *lmdesc;
> +	size_t buf_len = d->len;
> +	size_t periods = buf_len / period_len;
> +
> +	lockdep_assert_held(&channel->vc.lock);
> +
> +	channel->chcfg |= CHCFG_SEL(channel->index) | CHCFG_DMS;

Are stale hardware configuration fields leaking from previous transfers here?

If a channel was previously used for a memory-to-memory transfer,
channel->chctrl might retain the CHCTRL_STG (Software Trigger Generation) bit.
If it was used for slave_sg, channel->chcfg might retain the CHCFG_DEM
(Descriptor End Mask) bit.

Because channel->chcfg is modified via OR-assignment and channel->chctrl
is not initialized to 0, could the stale CHCTRL_STG bit cause the peripheral
transfer to continuously trigger via software rather than synchronizing with
the hardware DREQ signal? Additionally, could a stale CHCFG_DEM bit mask
every period interrupt and prevent vchan_cyclic_callback() from being invoked?

[ ... ]

> @@ -586,6 +645,55 @@ rz_dmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
>  	return vchan_tx_prep(&channel->vc, &desc->vd, flags);
>  }
>  
> +static struct dma_async_tx_descriptor *
> +rz_dmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
> +			size_t buf_len, size_t period_len,
> +			enum dma_transfer_direction direction,
> +			unsigned long flags)
> +{
> +	struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
> +	struct rz_dmac_desc *desc;
> +	size_t periods;
> +
> +	if (!is_slave_direction(direction))
> +		return NULL;
> +
> +	if (!period_len || !buf_len)
> +		return NULL;
> +
> +	periods = buf_len / period_len;
> +	if (!periods || periods > DMAC_NR_LMDESC)
> +		return NULL;
> +
> +	scoped_guard(spinlock_irqsave, &channel->vc.lock) {
> +		if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC))
> +			return NULL;
> +
> +		desc = list_first_entry_or_null(&channel->ld_free, struct rz_dmac_desc, node);
> +		if (!desc)
> +			return NULL;
> +
> +		list_del(&desc->node);
> +
> +		channel->status |= BIT(RZ_DMAC_CHAN_STATUS_CYCLIC);

Can this global channel state modification during descriptor preparation cause
race conditions?

The DMA engine API permits descriptors to be prepared concurrently with active
transfers, or to be freed without being issued. If a slave_sg transfer is
running when prep_dma_cyclic() is called, the global channel status immediately
becomes CYCLIC.

When the active slave_sg transfer triggers an interrupt, wouldn't
rz_dmac_irq_handler_thread() see this flag and incorrectly execute the cyclic
path (vchan_cyclic_callback) instead of vchan_cookie_complete()?

Would it be safer to determine the execution mode by checking the currently
active descriptor's type (e.g., desc->type == RZ_DMAC_DESC_CYCLIC) instead of
updating a global channel flag during preparation?

[ ... ]

> @@ -733,9 +841,18 @@ static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel,
>  	}
>  
>  	/* Calculate residue from next lmdesc to end of virtual desc */
> -	while (lmdesc->chcfg & CHCFG_DEM) {
> -		residue += lmdesc->tb;
> -		lmdesc = rz_dmac_get_next_lmdesc(channel->lmdesc.base, lmdesc);
> +	if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)) {
> +		u32 start_lmdesc_addr = rz_dmac_lmdesc_addr(channel, desc->start_lmdesc);
> +
> +		while (lmdesc->nxla != start_lmdesc_addr) {

Does this omit the residue from the last descriptor in the cyclic ring?

The last descriptor is configured such that its nxla field points to
start_lmdesc_addr in order to loop back. Because the loop condition evaluates
to false exactly when lmdesc reaches this last descriptor, it terminates
before adding lmdesc->tb (the transfer byte count) to the residue.

Could this cause the residue to be permanently under-reported by one full
period length, affecting subsystems like ALSA that rely on strict monotonic
residue reporting?

> +			residue += lmdesc->tb;
> +			lmdesc = rz_dmac_get_next_lmdesc(channel->lmdesc.base, lmdesc);
> +		}
> +	} else {
> +		while (lmdesc->chcfg & CHCFG_DEM) {
> +			residue += lmdesc->tb;
> +			lmdesc = rz_dmac_get_next_lmdesc(channel->lmdesc.base, lmdesc);
> +		}
>  	}

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260512121219.216159-1-claudiu.beznea.uj@bp.renesas.com?part=12

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 13/17] dmaengine: sh: rz-dmac: Add runtime PM support
  2026-05-12 12:12 ` [PATCH v5 13/17] dmaengine: sh: rz-dmac: Add runtime PM support Claudiu Beznea
  2026-05-12 22:03   ` Frank Li
@ 2026-05-14  2:08   ` sashiko-bot
  1 sibling, 0 replies; 52+ messages in thread
From: sashiko-bot @ 2026-05-14  2:08 UTC (permalink / raw)
  To: Claudiu Beznea; +Cc: Frank.Li, vkoul, dmaengine, wsa+renesas

Thank you for your contribution! Sashiko AI review found 4 potential issue(s) to consider:
- [High] Early returns on runtime PM failure bypass critical software cleanup, causing resource leaks and Use-After-Free
- [High] `rz_dmac_tx_status()` returns a negative integer instead of a valid `enum dma_status`
- [Medium] Silent DMA queue stall in `rz_dmac_xfer_desc()` on PM failure
- [Medium] `rz_dmac_tx_status()` incurs unnecessary PM locking overhead for software-only status queries
--

commit 537bc713380d43d1dfa0d61269accb53a834436d
Author: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>

dmaengine: sh: rz-dmac: Add runtime PM support

This commit protects driver-exposed APIs with runtime PM resume/suspend
calls around HW register accesses. The changes avoid touching HW
registers after a failed system suspend has left the runtime PM state
improperly reinitialized, preparing the driver for suspend-to-RAM support.

> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> index d6ad070be705c..df91657fd5e3d 100644
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c
> @@ -488,7 +488,15 @@ static void rz_dmac_prepare_descs_for_cyclic(struct rz_dmac_chan *channel)
>  
>  static void rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
>  {
> +	struct dma_chan *ch = &chan->vc.chan;
> +	struct rz_dmac *dmac = to_rz_dmac(ch->device);
>  	struct virt_dma_desc *vd;
> +	int ret;
> +
> +	PM_RUNTIME_ACQUIRE_IF_ENABLED(dmac->dev, pm);
> +	ret = PM_RUNTIME_ACQUIRE_ERR(&pm);
> +	if (ret)
> +		return;

Does this return early without pulling the descriptor from the queue? If PM
acquisition fails, it seems the descriptor is left permanently stranded on the
virtual channel's issued list, and the channel queue could silently stall.

[ ... ]

> @@ -549,6 +557,12 @@ static void rz_dmac_free_chan_resources(struct dma_chan *chan)
>  	struct rz_dmac *dmac = to_rz_dmac(chan->device);
>  	struct rz_dmac_desc *desc, *_desc;
>  	unsigned long flags;
> +	int ret;
> +
> +	PM_RUNTIME_ACQUIRE_IF_ENABLED(dmac->dev, pm);
> +	ret = PM_RUNTIME_ACQUIRE_ERR(&pm);
> +	if (ret)
> +		return;

If the runtime PM acquisition fails, does returning early bypass all subsequent
software state cleanup like vchan_free_chan_resources() and freeing allocated
DMA descriptors? Even if the hardware cannot be accessed, shouldn't the
software-allocated resources still be freed to avoid memory leaks?

[ ... ]

> @@ -697,8 +711,15 @@ rz_dmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
>  static int rz_dmac_terminate_all(struct dma_chan *chan)
>  {
>  	struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
> +	struct rz_dmac *dmac = to_rz_dmac(chan->device);
>  	unsigned long flags;
>  	LIST_HEAD(head);
> +	int ret;
> +
> +	PM_RUNTIME_ACQUIRE_IF_ENABLED(dmac->dev, pm);
> +	ret = PM_RUNTIME_ACQUIRE_ERR(&pm);
> +	if (ret)
> +		return ret;

Similarly, does returning early here skip clearing the virt-dma lists? Failing
to do so could leave orphaned descriptors active, which might lead to
use-after-free issues when the client subsequently frees its DMA buffers.

[ ... ]

> @@ -909,8 +935,15 @@ static enum dma_status rz_dmac_tx_status(struct dma_chan *chan,
>  					 struct dma_tx_state *txstate)
>  {
>  	struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
> +	struct rz_dmac *dmac = to_rz_dmac(chan->device);
>  	enum dma_status status;
>  	u32 residue;
> +	int ret;
> +
> +	PM_RUNTIME_ACQUIRE_IF_ENABLED(dmac->dev, pm);
> +	ret = PM_RUNTIME_ACQUIRE_ERR(&pm);
> +	if (ret)
> +		return ret;

Is it valid to return a negative integer error code directly as an
enum dma_status? Client drivers might check the status against specific
enumeration constants like DMA_ERROR, and could fail to recognize this
raw error code.

Also, does this add unnecessary PM locking overhead on the hot path for
software-only status queries? Since dma_cookie_status() might resolve the
status entirely in software, and hardware residue is only queried if the
status is DMA_IN_PROGRESS and txstate is not NULL, could the PM acquisition
be deferred until after verifying that reading the hardware residue is
actually necessary?

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260512121219.216159-1-claudiu.beznea.uj@bp.renesas.com?part=13

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 14/17] dmaengine: sh: rz-dmac: Add suspend to RAM support
  2026-05-12 12:12 ` [PATCH v5 14/17] dmaengine: sh: rz-dmac: Add suspend to RAM support Claudiu Beznea
@ 2026-05-14  3:04   ` sashiko-bot
  0 siblings, 0 replies; 52+ messages in thread
From: sashiko-bot @ 2026-05-14  3:04 UTC (permalink / raw)
  To: Claudiu Beznea; +Cc: vkoul, dmaengine, Frank.Li, wsa+renesas

Thank you for your contribution! Sashiko AI review found 6 potential issue(s) to consider:
- [High] Returning -EAGAIN from `prepare` callback causes PM core busy-wait livelock
- [High] Race condition with non-cyclic channels during system suspend
- [High] Spurious timeouts on resume for short non-cyclic DMA transfers
- [High] Data loss for cyclic channels due to incomplete context restoration
- [Low] Cleanup framework violation by mixing goto-based error handling and scope-based guard()
- [Low] PM callback conditional compilation subsystem guide violation
--

commit 536ec6e9c07fa47423de7d20ebc829e45cbcedfb
Author: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
dmaengine: sh: rz-dmac: Add suspend to RAM support

This commit adds suspend to RAM support for the Renesas RZ/G3S SoC by
saving and restoring the DMA IP registers. It adds specific suspend and
resume handling for cyclic channels, and waits for non-cyclic channels
to finish before suspending.

> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> index df91657fd5e3d..b0a8590f4a9da 100644
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c
[ ... ]
> @@ -1002,20 +1008,57 @@ static int rz_dmac_device_pause(struct dma_chan *chan)
>  	return rz_dmac_device_pause_set(channel, BIT(RZ_DMAC_CHAN_STATUS_PAUSED));
>  }
>  
[ ... ]
>  static int rz_dmac_device_resume_set(struct rz_dmac_chan *channel,
>  				     unsigned long clear_bitmask)
>  {
> -	int ret = 0;
>  	u32 val;
> +	int ret;
>  
>  	lockdep_assert_held(&channel->vc.lock);
>  
[ ... ]
> +	rz_dmac_ch_writel(channel, CHCTRL_CLRSUS | CHCTRL_SETEN, CHCTRL, 1);
>  
> -	rz_dmac_ch_writel(channel, CHCTRL_CLRSUS, CHCTRL, 1);
>  	ret = read_poll_timeout_atomic(rz_dmac_ch_readl, val,
> -				       !(val & CHSTAT_SUS), 1, 1024, false,
> -				       channel, CHSTAT, 1);
> +				       ((val & (CHSTAT_SUS | CHSTAT_EN)) == CHSTAT_EN),
> +				       1, 1024, false, channel, CHSTAT, 1);

Can this condition time out spuriously for short non-cyclic DMA transfers? If
a transfer finishes successfully before the first poll read, the hardware
would clear CHSTAT_EN to 0. The polling condition would then evaluate to
false, causing the macro to incorrectly return -ETIMEDOUT.

>  
>  	channel->status &= ~clear_bitmask;
>  
[ ... ]
> @@ -1406,6 +1459,151 @@ static void rz_dmac_remove(struct platform_device *pdev)
>  	pm_runtime_disable(&pdev->dev);
>  }
>  
> +static int rz_dmac_suspend_prepare(struct device *dev)
> +{
> +	struct rz_dmac *dmac = dev_get_drvdata(dev);
> +	int ret;
> +
> +	PM_RUNTIME_ACQUIRE_IF_ENABLED(dmac->dev, pm);
> +	ret = PM_RUNTIME_ACQUIRE_ERR(&pm);
> +	if (ret)
> +		return ret;
> +
> +	for (unsigned int i = 0; i < dmac->n_channels; i++) {
> +		struct rz_dmac_chan *channel = &dmac->channels[i];
> +
> +		guard(spinlock_irqsave)(&channel->vc.lock);
> +
> +		/* Wait for transfer completion, except in cyclic case. */
> +		if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC))
> +			continue;
> +
> +		if (rz_dmac_chan_is_enabled(channel))
> +			return -EAGAIN;

Could returning -EAGAIN here cause a CPU livelock? In dpm_prepare(), an
-EAGAIN error is cleared and retried continuously in a while
(!list_empty(&dpm_list)) loop. Since the device is not advanced on error,
the PM core might loop infinitely without any backoff until the transfer
completes.

Also, is there a race condition with non-cyclic channels after this callback
returns 0? Since the DMA controller's prepare callback runs before consumer
peripherals, an active consumer could immediately submit new non-cyclic
transfers before the actual suspend phase begins.

> +	}
> +
> +	return 0;
> +}
> +
[ ... ]
> +static int rz_dmac_suspend(struct device *dev)
> +{
> +	struct rz_dmac *dmac = dev_get_drvdata(dev);
> +	int ret = 0;
> +
> +	for (unsigned int i = 0; i < dmac->n_channels; i++) {
> +		struct rz_dmac_chan *channel = &dmac->channels[i];
> +
> +		guard(spinlock_irqsave)(&channel->vc.lock);

This isn't a bug, but does this function mix scope-based guard() cleanup
with goto-based error handling? The cleanup guidelines suggest that
functions should either convert all resources to scope-based cleanup or
convert none of them to avoid confusing ownership semantics.

> +
> +		if (!(channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)))
> +			continue;
> +
> +		ret = rz_dmac_device_pause_internal(channel);
> +		if (ret) {
> +			dev_err(dev, "Failed to suspend channel %s\n",
> +				dma_chan_name(&channel->vc.chan));
> +			break;
> +		}
> +
> +		channel->pm_state.nxla = rz_dmac_ch_readl(channel, NXLA, 1);

Does saving only the NXLA register result in data loss for cyclic channels?
When a cyclic channel is paused, it stops at its current descriptor. Without
saving and restoring the Current Link Address and Current Transfer Byte
count, resuming the channel might force it to skip the remaining bytes of the
interrupted descriptor and start fetching the next one.

> +	}
> +
> +	if (ret)
> +		goto suspend_recover;
> +
> +	ret = reset_control_assert(dmac->rstc);

Since non-cyclic channels are unconditionally skipped in the loop above and
the hardware reset is asserted here, could newly queued non-cyclic requests
from consumers be forcibly interrupted mid-transfer?

> +	if (ret)
> +		goto suspend_recover;
> +
> +	ret = pm_runtime_put_sync(dev);
> +	if (ret < 0)
> +		goto reset_deassert;
> +
> +	return 0;
> +
> +reset_deassert:
> +	reset_control_deassert(dmac->rstc);
> +suspend_recover:
> +	rz_dmac_suspend_recover(dmac);
> +	return ret;
> +}
> +
[ ... ]
> +static const struct dev_pm_ops rz_dmac_pm_ops = {
> +	.prepare = rz_dmac_suspend_prepare,

This isn't a bug, but should rz_dmac_suspend_prepare be wrapped in
pm_sleep_ptr() here? The power management subsystem guidelines recommend
using pm_sleep_ptr() for individual callbacks within the ops structure to
prevent dead code and unused variable warnings when CONFIG_PM_SLEEP is
disabled.

> +	SYSTEM_SLEEP_PM_OPS(rz_dmac_suspend, rz_dmac_resume)
> +};
> +
>  static const struct rz_dmac_info rz_dmac_v2h_info = {
>  	.icu_register_dma_req = rzv2h_icu_register_dma_req,
>  	.default_dma_req_no = RZV2H_ICU_DMAC_REQ_NO_DEFAULT,
> @@ -1432,6 +1630,7 @@ static struct platform_driver rz_dmac_driver = {
>  	.driver		= {
>  		.name	= "rz-dmac",
>  		.of_match_table = of_rz_dmac_match,
> +		.pm	= pm_sleep_ptr(&rz_dmac_pm_ops),

This isn't a bug, but should pm_ptr() be used for the dev_pm_ops structure
pointer instead of pm_sleep_ptr()? The power management guidelines recommend
using pm_ptr() for the structure pointer, while reserving pm_sleep_ptr() for
the individual sleep callbacks inside the structure.

>  	},
>  	.probe		= rz_dmac_probe,
>  	.remove		= rz_dmac_remove,

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260512121219.216159-1-claudiu.beznea.uj@bp.renesas.com?part=14

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 15/17] ASoC: renesas: rz-ssi: Add pause support
  2026-05-12 12:12 ` [PATCH v5 15/17] ASoC: renesas: rz-ssi: Add pause support Claudiu Beznea
@ 2026-05-14  3:54   ` sashiko-bot
  0 siblings, 0 replies; 52+ messages in thread
From: sashiko-bot @ 2026-05-14  3:54 UTC (permalink / raw)
  To: Claudiu Beznea; +Cc: wsa+renesas, dmaengine, vkoul, Frank.Li

Thank you for your contribution! Sashiko AI review found 7 potential issue(s) to consider:
- [Critical] Out-of-bounds DMA Access on Resume due to missing bounds check
- [Critical] Catastrophic DMA release on active channels during PAUSE_RELEASE failure
- [Critical] Use-After-Free during STOP/PAUSE in Full-Duplex mode
- [High] Full-Duplex Auto-Restart Loop on Pause
- [High] Data Loss on Resume due to skipped period
- [High] Race condition on asynchronous DMA termination
- [High] Hardware registers modified without lock protection
--

commit e6b2a93963fa9285cbb11812697caca72ab79258
Author: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>

ASoC: renesas: rz-ssi: Add pause support

Add pause support as a preparatory step to switch to PCM dmaengine APIs.

> diff --git a/sound/soc/renesas/rz-ssi.c b/sound/soc/renesas/rz-ssi.c
> index 71e434cfe07b9..d4e1dded3a9c1 100644
> --- a/sound/soc/renesas/rz-ssi.c
> +++ b/sound/soc/renesas/rz-ssi.c
> @@ -847,6 +847,7 @@ static int rz_ssi_dai_trigger(struct snd_pcm_substream *substream, int cmd,
>  
>  	switch (cmd) {
>  	case SNDRV_PCM_TRIGGER_RESUME:
> +	case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
>  		ret = rz_ssi_trigger_resume(ssi, strm);

If a pause is triggered right when strm->completed_dma_buf_pos is at
runtime->buffer_size - runtime->period_size, could rz_ssi_trigger_resume()
set strm->dma_buffer_pos exactly to runtime->buffer_size?

rz_ssi_trigger_resume() sets:
    strm->dma_buffer_pos = strm->completed_dma_buf_pos + runtime->period_size;

If this happens, will the next DMA transfer read or write out of bounds past
the end of the allocated ALSA ring buffer?

Also, in half-duplex mode where PAUSE_PUSH successfully aborts the current
period, does adding runtime->period_size here permanently discard the
unplayed/uncaptured period and create a gap in the audio data?

Both of these regressions are noted as fixed later in the series by
commit "ASoC: renesas: rz-ssi: Use generic PCM dmaengine APIs".

Since rz_ssi_stop() returns early in full-duplex mode without terminating
the active DMA channel, what happens if PAUSE_RELEASE is issued before the
active descriptors complete?

Execution will fall through to the SNDRV_PCM_TRIGGER_START case and call
rz_ssi_dma_slave_config() on the still-active DMA channel. If the dmaengine
driver rejects this configuration and returns -EBUSY, the fallback to PIO
will call rz_ssi_release_dma_channels(ssi).

Since rz_ssi_release_dma_channels() unconditionally releases both playback
and capture DMA channels, could this cause a use-after-free or panic if the
opposite stream is actively mid-transfer?

This regression is also fixed later in the series by commit "ASoC: renesas:
rz-ssi: Use generic PCM dmaengine APIs".

In half-duplex mode, PAUSE_PUSH terminates the DMA via
dmaengine_terminate_async(). During PAUSE_RELEASE, is there a
synchronization point (like dmaengine_synchronize()) to ensure the
termination has completed before new descriptors are queued?

If the asynchronous termination is still executing when the new descriptors
are queued, could they be inadvertently purged?

This is noted as fixed later in the series by commit "ASoC: renesas: rz-ssi:
Use generic PCM dmaengine APIs".

>  		if (ret)
>  			return ret;

[ ... ]

> @@ -888,6 +889,7 @@ static int rz_ssi_dai_trigger(struct snd_pcm_substream *substream, int cmd,
>  		break;
>  
>  	case SNDRV_PCM_TRIGGER_SUSPEND:
> +	case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
>  		rz_ssi_stop(ssi, strm);

This isn't a bug introduced by this patch, but adding PAUSE exposes it
further. In rz_ssi_stop(), if one stream is halted but the other is still
running (full-duplex), the function returns early without calling
dmaengine_terminate_async().

If the ALSA subsystem subsequently closes the stream and frees the ring
buffer while the DMA channel continues executing already-queued descriptors,
could this lead to a use-after-free?

This is fixed later in the series by commit "ASoC: renesas: rz-ssi: Use
generic PCM dmaengine APIs".

In full-duplex mode, pausing one stream skips disabling the hardware and does
not terminate the DMA channel. When the DMA channel eventually starves the
hardware FIFO, it triggers an underflow/overflow interrupt.

Since the interrupt handler unconditionally queues new descriptors and calls
rz_ssi_start() (which sets strm->running = 1), will this completely override
the ALSA PAUSE state and allow the stream to continue processing buffers in
an endless interrupt storm?

This logic flaw survives the next patch "ASoC: renesas: rz-ssi: Use generic
PCM dmaengine APIs".

Also not a bug introduced by this patch, but adding PAUSE exposes more paths
invoking these functions. rz_ssi_start() and rz_ssi_stop() perform non-atomic
read-modify-write operations on shared hardware registers (SSICR and SSIFCR)
without acquiring ssi->lock.

If playback and capture triggers happen simultaneously on different CPUs,
could race conditions cause both CPUs to read stale state and fail to enable
SSICR_TEN | SSICR_REN?

This race condition survives the next commit "ASoC: renesas: rz-ssi: Use
generic PCM dmaengine APIs".

>  		break;

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260512121219.216159-1-claudiu.beznea.uj@bp.renesas.com?part=15

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 16/17] ASoC: renesas: rz-ssi: Use generic PCM dmaengine APIs
  2026-05-12 12:12 ` [PATCH v5 16/17] ASoC: renesas: rz-ssi: Use generic PCM dmaengine APIs Claudiu Beznea
@ 2026-05-14  4:52   ` sashiko-bot
  0 siblings, 0 replies; 52+ messages in thread
From: sashiko-bot @ 2026-05-14  4:52 UTC (permalink / raw)
  To: Claudiu Beznea; +Cc: wsa+renesas, Frank.Li, vkoul, dmaengine

Thank you for your contribution! Sashiko AI review found 5 potential issue(s) to consider:
- [Critical] Concurrency data race in rz_ssi_interrupt() leads to NULL pointer dereference.
- [High] Error recovery logic leaves DMA channels permanently paused if a stream is stopped.
- [High] Incomplete state cleanup in full-duplex mode causes permanent hardware stall of active streams.
- [Medium] Deletion of the remove callback causes hardware state and reset leak on driver unbind.
- [Medium] dmaengine_resume() is called on unpaused DMA channels during full-duplex error recovery, violating DMA API contracts.
--

commit 3df78c9407acc2ae86671be88fbd3bf33a9d7d4f
Author: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>

ASoC: renesas: rz-ssi: Use generic PCM dmaengine APIs

This commit migrates the Renesas RZ/G2L and RZ/G3S SSI driver to use generic
PCM dmaengine APIs instead of custom DMA handling. It also removes the
explicit remove callback, increases buffer sizes to reduce underruns, and
simplifies driver logic.

> diff --git a/sound/soc/renesas/rz-ssi.c b/sound/soc/renesas/rz-ssi.c
> index d4e1dded3a9c1..5406f9a3547f6 100644
> --- a/sound/soc/renesas/rz-ssi.c
> +++ b/sound/soc/renesas/rz-ssi.c
[ ... ]
> @@ -606,12 +591,6 @@ static irqreturn_t rz_ssi_interrupt(int irq, void *data)
>  	if (irq == ssi->irq_int) { /* error or idle */
>  		bool is_stopped = !!(ssisr & (SSISR_RUIRQ | SSISR_ROIRQ |
>  					      SSISR_TUIRQ | SSISR_TOIRQ));
>  
>  		if (ssi->capture.substream && is_stopped) {
>  			if (ssisr & SSISR_RUIRQ)
>  				strm_capture->uerr_num++;
>  			if (ssisr & SSISR_ROIRQ)
>  				strm_capture->oerr_num++;
>  
>  			rz_ssi_stop(ssi, strm_capture);
>  		}
>  
>  		if (ssi->playback.substream && is_stopped) {
>  			if (ssisr & SSISR_TUIRQ)
>  				strm_playback->uerr_num++;
>  			if (ssisr & SSISR_TOIRQ)
>  				strm_playback->oerr_num++;
>  
>  			rz_ssi_stop(ssi, strm_playback);
>  		}
>  
> +		if (!rz_ssi_is_stream_running(&ssi->playback) &&
> +		    !rz_ssi_is_stream_running(&ssi->capture) &&
> +		    rz_ssi_is_dma_enabled(ssi)) {
> +			if (ssi->dmas[SNDRV_PCM_STREAM_PLAYBACK])
> +				dmaengine_pause(ssi->dmas[SNDRV_PCM_STREAM_PLAYBACK]);
> +			if (ssi->dmas[SNDRV_PCM_STREAM_CAPTURE] &&
> +			    /* Avoid calling pause twice in case of half duplex. */
> +			    ssi->dmas[SNDRV_PCM_STREAM_PLAYBACK] !=
> +			    ssi->dmas[SNDRV_PCM_STREAM_CAPTURE])
> +				dmaengine_pause(ssi->dmas[SNDRV_PCM_STREAM_CAPTURE]);
> +		}
> +
>  		/* Clear all flags */
>  		rz_ssi_reg_mask_setl(ssi, SSISR, SSISR_TOIRQ | SSISR_TUIRQ |
>  				     SSISR_ROIRQ | SSISR_RUIRQ, 0);
>  
>  		/* Add/remove more data */
>  		if (ssi->capture.substream && is_stopped) {
> -			for (i = 0; i < count; i++)
> +			if (rz_ssi_is_dma_enabled(ssi))
> +				dmaengine_resume(ssi->dmas[SNDRV_PCM_STREAM_CAPTURE]);

Can this sequence lead to a NULL pointer dereference?

When a stream is stopped and closed, ALSA calls rz_ssi_dai_trigger(STOP)
(which clears the substream pointer) followed by rz_ssi_shutdown(), which
clears ssi->dmas[stream]:

sound/soc/renesas/rz-ssi.c:rz_ssi_shutdown() {
    ...
    ssi->dmas[substream->stream] = NULL;
}

If an in-flight interrupt handler is executing concurrently, the driver
disables device-level IRQs but never calls synchronize_irq(). The handler
could observe a non-NULL substream, but by the time it calls
dmaengine_resume(ssi->dmas[stream]), the teardown code on another CPU
could have nullified the pointer.

Can this error recovery logic leave the DMA channel permanently paused?

During an under/overrun error, the code above calls dmaengine_pause() on
both channels if neither stream is running. However, dmaengine_resume()
is only conditionally called if substream is non-NULL.

If one stream was intentionally stopped by the user (meaning its substream
is NULL), its DMA channel is left indefinitely paused. Since the ALSA
generic dmaengine PCM framework caches and reuses DMA channels, the channel
will remain paused on the next trigger(START), resulting in a silent audio
failure.

Does calling dmaengine_resume() here violate the DMA API contract?

In full-duplex mode, if an underrun/overrun occurs on one stream, the check
for !rz_ssi_is_stream_running() for both streams evaluates to false, so
dmaengine_pause() is completely skipped.

However, dmaengine_resume() is unconditionally called for the failing
stream. Calling resume on a channel that is currently active and was never
paused could cause state machine corruption in the DMA controller driver.

>  		}
>  
>  		if (ssi->playback.substream && is_stopped) {
> -			for (i = 0; i < count; i++)
> +			if (rz_ssi_is_dma_enabled(ssi))
> +				dmaengine_resume(ssi->dmas[SNDRV_PCM_STREAM_PLAYBACK]);
>  		}
>  
>  		/* Resume */
>  		if (ssi->playback.substream && is_stopped)
>  			rz_ssi_start(ssi, &ssi->playback);
>  		if (ssi->capture.substream && is_stopped)
>  			rz_ssi_start(ssi, &ssi->capture);
>  	}

This isn't a bug introduced by this patch, but does the full-duplex error
recovery code cause a permanent hardware stall?

When operating in full-duplex mode, stopping one stream triggers an
underrun/overrun interrupt. The interrupt handler calls rz_ssi_stop() for
the remaining active stream and then rz_ssi_start() to recover it.

Because tx_active and rx_active remain true, rz_ssi_start() enters the
one_stream_triggered branch and sets it to true, without re-enabling the
SSICR_TEN and SSICR_REN hardware bits. The surviving stream is left
permanently stalled in hardware until both streams are closed.

[ ... ]
> -static void rz_ssi_remove(struct platform_device *pdev)
> -{
> -	struct rz_ssi_priv *ssi = dev_get_drvdata(&pdev->dev);
> -
> -	rz_ssi_release_dma_channels(ssi);
> -
> -	reset_control_assert(ssi->rstc);
> -}

Does dropping the remove callback leak the hardware state and reset line on
driver unbind?

The driver relies on devm_pm_runtime_enable() to handle PM cleanup.
However, when the driver unbinds, pm_runtime_disable() cancels any pending
asynchronous suspends but does not force the device into a suspended state
if it was currently RPM_ACTIVE.

Consequently, rz_ssi_runtime_suspend() is bypassed, the reset line is never
asserted, and the PM domain clocks may remain enabled.

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260512121219.216159-1-claudiu.beznea.uj@bp.renesas.com?part=16

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 17/17] dmaengine: sh: rz-dmac: Set the Link End (LE) bit on the last descriptor
  2026-05-12 12:12 ` [PATCH v5 17/17] dmaengine: sh: rz-dmac: Set the Link End (LE) bit on the last descriptor Claudiu Beznea
@ 2026-05-14  5:22   ` sashiko-bot
  0 siblings, 0 replies; 52+ messages in thread
From: sashiko-bot @ 2026-05-14  5:22 UTC (permalink / raw)
  To: Claudiu Beznea; +Cc: vkoul, Frank.Li, dmaengine, wsa+renesas

Thank you for your contribution! Sashiko AI review found 1 potential issue(s) to consider:
- [High] Missing bounds check on `sg_len` in `rz_dmac_prep_slave_sg()` leading to hardware descriptor ring overwrite and silent data corruption.
--

commit a98e7dbe1a1cc544a92a491414810a97272d12ae
Author: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>

dmaengine: sh: rz-dmac: Set the Link End (LE) bit on the last descriptor

This commit modifies the RZ/G2L DMA driver to set the Link End (LE) bit
on the last descriptor of a transfer. This informs the DMA engine that
the transfer is finished, preventing it from loading invalid descriptors
and halting without raising an interrupt.

> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c

[ ... ]

> @@ -425,7 +426,7 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
>  		lmdesc->chext = 0;
>  		if (i == (sg_len - 1)) {
>  			lmdesc->chcfg = (channel->chcfg & ~CHCFG_DEM);
> -			lmdesc->header = HEADER_LV;
> +			lmdesc->header = HEADER_LV | HEADER_LE;
>  		} else {
>  			lmdesc->chcfg = channel->chcfg;
>  			lmdesc->header = HEADER_LV;

Does this code safely handle scatterlists that are larger than the hardware
descriptor ring?

If a DMA client submits a scatterlist with more than DMAC_NR_LMDESC (64)
segments, the initialization loop wraps around and overwrites the beginning
of the descriptor ring:

    if (++lmdesc >= (channel->lmdesc.base + DMAC_NR_LMDESC))
            lmdesc = channel->lmdesc.base;

Because this update unconditionally applies the HEADER_LE bit to the final
segment (i == sg_len - 1), an oversized scatterlist would cause the last
overwritten descriptor to receive the HEADER_LE bit.

When the DMA engine starts, it executes the first few overwritten segments
and prematurely halts when it encounters the misplaced Link End bit, dropping
the rest of the segments without raising an error.

Should rz_dmac_prep_slave_sg() check for sg_len > DMAC_NR_LMDESC and return
NULL to avoid data corruption, similar to how rz_dmac_prep_dma_cyclic()
explicitly checks if periods is greater than DMAC_NR_LMDESC?

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260512121219.216159-1-claudiu.beznea.uj@bp.renesas.com?part=17

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v5 13/17] dmaengine: sh: rz-dmac: Add runtime PM support
  2026-05-13 19:56       ` Frank Li
@ 2026-05-14  9:20         ` Claudiu Beznea
  0 siblings, 0 replies; 52+ messages in thread
From: Claudiu Beznea @ 2026-05-14  9:20 UTC (permalink / raw)
  To: Frank Li
  Cc: Claudiu Beznea, vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai,
	biju.das.jz, prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
	fabrizio.castro.jz, kuninori.morimoto.gx, long.luu.ur, dmaengine,
	linux-kernel, linux-sound, linux-renesas-soc

Hi, Frank,

On 5/13/26 22:56, Frank Li wrote:
> On Wed, May 13, 2026 at 04:39:12PM +0300, Claudiu Beznea wrote:
>> Hi, Frank,
>>
>> On 5/13/26 01:03, Frank Li wrote:
>>> On Tue, May 12, 2026 at 03:12:14PM +0300, Claudiu Beznea wrote:
>>>> Protect the driver exposed APIs with runtime PM suspend/resume calls
>>>> before accessing HW registers. As the current driver leaves runtime PM
>>>> enabled in probe, the purpose of the changes in this patch is to avoid
>>>> accessing HW registers after a failed system suspend leaves the runtime
>>>> PM state of the device improperly reinitialized.
>>>>
>>>> In that case, the driver remains bound to the device, the APIs are still
>>>> exposed, and any access to HW registers without runtime resuming the
>>>> device may lead to synchronous aborts.
>>>>
>>>> This patch prepares the driver for suspend-to-RAM support.
>>>>
>>>> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
>>>> ---
>>>>
>>>> Changes in v5:
>>>> - none, this patch is new
>>>>
>>>>    drivers/dma/sh/rz-dmac.c | 48 ++++++++++++++++++++++++++++++++++++++++
>>>>    1 file changed, 48 insertions(+)
>>>>
>>>> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
>>>> index d6ad070be705..df91657fd5e3 100644
>>>> --- a/drivers/dma/sh/rz-dmac.c
>>>> +++ b/drivers/dma/sh/rz-dmac.c
>>>> @@ -488,7 +488,15 @@ static void rz_dmac_prepare_descs_for_cyclic(struct rz_dmac_chan *channel)
>>>>
>>>>    static void rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
>>>>    {
>>>> +	struct dma_chan *ch = &chan->vc.chan;
>>>> +	struct rz_dmac *dmac = to_rz_dmac(ch->device);
>>>>    	struct virt_dma_desc *vd;
>>>> +	int ret;
>>>> +
>>>> +	PM_RUNTIME_ACQUIRE_IF_ENABLED(dmac->dev, pm);
>>>> +	ret = PM_RUNTIME_ACQUIRE_ERR(&pm);
>>>> +	if (ret)
>>>> +		return;
>>>
>>> According vnod comment *_prep() call may be called in atomic context
>>> (complete callback). but runtime_pm may sleep.
>>
>> That's why the pm_runtime_irq_safe() was called in probe, to allow it being
>> called in atomic context.
>>
>> The series was tested with CONFIG_LOCKDEP=y and CONFIG_DEBUG_ATOMIC_SLEEP=y
>> no issue was identified.
> 
> I am not sure how magic it makes pm_runtime_get_sync() work under atomic
> context, suppose runtime callback involve clk_(un)prep() and power domain,
> if you call pm_runtime_irq_safe() in probe, it may makes all parent resource
> on when probe. At least it should defer to alloc chan.

The rz-dmac driver is used on platforms using drivers/clk/renesas/rzg2l-cpg.c 
and drivers/clk/renesas/rzv2h-cpg.c clock drivers.

All the platforms that support the rz-dmac driver register always-on clock 
power domains through the above mentioned clock drivers, see [1], [2].

The genpds registered by those drivers pass the GENPD_FLAG_PM_CLK flag to 
pm_genpd_init(). In that case the start/stop APIs of the genpd are 
pm_clk_suspend()/pm_clk_resume() [3].

The clocks of the rz-dmac driver are module clocks, so they are handled by [4] 
or [5], which both implement the enable and disable APIs; thus 
prepare/unprepare is not going to be called through the 
pm_clk_suspend()/pm_clk_resume() functions.

Also, there is no sleep in the enable/disable APIs of those clocks.

Since the registered clock power domain is always on and the rz-dmac is marked 
as IRQ safe, genpd_lock()/genpd_unlock() (and 
genpd_power_off()/genpd_power_on()) will not be called for the rz-dmac driver. 
Anyhow, rzg2l-cpg.c and rzv2h-cpg.c do not implement any power on/off APIs 
(they are always-on power domains).

The irq_safe_dev_in_sleep_domain() checks in 
genpd_runtime_suspend()/genpd_runtime_resume() anyway block any access to 
genpd_lock()/genpd_unlock() (which may perform mutex operations through the 
genpd_mtx_ops APIs) for the rz-dmac (due to the pm_runtime_irq_safe() call in 
probe).

Also, the documentation of the pm_runtime_* APIs mentions (e.g. [6]):

* This routine may be called in atomic context if the RPM_ASYNC flag is set,
* or if pm_runtime_irq_safe() has been called.

Due to all these, I consider we are safe with this approach.

Thank you,
Claudiu

[1] https://elixir.bootlin.com/linux/v7.1-rc3/source/drivers/clk/renesas/rzg2l-cpg.c#L2013
[2] https://elixir.bootlin.com/linux/v7.1-rc3/source/drivers/clk/renesas/rzv2h-cpg.c#L1549
[3] https://elixir.bootlin.com/linux/v7.1-rc3/source/drivers/pmdomain/core.c#L2439
[4] https://elixir.bootlin.com/linux/v7.1-rc3/source/drivers/clk/renesas/rzg2l-cpg.c#L1560
[5] https://elixir.bootlin.com/linux/v7.1-rc3/source/drivers/clk/renesas/rzv2h-cpg.c#L1243
[6] https://elixir.bootlin.com/linux/v7.1-rc3/source/drivers/base/power/runtime.c#L1147


^ permalink raw reply	[flat|nested] 52+ messages in thread

end of thread, other threads:[~2026-05-14  9:20 UTC | newest]

Thread overview: 52+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-05-12 12:12 [PATCH v5 00/17] Renesas: dmaengine and ASoC fixes Claudiu Beznea
2026-05-12 12:12 ` [PATCH v5 01/17] dmaengine: sh: rz-dmac: Move interrupt request after everything is set up Claudiu Beznea
2026-05-12 20:28   ` Frank Li
2026-05-13 21:44   ` sashiko-bot
2026-05-12 12:12 ` [PATCH v5 02/17] dmaengine: sh: rz-dmac: Fix incorrect NULL check on list_first_entry() Claudiu Beznea
2026-05-12 20:35   ` Frank Li
2026-05-13 13:31     ` Claudiu Beznea
2026-05-13 22:00   ` sashiko-bot
2026-05-12 12:12 ` [PATCH v5 03/17] dmaengine: sh: rz-dmac: Use list_first_entry_or_null() Claudiu Beznea
2026-05-12 20:38   ` Frank Li
2026-05-13 22:18   ` sashiko-bot
2026-05-12 12:12 ` [PATCH v5 04/17] dmaengine: sh: rz-dmac: Use rz_dmac_disable_hw() Claudiu Beznea
2026-05-12 20:42   ` Frank Li
2026-05-12 12:12 ` [PATCH v5 05/17] dmaengine: sh: rz-dmac: Add helper to compute the lmdesc address Claudiu Beznea
2026-05-12 20:44   ` Frank Li
2026-05-12 12:12 ` [PATCH v5 06/17] dmaengine: sh: rz-dmac: Save the start LM descriptor Claudiu Beznea
2026-05-12 20:48   ` Frank Li
2026-05-13 13:33     ` Claudiu Beznea
2026-05-13 23:52   ` sashiko-bot
2026-05-12 12:12 ` [PATCH v5 07/17] dmaengine: sh: rz-dmac: Add helper to check if the channel is enabled Claudiu Beznea
2026-05-12 20:49   ` Frank Li
2026-05-13 23:59   ` sashiko-bot
2026-05-12 12:12 ` [PATCH v5 08/17] dmaengine: sh: rz-dmac: Add helper to check if the channel is paused Claudiu Beznea
2026-05-12 20:57   ` Frank Li
2026-05-12 12:12 ` [PATCH v5 09/17] dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor processing Claudiu Beznea
2026-05-12 21:38   ` Frank Li
2026-05-13 13:34     ` Claudiu Beznea
2026-05-14  0:42   ` sashiko-bot
2026-05-12 12:12 ` [PATCH v5 10/17] dmaengine: sh: rz-dmac: Refactor pause/resume code Claudiu Beznea
2026-05-12 21:43   ` Frank Li
2026-05-13 13:35     ` Claudiu Beznea
2026-05-14  0:57   ` sashiko-bot
2026-05-12 12:12 ` [PATCH v5 11/17] dmaengine: sh: rz-dmac: Drop the update of channel->chctrl with CHCTRL_SETEN Claudiu Beznea
2026-05-12 21:55   ` Frank Li
2026-05-12 12:12 ` [PATCH v5 12/17] dmaengine: sh: rz-dmac: Add cyclic DMA support Claudiu Beznea
2026-05-12 22:00   ` Frank Li
2026-05-13 13:38     ` Claudiu Beznea
2026-05-14  1:43   ` sashiko-bot
2026-05-12 12:12 ` [PATCH v5 13/17] dmaengine: sh: rz-dmac: Add runtime PM support Claudiu Beznea
2026-05-12 22:03   ` Frank Li
2026-05-13 13:39     ` Claudiu Beznea
2026-05-13 19:56       ` Frank Li
2026-05-14  9:20         ` Claudiu Beznea
2026-05-14  2:08   ` sashiko-bot
2026-05-12 12:12 ` [PATCH v5 14/17] dmaengine: sh: rz-dmac: Add suspend to RAM support Claudiu Beznea
2026-05-14  3:04   ` sashiko-bot
2026-05-12 12:12 ` [PATCH v5 15/17] ASoC: renesas: rz-ssi: Add pause support Claudiu Beznea
2026-05-14  3:54   ` sashiko-bot
2026-05-12 12:12 ` [PATCH v5 16/17] ASoC: renesas: rz-ssi: Use generic PCM dmaengine APIs Claudiu Beznea
2026-05-14  4:52   ` sashiko-bot
2026-05-12 12:12 ` [PATCH v5 17/17] dmaengine: sh: rz-dmac: Set the Link End (LE) bit on the last descriptor Claudiu Beznea
2026-05-14  5:22   ` sashiko-bot

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox