* [PATCH v4 00/17] Renesas: dmaengine and ASoC fixes
@ 2026-04-11 11:42 Claudiu
2026-04-11 11:42 ` [PATCH v4 01/17] dmaengine: sh: rz-dmac: Move interrupt request after everything is set up Claudiu
` (16 more replies)
0 siblings, 17 replies; 37+ messages in thread
From: Claudiu @ 2026-04-11 11:42 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, long.luu.ur
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
Hi,
This series addresses issues identified in the DMA engine and RZ SSI
drivers.
As described in the patch "dmaengine: sh: rz-dmac: Set the Link End (LE)
bit on the last descriptor", stress testing on the Renesas RZ/G2L SoC
showed that starting all available DMA channels could cause the system
to stall after several hours of operation. This issue was resolved by
setting the Link End bit on the last descriptor of a DMA transfer.
However, after applying that fix, the SSI audio driver began to suffer
from frequent overruns and underruns. This was caused by the way the SSI
driver emulated cyclic DMA transfers: at the start of playback/capture
it initially enqueued 4 DMA descriptors as single SG transfers, and upon
completion of each descriptor, a new one was enqueued. Since there was
no indication to the DMA hardware of where the descriptor list ended
(through the LE bit), the DMA engine continued transferring until the
audio stream was stopped. With this approach, audio signal spikes were
occasionally observed in the recorded file.
To address these issues, cyclic DMA support was added to the DMA engine
driver, and the SSI audio driver was reworked to use it via the generic
PCM dmaengine APIs.
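The difference between the open-ended emulation and a true cyclic transfer
can be sketched in plain C. This is a userspace model only; the descriptor
layout, the LE_BIT flag, and the helper names are hypothetical and do not
mirror the actual rz-dmac register layout:

```c
#include <assert.h>

#define NR_DESC 4
#define LE_BIT  (1u << 0) /* hypothetical Link End flag */

struct desc {
	unsigned int header; /* control bits */
	int next;            /* index of the next descriptor */
};

/* Cyclic ring: every descriptor links to the next, the last one wraps
 * back to the first, and none carries the Link End bit, so the engine
 * keeps looping until the stream is explicitly stopped. */
static void build_cyclic_ring(struct desc *d, int n)
{
	for (int i = 0; i < n; i++) {
		d[i].header = 0;
		d[i].next = (i + 1) % n;
	}
}

/* One-shot list: the final descriptor carries the Link End bit, telling
 * the engine where the list terminates. */
static void build_oneshot_list(struct desc *d, int n)
{
	for (int i = 0; i < n; i++) {
		d[i].header = (i == n - 1) ? LE_BIT : 0;
		d[i].next = i + 1;
	}
}
```

The SSI emulation effectively produced the first shape one descriptor at a
time without an end marker, which is why the engine never saw a stopping
point on its own.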
Due to the behavior described above, no Fixes tags were added to the
patches in this series, and all patches should be merged through the
same tree.
If this series is merged this release cycle, it would be best to go
through the DMA tree, as the DMA changes are based on the series at [1],
which was merged on March 17th. Otherwise, either the ASoC or the DMA
tree should be fine.
Thank you,
Claudiu
Changes in v4:
- collected tags
- addressed review comments received from sashiko.dev. For this:
- added patches:
-- dmaengine: sh: rz-dmac: Move interrupt request after everything is set up
-- dmaengine: sh: rz-dmac: Fix incorrect NULL check on list_first_entry()
Changes in v3:
- addressed review comments received from sashiko.dev. For this:
- added patches 1-9
- added patch "ASoC: renesas: rz-ssi: Add pause support"
- dropped patches:
-- dmaengine: sh: rz-dmac: Add enable status bit
-- dmaengine: sh: rz-dmac: Add pause status bit
Changes in v2:
- fixed typos in patch descriptions and patch titles
- updated "ASoC: renesas: rz-ssi: Use generic PCM dmaengine APIs"
to fix the PIO mode
- in patch "dmaengine: sh: rz-dmac: Add suspend to RAM support",
clear the RZ_DMAC_CHAN_STATUS_SYS_SUSPENDED status bit for
channels without RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL
- per-patch updates can be found in each patch's changelog
- rebased on top of next-20260319
- updated the cover letter
[1] https://lore.kernel.org/all/20260316133252.240348-1-claudiu.beznea.uj@bp.renesas.com/
Claudiu Beznea (17):
dmaengine: sh: rz-dmac: Move interrupt request after everything is set
up
dmaengine: sh: rz-dmac: Fix incorrect NULL check on list_first_entry()
dmaengine: sh: rz-dmac: Use list_first_entry_or_null()
dmaengine: sh: rz-dmac: Use rz_dmac_disable_hw()
dmaengine: sh: rz-dmac: Do not disable the channel on error
dmaengine: sh: rz-dmac: Add helper to compute the lmdesc address
dmaengine: sh: rz-dmac: Save the start LM descriptor
dmaengine: sh: rz-dmac: Add helper to check if the channel is enabled
dmaengine: sh: rz-dmac: Add helper to check if the channel is paused
dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor
processing
dmaengine: sh: rz-dmac: Refactor pause/resume code
dmaengine: sh: rz-dmac: Drop the update of channel->chctrl with
CHCTRL_SETEN
dmaengine: sh: rz-dmac: Add cyclic DMA support
dmaengine: sh: rz-dmac: Add suspend to RAM support
ASoC: renesas: rz-ssi: Add pause support
ASoC: renesas: rz-ssi: Use generic PCM dmaengine APIs
dmaengine: sh: rz-dmac: Set the Link End (LE) bit on the last
descriptor
drivers/dma/sh/rz-dmac.c | 762 +++++++++++++++++++++++++------------
sound/soc/renesas/Kconfig | 1 +
sound/soc/renesas/rz-ssi.c | 388 +++++++------------
3 files changed, 652 insertions(+), 499 deletions(-)
--
2.43.0
^ permalink raw reply [flat|nested] 37+ messages in thread
* [PATCH v4 01/17] dmaengine: sh: rz-dmac: Move interrupt request after everything is set up
2026-04-11 11:42 [PATCH v4 00/17] Renesas: dmaengine and ASoC fixes Claudiu
@ 2026-04-11 11:42 ` Claudiu
2026-04-11 12:17 ` Biju Das
2026-04-20 12:33 ` sashiko.dev review (Re: [PATCH v4 01/17] dmaengine: sh: rz-dmac: Move interrupt request after everything is set up) Claudiu Beznea
2026-04-11 11:42 ` [PATCH v4 02/17] dmaengine: sh: rz-dmac: Fix incorrect NULL check on list_first_entry() Claudiu
` (15 subsequent siblings)
16 siblings, 2 replies; 37+ messages in thread
From: Claudiu @ 2026-04-11 11:42 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, long.luu.ur
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea, stable
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
Once the interrupt is requested, the interrupt handler may run immediately.
Since the IRQ handler can access channel->ch_base, which is initialized
only after requesting the IRQ, this may lead to invalid memory access.
Similarly, the IRQ thread may access not-yet-initialized data (the ld_free,
ld_queue, and ld_active lists), which can also result in invalid accesses.
Request the interrupts only after everything is set up. To keep the error
path simpler, use dmam_alloc_coherent() instead of dma_alloc_coherent().
Fixes: 5000d37042a6 ("dmaengine: sh: Add DMAC driver for RZ/G2L SoC")
Cc: stable@vger.kernel.org
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v4:
- none, this patch is new
drivers/dma/sh/rz-dmac.c | 88 +++++++++++++++-------------------------
1 file changed, 33 insertions(+), 55 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 625ff29024de..9f206a33dcc6 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -981,25 +981,6 @@ static int rz_dmac_chan_probe(struct rz_dmac *dmac,
channel->index = index;
channel->mid_rid = -EINVAL;
- /* Request the channel interrupt. */
- scnprintf(pdev_irqname, sizeof(pdev_irqname), "ch%u", index);
- irq = platform_get_irq_byname(pdev, pdev_irqname);
- if (irq < 0)
- return irq;
-
- irqname = devm_kasprintf(dmac->dev, GFP_KERNEL, "%s:%u",
- dev_name(dmac->dev), index);
- if (!irqname)
- return -ENOMEM;
-
- ret = devm_request_threaded_irq(dmac->dev, irq, rz_dmac_irq_handler,
- rz_dmac_irq_handler_thread, 0,
- irqname, channel);
- if (ret) {
- dev_err(dmac->dev, "failed to request IRQ %u (%d)\n", irq, ret);
- return ret;
- }
-
/* Set io base address for each channel */
if (index < 8) {
channel->ch_base = dmac->base + CHANNEL_0_7_OFFSET +
@@ -1012,9 +993,9 @@ static int rz_dmac_chan_probe(struct rz_dmac *dmac,
}
/* Allocate descriptors */
- lmdesc = dma_alloc_coherent(&pdev->dev,
- sizeof(struct rz_lmdesc) * DMAC_NR_LMDESC,
- &channel->lmdesc.base_dma, GFP_KERNEL);
+ lmdesc = dmam_alloc_coherent(&pdev->dev,
+ sizeof(struct rz_lmdesc) * DMAC_NR_LMDESC,
+ &channel->lmdesc.base_dma, GFP_KERNEL);
if (!lmdesc) {
dev_err(&pdev->dev, "Can't allocate memory (lmdesc)\n");
return -ENOMEM;
@@ -1030,7 +1011,24 @@ static int rz_dmac_chan_probe(struct rz_dmac *dmac,
INIT_LIST_HEAD(&channel->ld_free);
INIT_LIST_HEAD(&channel->ld_active);
- return 0;
+ /* Request the channel interrupt. */
+ scnprintf(pdev_irqname, sizeof(pdev_irqname), "ch%u", index);
+ irq = platform_get_irq_byname(pdev, pdev_irqname);
+ if (irq < 0)
+ return irq;
+
+ irqname = devm_kasprintf(dmac->dev, GFP_KERNEL, "%s:%u",
+ dev_name(dmac->dev), index);
+ if (!irqname)
+ return -ENOMEM;
+
+ ret = devm_request_threaded_irq(dmac->dev, irq, rz_dmac_irq_handler,
+ rz_dmac_irq_handler_thread, 0,
+ irqname, channel);
+ if (ret)
+ dev_err(dmac->dev, "failed to request IRQ %u (%d)\n", irq, ret);
+
+ return ret;
}
static void rz_dmac_put_device(void *_dev)
@@ -1099,7 +1097,6 @@ static int rz_dmac_probe(struct platform_device *pdev)
const char *irqname = "error";
struct dma_device *engine;
struct rz_dmac *dmac;
- int channel_num;
int ret;
int irq;
u8 i;
@@ -1132,18 +1129,6 @@ static int rz_dmac_probe(struct platform_device *pdev)
return PTR_ERR(dmac->ext_base);
}
- /* Register interrupt handler for error */
- irq = platform_get_irq_byname_optional(pdev, irqname);
- if (irq > 0) {
- ret = devm_request_irq(&pdev->dev, irq, rz_dmac_irq_handler, 0,
- irqname, NULL);
- if (ret) {
- dev_err(&pdev->dev, "failed to request IRQ %u (%d)\n",
- irq, ret);
- return ret;
- }
- }
-
/* Initialize the channels. */
INIT_LIST_HEAD(&dmac->engine.channels);
@@ -1169,6 +1154,18 @@ static int rz_dmac_probe(struct platform_device *pdev)
goto err;
}
+ /* Register interrupt handler for error */
+ irq = platform_get_irq_byname_optional(pdev, irqname);
+ if (irq > 0) {
+ ret = devm_request_irq(&pdev->dev, irq, rz_dmac_irq_handler, 0,
+ irqname, NULL);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to request IRQ %u (%d)\n",
+ irq, ret);
+ goto err;
+ }
+ }
+
/* Register the DMAC as a DMA provider for DT. */
ret = of_dma_controller_register(pdev->dev.of_node, rz_dmac_of_xlate,
NULL);
@@ -1210,16 +1207,6 @@ static int rz_dmac_probe(struct platform_device *pdev)
dma_register_err:
of_dma_controller_free(pdev->dev.of_node);
err:
- channel_num = i ? i - 1 : 0;
- for (i = 0; i < channel_num; i++) {
- struct rz_dmac_chan *channel = &dmac->channels[i];
-
- dma_free_coherent(&pdev->dev,
- sizeof(struct rz_lmdesc) * DMAC_NR_LMDESC,
- channel->lmdesc.base,
- channel->lmdesc.base_dma);
- }
-
reset_control_assert(dmac->rstc);
err_pm_runtime_put:
pm_runtime_put(&pdev->dev);
@@ -1232,18 +1219,9 @@ static int rz_dmac_probe(struct platform_device *pdev)
static void rz_dmac_remove(struct platform_device *pdev)
{
struct rz_dmac *dmac = platform_get_drvdata(pdev);
- unsigned int i;
dma_async_device_unregister(&dmac->engine);
of_dma_controller_free(pdev->dev.of_node);
- for (i = 0; i < dmac->n_channels; i++) {
- struct rz_dmac_chan *channel = &dmac->channels[i];
-
- dma_free_coherent(&pdev->dev,
- sizeof(struct rz_lmdesc) * DMAC_NR_LMDESC,
- channel->lmdesc.base,
- channel->lmdesc.base_dma);
- }
reset_control_assert(dmac->rstc);
pm_runtime_put(&pdev->dev);
pm_runtime_disable(&pdev->dev);
--
2.43.0
* [PATCH v4 02/17] dmaengine: sh: rz-dmac: Fix incorrect NULL check on list_first_entry()
2026-04-11 11:42 [PATCH v4 00/17] Renesas: dmaengine and ASoC fixes Claudiu
2026-04-11 11:42 ` [PATCH v4 01/17] dmaengine: sh: rz-dmac: Move interrupt request after everything is set up Claudiu
@ 2026-04-11 11:42 ` Claudiu
2026-04-11 11:42 ` [PATCH v4 03/17] dmaengine: sh: rz-dmac: Use list_first_entry_or_null() Claudiu
` (14 subsequent siblings)
16 siblings, 0 replies; 37+ messages in thread
From: Claudiu @ 2026-04-11 11:42 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, long.luu.ur
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea, stable
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
The list passed as an argument to list_first_entry() is expected to be
non-empty. Use list_first_entry_or_null() instead to avoid dereferencing
invalid memory.
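A minimal userspace model of the kernel's intrusive lists shows why the
original check was ineffective; the macros mimic the kernel's list.h
semantics, and first_active() is a hypothetical helper, not a driver
function:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal userspace model of the kernel's intrusive lists. */
struct list_head { struct list_head *next, *prev; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
#define list_first_entry(head, type, member) \
	container_of((head)->next, type, member)
#define list_first_entry_or_null(head, type, member) \
	((head)->next != (head) ? list_first_entry(head, type, member) : NULL)

struct rz_dmac_desc { int cookie; struct list_head node; };

/* On an empty list, list_first_entry() "finds" the list head itself and
 * returns a pointer into a descriptor that does not exist, so a NULL
 * check on its result can never trigger. list_first_entry_or_null()
 * detects the empty list instead. */
static struct rz_dmac_desc *first_active(struct list_head *head)
{
	return list_first_entry_or_null(head, struct rz_dmac_desc, node);
}
```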
Fixes: 21323b118c16 ("dmaengine: sh: rz-dmac: Add device_tx_status() callback")
Cc: stable@vger.kernel.org
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v4:
- none, this patch is new
drivers/dma/sh/rz-dmac.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 9f206a33dcc6..6d80cb668957 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -723,8 +723,8 @@ static u32 rz_dmac_chan_get_residue(struct rz_dmac_chan *channel,
u32 crla, crtb, i;
/* Get current processing virtual descriptor */
- current_desc = list_first_entry(&channel->ld_active,
- struct rz_dmac_desc, node);
+ current_desc = list_first_entry_or_null(&channel->ld_active,
+ struct rz_dmac_desc, node);
if (!current_desc)
return 0;
--
2.43.0
* [PATCH v4 03/17] dmaengine: sh: rz-dmac: Use list_first_entry_or_null()
2026-04-11 11:42 [PATCH v4 00/17] Renesas: dmaengine and ASoC fixes Claudiu
2026-04-11 11:42 ` [PATCH v4 01/17] dmaengine: sh: rz-dmac: Move interrupt request after everything is set up Claudiu
2026-04-11 11:42 ` [PATCH v4 02/17] dmaengine: sh: rz-dmac: Fix incorrect NULL check on list_first_entry() Claudiu
@ 2026-04-11 11:42 ` Claudiu
2026-04-11 11:42 ` [PATCH v4 04/17] dmaengine: sh: rz-dmac: Use rz_dmac_disable_hw() Claudiu
` (13 subsequent siblings)
16 siblings, 0 replies; 37+ messages in thread
From: Claudiu @ 2026-04-11 11:42 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, long.luu.ur
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
Use list_first_entry_or_null() instead of open-coding it with a
list_empty() check and list_first_entry(). This simplifies the code.
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v4:
- none
Changes in v3:
- none, this patch is new
drivers/dma/sh/rz-dmac.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 6d80cb668957..1717b407ab9e 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -503,11 +503,10 @@ rz_dmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
__func__, channel->index, &src, &dest, len);
scoped_guard(spinlock_irqsave, &channel->vc.lock) {
- if (list_empty(&channel->ld_free))
+ desc = list_first_entry_or_null(&channel->ld_free, struct rz_dmac_desc, node);
+ if (!desc)
return NULL;
- desc = list_first_entry(&channel->ld_free, struct rz_dmac_desc, node);
-
desc->type = RZ_DMAC_DESC_MEMCPY;
desc->src = src;
desc->dest = dest;
@@ -533,11 +532,10 @@ rz_dmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
int i = 0;
scoped_guard(spinlock_irqsave, &channel->vc.lock) {
- if (list_empty(&channel->ld_free))
+ desc = list_first_entry_or_null(&channel->ld_free, struct rz_dmac_desc, node);
+ if (!desc)
return NULL;
- desc = list_first_entry(&channel->ld_free, struct rz_dmac_desc, node);
-
for_each_sg(sgl, sg, sg_len, i)
dma_length += sg_dma_len(sg);
--
2.43.0
* [PATCH v4 04/17] dmaengine: sh: rz-dmac: Use rz_dmac_disable_hw()
2026-04-11 11:42 [PATCH v4 00/17] Renesas: dmaengine and ASoC fixes Claudiu
` (2 preceding siblings ...)
2026-04-11 11:42 ` [PATCH v4 03/17] dmaengine: sh: rz-dmac: Use list_first_entry_or_null() Claudiu
@ 2026-04-11 11:42 ` Claudiu
2026-04-20 12:34 ` sashiko.dev review (Re: [PATCH v4 04/17] dmaengine: sh: rz-dmac: Use rz_dmac_disable_hw()) Claudiu Beznea
2026-04-11 11:42 ` [PATCH v4 05/17] dmaengine: sh: rz-dmac: Do not disable the channel on error Claudiu
` (12 subsequent siblings)
16 siblings, 1 reply; 37+ messages in thread
From: Claudiu @ 2026-04-11 11:42 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, long.luu.ur
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
Use rz_dmac_disable_hw() instead of open-coding it. This unifies the
code and prepares it for the addition of suspend to RAM and cyclic DMA.
The rz_dmac_disable_hw() call in rz_dmac_chan_probe() was moved after
vchan_init(), as vchan_init() initializes channel->vc.chan.device, which
is used by rz_dmac_disable_hw().
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v4:
- in rz_dmac_chan_probe(): moved rz_dmac_disable_hw() after the
vchan_init(&channel->vc, &dmac->engine) call as this is the one which
initializes data structures used by the debug code from
rz_dmac_disable_hw(); updated the patch description to reflect this
Changes in v3:
- none, this patch is new
drivers/dma/sh/rz-dmac.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 1717b407ab9e..40ddf534c094 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -873,7 +873,7 @@ static void rz_dmac_irq_handle_channel(struct rz_dmac_chan *channel)
channel->index, chstat);
scoped_guard(spinlock_irqsave, &channel->vc.lock)
- rz_dmac_ch_writel(channel, CHCTRL_DEFAULT, CHCTRL, 1);
+ rz_dmac_disable_hw(channel);
return;
}
@@ -1000,15 +1000,15 @@ static int rz_dmac_chan_probe(struct rz_dmac *dmac,
}
rz_lmdesc_setup(channel, lmdesc);
- /* Initialize register for each channel */
- rz_dmac_ch_writel(channel, CHCTRL_DEFAULT, CHCTRL, 1);
-
channel->vc.desc_free = rz_dmac_virt_desc_free;
vchan_init(&channel->vc, &dmac->engine);
INIT_LIST_HEAD(&channel->ld_queue);
INIT_LIST_HEAD(&channel->ld_free);
INIT_LIST_HEAD(&channel->ld_active);
+ /* Initialize register for each channel */
+ rz_dmac_disable_hw(channel);
+
/* Request the channel interrupt. */
scnprintf(pdev_irqname, sizeof(pdev_irqname), "ch%u", index);
irq = platform_get_irq_byname(pdev, pdev_irqname);
--
2.43.0
* [PATCH v4 05/17] dmaengine: sh: rz-dmac: Do not disable the channel on error
2026-04-11 11:42 [PATCH v4 00/17] Renesas: dmaengine and ASoC fixes Claudiu
` (3 preceding siblings ...)
2026-04-11 11:42 ` [PATCH v4 04/17] dmaengine: sh: rz-dmac: Use rz_dmac_disable_hw() Claudiu
@ 2026-04-11 11:42 ` Claudiu
2026-04-11 12:30 ` Biju Das
2026-04-11 11:42 ` [PATCH v4 06/17] dmaengine: sh: rz-dmac: Add helper to compute the lmdesc address Claudiu
` (11 subsequent siblings)
16 siblings, 1 reply; 37+ messages in thread
From: Claudiu @ 2026-04-11 11:42 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, long.luu.ur
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
Disabling the channel on error is pointless: if other transfers are
queued, the IRQ thread will be woken up and will execute them anyway by
calling rz_dmac_xfer_desc().
rz_dmac_xfer_desc() re-enables the transfer. Before doing so, it sets
CHCTRL.SWRST, which clears CHSTAT.DER and CHSTAT.END anyway.
Skip disabling the DMA channel and just log the error instead.
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v4:
- none
Changes in v3:
- none, this patch is new
drivers/dma/sh/rz-dmac.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 40ddf534c094..943c005f52bd 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -871,10 +871,6 @@ static void rz_dmac_irq_handle_channel(struct rz_dmac_chan *channel)
if (chstat & CHSTAT_ER) {
dev_err(dmac->dev, "DMAC err CHSTAT_%d = %08X\n",
channel->index, chstat);
-
- scoped_guard(spinlock_irqsave, &channel->vc.lock)
- rz_dmac_disable_hw(channel);
- return;
}
/*
--
2.43.0
* [PATCH v4 06/17] dmaengine: sh: rz-dmac: Add helper to compute the lmdesc address
2026-04-11 11:42 [PATCH v4 00/17] Renesas: dmaengine and ASoC fixes Claudiu
` (4 preceding siblings ...)
2026-04-11 11:42 ` [PATCH v4 05/17] dmaengine: sh: rz-dmac: Do not disable the channel on error Claudiu
@ 2026-04-11 11:42 ` Claudiu
2026-04-11 11:42 ` [PATCH v4 07/17] dmaengine: sh: rz-dmac: Save the start LM descriptor Claudiu
` (10 subsequent siblings)
16 siblings, 0 replies; 37+ messages in thread
From: Claudiu @ 2026-04-11 11:42 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, long.luu.ur
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
Add a helper function to compute the lmdesc address. This makes the
code easier to understand, and the helper will be used in subsequent
patches.
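The computation the helper encapsulates is plain pointer arithmetic: the
DMA-visible address of a descriptor is the base DMA address plus that
descriptor's byte offset within the CPU-side array. The sketch below is a
userspace model; the struct layout and the lmdesc_list wrapper are
hypothetical, though the field names mirror the driver:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for the hardware link-mode descriptor. */
struct rz_lmdesc {
	uint32_t sa, da, tb, chcfg, chitvl, chext, nxla, header;
};

struct lmdesc_list {
	struct rz_lmdesc *base; /* CPU pointer to the descriptor array */
	uint64_t base_dma;      /* DMA address of the same array */
};

static uint64_t lmdesc_addr(const struct lmdesc_list *l,
			    const struct rz_lmdesc *lmdesc)
{
	/* (lmdesc - l->base) is the element index; scale it to bytes. */
	return l->base_dma + sizeof(struct rz_lmdesc) * (lmdesc - l->base);
}
```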
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v4:
- none
Changes in v3:
- none, this patch is new
drivers/dma/sh/rz-dmac.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 943c005f52bd..6bea7c8c7053 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -259,6 +259,12 @@ static void rz_lmdesc_setup(struct rz_dmac_chan *channel,
* Descriptors preparation
*/
+static u32 rz_dmac_lmdesc_addr(struct rz_dmac_chan *channel, struct rz_lmdesc *lmdesc)
+{
+ return channel->lmdesc.base_dma +
+ (sizeof(struct rz_lmdesc) * (lmdesc - channel->lmdesc.base));
+}
+
static void rz_dmac_lmdesc_recycle(struct rz_dmac_chan *channel)
{
struct rz_lmdesc *lmdesc = channel->lmdesc.head;
@@ -284,9 +290,7 @@ static void rz_dmac_enable_hw(struct rz_dmac_chan *channel)
rz_dmac_lmdesc_recycle(channel);
- nxla = channel->lmdesc.base_dma +
- (sizeof(struct rz_lmdesc) * (channel->lmdesc.head -
- channel->lmdesc.base));
+ nxla = rz_dmac_lmdesc_addr(channel, channel->lmdesc.head);
chstat = rz_dmac_ch_readl(channel, CHSTAT, 1);
if (!(chstat & CHSTAT_EN)) {
--
2.43.0
* [PATCH v4 07/17] dmaengine: sh: rz-dmac: Save the start LM descriptor
2026-04-11 11:42 [PATCH v4 00/17] Renesas: dmaengine and ASoC fixes Claudiu
` (5 preceding siblings ...)
2026-04-11 11:42 ` [PATCH v4 06/17] dmaengine: sh: rz-dmac: Add helper to compute the lmdesc address Claudiu
@ 2026-04-11 11:42 ` Claudiu
2026-04-11 12:34 ` Biju Das
2026-04-20 12:37 ` sashiko.dev review (Re: [PATCH v4 07/17] dmaengine: sh: rz-dmac: Save the start LM descriptor) Claudiu Beznea
2026-04-11 11:42 ` [PATCH v4 08/17] dmaengine: sh: rz-dmac: Add helper to check if the channel is enabled Claudiu
` (9 subsequent siblings)
16 siblings, 2 replies; 37+ messages in thread
From: Claudiu @ 2026-04-11 11:42 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, long.luu.ur
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
Save the start LM descriptor so that residue computation does not have
to loop through the channel's entire LM descriptor list. This avoids
unnecessary iterations.
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v4:
- none
Changes in v3:
- none, this patch is new
drivers/dma/sh/rz-dmac.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 6bea7c8c7053..0f871c0a28bd 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -58,6 +58,7 @@ struct rz_dmac_desc {
/* For slave sg */
struct scatterlist *sg;
unsigned int sgcount;
+ struct rz_lmdesc *start_lmdesc;
};
#define to_rz_dmac_desc(d) container_of(d, struct rz_dmac_desc, vd)
@@ -343,6 +344,8 @@ static void rz_dmac_prepare_desc_for_memcpy(struct rz_dmac_chan *channel)
struct rz_dmac_desc *d = channel->desc;
u32 chcfg = CHCFG_MEM_COPY;
+ d->start_lmdesc = lmdesc;
+
/* prepare descriptor */
lmdesc->sa = d->src;
lmdesc->da = d->dest;
@@ -377,6 +380,7 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
}
lmdesc = channel->lmdesc.tail;
+ d->start_lmdesc = lmdesc;
for (i = 0, sg = sgl; i < sg_len; i++, sg = sg_next(sg)) {
if (d->direction == DMA_DEV_TO_MEM) {
@@ -693,9 +697,10 @@ rz_dmac_get_next_lmdesc(struct rz_lmdesc *base, struct rz_lmdesc *lmdesc)
return next;
}
-static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel, u32 crla)
+static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel,
+ struct rz_dmac_desc *desc, u32 crla)
{
- struct rz_lmdesc *lmdesc = channel->lmdesc.head;
+ struct rz_lmdesc *lmdesc = desc->start_lmdesc;
struct dma_chan *chan = &channel->vc.chan;
struct rz_dmac *dmac = to_rz_dmac(chan->device);
u32 residue = 0, i = 0;
@@ -794,7 +799,7 @@ static u32 rz_dmac_chan_get_residue(struct rz_dmac_chan *channel,
* Calculate number of bytes transferred in processing virtual descriptor.
* One virtual descriptor can have many lmdesc.
*/
- return crtb + rz_dmac_calculate_residue_bytes_in_vd(channel, crla);
+ return crtb + rz_dmac_calculate_residue_bytes_in_vd(channel, current_desc, crla);
}
static enum dma_status rz_dmac_tx_status(struct dma_chan *chan,
--
2.43.0
* [PATCH v4 08/17] dmaengine: sh: rz-dmac: Add helper to check if the channel is enabled
2026-04-11 11:42 [PATCH v4 00/17] Renesas: dmaengine and ASoC fixes Claudiu
` (6 preceding siblings ...)
2026-04-11 11:42 ` [PATCH v4 07/17] dmaengine: sh: rz-dmac: Save the start LM descriptor Claudiu
@ 2026-04-11 11:42 ` Claudiu
2026-04-11 11:42 ` [PATCH v4 09/17] dmaengine: sh: rz-dmac: Add helper to check if the channel is paused Claudiu
` (8 subsequent siblings)
16 siblings, 0 replies; 37+ messages in thread
From: Claudiu @ 2026-04-11 11:42 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, long.luu.ur
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
Add a helper to check if the channel is enabled. This will be reused in
subsequent patches.
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v4:
- none
Changes in v3:
- none, this patch is new
drivers/dma/sh/rz-dmac.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 0f871c0a28bd..1a3c33d28c6c 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -279,6 +279,13 @@ static void rz_dmac_lmdesc_recycle(struct rz_dmac_chan *channel)
channel->lmdesc.head = lmdesc;
}
+static bool rz_dmac_chan_is_enabled(struct rz_dmac_chan *chan)
+{
+ u32 val = rz_dmac_ch_readl(chan, CHSTAT, 1);
+
+ return !!(val & CHSTAT_EN);
+}
+
static void rz_dmac_enable_hw(struct rz_dmac_chan *channel)
{
struct dma_chan *chan = &channel->vc.chan;
@@ -840,8 +847,7 @@ static int rz_dmac_device_pause(struct dma_chan *chan)
guard(spinlock_irqsave)(&channel->vc.lock);
- val = rz_dmac_ch_readl(channel, CHSTAT, 1);
- if (!(val & CHSTAT_EN))
+ if (!rz_dmac_chan_is_enabled(channel))
return 0;
rz_dmac_ch_writel(channel, CHCTRL_SETSUS, CHCTRL, 1);
--
2.43.0
* [PATCH v4 09/17] dmaengine: sh: rz-dmac: Add helper to check if the channel is paused
2026-04-11 11:42 [PATCH v4 00/17] Renesas: dmaengine and ASoC fixes Claudiu
` (7 preceding siblings ...)
2026-04-11 11:42 ` [PATCH v4 08/17] dmaengine: sh: rz-dmac: Add helper to check if the channel is enabled Claudiu
@ 2026-04-11 11:42 ` Claudiu
2026-04-11 11:42 ` [PATCH v4 10/17] dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor processing Claudiu
` (7 subsequent siblings)
16 siblings, 0 replies; 37+ messages in thread
From: Claudiu @ 2026-04-11 11:42 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, long.luu.ur
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
Add a helper to check if the channel is paused. This will be reused in
subsequent patches.
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v4:
- none
Changes in v3:
- none, this patch is new
drivers/dma/sh/rz-dmac.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 1a3c33d28c6c..f35ff5739718 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -286,6 +286,13 @@ static bool rz_dmac_chan_is_enabled(struct rz_dmac_chan *chan)
return !!(val & CHSTAT_EN);
}
+static bool rz_dmac_chan_is_paused(struct rz_dmac_chan *chan)
+{
+ u32 val = rz_dmac_ch_readl(chan, CHSTAT, 1);
+
+ return !!(val & CHSTAT_SUS);
+}
+
static void rz_dmac_enable_hw(struct rz_dmac_chan *channel)
{
struct dma_chan *chan = &channel->vc.chan;
@@ -822,12 +829,9 @@ static enum dma_status rz_dmac_tx_status(struct dma_chan *chan,
return status;
scoped_guard(spinlock_irqsave, &channel->vc.lock) {
- u32 val;
-
residue = rz_dmac_chan_get_residue(channel, cookie);
- val = rz_dmac_ch_readl(channel, CHSTAT, 1);
- if (val & CHSTAT_SUS)
+ if (rz_dmac_chan_is_paused(channel))
status = DMA_PAUSED;
}
--
2.43.0
* [PATCH v4 10/17] dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor processing
2026-04-11 11:42 [PATCH v4 00/17] Renesas: dmaengine and ASoC fixes Claudiu
` (8 preceding siblings ...)
2026-04-11 11:42 ` [PATCH v4 09/17] dmaengine: sh: rz-dmac: Add helper to check if the channel is paused Claudiu
@ 2026-04-11 11:42 ` Claudiu
2026-04-20 12:41 ` sashiko.dev review (Re: [PATCH v4 10/17] dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor processing) Claudiu Beznea
2026-04-11 11:42 ` [PATCH v4 11/17] dmaengine: sh: rz-dmac: Refactor pause/resume code Claudiu
` (6 subsequent siblings)
16 siblings, 1 reply; 37+ messages in thread
From: Claudiu @ 2026-04-11 11:42 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, long.luu.ur
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
The driver used a mix of virt-dma APIs and driver specific logic to
process descriptors. It maintained three internal queues: ld_free,
ld_queue, and ld_active as follows:
- ld_free: stores the descriptors pre-allocated at probe time
- ld_queue: stores descriptors after they are taken from ld_free and
prepared. At the same time, vchan_tx_prep() queues them to
vc->desc_allocated. The vc->desc_allocated list is then checked in
rz_dmac_issue_pending() and rz_dmac_irq_handler_thread() before
starting a new transfer via rz_dmac_xfer_desc(). In turn,
rz_dmac_xfer_desc() grabs the next descriptor from vc->desc_issued and
submits it for transfer
- ld_active: stores the descriptors currently being transferred
The interrupt handler moved a completed descriptor to ld_free before
invoking its completion callback. Once returned to ld_free, the
descriptor can be reused to prepare a new transfer. In theory, this
means the descriptor could be re-prepared before its completion
callback is called.
Back the driver fully by the virt-dma APIs. With this, only ld_free
needs to be kept, to track how many free descriptors are available. This
is now done as follows:
- the prepare stage removes the first descriptor from ld_free and
prepares it
- the completion path calls vc->desc_free() (rz_dmac_virt_desc_free())
for it, which re-adds the descriptor at the end of ld_free
With this, the critical areas in prepare callbacks were minimized to only
getting the descriptor from the ld_free list.
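The reworked lifecycle can be sketched as a single free list: prepare
pops one descriptor, and the completion's desc_free() pushes it back.
This is a minimal userspace model, not the driver's code; the names
(struct desc, chan_prep, chan_desc_free) are made up for illustration,
and for simplicity the free list here is LIFO while the driver appends
at the tail.

```c
#include <assert.h>
#include <stddef.h>

#define NDESC 4   /* hypothetical pre-allocated descriptor count */

struct desc {
    struct desc *next;
    int in_flight;
};

struct chan {
    struct desc pool[NDESC];
    struct desc *ld_free;   /* singly linked free list */
};

static void chan_init(struct chan *c)
{
    c->ld_free = NULL;
    for (int i = 0; i < NDESC; i++) {
        c->pool[i].next = c->ld_free;
        c->pool[i].in_flight = 0;
        c->ld_free = &c->pool[i];
    }
}

/* Prepare stage: the only critical section is taking one descriptor. */
static struct desc *chan_prep(struct chan *c)
{
    struct desc *d = c->ld_free;

    if (!d)
        return NULL;            /* all descriptors in use */
    c->ld_free = d->next;
    d->in_flight = 1;
    return d;
}

/* Completion: the desc_free() callback re-adds the descriptor. */
static void chan_desc_free(struct chan *c, struct desc *d)
{
    d->in_flight = 0;
    d->next = c->ld_free;
    c->ld_free = d;
}

static int chan_free_count(const struct chan *c)
{
    int n = 0;

    for (const struct desc *d = c->ld_free; d; d = d->next)
        n++;
    return n;
}
```

The point of the shape is that a descriptor can only re-enter ld_free
through the completion callback, closing the reuse-before-callback
window the old three-queue scheme allowed.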
This change introduces struct rz_dmac_chan::desc to keep track of the
currently transferred descriptor. It is cleared in
rz_dmac_terminate_all(), referenced from rz_dmac_issue_pending() to
determine whether a new transfer can be started, and from
rz_dmac_irq_handler_thread() once a descriptor has completed. Finally,
rz_dmac_device_synchronize() was updated to call vchan_synchronize(),
ensuring the terminated descriptor is freed and the tasklet is killed.
With this, residue computation is also simplified, as it can now be
handled entirely through the virt-dma APIs.
The spin_lock/unlock operations in rz_dmac_irq_handler_thread() were
replaced with guard(), as the reworked code is simpler this way.
As subsequent commits will set the Link End bit on the last descriptor
of a transfer, rz_dmac_enable_hw() is also adjusted as part of the full
conversion to virt-dma APIs. It no longer checks the channel enable
status itself; instead, its callers verify whether the channel is
enabled and whether the previous transfer has completed before starting
a new one.
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v4:
- in rz_dmac_tx_status(): return DMA_PAUSED if the channel is paused;
call rz_dmac_chan_get_residue() only if status is not complete
Changes in v3:
- none, this patch is new
drivers/dma/sh/rz-dmac.c | 233 +++++++++++++++------------------------
1 file changed, 86 insertions(+), 147 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index f35ff5739718..04eb1a7f1e62 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -79,8 +79,6 @@ struct rz_dmac_chan {
int mid_rid;
struct list_head ld_free;
- struct list_head ld_queue;
- struct list_head ld_active;
struct {
struct rz_lmdesc *base;
@@ -299,7 +297,6 @@ static void rz_dmac_enable_hw(struct rz_dmac_chan *channel)
struct rz_dmac *dmac = to_rz_dmac(chan->device);
u32 nxla;
u32 chctrl;
- u32 chstat;
dev_dbg(dmac->dev, "%s channel %d\n", __func__, channel->index);
@@ -307,14 +304,11 @@ static void rz_dmac_enable_hw(struct rz_dmac_chan *channel)
nxla = rz_dmac_lmdesc_addr(channel, channel->lmdesc.head);
- chstat = rz_dmac_ch_readl(channel, CHSTAT, 1);
- if (!(chstat & CHSTAT_EN)) {
- chctrl = (channel->chctrl | CHCTRL_SETEN);
- rz_dmac_ch_writel(channel, nxla, NXLA, 1);
- rz_dmac_ch_writel(channel, channel->chcfg, CHCFG, 1);
- rz_dmac_ch_writel(channel, CHCTRL_SWRST, CHCTRL, 1);
- rz_dmac_ch_writel(channel, chctrl, CHCTRL, 1);
- }
+ chctrl = (channel->chctrl | CHCTRL_SETEN);
+ rz_dmac_ch_writel(channel, nxla, NXLA, 1);
+ rz_dmac_ch_writel(channel, channel->chcfg, CHCFG, 1);
+ rz_dmac_ch_writel(channel, CHCTRL_SWRST, CHCTRL, 1);
+ rz_dmac_ch_writel(channel, chctrl, CHCTRL, 1);
}
static void rz_dmac_disable_hw(struct rz_dmac_chan *channel)
@@ -426,18 +420,20 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
channel->chctrl = CHCTRL_SETEN;
}
-static int rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
+static void rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
{
- struct rz_dmac_desc *d = chan->desc;
struct virt_dma_desc *vd;
vd = vchan_next_desc(&chan->vc);
- if (!vd)
- return 0;
+ if (!vd) {
+ chan->desc = NULL;
+ return;
+ }
list_del(&vd->node);
+ chan->desc = to_rz_dmac_desc(vd);
- switch (d->type) {
+ switch (chan->desc->type) {
case RZ_DMAC_DESC_MEMCPY:
rz_dmac_prepare_desc_for_memcpy(chan);
break;
@@ -445,14 +441,9 @@ static int rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
case RZ_DMAC_DESC_SLAVE_SG:
rz_dmac_prepare_descs_for_slave_sg(chan);
break;
-
- default:
- return -EINVAL;
}
rz_dmac_enable_hw(chan);
-
- return 0;
}
/*
@@ -494,8 +485,6 @@ static void rz_dmac_free_chan_resources(struct dma_chan *chan)
rz_lmdesc_setup(channel, channel->lmdesc.base);
rz_dmac_disable_hw(channel);
- list_splice_tail_init(&channel->ld_active, &channel->ld_free);
- list_splice_tail_init(&channel->ld_queue, &channel->ld_free);
if (channel->mid_rid >= 0) {
clear_bit(channel->mid_rid, dmac->modules);
@@ -504,13 +493,19 @@ static void rz_dmac_free_chan_resources(struct dma_chan *chan)
spin_unlock_irqrestore(&channel->vc.lock, flags);
+ vchan_free_chan_resources(&channel->vc);
+
+ spin_lock_irqsave(&channel->vc.lock, flags);
+
list_for_each_entry_safe(desc, _desc, &channel->ld_free, node) {
+ list_del(&desc->node);
kfree(desc);
channel->descs_allocated--;
}
INIT_LIST_HEAD(&channel->ld_free);
- vchan_free_chan_resources(&channel->vc);
+
+ spin_unlock_irqrestore(&channel->vc.lock, flags);
}
static struct dma_async_tx_descriptor *
@@ -529,15 +524,15 @@ rz_dmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
if (!desc)
return NULL;
- desc->type = RZ_DMAC_DESC_MEMCPY;
- desc->src = src;
- desc->dest = dest;
- desc->len = len;
- desc->direction = DMA_MEM_TO_MEM;
-
- list_move_tail(channel->ld_free.next, &channel->ld_queue);
+ list_del(&desc->node);
}
+ desc->type = RZ_DMAC_DESC_MEMCPY;
+ desc->src = src;
+ desc->dest = dest;
+ desc->len = len;
+ desc->direction = DMA_MEM_TO_MEM;
+
return vchan_tx_prep(&channel->vc, &desc->vd, flags);
}
@@ -558,22 +553,22 @@ rz_dmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
if (!desc)
return NULL;
- for_each_sg(sgl, sg, sg_len, i)
- dma_length += sg_dma_len(sg);
+ list_del(&desc->node);
+ }
- desc->type = RZ_DMAC_DESC_SLAVE_SG;
- desc->sg = sgl;
- desc->sgcount = sg_len;
- desc->len = dma_length;
- desc->direction = direction;
+ for_each_sg(sgl, sg, sg_len, i)
+ dma_length += sg_dma_len(sg);
- if (direction == DMA_DEV_TO_MEM)
- desc->src = channel->src_per_address;
- else
- desc->dest = channel->dst_per_address;
+ desc->type = RZ_DMAC_DESC_SLAVE_SG;
+ desc->sg = sgl;
+ desc->sgcount = sg_len;
+ desc->len = dma_length;
+ desc->direction = direction;
- list_move_tail(channel->ld_free.next, &channel->ld_queue);
- }
+ if (direction == DMA_DEV_TO_MEM)
+ desc->src = channel->src_per_address;
+ else
+ desc->dest = channel->dst_per_address;
return vchan_tx_prep(&channel->vc, &desc->vd, flags);
}
@@ -588,8 +583,11 @@ static int rz_dmac_terminate_all(struct dma_chan *chan)
rz_dmac_disable_hw(channel);
rz_lmdesc_setup(channel, channel->lmdesc.base);
- list_splice_tail_init(&channel->ld_active, &channel->ld_free);
- list_splice_tail_init(&channel->ld_queue, &channel->ld_free);
+ if (channel->desc) {
+ vchan_terminate_vdesc(&channel->desc->vd);
+ channel->desc = NULL;
+ }
+
vchan_get_all_descriptors(&channel->vc, &head);
spin_unlock_irqrestore(&channel->vc.lock, flags);
vchan_dma_desc_free_list(&channel->vc, &head);
@@ -600,25 +598,16 @@ static int rz_dmac_terminate_all(struct dma_chan *chan)
static void rz_dmac_issue_pending(struct dma_chan *chan)
{
struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
- struct rz_dmac *dmac = to_rz_dmac(chan->device);
- struct rz_dmac_desc *desc;
unsigned long flags;
spin_lock_irqsave(&channel->vc.lock, flags);
- if (!list_empty(&channel->ld_queue)) {
- desc = list_first_entry(&channel->ld_queue,
- struct rz_dmac_desc, node);
- channel->desc = desc;
- if (vchan_issue_pending(&channel->vc)) {
- if (rz_dmac_xfer_desc(channel) < 0)
- dev_warn(dmac->dev, "ch: %d couldn't issue DMA xfer\n",
- channel->index);
- else
- list_move_tail(channel->ld_queue.next,
- &channel->ld_active);
- }
- }
+ /*
+ * Issue the descriptor. If another transfer is already in progress, the
+ * issued descriptor will be handled after the current transfer finishes.
+ */
+ if (vchan_issue_pending(&channel->vc) && !channel->desc)
+ rz_dmac_xfer_desc(channel);
spin_unlock_irqrestore(&channel->vc.lock, flags);
}
@@ -676,13 +665,13 @@ static int rz_dmac_config(struct dma_chan *chan,
static void rz_dmac_virt_desc_free(struct virt_dma_desc *vd)
{
- /*
- * Place holder
- * Descriptor allocation is done during alloc_chan_resources and
- * get freed during free_chan_resources.
- * list is used to manage the descriptors and avoid any memory
- * allocation/free during DMA read/write.
- */
+ struct rz_dmac_chan *channel = to_rz_dmac_chan(vd->tx.chan);
+ struct virt_dma_chan *vc = to_virt_chan(vd->tx.chan);
+ struct rz_dmac_desc *desc = to_rz_dmac_desc(vd);
+
+ guard(spinlock_irqsave)(&vc->lock);
+
+ list_add_tail(&desc->node, &channel->ld_free);
}
static void rz_dmac_device_synchronize(struct dma_chan *chan)
@@ -692,6 +681,8 @@ static void rz_dmac_device_synchronize(struct dma_chan *chan)
u32 chstat;
int ret;
+ vchan_synchronize(&channel->vc);
+
ret = read_poll_timeout(rz_dmac_ch_readl, chstat, !(chstat & CHSTAT_EN),
100, 100000, false, channel, CHSTAT, 1);
if (ret < 0)
@@ -739,58 +730,22 @@ static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel,
static u32 rz_dmac_chan_get_residue(struct rz_dmac_chan *channel,
dma_cookie_t cookie)
{
- struct rz_dmac_desc *current_desc, *desc;
- enum dma_status status;
+ struct rz_dmac_desc *desc = NULL;
+ struct virt_dma_desc *vd;
u32 crla, crtb, i;
- /* Get current processing virtual descriptor */
- current_desc = list_first_entry_or_null(&channel->ld_active,
- struct rz_dmac_desc, node);
- if (!current_desc)
- return 0;
-
- /*
- * If the cookie corresponds to a descriptor that has been completed
- * there is no residue. The same check has already been performed by the
- * caller but without holding the channel lock, so the descriptor could
- * now be complete.
- */
- status = dma_cookie_status(&channel->vc.chan, cookie, NULL);
- if (status == DMA_COMPLETE)
- return 0;
-
- /*
- * If the cookie doesn't correspond to the currently processing virtual
- * descriptor then the descriptor hasn't been processed yet, and the
- * residue is equal to the full descriptor size. Also, a client driver
- * is possible to call this function before rz_dmac_irq_handler_thread()
- * runs. In this case, the running descriptor will be the next
- * descriptor, and will appear in the done list. So, if the argument
- * cookie matches the done list's cookie, we can assume the residue is
- * zero.
- */
- if (cookie != current_desc->vd.tx.cookie) {
- list_for_each_entry(desc, &channel->ld_free, node) {
- if (cookie == desc->vd.tx.cookie)
- return 0;
- }
-
- list_for_each_entry(desc, &channel->ld_queue, node) {
- if (cookie == desc->vd.tx.cookie)
- return desc->len;
- }
-
- list_for_each_entry(desc, &channel->ld_active, node) {
- if (cookie == desc->vd.tx.cookie)
- return desc->len;
- }
+ vd = vchan_find_desc(&channel->vc, cookie);
+ if (vd) {
+ /* Descriptor has been issued but not yet processed. */
+ desc = to_rz_dmac_desc(vd);
+ return desc->len;
+ } else if (channel->desc && channel->desc->vd.tx.cookie == cookie) {
+ /* Descriptor is currently processed. */
+ desc = channel->desc;
+ }
- /*
- * No descriptor found for the cookie, there's thus no residue.
- * This shouldn't happen if the calling driver passes a correct
- * cookie value.
- */
- WARN(1, "No descriptor for cookie!");
+ if (!desc) {
+ /* Descriptor was not found. May be already completed by now. */
return 0;
}
@@ -813,7 +768,7 @@ static u32 rz_dmac_chan_get_residue(struct rz_dmac_chan *channel,
* Calculate number of bytes transferred in processing virtual descriptor.
* One virtual descriptor can have many lmdesc.
*/
- return crtb + rz_dmac_calculate_residue_bytes_in_vd(channel, current_desc, crla);
+ return crtb + rz_dmac_calculate_residue_bytes_in_vd(channel, desc, crla);
}
static enum dma_status rz_dmac_tx_status(struct dma_chan *chan,
@@ -824,21 +779,17 @@ static enum dma_status rz_dmac_tx_status(struct dma_chan *chan,
enum dma_status status;
u32 residue;
- status = dma_cookie_status(chan, cookie, txstate);
- if (status == DMA_COMPLETE || !txstate)
- return status;
-
scoped_guard(spinlock_irqsave, &channel->vc.lock) {
+ status = dma_cookie_status(chan, cookie, txstate);
+ if (status == DMA_COMPLETE || !txstate)
+ return status;
+
residue = rz_dmac_chan_get_residue(channel, cookie);
- if (rz_dmac_chan_is_paused(channel))
+ if (status == DMA_IN_PROGRESS && rz_dmac_chan_is_paused(channel))
status = DMA_PAUSED;
}
- /* if there's no residue and no paused, the cookie is complete */
- if (!residue && status != DMA_PAUSED)
- return DMA_COMPLETE;
-
dma_set_residue(txstate, residue);
return status;
@@ -914,28 +865,18 @@ static irqreturn_t rz_dmac_irq_handler(int irq, void *dev_id)
static irqreturn_t rz_dmac_irq_handler_thread(int irq, void *dev_id)
{
struct rz_dmac_chan *channel = dev_id;
- struct rz_dmac_desc *desc = NULL;
- unsigned long flags;
+ struct rz_dmac_desc *desc;
- spin_lock_irqsave(&channel->vc.lock, flags);
+ guard(spinlock_irqsave)(&channel->vc.lock);
- if (list_empty(&channel->ld_active)) {
- /* Someone might have called terminate all */
- goto out;
- }
+ desc = channel->desc;
+ if (!desc)
+ return IRQ_HANDLED;
- desc = list_first_entry(&channel->ld_active, struct rz_dmac_desc, node);
vchan_cookie_complete(&desc->vd);
- list_move_tail(channel->ld_active.next, &channel->ld_free);
- if (!list_empty(&channel->ld_queue)) {
- desc = list_first_entry(&channel->ld_queue, struct rz_dmac_desc,
- node);
- channel->desc = desc;
- if (rz_dmac_xfer_desc(channel) == 0)
- list_move_tail(channel->ld_queue.next, &channel->ld_active);
- }
-out:
- spin_unlock_irqrestore(&channel->vc.lock, flags);
+ channel->desc = NULL;
+
+ rz_dmac_xfer_desc(channel);
return IRQ_HANDLED;
}
@@ -1017,9 +958,7 @@ static int rz_dmac_chan_probe(struct rz_dmac *dmac,
channel->vc.desc_free = rz_dmac_virt_desc_free;
vchan_init(&channel->vc, &dmac->engine);
- INIT_LIST_HEAD(&channel->ld_queue);
INIT_LIST_HEAD(&channel->ld_free);
- INIT_LIST_HEAD(&channel->ld_active);
/* Initialize register for each channel */
rz_dmac_disable_hw(channel);
--
2.43.0
^ permalink raw reply related [flat|nested] 37+ messages in thread
* [PATCH v4 11/17] dmaengine: sh: rz-dmac: Refactor pause/resume code
2026-04-11 11:42 [PATCH v4 00/17] Renesas: dmaengine and ASoC fixes Claudiu
` (9 preceding siblings ...)
2026-04-11 11:42 ` [PATCH v4 10/17] dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor processing Claudiu
@ 2026-04-11 11:42 ` Claudiu
2026-04-20 12:42 ` sashiko.dev review (Re: [PATCH v4 11/17] dmaengine: sh: rz-dmac: Refactor pause/resume code) Claudiu Beznea
2026-04-11 11:42 ` [PATCH v4 12/17] dmaengine: sh: rz-dmac: Drop the update of channel->chctrl with CHCTRL_SETEN Claudiu
` (5 subsequent siblings)
16 siblings, 1 reply; 37+ messages in thread
From: Claudiu @ 2026-04-11 11:42 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, long.luu.ur
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
Subsequent patches will add suspend/resume and cyclic DMA support to the
rz-dmac driver. This support needs to work on SoCs where power to most
components (including DMA) is turned off during system suspend. For this,
some channels (for example cyclic ones) may need to be paused and resumed
manually by the DMA driver during system suspend/resume.
Refactor the pause/resume support so the same code can be reused in the
system suspend/resume path.
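The sharing can be sketched as a pair of helpers that take a status
bitmask, so the dmaengine pause/resume callbacks and a future suspend
path record independently who asked for the pause. This is a hedged
userspace model, not the driver's hardware sequence; the names
(chan_pause_set, CH_PAUSED_INTERNAL, hw_suspended) are assumptions made
for illustration.

```c
#include <assert.h>

#define CH_PAUSED          (1UL << 0)  /* paused via dmaengine pause() */
#define CH_PAUSED_INTERNAL (1UL << 1)  /* e.g. paused by system suspend */

struct chan {
    unsigned long status;
    int hw_suspended;       /* stands in for CHSTAT.SUS */
};

/* Core helper: suspend the hardware once, then record who asked. */
static int chan_pause_set(struct chan *c, unsigned long mask)
{
    if (!(c->status & (CH_PAUSED | CH_PAUSED_INTERNAL)))
        c->hw_suspended = 1; /* would set CHCTRL_SETSUS, poll CHSTAT_SUS */
    c->status |= mask;
    return 0;
}

static int chan_resume_set(struct chan *c, unsigned long mask)
{
    c->status &= ~mask;
    if (!(c->status & (CH_PAUSED | CH_PAUSED_INTERNAL)))
        c->hw_suspended = 0; /* would set CHCTRL_CLRSUS */
    return 0;
}

/* dmaengine callbacks and the suspend path share the helpers. */
static int chan_device_pause(struct chan *c)
{
    return chan_pause_set(c, CH_PAUSED);
}

static int chan_device_resume(struct chan *c)
{
    if (!(c->status & CH_PAUSED))
        return 0;            /* nothing to undo */
    return chan_resume_set(c, CH_PAUSED);
}
```

With this split, a suspend-time pause and a user pause do not step on
each other: the channel stays suspended until the last requester clears
its bit.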
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v4:
- reset channel->status in rz_dmac_free_chan_resources() and
rz_dmac_terminate_all()
Changes in v3:
- none, this patch is new
drivers/dma/sh/rz-dmac.c | 73 ++++++++++++++++++++++++++++++++++------
1 file changed, 62 insertions(+), 11 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 04eb1a7f1e62..d009b7607d44 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -18,6 +18,7 @@
#include <linux/irqchip/irq-renesas-rzv2h.h>
#include <linux/irqchip/irq-renesas-rzt2h.h>
#include <linux/list.h>
+#include <linux/lockdep.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_dma.h>
@@ -63,6 +64,14 @@ struct rz_dmac_desc {
#define to_rz_dmac_desc(d) container_of(d, struct rz_dmac_desc, vd)
+/**
+ * enum rz_dmac_chan_status: RZ DMAC channel status
+ * @RZ_DMAC_CHAN_STATUS_PAUSED: Channel is paused through DMA engine callbacks
+ */
+enum rz_dmac_chan_status {
+ RZ_DMAC_CHAN_STATUS_PAUSED,
+};
+
struct rz_dmac_chan {
struct virt_dma_chan vc;
void __iomem *ch_base;
@@ -74,6 +83,8 @@ struct rz_dmac_chan {
dma_addr_t src_per_address;
dma_addr_t dst_per_address;
+ unsigned long status;
+
u32 chcfg;
u32 chctrl;
int mid_rid;
@@ -491,6 +502,8 @@ static void rz_dmac_free_chan_resources(struct dma_chan *chan)
channel->mid_rid = -EINVAL;
}
+ channel->status = 0;
+
spin_unlock_irqrestore(&channel->vc.lock, flags);
vchan_free_chan_resources(&channel->vc);
@@ -589,6 +602,9 @@ static int rz_dmac_terminate_all(struct dma_chan *chan)
}
vchan_get_all_descriptors(&channel->vc, &head);
+
+ channel->status = 0;
+
spin_unlock_irqrestore(&channel->vc.lock, flags);
vchan_dma_desc_free_list(&channel->vc, &head);
@@ -795,35 +811,70 @@ static enum dma_status rz_dmac_tx_status(struct dma_chan *chan,
return status;
}
-static int rz_dmac_device_pause(struct dma_chan *chan)
+static int rz_dmac_device_pause_set(struct rz_dmac_chan *channel,
+ unsigned long set_bitmask)
{
- struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
+ int ret = 0;
u32 val;
- guard(spinlock_irqsave)(&channel->vc.lock);
+ lockdep_assert_held(&channel->vc.lock);
if (!rz_dmac_chan_is_enabled(channel))
return 0;
+ if (rz_dmac_chan_is_paused(channel))
+ goto set_bit;
+
rz_dmac_ch_writel(channel, CHCTRL_SETSUS, CHCTRL, 1);
- return read_poll_timeout_atomic(rz_dmac_ch_readl, val,
- (val & CHSTAT_SUS), 1, 1024,
- false, channel, CHSTAT, 1);
+ ret = read_poll_timeout_atomic(rz_dmac_ch_readl, val,
+ (val & CHSTAT_SUS), 1, 1024, false,
+ channel, CHSTAT, 1);
+
+set_bit:
+ channel->status |= set_bitmask;
+
+ return ret;
}
-static int rz_dmac_device_resume(struct dma_chan *chan)
+static int rz_dmac_device_pause(struct dma_chan *chan)
{
struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
- u32 val;
guard(spinlock_irqsave)(&channel->vc.lock);
+ return rz_dmac_device_pause_set(channel, BIT(RZ_DMAC_CHAN_STATUS_PAUSED));
+}
+
+static int rz_dmac_device_resume_set(struct rz_dmac_chan *channel,
+ unsigned long clear_bitmask)
+{
+ int ret = 0;
+ u32 val;
+
+ lockdep_assert_held(&channel->vc.lock);
+
/* Do not check CHSTAT_SUS but rely on HW capabilities. */
rz_dmac_ch_writel(channel, CHCTRL_CLRSUS, CHCTRL, 1);
- return read_poll_timeout_atomic(rz_dmac_ch_readl, val,
- !(val & CHSTAT_SUS), 1, 1024,
- false, channel, CHSTAT, 1);
+ ret = read_poll_timeout_atomic(rz_dmac_ch_readl, val,
+ !(val & CHSTAT_SUS), 1, 1024, false,
+ channel, CHSTAT, 1);
+
+ channel->status &= ~clear_bitmask;
+
+ return ret;
+}
+
+static int rz_dmac_device_resume(struct dma_chan *chan)
+{
+ struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
+
+ guard(spinlock_irqsave)(&channel->vc.lock);
+
+ if (!(channel->status & BIT(RZ_DMAC_CHAN_STATUS_PAUSED)))
+ return 0;
+
+ return rz_dmac_device_resume_set(channel, BIT(RZ_DMAC_CHAN_STATUS_PAUSED));
}
/*
--
2.43.0
^ permalink raw reply related [flat|nested] 37+ messages in thread
* [PATCH v4 12/17] dmaengine: sh: rz-dmac: Drop the update of channel->chctrl with CHCTRL_SETEN
2026-04-11 11:42 [PATCH v4 00/17] Renesas: dmaengine and ASoC fixes Claudiu
` (10 preceding siblings ...)
2026-04-11 11:42 ` [PATCH v4 11/17] dmaengine: sh: rz-dmac: Refactor pause/resume code Claudiu
@ 2026-04-11 11:42 ` Claudiu
2026-04-11 11:42 ` [PATCH v4 13/17] dmaengine: sh: rz-dmac: Add cyclic DMA support Claudiu
` (4 subsequent siblings)
16 siblings, 0 replies; 37+ messages in thread
From: Claudiu @ 2026-04-11 11:42 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, long.luu.ur
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
The CHCTRL_SETEN bit is explicitly set in rz_dmac_enable_hw(). Updating
struct rz_dmac_chan::chctrl with this bit in
rz_dmac_prepare_desc_for_memcpy() and rz_dmac_prepare_descs_for_slave_sg()
is unnecessary in the current code base. Moreover, it conflicts with the
configuration sequence that will be used for cyclic DMA channels during
suspend to RAM. Cyclic DMA support will be introduced in subsequent
commits.
This is a preparatory commit for cyclic DMA suspend to RAM support.
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v4:
- set channel->chctrl = 0 in rz_dmac_prepare_descs_for_slave_sg()
Changes in v3:
- none
Changes in v2:
- fixed typos in patch title and patch description
drivers/dma/sh/rz-dmac.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index d009b7607d44..958ee45abc70 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -377,7 +377,7 @@ static void rz_dmac_prepare_desc_for_memcpy(struct rz_dmac_chan *channel)
rz_dmac_set_dma_req_no(dmac, channel->index, dmac->info->default_dma_req_no);
channel->chcfg = chcfg;
- channel->chctrl = CHCTRL_STG | CHCTRL_SETEN;
+ channel->chctrl = CHCTRL_STG;
}
static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
@@ -428,7 +428,7 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
rz_dmac_set_dma_req_no(dmac, channel->index, channel->mid_rid);
- channel->chctrl = CHCTRL_SETEN;
+ channel->chctrl = 0;
}
static void rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
--
2.43.0
^ permalink raw reply related [flat|nested] 37+ messages in thread
* [PATCH v4 13/17] dmaengine: sh: rz-dmac: Add cyclic DMA support
2026-04-11 11:42 [PATCH v4 00/17] Renesas: dmaengine and ASoC fixes Claudiu
` (11 preceding siblings ...)
2026-04-11 11:42 ` [PATCH v4 12/17] dmaengine: sh: rz-dmac: Drop the update of channel->chctrl with CHCTRL_SETEN Claudiu
@ 2026-04-11 11:42 ` Claudiu
2026-04-20 12:51 ` sashiko.dev review (Re: [PATCH v4 13/17] dmaengine: sh: rz-dmac: Add cyclic DMA support) Claudiu Beznea
2026-04-11 11:43 ` [PATCH v4 14/17] dmaengine: sh: rz-dmac: Add suspend to RAM support Claudiu
` (3 subsequent siblings)
16 siblings, 1 reply; 37+ messages in thread
From: Claudiu @ 2026-04-11 11:42 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, long.luu.ur
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
Add cyclic DMA support to the RZ DMAC driver. A per-channel status bit is
introduced to mark cyclic channels and is set during the DMA prepare
callback. The IRQ handler checks this status bit and calls
vchan_cyclic_callback() accordingly.
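The cyclic setup in the patch links one hardware descriptor per period
and points the last entry's next-link back at the first, so the
controller loops until the channel is stopped; residue is then found by
walking from the current entry until the link returns to the ring head.
The following is a minimal sketch of that shape, with pointers standing
in for the hardware NXLA addresses; all names (struct lmdesc,
cyclic_ring_init, periods_after) are hypothetical, and the residue walk
is a simplified variant, not the driver's exact computation.

```c
#include <assert.h>
#include <stddef.h>

#define NR_LMDESC 8   /* hypothetical ring capacity */

struct lmdesc {
    size_t tb;               /* bytes in this period */
    struct lmdesc *nxla;     /* next link; an address in hardware */
};

/* Link 'periods' descriptors into a ring; returns the ring head. */
static struct lmdesc *cyclic_ring_init(struct lmdesc *base, size_t periods,
                                       size_t period_len)
{
    for (size_t i = 0; i < periods; i++) {
        base[i].tb = period_len;
        /* Last entry links back to the first: the transfer loops. */
        base[i].nxla = (i == periods - 1) ? &base[0] : &base[i + 1];
    }
    return &base[0];
}

/* Bytes still pending in the periods after 'cur': follow the links
 * until they wrap around to the ring head. */
static size_t periods_after(const struct lmdesc *head,
                            const struct lmdesc *cur)
{
    size_t residue = 0;

    for (const struct lmdesc *lm = cur->nxla; lm != head; lm = lm->nxla)
        residue += lm->tb;
    return residue;
}
```

The ring closure is also what makes the IRQ thread behave differently
for cyclic channels: the transfer never completes on its own, so each
interrupt reports a period via the cyclic callback instead of retiring
the descriptor.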
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v4:
- drop the nxla update logic in rz_dmac_lmdesc_recycle() as this is
not needed for any kind of transfers
- drop the update of channel->status = 0 from rz_dmac_free_chan_resources()
and rz_dmac_terminate_all() as this was moved in patch 09/17
Changes in v3:
- updated rz_dmac_lmdesc_recycle() to restore the lmdesc->nxla
- in rz_dmac_prepare_descs_for_cyclic() directly update
desc->start_lmdesc with the descriptor pointer instead of the
descriptor address
- used rz_dmac_lmdesc_addr() to compute the descriptor address
- set channel->status = 0 in rz_dmac_free_chan_resources()
- in rz_dmac_prep_dma_cyclic() check for invalid periods or buffer len
and limit the critical area protected by spinlock
- set channel->status = 0 in rz_dmac_terminate_all()
- updated rz_dmac_calculate_residue_bytes_in_vd() to use
rz_dmac_lmdesc_addr()
- dropped goto in rz_dmac_irq_handler_thread() as it is not needed
anymore; dropped also the local variable desc
Changes in v2:
- none
drivers/dma/sh/rz-dmac.c | 136 +++++++++++++++++++++++++++++++++++++--
1 file changed, 130 insertions(+), 6 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 958ee45abc70..9a10430109e5 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -35,6 +35,7 @@
enum rz_dmac_prep_type {
RZ_DMAC_DESC_MEMCPY,
RZ_DMAC_DESC_SLAVE_SG,
+ RZ_DMAC_DESC_CYCLIC,
};
struct rz_lmdesc {
@@ -67,9 +68,11 @@ struct rz_dmac_desc {
/**
* enum rz_dmac_chan_status: RZ DMAC channel status
* @RZ_DMAC_CHAN_STATUS_PAUSED: Channel is paused through DMA engine callbacks
+ * @RZ_DMAC_CHAN_STATUS_CYCLIC: Channel is cyclic
*/
enum rz_dmac_chan_status {
RZ_DMAC_CHAN_STATUS_PAUSED,
+ RZ_DMAC_CHAN_STATUS_CYCLIC,
};
struct rz_dmac_chan {
@@ -191,6 +194,7 @@ struct rz_dmac {
/* LINK MODE DESCRIPTOR */
#define HEADER_LV BIT(0)
+#define HEADER_WBD BIT(2)
#define RZ_DMAC_MAX_CHAN_DESCRIPTORS 16
#define RZ_DMAC_MAX_CHANNELS 16
@@ -431,6 +435,57 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
channel->chctrl = 0;
}
+static void rz_dmac_prepare_descs_for_cyclic(struct rz_dmac_chan *channel)
+{
+ struct dma_chan *chan = &channel->vc.chan;
+ struct rz_dmac *dmac = to_rz_dmac(chan->device);
+ struct rz_dmac_desc *d = channel->desc;
+ size_t period_len = d->sgcount;
+ struct rz_lmdesc *lmdesc;
+ size_t buf_len = d->len;
+ size_t periods = buf_len / period_len;
+
+ lockdep_assert_held(&channel->vc.lock);
+
+ channel->chcfg |= CHCFG_SEL(channel->index) | CHCFG_DMS;
+
+ if (d->direction == DMA_DEV_TO_MEM) {
+ channel->chcfg |= CHCFG_SAD;
+ channel->chcfg &= ~CHCFG_REQD;
+ } else {
+ channel->chcfg |= CHCFG_DAD | CHCFG_REQD;
+ }
+
+ lmdesc = channel->lmdesc.tail;
+ d->start_lmdesc = lmdesc;
+
+ for (size_t i = 0; i < periods; i++) {
+ if (d->direction == DMA_DEV_TO_MEM) {
+ lmdesc->sa = d->src;
+ lmdesc->da = d->dest + (i * period_len);
+ } else {
+ lmdesc->sa = d->src + (i * period_len);
+ lmdesc->da = d->dest;
+ }
+
+ lmdesc->tb = period_len;
+ lmdesc->chitvl = 0;
+ lmdesc->chext = 0;
+ lmdesc->chcfg = channel->chcfg;
+ lmdesc->header = HEADER_LV | HEADER_WBD;
+
+ if (i == periods - 1)
+ lmdesc->nxla = rz_dmac_lmdesc_addr(channel, d->start_lmdesc);
+
+ if (++lmdesc >= (channel->lmdesc.base + DMAC_NR_LMDESC))
+ lmdesc = channel->lmdesc.base;
+ }
+
+ channel->lmdesc.tail = lmdesc;
+
+ rz_dmac_set_dma_req_no(dmac, channel->index, channel->mid_rid);
+}
+
static void rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
{
struct virt_dma_desc *vd;
@@ -452,6 +507,10 @@ static void rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
case RZ_DMAC_DESC_SLAVE_SG:
rz_dmac_prepare_descs_for_slave_sg(chan);
break;
+
+ case RZ_DMAC_DESC_CYCLIC:
+ rz_dmac_prepare_descs_for_cyclic(chan);
+ break;
}
rz_dmac_enable_hw(chan);
@@ -586,6 +645,55 @@ rz_dmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
return vchan_tx_prep(&channel->vc, &desc->vd, flags);
}
+static struct dma_async_tx_descriptor *
+rz_dmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
+ size_t buf_len, size_t period_len,
+ enum dma_transfer_direction direction,
+ unsigned long flags)
+{
+ struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
+ struct rz_dmac_desc *desc;
+ size_t periods;
+
+ if (!is_slave_direction(direction))
+ return NULL;
+
+ if (!period_len || !buf_len)
+ return NULL;
+
+ periods = buf_len / period_len;
+ if (!periods || periods > DMAC_NR_LMDESC)
+ return NULL;
+
+ scoped_guard(spinlock_irqsave, &channel->vc.lock) {
+ if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC))
+ return NULL;
+
+ desc = list_first_entry_or_null(&channel->ld_free, struct rz_dmac_desc, node);
+ if (!desc)
+ return NULL;
+
+ list_del(&desc->node);
+
+ channel->status |= BIT(RZ_DMAC_CHAN_STATUS_CYCLIC);
+ }
+
+ desc->type = RZ_DMAC_DESC_CYCLIC;
+ desc->sgcount = period_len;
+ desc->len = buf_len;
+ desc->direction = direction;
+
+ if (direction == DMA_DEV_TO_MEM) {
+ desc->src = channel->src_per_address;
+ desc->dest = buf_addr;
+ } else {
+ desc->src = buf_addr;
+ desc->dest = channel->dst_per_address;
+ }
+
+ return vchan_tx_prep(&channel->vc, &desc->vd, flags);
+}
+
static int rz_dmac_terminate_all(struct dma_chan *chan)
{
struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
@@ -733,9 +841,18 @@ static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel,
}
/* Calculate residue from next lmdesc to end of virtual desc */
- while (lmdesc->chcfg & CHCFG_DEM) {
- residue += lmdesc->tb;
- lmdesc = rz_dmac_get_next_lmdesc(channel->lmdesc.base, lmdesc);
+ if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)) {
+ u32 start_lmdesc_addr = rz_dmac_lmdesc_addr(channel, desc->start_lmdesc);
+
+ while (lmdesc->nxla != start_lmdesc_addr) {
+ residue += lmdesc->tb;
+ lmdesc = rz_dmac_get_next_lmdesc(channel->lmdesc.base, lmdesc);
+ }
+ } else {
+ while (lmdesc->chcfg & CHCFG_DEM) {
+ residue += lmdesc->tb;
+ lmdesc = rz_dmac_get_next_lmdesc(channel->lmdesc.base, lmdesc);
+ }
}
dev_dbg(dmac->dev, "%s: VD residue is %u\n", __func__, residue);
@@ -924,10 +1041,14 @@ static irqreturn_t rz_dmac_irq_handler_thread(int irq, void *dev_id)
if (!desc)
return IRQ_HANDLED;
- vchan_cookie_complete(&desc->vd);
- channel->desc = NULL;
+ if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)) {
+ vchan_cyclic_callback(&desc->vd);
+ } else {
+ vchan_cookie_complete(&desc->vd);
+ channel->desc = NULL;
- rz_dmac_xfer_desc(channel);
+ rz_dmac_xfer_desc(channel);
+ }
return IRQ_HANDLED;
}
@@ -1179,6 +1300,8 @@ static int rz_dmac_probe(struct platform_device *pdev)
engine = &dmac->engine;
dma_cap_set(DMA_SLAVE, engine->cap_mask);
dma_cap_set(DMA_MEMCPY, engine->cap_mask);
+ dma_cap_set(DMA_CYCLIC, engine->cap_mask);
+ engine->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
engine->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
rz_dmac_writel(dmac, DCTRL_DEFAULT, CHANNEL_0_7_COMMON_BASE + DCTRL);
rz_dmac_writel(dmac, DCTRL_DEFAULT, CHANNEL_8_15_COMMON_BASE + DCTRL);
@@ -1190,6 +1313,7 @@ static int rz_dmac_probe(struct platform_device *pdev)
engine->device_tx_status = rz_dmac_tx_status;
engine->device_prep_slave_sg = rz_dmac_prep_slave_sg;
engine->device_prep_dma_memcpy = rz_dmac_prep_dma_memcpy;
+ engine->device_prep_dma_cyclic = rz_dmac_prep_dma_cyclic;
engine->device_config = rz_dmac_config;
engine->device_terminate_all = rz_dmac_terminate_all;
engine->device_issue_pending = rz_dmac_issue_pending;
--
2.43.0
^ permalink raw reply related [flat|nested] 37+ messages in thread
* [PATCH v4 14/17] dmaengine: sh: rz-dmac: Add suspend to RAM support
2026-04-11 11:42 [PATCH v4 00/17] Renesas: dmaengine and ASoC fixes Claudiu
` (12 preceding siblings ...)
2026-04-11 11:42 ` [PATCH v4 13/17] dmaengine: sh: rz-dmac: Add cyclic DMA support Claudiu
@ 2026-04-11 11:43 ` Claudiu
2026-04-20 7:42 ` Biju Das
2026-04-20 12:52 ` sashiko.dev review (Re: [PATCH v4 14/17] dmaengine: sh: rz-dmac: Add suspend to RAM support) Claudiu Beznea
2026-04-11 11:43 ` [PATCH v4 15/17] ASoC: renesas: rz-ssi: Add pause support Claudiu
` (2 subsequent siblings)
16 siblings, 2 replies; 37+ messages in thread
From: Claudiu @ 2026-04-11 11:43 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, long.luu.ur
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
The Renesas RZ/G3S SoC supports a power saving mode in which power to most
of the SoC components is turned off, including the DMA IP. Add suspend to
RAM support to save and restore the DMA IP registers.
Cyclic DMA channels require special handling. Since they can be paused and
resumed during system suspend/resume, the driver restores additional
registers for these channels during the system resume phase. If a channel
was not explicitly paused during suspend, the driver ensures that it is
paused and resumed as part of the system suspend/resume flow. This might be
the case of a serial device being used with no_console_suspend.
For non-cyclic channels, the dev_pm_ops::prepare callback waits for all
the ongoing transfers to complete before allowing suspend-to-RAM to
proceed.
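The prepare-phase policy for non-cyclic channels can be sketched as follows. Again a user-space sketch with assumed stand-in types (`struct chan` is illustrative): cyclic channels are skipped because they will be paused later, while any still-busy non-cyclic channel defers the suspend with -EAGAIN so the PM core retries.

```c
#include <errno.h>
#include <stdbool.h>

/* Hypothetical minimal channel state. */
struct chan {
	bool cyclic;
	bool busy;	/* transfer still in flight */
};

/*
 * Mirror of the ->prepare policy: skip cyclic channels, and return
 * -EAGAIN if any non-cyclic channel still has a transfer in flight.
 */
static int suspend_prepare(const struct chan *ch, int n)
{
	for (int i = 0; i < n; i++) {
		if (ch[i].cyclic)
			continue;
		if (ch[i].busy)
			return -EAGAIN;
	}
	return 0;
}
```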
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v4:
- in rz_dmac_device_synchronize() kept the read_poll_timeout() as
this doesn't fail anymore with the proper status return from
->device_tx_status() API in case the channel is paused; the patch
description was updated accordingly
- keep the cleanup path in rz_dmac_suspend() simpler to avoid
confusion when using guard()
- used SYSTEM_SLEEP_PM_OPS() as there is no need for having the
suspend/resume callbacks being called in NOIRQ phase
Changes in v3:
- dropped RZ_DMAC_CHAN_STATUS_SYS_SUSPENDED
- dropped read_poll_timeout() from rz_dmac_device_synchronize() as
with audio drivers this times out all the time on suspend because
the audio DMA is already paused when the rz_dmac_device_synchronize()
is called; updated the commit description to describe this change
- call rz_dmac_device_pause_internal() only if RZ_DMAC_CHAN_STATUS_PAUSED
bit is not set or the device is enabled in HW
- updated rz_dmac_device_resume_set() to have it simpler and cover
the cases when it is called with the channel enabled or paused;
updated the comment describing the covered use cases
- call rz_dmac_device_resume_internal() only if
RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL bit is set
- in rz_dmac_chan_is_enabled() return -EAGAIN only if the channel is
enabled in HW
- in rz_dmac_suspend_recover() drop the update of
RZ_DMAC_CHAN_STATUS_SYS_SUSPENDED as this is not available anymore
- in rz_dmac_suspend() call rz_dmac_device_pause_internal() unconditionally
as the logic is now handled inside the called function; also, do not
ignore anymore the failure of internal suspend and abort the suspend
instead
- report channel internal resume failures in rz_dmac_resume()
- use rz_dmac_disable_hw() instead of open coding it in rz_dmac_resume()
- call rz_dmac_device_resume_internal() unconditionally as the skip
logic is now handled in the function itself
- use NOIRQ_SYSTEM_SLEEP_PM_OPS()
- didn't collect Tommaso's Tb tag as the series was changed a lot since
v2
Changes in v2:
- fixed typos in patch description
- in rz_dmac_suspend_prepare(): return -EAGAIN based on the value returned
by vchan_issue_pending()
- in rz_dmac_suspend_recover(): clear RZ_DMAC_CHAN_STATUS_SYS_SUSPENDED for
non-cyclic channels
- in rz_dmac_resume(): call rz_dmac_set_dma_req_no() only for cyclic channels
drivers/dma/sh/rz-dmac.c | 188 +++++++++++++++++++++++++++++++++++++--
1 file changed, 183 insertions(+), 5 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 9a10430109e5..00e18d8213ca 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -69,10 +69,12 @@ struct rz_dmac_desc {
* enum rz_dmac_chan_status: RZ DMAC channel status
* @RZ_DMAC_CHAN_STATUS_PAUSED: Channel is paused though DMA engine callbacks
* @RZ_DMAC_CHAN_STATUS_CYCLIC: Channel is cyclic
+ * @RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL: Channel is paused through driver internal logic
*/
enum rz_dmac_chan_status {
RZ_DMAC_CHAN_STATUS_PAUSED,
RZ_DMAC_CHAN_STATUS_CYCLIC,
+ RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL,
};
struct rz_dmac_chan {
@@ -92,6 +94,10 @@ struct rz_dmac_chan {
u32 chctrl;
int mid_rid;
+ struct {
+ u32 nxla;
+ } pm_state;
+
struct list_head ld_free;
struct {
@@ -962,20 +968,57 @@ static int rz_dmac_device_pause(struct dma_chan *chan)
return rz_dmac_device_pause_set(channel, BIT(RZ_DMAC_CHAN_STATUS_PAUSED));
}
+static int rz_dmac_device_pause_internal(struct rz_dmac_chan *channel)
+{
+ lockdep_assert_held(&channel->vc.lock);
+
+ /* Skip channels explicitly paused by consumers or disabled. */
+ if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_PAUSED) ||
+ !rz_dmac_chan_is_enabled(channel))
+ return 0;
+
+ return rz_dmac_device_pause_set(channel, BIT(RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL));
+}
+
static int rz_dmac_device_resume_set(struct rz_dmac_chan *channel,
unsigned long clear_bitmask)
{
- int ret = 0;
u32 val;
+ int ret;
lockdep_assert_held(&channel->vc.lock);
- /* Do not check CHSTAT_SUS but rely on HW capabilities. */
+ /*
+ * We can get here:
+ *
+ * 1/ after the channel was paused by a consumer and now it
+ * needs to be resumed
+ * 2/ after the channel was paused internally (as a result of
+ * a system suspend with power loss or not)
+ * 3/ after the channel was paused by a consumer, the system
+ * went through a system suspend (with power loss or not)
+ * and the consumer wants to resume the channel
+ *
+ * To cover all the above cases we set both CLRSUS and SETEN.
+ *
+ * In case 1/ setting SETEN while the channel is still enabled
+ * is harmless for the controller.
+ *
+ * In case 2/ the channel is disabled when calling this function
+ * and setting CLRSUS is harmless for the controller as the
+ * channel is disabled anyway.
+ *
+ * In case 3/ the channel is disabled/enabled if the system
+ * went through a suspend with power loss or not and setting
+ * CLRSUS/SETEN is harmless for the controller as the channel
+ * is enabled/disabled anyway.
+ */
+
+ rz_dmac_ch_writel(channel, CHCTRL_CLRSUS | CHCTRL_SETEN, CHCTRL, 1);
- rz_dmac_ch_writel(channel, CHCTRL_CLRSUS, CHCTRL, 1);
ret = read_poll_timeout_atomic(rz_dmac_ch_readl, val,
- !(val & CHSTAT_SUS), 1, 1024, false,
- channel, CHSTAT, 1);
+ ((val & (CHSTAT_SUS | CHSTAT_EN)) == CHSTAT_EN),
+ 1, 1024, false, channel, CHSTAT, 1);
channel->status &= ~clear_bitmask;
@@ -994,6 +1037,16 @@ static int rz_dmac_device_resume(struct dma_chan *chan)
return rz_dmac_device_resume_set(channel, BIT(RZ_DMAC_CHAN_STATUS_PAUSED));
}
+static int rz_dmac_device_resume_internal(struct rz_dmac_chan *channel)
+{
+ lockdep_assert_held(&channel->vc.lock);
+
+ if (!(channel->status & BIT(RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL)))
+ return 0;
+
+ return rz_dmac_device_resume_set(channel, BIT(RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL));
+}
+
/*
* -----------------------------------------------------------------------------
* IRQ handling
@@ -1354,6 +1407,130 @@ static void rz_dmac_remove(struct platform_device *pdev)
pm_runtime_disable(&pdev->dev);
}
+static int rz_dmac_suspend_prepare(struct device *dev)
+{
+ struct rz_dmac *dmac = dev_get_drvdata(dev);
+
+ for (unsigned int i = 0; i < dmac->n_channels; i++) {
+ struct rz_dmac_chan *channel = &dmac->channels[i];
+
+ guard(spinlock_irqsave)(&channel->vc.lock);
+
+ /* Wait for transfer completion, except in the cyclic case. */
+ if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC))
+ continue;
+
+ if (rz_dmac_chan_is_enabled(channel))
+ return -EAGAIN;
+ }
+
+ return 0;
+}
+
+static void rz_dmac_suspend_recover(struct rz_dmac *dmac)
+{
+ for (unsigned int i = 0; i < dmac->n_channels; i++) {
+ struct rz_dmac_chan *channel = &dmac->channels[i];
+
+ guard(spinlock_irqsave)(&channel->vc.lock);
+
+ if (!(channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)))
+ continue;
+
+ rz_dmac_device_resume_internal(channel);
+ }
+}
+
+static int rz_dmac_suspend(struct device *dev)
+{
+ struct rz_dmac *dmac = dev_get_drvdata(dev);
+ int ret = 0;
+
+ for (unsigned int i = 0; i < dmac->n_channels; i++) {
+ struct rz_dmac_chan *channel = &dmac->channels[i];
+
+ guard(spinlock_irqsave)(&channel->vc.lock);
+
+ if (!(channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)))
+ continue;
+
+ ret = rz_dmac_device_pause_internal(channel);
+ if (ret) {
+ dev_err(dev, "Failed to suspend channel %s\n",
+ dma_chan_name(&channel->vc.chan));
+ break;
+ }
+
+ channel->pm_state.nxla = rz_dmac_ch_readl(channel, NXLA, 1);
+ }
+
+ if (ret) {
+ rz_dmac_suspend_recover(dmac);
+ return ret;
+ }
+
+ pm_runtime_put_sync(dmac->dev);
+
+ ret = reset_control_assert(dmac->rstc);
+ if (ret) {
+ pm_runtime_resume_and_get(dmac->dev);
+ rz_dmac_suspend_recover(dmac);
+ }
+
+ return ret;
+}
+
+static int rz_dmac_resume(struct device *dev)
+{
+ struct rz_dmac *dmac = dev_get_drvdata(dev);
+ int errors = 0, ret;
+
+ ret = reset_control_deassert(dmac->rstc);
+ if (ret)
+ return ret;
+
+ ret = pm_runtime_resume_and_get(dmac->dev);
+ if (ret) {
+ reset_control_assert(dmac->rstc);
+ return ret;
+ }
+
+ rz_dmac_writel(dmac, DCTRL_DEFAULT, CHANNEL_0_7_COMMON_BASE + DCTRL);
+ rz_dmac_writel(dmac, DCTRL_DEFAULT, CHANNEL_8_15_COMMON_BASE + DCTRL);
+
+ for (unsigned int i = 0; i < dmac->n_channels; i++) {
+ struct rz_dmac_chan *channel = &dmac->channels[i];
+
+ guard(spinlock_irqsave)(&channel->vc.lock);
+
+ rz_dmac_disable_hw(&dmac->channels[i]);
+
+ if (!(channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)))
+ continue;
+
+ rz_dmac_set_dma_req_no(dmac, channel->index, channel->mid_rid);
+
+ rz_dmac_ch_writel(channel, channel->pm_state.nxla, NXLA, 1);
+ rz_dmac_ch_writel(channel, channel->chcfg, CHCFG, 1);
+ rz_dmac_ch_writel(channel, CHCTRL_SWRST, CHCTRL, 1);
+ rz_dmac_ch_writel(channel, channel->chctrl, CHCTRL, 1);
+
+ ret = rz_dmac_device_resume_internal(channel);
+ if (ret) {
+ errors = ret;
+ dev_err(dev, "Failed to resume channel %s\n",
+ dma_chan_name(&channel->vc.chan));
+ }
+ }
+
+ return errors ? : ret;
+}
+
+static const struct dev_pm_ops rz_dmac_pm_ops = {
+ .prepare = rz_dmac_suspend_prepare,
+ SYSTEM_SLEEP_PM_OPS(rz_dmac_suspend, rz_dmac_resume)
+};
+
static const struct rz_dmac_info rz_dmac_v2h_info = {
.icu_register_dma_req = rzv2h_icu_register_dma_req,
.default_dma_req_no = RZV2H_ICU_DMAC_REQ_NO_DEFAULT,
@@ -1380,6 +1557,7 @@ static struct platform_driver rz_dmac_driver = {
.driver = {
.name = "rz-dmac",
.of_match_table = of_rz_dmac_match,
+ .pm = pm_sleep_ptr(&rz_dmac_pm_ops),
},
.probe = rz_dmac_probe,
.remove = rz_dmac_remove,
--
2.43.0
* [PATCH v4 15/17] ASoC: renesas: rz-ssi: Add pause support
2026-04-11 11:42 [PATCH v4 00/17] Renesas: dmaengine and ASoC fixes Claudiu
` (13 preceding siblings ...)
2026-04-11 11:43 ` [PATCH v4 14/17] dmaengine: sh: rz-dmac: Add suspend to RAM support Claudiu
@ 2026-04-11 11:43 ` Claudiu
2026-04-11 11:43 ` [PATCH v4 16/17] ASoC: renesas: rz-ssi: Use generic PCM dmaengine APIs Claudiu
2026-04-11 11:43 ` [PATCH v4 17/17] dmaengine: sh: rz-dmac: Set the Link End (LE) bit on the last descriptor Claudiu
16 siblings, 0 replies; 37+ messages in thread
From: Claudiu @ 2026-04-11 11:43 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, long.luu.ur
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
Add pause support as a preparatory step to switch to PCM dmaengine APIs.
Acked-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v4:
- collected tags
Changes in v3:
- none, this patch is new
sound/soc/renesas/rz-ssi.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/sound/soc/renesas/rz-ssi.c b/sound/soc/renesas/rz-ssi.c
index 71e434cfe07b..d4e1dded3a9c 100644
--- a/sound/soc/renesas/rz-ssi.c
+++ b/sound/soc/renesas/rz-ssi.c
@@ -847,6 +847,7 @@ static int rz_ssi_dai_trigger(struct snd_pcm_substream *substream, int cmd,
switch (cmd) {
case SNDRV_PCM_TRIGGER_RESUME:
+ case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
ret = rz_ssi_trigger_resume(ssi, strm);
if (ret)
return ret;
@@ -888,6 +889,7 @@ static int rz_ssi_dai_trigger(struct snd_pcm_substream *substream, int cmd,
break;
case SNDRV_PCM_TRIGGER_SUSPEND:
+ case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
rz_ssi_stop(ssi, strm);
break;
@@ -1054,7 +1056,8 @@ static const struct snd_pcm_hardware rz_ssi_pcm_hardware = {
.info = SNDRV_PCM_INFO_INTERLEAVED |
SNDRV_PCM_INFO_MMAP |
SNDRV_PCM_INFO_MMAP_VALID |
- SNDRV_PCM_INFO_RESUME,
+ SNDRV_PCM_INFO_RESUME |
+ SNDRV_PCM_INFO_PAUSE,
.buffer_bytes_max = PREALLOC_BUFFER,
.period_bytes_min = 32,
.period_bytes_max = 8192,
--
2.43.0
* [PATCH v4 16/17] ASoC: renesas: rz-ssi: Use generic PCM dmaengine APIs
2026-04-11 11:42 [PATCH v4 00/17] Renesas: dmaengine and ASoC fixes Claudiu
` (14 preceding siblings ...)
2026-04-11 11:43 ` [PATCH v4 15/17] ASoC: renesas: rz-ssi: Add pause support Claudiu
@ 2026-04-11 11:43 ` Claudiu
2026-04-11 11:43 ` [PATCH v4 17/17] dmaengine: sh: rz-dmac: Set the Link End (LE) bit on the last descriptor Claudiu
16 siblings, 0 replies; 37+ messages in thread
From: Claudiu @ 2026-04-11 11:43 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, long.luu.ur
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
On Renesas RZ/G2L and RZ/G3S SoCs (where this was tested), captured audio
files occasionally contained random spikes when viewed with a tool such
as Audacity. These spikes were also audible as popping noises.
Using cyclic DMA resolves this issue. The driver was reworked to use the
existing support provided by the generic PCM dmaengine APIs. In addition
to eliminating the random spikes, the following issues were addressed:
- blank periods at the beginning of recorded files, which occurred
intermittently, are no longer present
- no overruns or underruns were observed when continuously recording
short audio files (e.g. 5 seconds long) in a loop
- concurrency issues in the SSI driver when enqueuing DMA requests were
eliminated; previously, DMA requests could be prepared and submitted
both from the DMA completion callback and the interrupt handler, which
led to crashes after several hours of testing
- the SSI driver logic is simplified
- the number of generated interrupts is reduced by a factor of
approximately 2.5
In the SSI platform driver probe function, the following changes were
made:
- the driver-specific DMA configuration was removed in favor of the
generic PCM dmaengine APIs. As a result, explicit cleanup goto labels
are no longer required and the driver remove callback was dropped,
since resource management is now handled via devres helpers
- special handling was added for IP variants operating in half-duplex
mode, where the DMA channel name in the device tree is "rt"; this DMA
channel name is taken into account and passed to the generic PCM
dmaengine configuration data
All code previously responsible for preparing and completing DMA
transfers was removed, as this functionality is now handled entirely by
the generic PCM dmaengine APIs.
Since DMA channels must be paused and resumed during recovery paths
(overruns and underruns reported by the hardware), the DMA channel
references are stored in rz_ssi_hw_params().
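Caching the per-direction channel can be sketched like this. This is a user-space sketch with assumed stand-in types (`struct dma_chan_stub`, `struct ssi_stub`, and the `STREAM_*` indices are illustrative, modeled on the ALSA stream-direction constants): the channel pointer is stored indexed by stream direction, and cleared (NULL) in PIO mode so recovery paths can check it before pausing.

```c
#include <stddef.h>

/* Stand-ins for the ALSA stream-direction indices used to size the array. */
enum { STREAM_PLAYBACK = 0, STREAM_CAPTURE = 1, STREAM_LAST = 1 };

struct dma_chan_stub { int id; };

struct ssi_stub {
	struct dma_chan_stub *dmas[STREAM_LAST + 1];
};

/*
 * Cache (or clear) the per-direction channel so the recovery path can
 * pause/resume it directly, without going back through the PCM layer.
 */
static void cache_chan(struct ssi_stub *ssi, int stream,
		       struct dma_chan_stub *ch)
{
	ssi->dmas[stream] = ch;	/* NULL when running in PIO mode */
}

/* Self-check: playback channel is cached, capture cleared for PIO. */
static int cache_chan_demo(void)
{
	struct dma_chan_stub tx = { .id = 1 };
	struct ssi_stub ssi = { 0 };

	cache_chan(&ssi, STREAM_PLAYBACK, &tx);
	cache_chan(&ssi, STREAM_CAPTURE, NULL);
	return ssi.dmas[STREAM_PLAYBACK]->id == 1 &&
	       ssi.dmas[STREAM_CAPTURE] == NULL;
}
```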
The logic in rz_ssi_is_dma_enabled() was updated to reflect that the
driver no longer manages DMA transfers directly.
To avoid software-reported underruns (e.g. when running aplay during
consecutive suspend/resume cycles, or when the CPU is nearly 100%
loaded), rz_ssi_pcm_hardware.buffer_bytes_max was increased to 192K.
At the same time, rz_ssi_pcm_hardware.period_bytes_max was set to 48K
to reduce interrupt overhead.
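The interrupt-overhead effect of the larger period size can be estimated with rough arithmetic. Assuming an example stream of 48 kHz, 2 channels, 2 bytes per sample (192000 bytes/s — an illustrative format, not one mandated by the driver), a period-elapsed interrupt fires once per period:

```c
/*
 * Truncating period-interrupt rate for a given byte rate and period
 * size; good enough for an order-of-magnitude sketch.
 */
static unsigned int irqs_per_second(unsigned int bytes_per_sec,
				    unsigned int period_bytes)
{
	return bytes_per_sec / period_bytes;
}
```

With the old 8 KiB period this gives about 23 interrupts/s; with 48 KiB periods it drops to about 3 interrupts/s.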
Finally, rz_ssi_stream_is_play() was removed, as it had only a single
remaining user after this rework, and its logic was inlined at the call
site.
Acked-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v4:
- collected tags
- in rz_ssi_interrupt() checked the dma channel is valid before
calling dmaengine_pause(); at the same time initialized the
rz_ssi->dmas[] with NULL in case the DMA is not available in
rz_ssi_dai_hw_params()
- set rz_ssi_dmaengine_pcm_conf.prealloc_buffer_size
- dynamically allocate the object of type snd_dmaengine_pcm_config passed
to devm_snd_dmaengine_pcm_register() to avoid issues when the driver
is instantiated for more than one HW instance
- I considered that keeping the Ack was still OK; Mark, please let me
know if you consider otherwise
Changes in v3:
- s/CONFIG_SND_SOC_GENERIC_DMAENGINE_PCM/SND_SOC_GENERIC_DMAENGINE_PCM
in Kconfig
- in rz_ssi_clk_setup(): drop the update of dma_dai->maxburst
- in rz_ssi_interrupt(): pause the DMA channels in case of HW over/underruns
- add different open APIs for rz_ssi_soc_component_pio and
rz_ssi_soc_component_dma
- set rz_ssi_pcm_hardware to rz_ssi_dmaengine_pcm_conf.pcm_hardware
and updated the buffer_bytes_max to avoid underruns detected by
applications just before suspending; along with it updated
period_bytes_max for lower interrupt overhead; updated the patch
description for this; with it updated the snd_pcm_set_managed_buffer_all()
arguments to use the rz_ssi_pcm_hardware
- added back rz_ssi_soc_component_pio.pcm_new instantiation as the
PIO mode was broken w/o it
- use specific rz_ssi_soc_component_dma.open implementation for DMA
- updated rz_ssi_dmaengine_pcm_conf.chan_names[].{tx, rx} either if
there is about full or half duplex instantiation and move the flags
variable local to the code block that uses it
- check devm_snd_dmaengine_pcm_register() for defer errors
Changes in v2:
- fixed typos in patch description
- select CONFIG_SND_SOC_GENERIC_DMAENGINE_PCM for rz-ssi driver
- in rz_ssi_dai_hw_params() check if DMA is enabled before calling
snd_dmaengine_pcm_get_chan() to avoid failures for PIO mode
- do not drop rz_ssi_pcm_pointer() and rz_ssi_pcm_new() as these
are necessary for PIO mode
- added 2 struct snd_soc_component_driver, one for PIO mode, one for
DMA and updated probe() to register the proper
snd_soc_component_driver based on the working mode
sound/soc/renesas/Kconfig | 1 +
sound/soc/renesas/rz-ssi.c | 383 ++++++++++++-------------------------
2 files changed, 125 insertions(+), 259 deletions(-)
diff --git a/sound/soc/renesas/Kconfig b/sound/soc/renesas/Kconfig
index 11c2027c88a7..6520217e7407 100644
--- a/sound/soc/renesas/Kconfig
+++ b/sound/soc/renesas/Kconfig
@@ -56,6 +56,7 @@ config SND_SOC_MSIOF
config SND_SOC_RZ
tristate "RZ/G2L series SSIF-2 support"
depends on ARCH_RZG2L || COMPILE_TEST
+ select SND_SOC_GENERIC_DMAENGINE_PCM
help
This option enables RZ/G2L SSIF-2 sound support.
diff --git a/sound/soc/renesas/rz-ssi.c b/sound/soc/renesas/rz-ssi.c
index d4e1dded3a9c..b1d016bcca86 100644
--- a/sound/soc/renesas/rz-ssi.c
+++ b/sound/soc/renesas/rz-ssi.c
@@ -13,6 +13,8 @@
#include <linux/module.h>
#include <linux/pm_runtime.h>
#include <linux/reset.h>
+#include <sound/dmaengine_pcm.h>
+#include <sound/pcm.h>
#include <sound/pcm_params.h>
#include <sound/soc.h>
@@ -87,8 +89,6 @@ struct rz_ssi_stream {
struct rz_ssi_priv *priv;
struct snd_pcm_substream *substream;
int fifo_sample_size; /* sample capacity of SSI FIFO */
- int dma_buffer_pos; /* The address for the next DMA descriptor */
- int completed_dma_buf_pos; /* The address of the last completed DMA descriptor. */
int period_counter; /* for keeping track of periods transferred */
int buffer_pos; /* current frame position in the buffer */
int running; /* 0=stopped, 1=running */
@@ -96,8 +96,6 @@ struct rz_ssi_stream {
int uerr_num;
int oerr_num;
- struct dma_chan *dma_ch;
-
int (*transfer)(struct rz_ssi_priv *ssi, struct rz_ssi_stream *strm);
};
@@ -108,7 +106,6 @@ struct rz_ssi_priv {
struct clk *sfr_clk;
struct clk *clk;
- phys_addr_t phys;
int irq_int;
int irq_tx;
int irq_rx;
@@ -148,9 +145,10 @@ struct rz_ssi_priv {
unsigned int sample_width;
unsigned int sample_bits;
} hw_params_cache;
-};
-static void rz_ssi_dma_complete(void *data);
+ struct snd_dmaengine_dai_dma_data dma_dais[SNDRV_PCM_STREAM_LAST + 1];
+ struct dma_chan *dmas[SNDRV_PCM_STREAM_LAST + 1];
+};
static void rz_ssi_reg_writel(struct rz_ssi_priv *priv, uint reg, u32 data)
{
@@ -172,11 +170,6 @@ static void rz_ssi_reg_mask_setl(struct rz_ssi_priv *priv, uint reg,
writel(val, (priv->base + reg));
}
-static inline bool rz_ssi_stream_is_play(struct snd_pcm_substream *substream)
-{
- return substream->stream == SNDRV_PCM_STREAM_PLAYBACK;
-}
-
static inline struct rz_ssi_stream *
rz_ssi_stream_get(struct rz_ssi_priv *ssi, struct snd_pcm_substream *substream)
{
@@ -185,7 +178,7 @@ rz_ssi_stream_get(struct rz_ssi_priv *ssi, struct snd_pcm_substream *substream)
static inline bool rz_ssi_is_dma_enabled(struct rz_ssi_priv *ssi)
{
- return (ssi->playback.dma_ch && (ssi->dma_rt || ssi->capture.dma_ch));
+ return !ssi->playback.transfer && !ssi->capture.transfer;
}
static void rz_ssi_set_substream(struct rz_ssi_stream *strm,
@@ -215,8 +208,6 @@ static void rz_ssi_stream_init(struct rz_ssi_stream *strm,
struct snd_pcm_substream *substream)
{
rz_ssi_set_substream(strm, substream);
- strm->dma_buffer_pos = 0;
- strm->completed_dma_buf_pos = 0;
strm->period_counter = 0;
strm->buffer_pos = 0;
@@ -242,12 +233,13 @@ static void rz_ssi_stream_quit(struct rz_ssi_priv *ssi,
dev_info(dev, "underrun = %d\n", strm->uerr_num);
}
-static int rz_ssi_clk_setup(struct rz_ssi_priv *ssi, unsigned int rate,
- unsigned int channels)
+static int rz_ssi_clk_setup(struct rz_ssi_priv *ssi, struct snd_pcm_substream *substream,
+ unsigned int rate, unsigned int channels)
{
static u8 ckdv[] = { 1, 2, 4, 8, 16, 32, 64, 128, 6, 12, 24, 48, 96 };
unsigned int channel_bits = 32; /* System Word Length */
unsigned long bclk_rate = rate * channels * channel_bits;
+ struct snd_dmaengine_dai_dma_data *dma_dai;
unsigned int div;
unsigned int i;
u32 ssicr = 0;
@@ -290,6 +282,8 @@ static int rz_ssi_clk_setup(struct rz_ssi_priv *ssi, unsigned int rate,
return -EINVAL;
}
+ dma_dai = &ssi->dma_dais[substream->stream];
+
/*
* DWL: Data Word Length = {16, 24, 32} bits
* SWL: System Word Length = 32 bits
@@ -298,12 +292,15 @@ static int rz_ssi_clk_setup(struct rz_ssi_priv *ssi, unsigned int rate,
switch (ssi->hw_params_cache.sample_width) {
case 16:
ssicr |= SSICR_DWL(1);
+ dma_dai->addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES;
break;
case 24:
ssicr |= SSICR_DWL(5) | SSICR_PDTA;
+ dma_dai->addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
break;
case 32:
ssicr |= SSICR_DWL(6);
+ dma_dai->addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
break;
default:
dev_err(ssi->dev, "Not support %u data width",
@@ -344,7 +341,7 @@ static void rz_ssi_set_idle(struct rz_ssi_priv *ssi)
static int rz_ssi_start(struct rz_ssi_priv *ssi, struct rz_ssi_stream *strm)
{
- bool is_play = rz_ssi_stream_is_play(strm->substream);
+ bool is_play = strm->substream->stream == SNDRV_PCM_STREAM_PLAYBACK;
bool is_full_duplex;
u32 ssicr, ssifcr;
@@ -423,14 +420,6 @@ static int rz_ssi_stop(struct rz_ssi_priv *ssi, struct rz_ssi_stream *strm)
/* Disable TX/RX */
rz_ssi_reg_mask_setl(ssi, SSICR, SSICR_TEN | SSICR_REN, 0);
- /* Cancel all remaining DMA transactions */
- if (rz_ssi_is_dma_enabled(ssi)) {
- if (ssi->playback.dma_ch)
- dmaengine_terminate_async(ssi->playback.dma_ch);
- if (ssi->capture.dma_ch)
- dmaengine_terminate_async(ssi->capture.dma_ch);
- }
-
rz_ssi_set_idle(ssi);
return 0;
@@ -458,10 +447,6 @@ static void rz_ssi_pointer_update(struct rz_ssi_stream *strm, int frames)
snd_pcm_period_elapsed(strm->substream);
strm->period_counter = current_period;
}
-
- strm->completed_dma_buf_pos += runtime->period_size;
- if (strm->completed_dma_buf_pos >= runtime->buffer_size)
- strm->completed_dma_buf_pos = 0;
}
static int rz_ssi_pio_recv(struct rz_ssi_priv *ssi, struct rz_ssi_stream *strm)
@@ -606,12 +591,6 @@ static irqreturn_t rz_ssi_interrupt(int irq, void *data)
if (irq == ssi->irq_int) { /* error or idle */
bool is_stopped = !!(ssisr & (SSISR_RUIRQ | SSISR_ROIRQ |
SSISR_TUIRQ | SSISR_TOIRQ));
- int i, count;
-
- if (rz_ssi_is_dma_enabled(ssi))
- count = 4;
- else
- count = 1;
if (ssi->capture.substream && is_stopped) {
if (ssisr & SSISR_RUIRQ)
@@ -631,18 +610,31 @@ static irqreturn_t rz_ssi_interrupt(int irq, void *data)
rz_ssi_stop(ssi, strm_playback);
}
+ if (!rz_ssi_is_stream_running(&ssi->playback) &&
+ !rz_ssi_is_stream_running(&ssi->capture) &&
+ rz_ssi_is_dma_enabled(ssi)) {
+ if (ssi->dmas[SNDRV_PCM_STREAM_PLAYBACK])
+ dmaengine_pause(ssi->dmas[SNDRV_PCM_STREAM_PLAYBACK]);
+ if (ssi->dmas[SNDRV_PCM_STREAM_CAPTURE])
+ dmaengine_pause(ssi->dmas[SNDRV_PCM_STREAM_CAPTURE]);
+ }
+
/* Clear all flags */
rz_ssi_reg_mask_setl(ssi, SSISR, SSISR_TOIRQ | SSISR_TUIRQ |
SSISR_ROIRQ | SSISR_RUIRQ, 0);
/* Add/remove more data */
if (ssi->capture.substream && is_stopped) {
- for (i = 0; i < count; i++)
+ if (rz_ssi_is_dma_enabled(ssi))
+ dmaengine_resume(ssi->dmas[SNDRV_PCM_STREAM_CAPTURE]);
+ else
strm_capture->transfer(ssi, strm_capture);
}
if (ssi->playback.substream && is_stopped) {
- for (i = 0; i < count; i++)
+ if (rz_ssi_is_dma_enabled(ssi))
+ dmaengine_resume(ssi->dmas[SNDRV_PCM_STREAM_PLAYBACK]);
+ else
strm_playback->transfer(ssi, strm_playback);
}
@@ -679,153 +671,11 @@ static irqreturn_t rz_ssi_interrupt(int irq, void *data)
return IRQ_HANDLED;
}
-static int rz_ssi_dma_slave_config(struct rz_ssi_priv *ssi,
- struct dma_chan *dma_ch, bool is_play)
-{
- struct dma_slave_config cfg;
-
- memset(&cfg, 0, sizeof(cfg));
-
- cfg.direction = is_play ? DMA_MEM_TO_DEV : DMA_DEV_TO_MEM;
- cfg.dst_addr = ssi->phys + SSIFTDR;
- cfg.src_addr = ssi->phys + SSIFRDR;
- if (ssi->hw_params_cache.sample_width == 16) {
- cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES;
- cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES;
- } else {
- cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
- cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
- }
-
- return dmaengine_slave_config(dma_ch, &cfg);
-}
-
-static int rz_ssi_dma_transfer(struct rz_ssi_priv *ssi,
- struct rz_ssi_stream *strm)
-{
- struct snd_pcm_substream *substream = strm->substream;
- struct dma_async_tx_descriptor *desc;
- struct snd_pcm_runtime *runtime;
- enum dma_transfer_direction dir;
- u32 dma_paddr, dma_size;
- int amount;
-
- if (!rz_ssi_stream_is_valid(ssi, strm))
- return -EINVAL;
-
- runtime = substream->runtime;
- if (runtime->state == SNDRV_PCM_STATE_DRAINING)
- /*
- * Stream is ending, so do not queue up any more DMA
- * transfers otherwise we play partial sound clips
- * because we can't shut off the DMA quick enough.
- */
- return 0;
-
- dir = rz_ssi_stream_is_play(substream) ? DMA_MEM_TO_DEV : DMA_DEV_TO_MEM;
-
- /* Always transfer 1 period */
- amount = runtime->period_size;
-
- /* DMA physical address and size */
- dma_paddr = runtime->dma_addr + frames_to_bytes(runtime,
- strm->dma_buffer_pos);
- dma_size = frames_to_bytes(runtime, amount);
- desc = dmaengine_prep_slave_single(strm->dma_ch, dma_paddr, dma_size,
- dir,
- DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
- if (!desc) {
- dev_err(ssi->dev, "dmaengine_prep_slave_single() fail\n");
- return -ENOMEM;
- }
-
- desc->callback = rz_ssi_dma_complete;
- desc->callback_param = strm;
-
- if (dmaengine_submit(desc) < 0) {
- dev_err(ssi->dev, "dmaengine_submit() fail\n");
- return -EIO;
- }
-
- /* Update DMA pointer */
- strm->dma_buffer_pos += amount;
- if (strm->dma_buffer_pos >= runtime->buffer_size)
- strm->dma_buffer_pos = 0;
-
- /* Start DMA */
- dma_async_issue_pending(strm->dma_ch);
-
- return 0;
-}
-
-static void rz_ssi_dma_complete(void *data)
-{
- struct rz_ssi_stream *strm = (struct rz_ssi_stream *)data;
-
- if (!strm->running || !strm->substream || !strm->substream->runtime)
- return;
-
- /* Note that next DMA transaction has probably already started */
- rz_ssi_pointer_update(strm, strm->substream->runtime->period_size);
-
- /* Queue up another DMA transaction */
- rz_ssi_dma_transfer(strm->priv, strm);
-}
-
-static void rz_ssi_release_dma_channels(struct rz_ssi_priv *ssi)
-{
- if (ssi->playback.dma_ch) {
- dma_release_channel(ssi->playback.dma_ch);
- ssi->playback.dma_ch = NULL;
- if (ssi->dma_rt)
- ssi->dma_rt = false;
- }
-
- if (ssi->capture.dma_ch) {
- dma_release_channel(ssi->capture.dma_ch);
- ssi->capture.dma_ch = NULL;
- }
-}
-
-static int rz_ssi_dma_request(struct rz_ssi_priv *ssi, struct device *dev)
-{
- ssi->playback.dma_ch = dma_request_chan(dev, "tx");
- if (IS_ERR(ssi->playback.dma_ch))
- ssi->playback.dma_ch = NULL;
-
- ssi->capture.dma_ch = dma_request_chan(dev, "rx");
- if (IS_ERR(ssi->capture.dma_ch))
- ssi->capture.dma_ch = NULL;
-
- if (!ssi->playback.dma_ch && !ssi->capture.dma_ch) {
- ssi->playback.dma_ch = dma_request_chan(dev, "rt");
- if (IS_ERR(ssi->playback.dma_ch)) {
- ssi->playback.dma_ch = NULL;
- goto no_dma;
- }
-
- ssi->dma_rt = true;
- }
-
- if (!rz_ssi_is_dma_enabled(ssi))
- goto no_dma;
-
- return 0;
-
-no_dma:
- rz_ssi_release_dma_channels(ssi);
-
- return -ENODEV;
-}
-
static int rz_ssi_trigger_resume(struct rz_ssi_priv *ssi, struct rz_ssi_stream *strm)
{
struct snd_pcm_substream *substream = strm->substream;
- struct snd_pcm_runtime *runtime = substream->runtime;
int ret;
- strm->dma_buffer_pos = strm->completed_dma_buf_pos + runtime->period_size;
-
if (rz_ssi_is_stream_running(&ssi->playback) ||
rz_ssi_is_stream_running(&ssi->capture))
return 0;
@@ -834,7 +684,7 @@ static int rz_ssi_trigger_resume(struct rz_ssi_priv *ssi, struct rz_ssi_stream *
if (ret)
return ret;
- return rz_ssi_clk_setup(ssi, ssi->hw_params_cache.rate,
+ return rz_ssi_clk_setup(ssi, substream, ssi->hw_params_cache.rate,
ssi->hw_params_cache.channels);
}
@@ -843,7 +693,7 @@ static int rz_ssi_dai_trigger(struct snd_pcm_substream *substream, int cmd,
{
struct rz_ssi_priv *ssi = snd_soc_dai_get_drvdata(dai);
struct rz_ssi_stream *strm = rz_ssi_stream_get(ssi, substream);
- int ret = 0, i, num_transfer = 1;
+ int ret = 0;
switch (cmd) {
case SNDRV_PCM_TRIGGER_RESUME:
@@ -858,28 +708,7 @@ static int rz_ssi_dai_trigger(struct snd_pcm_substream *substream, int cmd,
if (cmd == SNDRV_PCM_TRIGGER_START)
rz_ssi_stream_init(strm, substream);
- if (rz_ssi_is_dma_enabled(ssi)) {
- bool is_playback = rz_ssi_stream_is_play(substream);
-
- if (ssi->dma_rt)
- ret = rz_ssi_dma_slave_config(ssi, ssi->playback.dma_ch,
- is_playback);
- else
- ret = rz_ssi_dma_slave_config(ssi, strm->dma_ch,
- is_playback);
-
- /* Fallback to pio */
- if (ret < 0) {
- ssi->playback.transfer = rz_ssi_pio_send;
- ssi->capture.transfer = rz_ssi_pio_recv;
- rz_ssi_release_dma_channels(ssi);
- } else {
- /* For DMA, queue up multiple DMA descriptors */
- num_transfer = 4;
- }
- }
-
- for (i = 0; i < num_transfer; i++) {
+ if (!rz_ssi_is_dma_enabled(ssi)) {
ret = strm->transfer(ssi, strm);
if (ret)
return ret;
@@ -1026,6 +855,12 @@ static int rz_ssi_dai_hw_params(struct snd_pcm_substream *substream,
return -EINVAL;
}
+ /* Save the DMA channels for recovery. */
+ if (rz_ssi_is_dma_enabled(ssi))
+ ssi->dmas[substream->stream] = snd_dmaengine_pcm_get_chan(substream);
+ else
+ ssi->dmas[substream->stream] = NULL;
+
if (rz_ssi_is_stream_running(&ssi->playback) ||
rz_ssi_is_stream_running(&ssi->capture)) {
if (rz_ssi_is_valid_hw_params(ssi, rate, channels, sample_width, sample_bits))
@@ -1041,10 +876,21 @@ static int rz_ssi_dai_hw_params(struct snd_pcm_substream *substream,
if (ret)
return ret;
- return rz_ssi_clk_setup(ssi, rate, channels);
+ return rz_ssi_clk_setup(ssi, substream, rate, channels);
+}
+
+static int rz_ssi_dai_probe(struct snd_soc_dai *dai)
+{
+ struct rz_ssi_priv *ssi = snd_soc_dai_get_drvdata(dai);
+
+ snd_soc_dai_init_dma_data(dai, &ssi->dma_dais[SNDRV_PCM_STREAM_PLAYBACK],
+ &ssi->dma_dais[SNDRV_PCM_STREAM_CAPTURE]);
+
+ return 0;
}
static const struct snd_soc_dai_ops rz_ssi_dai_ops = {
+ .probe = rz_ssi_dai_probe,
.startup = rz_ssi_startup,
.shutdown = rz_ssi_shutdown,
.trigger = rz_ssi_dai_trigger,
@@ -1058,9 +904,9 @@ static const struct snd_pcm_hardware rz_ssi_pcm_hardware = {
SNDRV_PCM_INFO_MMAP_VALID |
SNDRV_PCM_INFO_RESUME |
SNDRV_PCM_INFO_PAUSE,
- .buffer_bytes_max = PREALLOC_BUFFER,
+ .buffer_bytes_max = 192 * 1024,
.period_bytes_min = 32,
- .period_bytes_max = 8192,
+ .period_bytes_max = 48 * 1024,
.channels_min = SSI_CHAN_MIN,
.channels_max = SSI_CHAN_MAX,
.periods_min = 1,
@@ -1068,8 +914,8 @@ static const struct snd_pcm_hardware rz_ssi_pcm_hardware = {
.fifo_size = 32 * 2,
};
-static int rz_ssi_pcm_open(struct snd_soc_component *component,
- struct snd_pcm_substream *substream)
+static int rz_ssi_pcm_open_pio(struct snd_soc_component *component,
+ struct snd_pcm_substream *substream)
{
snd_soc_set_runtime_hwparams(substream, &rz_ssi_pcm_hardware);
@@ -1077,6 +923,13 @@ static int rz_ssi_pcm_open(struct snd_soc_component *component,
SNDRV_PCM_HW_PARAM_PERIODS);
}
+static int rz_ssi_pcm_open_dma(struct snd_soc_component *component,
+ struct snd_pcm_substream *substream)
+{
+ return snd_pcm_hw_constraint_integer(substream->runtime,
+ SNDRV_PCM_HW_PARAM_PERIODS);
+}
+
static snd_pcm_uframes_t rz_ssi_pcm_pointer(struct snd_soc_component *component,
struct snd_pcm_substream *substream)
{
@@ -1093,7 +946,8 @@ static int rz_ssi_pcm_new(struct snd_soc_component *component,
{
snd_pcm_set_managed_buffer_all(rtd->pcm, SNDRV_DMA_TYPE_DEV,
rtd->card->snd_card->dev,
- PREALLOC_BUFFER, PREALLOC_BUFFER_MAX);
+ rz_ssi_pcm_hardware.buffer_bytes_max,
+ rz_ssi_pcm_hardware.buffer_bytes_max);
return 0;
}
@@ -1116,16 +970,30 @@ static struct snd_soc_dai_driver rz_ssi_soc_dai[] = {
},
};
-static const struct snd_soc_component_driver rz_ssi_soc_component = {
+static const struct snd_soc_component_driver rz_ssi_soc_component_pio = {
.name = "rz-ssi",
- .open = rz_ssi_pcm_open,
+ .open = rz_ssi_pcm_open_pio,
.pointer = rz_ssi_pcm_pointer,
.pcm_new = rz_ssi_pcm_new,
.legacy_dai_naming = 1,
};
+static const struct snd_soc_component_driver rz_ssi_soc_component_dma = {
+ .name = "rz-ssi",
+ .open = rz_ssi_pcm_open_dma,
+ .legacy_dai_naming = 1,
+};
+
+static const struct snd_dmaengine_pcm_config rz_ssi_dmaengine_pcm_conf = {
+ .pcm_hardware = &rz_ssi_pcm_hardware,
+ .prealloc_buffer_size = 192 * 1024,
+ .prepare_slave_config = snd_dmaengine_pcm_prepare_slave_config,
+};
+
static int rz_ssi_probe(struct platform_device *pdev)
{
+ const struct snd_soc_component_driver *component_driver;
+ struct device_node *np = pdev->dev.of_node;
struct device *dev = &pdev->dev;
struct rz_ssi_priv *ssi;
struct clk *audio_clk;
@@ -1141,7 +1009,6 @@ static int rz_ssi_probe(struct platform_device *pdev)
if (IS_ERR(ssi->base))
return PTR_ERR(ssi->base);
- ssi->phys = res->start;
ssi->clk = devm_clk_get(dev, "ssi");
if (IS_ERR(ssi->clk))
return PTR_ERR(ssi->clk);
@@ -1165,16 +1032,43 @@ static int rz_ssi_probe(struct platform_device *pdev)
ssi->audio_mck = ssi->audio_clk_1 ? ssi->audio_clk_1 : ssi->audio_clk_2;
- /* Detect DMA support */
- ret = rz_ssi_dma_request(ssi, dev);
- if (ret < 0) {
+ ssi->dma_dais[SNDRV_PCM_STREAM_PLAYBACK].addr = (dma_addr_t)res->start + SSIFTDR;
+ ssi->dma_dais[SNDRV_PCM_STREAM_CAPTURE].addr = (dma_addr_t)res->start + SSIFRDR;
+
+ if (of_property_present(np, "dma-names")) {
+ struct snd_dmaengine_pcm_config *config;
+ unsigned int flags = 0;
+
+ config = devm_kzalloc(dev, sizeof(*config), GFP_KERNEL);
+ if (!config)
+ return -ENOMEM;
+
+ config->pcm_hardware = rz_ssi_dmaengine_pcm_conf.pcm_hardware;
+ config->prealloc_buffer_size = rz_ssi_dmaengine_pcm_conf.prealloc_buffer_size;
+ config->prepare_slave_config = rz_ssi_dmaengine_pcm_conf.prepare_slave_config;
+
+ if (of_property_match_string(np, "dma-names", "rt") == 0) {
+ flags = SND_DMAENGINE_PCM_FLAG_HALF_DUPLEX;
+ config->chan_names[SNDRV_PCM_STREAM_PLAYBACK] = "rt";
+ } else {
+ config->chan_names[SNDRV_PCM_STREAM_PLAYBACK] = "tx";
+ config->chan_names[SNDRV_PCM_STREAM_CAPTURE] = "rx";
+ }
+ ret = devm_snd_dmaengine_pcm_register(&pdev->dev, config, flags);
+ } else {
+ ret = -ENODEV;
+ }
+
+ if (ret == -EPROBE_DEFER) {
+ return ret;
+ } else if (ret) {
dev_warn(dev, "DMA not available, using PIO\n");
ssi->playback.transfer = rz_ssi_pio_send;
ssi->capture.transfer = rz_ssi_pio_recv;
+ component_driver = &rz_ssi_soc_component_pio;
} else {
- dev_info(dev, "DMA enabled");
- ssi->playback.transfer = rz_ssi_dma_transfer;
- ssi->capture.transfer = rz_ssi_dma_transfer;
+ dev_info(dev, "DMA enabled\n");
+ component_driver = &rz_ssi_soc_component_dma;
}
ssi->playback.priv = ssi;
@@ -1185,17 +1079,13 @@ static int rz_ssi_probe(struct platform_device *pdev)
/* Error Interrupt */
ssi->irq_int = platform_get_irq_byname(pdev, "int_req");
- if (ssi->irq_int < 0) {
- ret = ssi->irq_int;
- goto err_release_dma_chs;
- }
+ if (ssi->irq_int < 0)
+ return ssi->irq_int;
ret = devm_request_irq(dev, ssi->irq_int, rz_ssi_interrupt,
0, dev_name(dev), ssi);
- if (ret < 0) {
- dev_err_probe(dev, ret, "irq request error (int_req)\n");
- goto err_release_dma_chs;
- }
+ if (ret < 0)
+ return dev_err_probe(dev, ret, "irq request error (int_req)\n");
if (!rz_ssi_is_dma_enabled(ssi)) {
/* Tx and Rx interrupts (pio only) */
@@ -1236,43 +1126,19 @@ static int rz_ssi_probe(struct platform_device *pdev)
}
ssi->rstc = devm_reset_control_get_exclusive(dev, NULL);
- if (IS_ERR(ssi->rstc)) {
- ret = PTR_ERR(ssi->rstc);
- goto err_release_dma_chs;
- }
+ if (IS_ERR(ssi->rstc))
+ return dev_err_probe(dev, PTR_ERR(ssi->rstc), "Failed to get reset\n");
/* Default 0 for power saving. Can be overridden via sysfs. */
pm_runtime_set_autosuspend_delay(dev, 0);
pm_runtime_use_autosuspend(dev);
ret = devm_pm_runtime_enable(dev);
- if (ret < 0) {
- dev_err(dev, "Failed to enable runtime PM!\n");
- goto err_release_dma_chs;
- }
-
- ret = devm_snd_soc_register_component(dev, &rz_ssi_soc_component,
- rz_ssi_soc_dai,
- ARRAY_SIZE(rz_ssi_soc_dai));
- if (ret < 0) {
- dev_err(dev, "failed to register snd component\n");
- goto err_release_dma_chs;
- }
-
- return 0;
-
-err_release_dma_chs:
- rz_ssi_release_dma_channels(ssi);
-
- return ret;
-}
-
-static void rz_ssi_remove(struct platform_device *pdev)
-{
- struct rz_ssi_priv *ssi = dev_get_drvdata(&pdev->dev);
-
- rz_ssi_release_dma_channels(ssi);
+ if (ret < 0)
+ return dev_err_probe(dev, ret, "Failed to enable runtime PM!\n");
- reset_control_assert(ssi->rstc);
+ return devm_snd_soc_register_component(dev, component_driver,
+ rz_ssi_soc_dai,
+ ARRAY_SIZE(rz_ssi_soc_dai));
}
static const struct of_device_id rz_ssi_of_match[] = {
@@ -1307,7 +1173,6 @@ static struct platform_driver rz_ssi_driver = {
.pm = pm_ptr(&rz_ssi_pm_ops),
},
.probe = rz_ssi_probe,
- .remove = rz_ssi_remove,
};
module_platform_driver(rz_ssi_driver);
--
2.43.0
^ permalink raw reply related [flat|nested] 37+ messages in thread
* [PATCH v4 17/17] dmaengine: sh: rz-dmac: Set the Link End (LE) bit on the last descriptor
2026-04-11 11:42 [PATCH v4 00/17] Renesas: dmaengine and ASoC fixes Claudiu
` (15 preceding siblings ...)
2026-04-11 11:43 ` [PATCH v4 16/17] ASoC: renesas: rz-ssi: Use generic PCM dmaengine APIs Claudiu
@ 2026-04-11 11:43 ` Claudiu
16 siblings, 0 replies; 37+ messages in thread
From: Claudiu @ 2026-04-11 11:43 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, long.luu.ur
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
On an RZ/G2L-based system, it has been observed that when the DMA channels
for all enabled IPs are active (TX and RX for one serial IP, TX and RX for
one audio IP, and TX and RX for one SPI IP), shortly after all of them are
started, the system can become irrecoverably blocked. In one debug session
the system did not block, and the DMA HW registers were inspected. It was
found that the DER (Descriptor Error) bit in the CHSTAT register for one of
the SPI DMA channels was set.
According to the RZ/G2L HW Manual, Rev. 1.30, chapter 14.4.7 Channel
Status Register n/nS (CHSTAT_n/nS), description of the DER bit, the DER
bit is set when the LV (Link Valid) value loaded with a descriptor in link
mode is 0. This means that the DMA engine has loaded an invalid
descriptor (as defined in Table 14.14, Header Area, of the same manual).
The same chapter states that when a descriptor error occurs, the transfer
is stopped, but no DMA error interrupt is generated.
Set the LE bit on the last descriptor of a transfer. This informs the DMA
engine that this is the final descriptor for the transfer.
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v4:
- none
Changes in v3:
- none
Changes in v2:
- none
drivers/dma/sh/rz-dmac.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 00e18d8213ca..f5d2e206f4bb 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -200,6 +200,7 @@ struct rz_dmac {
/* LINK MODE DESCRIPTOR */
#define HEADER_LV BIT(0)
+#define HEADER_LE BIT(1)
#define HEADER_WBD BIT(2)
#define RZ_DMAC_MAX_CHAN_DESCRIPTORS 16
@@ -382,7 +383,7 @@ static void rz_dmac_prepare_desc_for_memcpy(struct rz_dmac_chan *channel)
lmdesc->chcfg = chcfg;
lmdesc->chitvl = 0;
lmdesc->chext = 0;
- lmdesc->header = HEADER_LV;
+ lmdesc->header = HEADER_LV | HEADER_LE;
rz_dmac_set_dma_req_no(dmac, channel->index, dmac->info->default_dma_req_no);
@@ -425,7 +426,7 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
lmdesc->chext = 0;
if (i == (sg_len - 1)) {
lmdesc->chcfg = (channel->chcfg & ~CHCFG_DEM);
- lmdesc->header = HEADER_LV;
+ lmdesc->header = HEADER_LV | HEADER_LE;
} else {
lmdesc->chcfg = channel->chcfg;
lmdesc->header = HEADER_LV;
--
2.43.0
* RE: [PATCH v4 01/17] dmaengine: sh: rz-dmac: Move interrupt request after everything is set up
2026-04-11 11:42 ` [PATCH v4 01/17] dmaengine: sh: rz-dmac: Move interrupt request after everything is set up Claudiu
@ 2026-04-11 12:17 ` Biju Das
2026-04-11 12:34 ` Claudiu Beznea
1 sibling, 1 reply; 37+ messages in thread
From: Biju Das @ 2026-04-11 12:17 UTC (permalink / raw)
To: Claudiu.Beznea, vkoul@kernel.org, Frank.Li@kernel.org,
lgirdwood@gmail.com, broonie@kernel.org, perex@perex.cz,
tiwai@suse.com, Prabhakar Mahadev Lad, p.zabel@pengutronix.de,
geert+renesas@glider.be, Fabrizio Castro, Long Luu
Cc: Claudiu.Beznea, dmaengine@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-sound@vger.kernel.org,
linux-renesas-soc@vger.kernel.org, Claudiu Beznea,
stable@vger.kernel.org
> -----Original Message-----
> From: Claudiu <claudiu.beznea@tuxon.dev>
> Sent: 11 April 2026 12:43
> Subject: [PATCH v4 01/17] dmaengine: sh: rz-dmac: Move interrupt request after everything is set up
>
> From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
>
> Once the interrupt is requested, the interrupt handler may run immediately.
> Since the IRQ handler can access channel->ch_base, which is initialized only after requesting the IRQ,
> this may lead to invalid memory access.
> Likewise, the IRQ thread may access uninitialized data (the ld_free, ld_queue, and ld_active lists),
> which may also lead to issues.
>
> Request the interrupts only after everything is set up. To keep the error path simpler, use
> dmam_alloc_coherent() instead of dma_alloc_coherent().
>
> Fixes: 5000d37042a6 ("dmaengine: sh: Add DMAC driver for RZ/G2L SoC")
> Cc: stable@vger.kernel.org
> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> ---
>
> Changes in v4:
> - none, this patch is new
>
> drivers/dma/sh/rz-dmac.c | 88 +++++++++++++++-------------------------
> 1 file changed, 33 insertions(+), 55 deletions(-)
>
> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c index 625ff29024de..9f206a33dcc6
> 100644
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c
> @@ -981,25 +981,6 @@ static int rz_dmac_chan_probe(struct rz_dmac *dmac,
> channel->index = index;
> channel->mid_rid = -EINVAL;
>
> - /* Request the channel interrupt. */
> - scnprintf(pdev_irqname, sizeof(pdev_irqname), "ch%u", index);
> - irq = platform_get_irq_byname(pdev, pdev_irqname);
> - if (irq < 0)
> - return irq;
> -
> - irqname = devm_kasprintf(dmac->dev, GFP_KERNEL, "%s:%u",
> - dev_name(dmac->dev), index);
> - if (!irqname)
> - return -ENOMEM;
> -
> - ret = devm_request_threaded_irq(dmac->dev, irq, rz_dmac_irq_handler,
> - rz_dmac_irq_handler_thread, 0,
> - irqname, channel);
> - if (ret) {
> - dev_err(dmac->dev, "failed to request IRQ %u (%d)\n", irq, ret);
> - return ret;
> - }
> -
> /* Set io base address for each channel */
> if (index < 8) {
> channel->ch_base = dmac->base + CHANNEL_0_7_OFFSET + @@ -1012,9 +993,9 @@ static int
> rz_dmac_chan_probe(struct rz_dmac *dmac,
> }
>
> /* Allocate descriptors */
> - lmdesc = dma_alloc_coherent(&pdev->dev,
> - sizeof(struct rz_lmdesc) * DMAC_NR_LMDESC,
> - &channel->lmdesc.base_dma, GFP_KERNEL);
> + lmdesc = dmam_alloc_coherent(&pdev->dev,
> + sizeof(struct rz_lmdesc) * DMAC_NR_LMDESC,
> + &channel->lmdesc.base_dma, GFP_KERNEL);
> if (!lmdesc) {
> dev_err(&pdev->dev, "Can't allocate memory (lmdesc)\n");
> return -ENOMEM;
> @@ -1030,7 +1011,24 @@ static int rz_dmac_chan_probe(struct rz_dmac *dmac,
> INIT_LIST_HEAD(&channel->ld_free);
> INIT_LIST_HEAD(&channel->ld_active);
>
> - return 0;
> + /* Request the channel interrupt. */
> + scnprintf(pdev_irqname, sizeof(pdev_irqname), "ch%u", index);
> + irq = platform_get_irq_byname(pdev, pdev_irqname);
> + if (irq < 0)
> + return irq;
> +
> + irqname = devm_kasprintf(dmac->dev, GFP_KERNEL, "%s:%u",
> + dev_name(dmac->dev), index);
> + if (!irqname)
> + return -ENOMEM;
> +
> + ret = devm_request_threaded_irq(dmac->dev, irq, rz_dmac_irq_handler,
> + rz_dmac_irq_handler_thread, 0,
> + irqname, channel);
> + if (ret)
> + dev_err(dmac->dev, "failed to request IRQ %u (%d)\n", irq, ret);
As per [1], it is redundant.
[1]
https://elixir.bootlin.com/linux/v7.0-rc7/source/kernel/irq/devres.c#L108
> +
> + return ret;
> }
>
> static void rz_dmac_put_device(void *_dev) @@ -1099,7 +1097,6 @@ static int rz_dmac_probe(struct
> platform_device *pdev)
> const char *irqname = "error";
> struct dma_device *engine;
> struct rz_dmac *dmac;
> - int channel_num;
> int ret;
> int irq;
> u8 i;
> @@ -1132,18 +1129,6 @@ static int rz_dmac_probe(struct platform_device *pdev)
> return PTR_ERR(dmac->ext_base);
> }
>
> - /* Register interrupt handler for error */
> - irq = platform_get_irq_byname_optional(pdev, irqname);
> - if (irq > 0) {
> - ret = devm_request_irq(&pdev->dev, irq, rz_dmac_irq_handler, 0,
> - irqname, NULL);
> - if (ret) {
> - dev_err(&pdev->dev, "failed to request IRQ %u (%d)\n",
> - irq, ret);
> - return ret;
> - }
> - }
> -
> /* Initialize the channels. */
> INIT_LIST_HEAD(&dmac->engine.channels);
>
> @@ -1169,6 +1154,18 @@ static int rz_dmac_probe(struct platform_device *pdev)
> goto err;
> }
>
> + /* Register interrupt handler for error */
> + irq = platform_get_irq_byname_optional(pdev, irqname);
> + if (irq > 0) {
> + ret = devm_request_irq(&pdev->dev, irq, rz_dmac_irq_handler, 0,
> + irqname, NULL);
> + if (ret) {
> + dev_err(&pdev->dev, "failed to request IRQ %u (%d)\n",
> + irq, ret);
Same case here
Cheers,
Biju
> + goto err;
> + }
> + }
> +
> /* Register the DMAC as a DMA provider for DT. */
> ret = of_dma_controller_register(pdev->dev.of_node, rz_dmac_of_xlate,
> NULL);
> @@ -1210,16 +1207,6 @@ static int rz_dmac_probe(struct platform_device *pdev)
> dma_register_err:
> of_dma_controller_free(pdev->dev.of_node);
> err:
> - channel_num = i ? i - 1 : 0;
> - for (i = 0; i < channel_num; i++) {
> - struct rz_dmac_chan *channel = &dmac->channels[i];
> -
> - dma_free_coherent(&pdev->dev,
> - sizeof(struct rz_lmdesc) * DMAC_NR_LMDESC,
> - channel->lmdesc.base,
> - channel->lmdesc.base_dma);
> - }
> -
> reset_control_assert(dmac->rstc);
> err_pm_runtime_put:
> pm_runtime_put(&pdev->dev);
> @@ -1232,18 +1219,9 @@ static int rz_dmac_probe(struct platform_device *pdev) static void
> rz_dmac_remove(struct platform_device *pdev) {
> struct rz_dmac *dmac = platform_get_drvdata(pdev);
> - unsigned int i;
>
> dma_async_device_unregister(&dmac->engine);
> of_dma_controller_free(pdev->dev.of_node);
> - for (i = 0; i < dmac->n_channels; i++) {
> - struct rz_dmac_chan *channel = &dmac->channels[i];
> -
> - dma_free_coherent(&pdev->dev,
> - sizeof(struct rz_lmdesc) * DMAC_NR_LMDESC,
> - channel->lmdesc.base,
> - channel->lmdesc.base_dma);
> - }
> reset_control_assert(dmac->rstc);
> pm_runtime_put(&pdev->dev);
> pm_runtime_disable(&pdev->dev);
> --
> 2.43.0
* RE: [PATCH v4 05/17] dmaengine: sh: rz-dmac: Do not disable the channel on error
2026-04-11 11:42 ` [PATCH v4 05/17] dmaengine: sh: rz-dmac: Do not disable the channel on error Claudiu
@ 2026-04-11 12:30 ` Biju Das
2026-04-14 8:28 ` Claudiu Beznea
0 siblings, 1 reply; 37+ messages in thread
From: Biju Das @ 2026-04-11 12:30 UTC (permalink / raw)
To: Claudiu.Beznea, vkoul@kernel.org, Frank.Li@kernel.org,
lgirdwood@gmail.com, broonie@kernel.org, perex@perex.cz,
tiwai@suse.com, Prabhakar Mahadev Lad, p.zabel@pengutronix.de,
geert+renesas@glider.be, Fabrizio Castro, Long Luu
Cc: Claudiu.Beznea, dmaengine@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-sound@vger.kernel.org,
linux-renesas-soc@vger.kernel.org, Claudiu Beznea
> -----Original Message-----
> From: Claudiu <claudiu.beznea@tuxon.dev>
> Sent: 11 April 2026 12:43
> Subject: [PATCH v4 05/17] dmaengine: sh: rz-dmac: Do not disable the channel on error
>
> From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
>
> Disabling the channel on error is pointless, as if other transfers are queued, the IRQ thread will be
> woken up and will execute them anyway by calling rz_dmac_xfer_desc().
>
> rz_dmac_xfer_desc() re-enables the transfer. Before doing so, it sets CHCTRL.SWRST, which clears
> CHSTAT.DER and CHSTAT.END anyway.
>
> Skip disabling the DMA channel and just log the error instead.
>
> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> ---
>
> Changes in v4:
> - none
>
> Changes in v3:
> - none, this patch is new
>
> drivers/dma/sh/rz-dmac.c | 4 ----
> 1 file changed, 4 deletions(-)
>
> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c index 40ddf534c094..943c005f52bd
> 100644
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c
> @@ -871,10 +871,6 @@ static void rz_dmac_irq_handle_channel(struct rz_dmac_chan *channel)
> if (chstat & CHSTAT_ER) {
> dev_err(dmac->dev, "DMAC err CHSTAT_%d = %08X\n",
> channel->index, chstat);
> -
> - scoped_guard(spinlock_irqsave, &channel->vc.lock)
> - rz_dmac_disable_hw(channel);
In a previous patch, rz_dmac_disable_hw() is used to initialize each channel's registers:
+ /* Initialize register for each channel */
+ rz_dmac_disable_hw(channel);
As per hardware manual,
Once an error occurs, the data of the whole transfer cannot be guaranteed.
Be sure to start the transaction again from the
beginning by following the procedure below.
1. Set 1 in the SWRST bit of the CHCTRL_n/nS register.
2. Set each register again.
Does this patch follow the procedure described in the hardware manual?
Cheers,
Biju
* RE: [PATCH v4 07/17] dmaengine: sh: rz-dmac: Save the start LM descriptor
2026-04-11 11:42 ` [PATCH v4 07/17] dmaengine: sh: rz-dmac: Save the start LM descriptor Claudiu
@ 2026-04-11 12:34 ` Biju Das
2026-04-11 12:38 ` Claudiu Beznea
1 sibling, 1 reply; 37+ messages in thread
From: Biju Das @ 2026-04-11 12:34 UTC (permalink / raw)
To: Claudiu.Beznea, vkoul@kernel.org, Frank.Li@kernel.org,
lgirdwood@gmail.com, broonie@kernel.org, perex@perex.cz,
tiwai@suse.com, Prabhakar Mahadev Lad, p.zabel@pengutronix.de,
geert+renesas@glider.be, Fabrizio Castro, Long Luu
Cc: Claudiu.Beznea, dmaengine@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-sound@vger.kernel.org,
linux-renesas-soc@vger.kernel.org, Claudiu Beznea
Hi Claudiu,
> -----Original Message-----
> From: Claudiu <claudiu.beznea@tuxon.dev>
> Sent: 11 April 2026 12:43
> Subject: [PATCH v4 07/17] dmaengine: sh: rz-dmac: Save the start LM descriptor
>
> From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
>
> Save the start LM descriptor to avoid looping through the entire channel's LM descriptor list when
> computing the residue. This avoids unnecessary iterations.
>
> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> ---
>
> Changes in v4:
> - none
>
> Changes in v3:
> - none, this patch is new
>
> drivers/dma/sh/rz-dmac.c | 11 ++++++++---
> 1 file changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c index 6bea7c8c7053..0f871c0a28bd
> 100644
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c
> @@ -58,6 +58,7 @@ struct rz_dmac_desc {
> /* For slave sg */
> struct scatterlist *sg;
> unsigned int sgcount;
> + struct rz_lmdesc *start_lmdesc;
> };
>
> #define to_rz_dmac_desc(d) container_of(d, struct rz_dmac_desc, vd)
> @@ -343,6 +344,8 @@ static void rz_dmac_prepare_desc_for_memcpy(struct rz_dmac_chan *channel)
> struct rz_dmac_desc *d = channel->desc;
> u32 chcfg = CHCFG_MEM_COPY;
>
> + d->start_lmdesc = lmdesc;
> +
> /* prepare descriptor */
> lmdesc->sa = d->src;
> lmdesc->da = d->dest;
> @@ -377,6 +380,7 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
> }
>
> lmdesc = channel->lmdesc.tail;
> + d->start_lmdesc = lmdesc;
>
> for (i = 0, sg = sgl; i < sg_len; i++, sg = sg_next(sg)) {
> if (d->direction == DMA_DEV_TO_MEM) { @@ -693,9 +697,10 @@ rz_dmac_get_next_lmdesc(struct
> rz_lmdesc *base, struct rz_lmdesc *lmdesc)
> return next;
> }
>
> -static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel, u32 crla)
> +static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel,
> + struct rz_dmac_desc *desc, u32 crla)
u32 is normally used for hardware-related register reads and writes.
Here it is just a computation that returns a number of bytes, so unsigned int
would be more appropriate than u32.
Cheers,
Biju
> {
> - struct rz_lmdesc *lmdesc = channel->lmdesc.head;
> + struct rz_lmdesc *lmdesc = desc->start_lmdesc;
> struct dma_chan *chan = &channel->vc.chan;
> struct rz_dmac *dmac = to_rz_dmac(chan->device);
> u32 residue = 0, i = 0;
> @@ -794,7 +799,7 @@ static u32 rz_dmac_chan_get_residue(struct rz_dmac_chan *channel,
> * Calculate number of bytes transferred in processing virtual descriptor.
> * One virtual descriptor can have many lmdesc.
> */
> - return crtb + rz_dmac_calculate_residue_bytes_in_vd(channel, crla);
> + return crtb + rz_dmac_calculate_residue_bytes_in_vd(channel,
> +current_desc, crla);
> }
>
> static enum dma_status rz_dmac_tx_status(struct dma_chan *chan,
> --
> 2.43.0
* Re: [PATCH v4 01/17] dmaengine: sh: rz-dmac: Move interrupt request after everything is set up
2026-04-11 12:17 ` Biju Das
@ 2026-04-11 12:34 ` Claudiu Beznea
0 siblings, 0 replies; 37+ messages in thread
From: Claudiu Beznea @ 2026-04-11 12:34 UTC (permalink / raw)
To: Biju Das, vkoul@kernel.org, Frank.Li@kernel.org,
lgirdwood@gmail.com, broonie@kernel.org, perex@perex.cz,
tiwai@suse.com, Prabhakar Mahadev Lad, p.zabel@pengutronix.de,
geert+renesas@glider.be, Fabrizio Castro, Long Luu
Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-sound@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
Claudiu Beznea, stable@vger.kernel.org
On 4/11/26 15:17, Biju Das wrote:
>
>> -----Original Message-----
>> From: Claudiu <claudiu.beznea@tuxon.dev>
>> Sent: 11 April 2026 12:43
>> Subject: [PATCH v4 01/17] dmaengine: sh: rz-dmac: Move interrupt request after everything is set up
>>
>> From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
>>
>> Once the interrupt is requested, the interrupt handler may run immediately.
>> Since the IRQ handler can access channel->ch_base, which is initialized only after requesting the IRQ,
>> this may lead to invalid memory access.
>> Likewise, the IRQ thread may access uninitialized data (the ld_free, ld_queue, and ld_active lists),
>> which may also lead to issues.
>>
>> Request the interrupts only after everything is set up. To keep the error path simpler, use
>> dmam_alloc_coherent() instead of dma_alloc_coherent().
>>
>> Fixes: 5000d37042a6 ("dmaengine: sh: Add DMAC driver for RZ/G2L SoC")
>> Cc: stable@vger.kernel.org
>> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
>> ---
>>
>> Changes in v4:
>> - none, this patch is new
>>
>> drivers/dma/sh/rz-dmac.c | 88 +++++++++++++++-------------------------
>> 1 file changed, 33 insertions(+), 55 deletions(-)
>>
>> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c index 625ff29024de..9f206a33dcc6
>> 100644
>> --- a/drivers/dma/sh/rz-dmac.c
>> +++ b/drivers/dma/sh/rz-dmac.c
>> @@ -981,25 +981,6 @@ static int rz_dmac_chan_probe(struct rz_dmac *dmac,
>> channel->index = index;
>> channel->mid_rid = -EINVAL;
>>
>> - /* Request the channel interrupt. */
>> - scnprintf(pdev_irqname, sizeof(pdev_irqname), "ch%u", index);
>> - irq = platform_get_irq_byname(pdev, pdev_irqname);
>> - if (irq < 0)
>> - return irq;
>> -
>> - irqname = devm_kasprintf(dmac->dev, GFP_KERNEL, "%s:%u",
>> - dev_name(dmac->dev), index);
>> - if (!irqname)
>> - return -ENOMEM;
>> -
>> - ret = devm_request_threaded_irq(dmac->dev, irq, rz_dmac_irq_handler,
>> - rz_dmac_irq_handler_thread, 0,
>> - irqname, channel);
>> - if (ret) {
>> - dev_err(dmac->dev, "failed to request IRQ %u (%d)\n", irq, ret);
>> - return ret;
>> - }
>> -
>> /* Set io base address for each channel */
>> if (index < 8) {
>> channel->ch_base = dmac->base + CHANNEL_0_7_OFFSET + @@ -1012,9 +993,9 @@ static int
>> rz_dmac_chan_probe(struct rz_dmac *dmac,
>> }
>>
>> /* Allocate descriptors */
>> - lmdesc = dma_alloc_coherent(&pdev->dev,
>> - sizeof(struct rz_lmdesc) * DMAC_NR_LMDESC,
>> - &channel->lmdesc.base_dma, GFP_KERNEL);
>> + lmdesc = dmam_alloc_coherent(&pdev->dev,
>> + sizeof(struct rz_lmdesc) * DMAC_NR_LMDESC,
>> + &channel->lmdesc.base_dma, GFP_KERNEL);
>> if (!lmdesc) {
>> dev_err(&pdev->dev, "Can't allocate memory (lmdesc)\n");
>> return -ENOMEM;
>> @@ -1030,7 +1011,24 @@ static int rz_dmac_chan_probe(struct rz_dmac *dmac,
>> INIT_LIST_HEAD(&channel->ld_free);
>> INIT_LIST_HEAD(&channel->ld_active);
>>
>> - return 0;
>> + /* Request the channel interrupt. */
>> + scnprintf(pdev_irqname, sizeof(pdev_irqname), "ch%u", index);
>> + irq = platform_get_irq_byname(pdev, pdev_irqname);
>> + if (irq < 0)
>> + return irq;
>> +
>> + irqname = devm_kasprintf(dmac->dev, GFP_KERNEL, "%s:%u",
>> + dev_name(dmac->dev), index);
>> + if (!irqname)
>> + return -ENOMEM;
>> +
>> + ret = devm_request_threaded_irq(dmac->dev, irq, rz_dmac_irq_handler,
>> + rz_dmac_irq_handler_thread, 0,
>> + irqname, channel);
>> + if (ret)
>> + dev_err(dmac->dev, "failed to request IRQ %u (%d)\n", irq, ret);
> As per [1], it is redundant.
>
> [1]
> https://elixir.bootlin.com/linux/v7.0-rc7/source/kernel/irq/devres.c#L108
This is a fix patch that just moves code around, intended to be backported to
older kernels (e.g. v6.1, v6.12). However, devm_request_result() was introduced in:
commit 55b48e23f5c4
Author: Pan Chuang <panchuang@vivo.com>
Date: Tue Aug 5 17:29:22 2025 +0800
genirq/devres: Add error handling in devm_request_*_irq()
devm_request_threaded_irq() and devm_request_any_context_irq() currently
don't print any error message when interrupt registration fails.
This forces each driver to implement redundant error logging - over 2,000
lines of error messages exist across drivers. Additionally, when
upper-layer functions propagate these errors without logging, critical
debugging information is lost.
Add devm_request_result() helper to unify error reporting via dev_err_probe(),
Use it in devm_request_threaded_irq() and devm_request_any_context_irq()
printing device name, IRQ number, handler functions, and error code on failure
automatically.
Co-developed-by: Yangtao Li <frank.li@vivo.com>
Signed-off-by: Yangtao Li <frank.li@vivo.com>
Signed-off-by: Pan Chuang <panchuang@vivo.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/all/20250805092922.135500-2-panchuang@vivo.com
And it is not present in v6.1, v6.12 kernels.
To have a clean backport (at least to the above-mentioned kernel versions),
it would be better to do the alignment to devm_request_result() in a later
cleanup patch.
Thank you,
Claudiu
* Re: [PATCH v4 07/17] dmaengine: sh: rz-dmac: Save the start LM descriptor
2026-04-11 12:34 ` Biju Das
@ 2026-04-11 12:38 ` Claudiu Beznea
2026-04-11 12:50 ` Biju Das
0 siblings, 1 reply; 37+ messages in thread
From: Claudiu Beznea @ 2026-04-11 12:38 UTC (permalink / raw)
To: Biju Das, vkoul@kernel.org, Frank.Li@kernel.org,
lgirdwood@gmail.com, broonie@kernel.org, perex@perex.cz,
tiwai@suse.com, Prabhakar Mahadev Lad, p.zabel@pengutronix.de,
geert+renesas@glider.be, Fabrizio Castro, Long Luu
Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-sound@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
Claudiu Beznea
On 4/11/26 15:34, Biju Das wrote:
> Hi Claudiu,
>
>> -----Original Message-----
>> From: Claudiu <claudiu.beznea@tuxon.dev>
>> Sent: 11 April 2026 12:43
>> Subject: [PATCH v4 07/17] dmaengine: sh: rz-dmac: Save the start LM descriptor
>>
>> From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
>>
>> Save the start LM descriptor to avoid looping through the entire channel's LM descriptor list when
>> computing the residue. This avoids unnecessary iterations.
>>
>> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
>> ---
>>
>> Changes in v4:
>> - none
>>
>> Changes in v3:
>> - none, this patch is new
>>
>> drivers/dma/sh/rz-dmac.c | 11 ++++++++---
>> 1 file changed, 8 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c index 6bea7c8c7053..0f871c0a28bd
>> 100644
>> --- a/drivers/dma/sh/rz-dmac.c
>> +++ b/drivers/dma/sh/rz-dmac.c
>> @@ -58,6 +58,7 @@ struct rz_dmac_desc {
>> /* For slave sg */
>> struct scatterlist *sg;
>> unsigned int sgcount;
>> + struct rz_lmdesc *start_lmdesc;
>> };
>>
>> #define to_rz_dmac_desc(d) container_of(d, struct rz_dmac_desc, vd)
>> @@ -343,6 +344,8 @@ static void rz_dmac_prepare_desc_for_memcpy(struct rz_dmac_chan *channel)
>> struct rz_dmac_desc *d = channel->desc;
>> u32 chcfg = CHCFG_MEM_COPY;
>>
>> + d->start_lmdesc = lmdesc;
>> +
>> /* prepare descriptor */
>> lmdesc->sa = d->src;
>> lmdesc->da = d->dest;
>> @@ -377,6 +380,7 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
>> }
>>
>> lmdesc = channel->lmdesc.tail;
>> + d->start_lmdesc = lmdesc;
>>
>> for (i = 0, sg = sgl; i < sg_len; i++, sg = sg_next(sg)) {
>> if (d->direction == DMA_DEV_TO_MEM) { @@ -693,9 +697,10 @@ rz_dmac_get_next_lmdesc(struct
>> rz_lmdesc *base, struct rz_lmdesc *lmdesc)
>> return next;
>> }
>>
>> -static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel, u32 crla)
>> +static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel,
>> + struct rz_dmac_desc *desc, u32 crla)
>
> U32 normally used with register read/writes hardware related.
>
> Here it is just computation which returns number of bytes. Unsigned int will be
> appropriate instead of u32.
Please check the type of residue as defined by dma_set_residue().
Thank you,
Claudiu
* RE: [PATCH v4 07/17] dmaengine: sh: rz-dmac: Save the start LM descriptor
2026-04-11 12:38 ` Claudiu Beznea
@ 2026-04-11 12:50 ` Biju Das
0 siblings, 0 replies; 37+ messages in thread
From: Biju Das @ 2026-04-11 12:50 UTC (permalink / raw)
To: Claudiu.Beznea, vkoul@kernel.org, Frank.Li@kernel.org,
lgirdwood@gmail.com, broonie@kernel.org, perex@perex.cz,
tiwai@suse.com, Prabhakar Mahadev Lad, p.zabel@pengutronix.de,
geert+renesas@glider.be, Fabrizio Castro, Long Luu
Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-sound@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
Claudiu Beznea
Hi Claudiu,
> -----Original Message-----
> From: Claudiu Beznea <claudiu.beznea@tuxon.dev>
> Sent: 11 April 2026 13:39
> Subject: Re: [PATCH v4 07/17] dmaengine: sh: rz-dmac: Save the start LM descriptor
>
>
>
> On 4/11/26 15:34, Biju Das wrote:
> > Hi Claudiu,
> >
> >> -----Original Message-----
> >> From: Claudiu <claudiu.beznea@tuxon.dev>
> >> Sent: 11 April 2026 12:43
> >> Subject: [PATCH v4 07/17] dmaengine: sh: rz-dmac: Save the start LM
> >> descriptor
> >>
> >> From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> >>
> >> Save the start LM descriptor to avoid looping through the entire
> >> channel's LM descriptor list when computing the residue. This avoids unnecessary iterations.
> >>
> >> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> >> ---
> >>
> >> Changes in v4:
> >> - none
> >>
> >> Changes in v3:
> >> - none, this patch is new
> >>
> >> drivers/dma/sh/rz-dmac.c | 11 ++++++++---
> >> 1 file changed, 8 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> >> index 6bea7c8c7053..0f871c0a28bd 100644
> >> --- a/drivers/dma/sh/rz-dmac.c
> >> +++ b/drivers/dma/sh/rz-dmac.c
> >> @@ -58,6 +58,7 @@ struct rz_dmac_desc {
> >> /* For slave sg */
> >> struct scatterlist *sg;
> >> unsigned int sgcount;
> >> + struct rz_lmdesc *start_lmdesc;
> >> };
> >>
> >> #define to_rz_dmac_desc(d) container_of(d, struct rz_dmac_desc, vd)
> >> @@ -343,6 +344,8 @@ static void rz_dmac_prepare_desc_for_memcpy(struct rz_dmac_chan *channel)
> >> struct rz_dmac_desc *d = channel->desc;
> >> u32 chcfg = CHCFG_MEM_COPY;
> >>
> >> + d->start_lmdesc = lmdesc;
> >> +
> >> /* prepare descriptor */
> >> lmdesc->sa = d->src;
> >> lmdesc->da = d->dest;
> >> @@ -377,6 +380,7 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
> >> }
> >>
> >> lmdesc = channel->lmdesc.tail;
> >> + d->start_lmdesc = lmdesc;
> >>
> >> for (i = 0, sg = sgl; i < sg_len; i++, sg = sg_next(sg)) {
> >> if (d->direction == DMA_DEV_TO_MEM) {
> >> @@ -693,9 +697,10 @@ rz_dmac_get_next_lmdesc(struct rz_lmdesc *base, struct rz_lmdesc *lmdesc)
> >> return next;
> >> }
> >>
> >> -static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel, u32 crla)
> >> +static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel,
> >> + struct rz_dmac_desc *desc, u32 crla)
> >
> > u32 is normally used for hardware-related register reads/writes.
> >
> > Here it is just a computation that returns a number of bytes. unsigned
> > int would be more appropriate than u32.
>
> Please check the type of residue as defined by dma_set_residue().
My comment is based on [1]
[1] https://lore.kernel.org/all/87o6kilwd5.ffs@tglx/
Cheers,
Biju
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v4 05/17] dmaengine: sh: rz-dmac: Do not disable the channel on error
2026-04-11 12:30 ` Biju Das
@ 2026-04-14 8:28 ` Claudiu Beznea
0 siblings, 0 replies; 37+ messages in thread
From: Claudiu Beznea @ 2026-04-14 8:28 UTC (permalink / raw)
To: Biju Das, vkoul@kernel.org, Frank.Li@kernel.org,
lgirdwood@gmail.com, broonie@kernel.org, perex@perex.cz,
tiwai@suse.com, Prabhakar Mahadev Lad, p.zabel@pengutronix.de,
geert+renesas@glider.be, Fabrizio Castro, Long Luu
Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-sound@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
Claudiu Beznea
On 4/11/26 15:30, Biju Das wrote:
>
>
>> -----Original Message-----
>> From: Claudiu <claudiu.beznea@tuxon.dev>
>> Sent: 11 April 2026 12:43
> -soc@vger.kernel.org; Claudiu Beznea
>> <claudiu.beznea.uj@bp.renesas.com>
>> Subject: [PATCH v4 05/17] dmaengine: sh: rz-dmac: Do not disable the channel on error
>>
>> From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
>>
>> Disabling the channel on error is pointless, as if other transfers are queued, the IRQ thread will be
>> woken up and will execute them anyway by calling rz_dmac_xfer_desc().
>>
>> rz_dmac_xfer_desc() re-enables the transfer. Before doing so, it sets CHCTRL.SWRST, which clears
>> CHSTAT.DER and CHSTAT.END anyway.
>>
>> Skip disabling the DMA channel and just log the error instead.
>>
>> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
>> ---
>>
>> Changes in v4:
>> - none
>>
>> Changes in v3:
>> - none, this patch is new
>>
>> drivers/dma/sh/rz-dmac.c | 4 ----
>> 1 file changed, 4 deletions(-)
>>
>> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
>> index 40ddf534c094..943c005f52bd 100644
>> --- a/drivers/dma/sh/rz-dmac.c
>> +++ b/drivers/dma/sh/rz-dmac.c
>> @@ -871,10 +871,6 @@ static void rz_dmac_irq_handle_channel(struct rz_dmac_chan *channel)
>> if (chstat & CHSTAT_ER) {
>> dev_err(dmac->dev, "DMAC err CHSTAT_%d = %08X\n",
>> channel->index, chstat);
>> -
>> - scoped_guard(spinlock_irqsave, &channel
> ->vc.lock)
>> - rz_dmac_disable_hw(channel);
>
> On previous patch, rz_dmac_disable_hw() for initializing each register
>
> + /* Initialize register for each channel */
> + rz_dmac_disable_hw(channel);
This initializes a single register by clearing various bits.
>
>
> As per hardware manual,
>
> Once an error occurs, the data of the whole transfer cannot be guaranteed.
> Be sure to start the transaction again from the
> beginning by following the procedure below.
> 1. Set 1 in the SWRST bit of the CHCTRL_n/nS register.
> 2. Set each register again.
I wasn't aware of this sequence. Thank you for pointing it out. However, calling
rz_dmac_disable_hw() the way it was previously done may be wrong from my point of view.
According to the sequence you pointed out, I think the code here should have only
set SWRST, if anything, and let rz_dmac_xfer_desc() "set each register
again". According to "Figure 14.26 Setting Example 4" of the RZ/G3S HW manual, rev.
1.20, the registers that need to be set when starting a DMAC channel in link mode are:
- DCTRL = 0x1
- NXLA = 0x1000
- CHCFG = 0x80000000
- CHCTRL = 0x8 // swreset
- CHCTRL = 0x5 // enable
So, I think these are the registers that need to be reconfigured (handled
through rz_dmac_xfer_desc()).
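For illustration, the register sequence above could be sketched as a small mock (register names and values are taken from the list just quoted; the enum indices and the accessor are invented for this sketch and do not correspond to the driver's real MMIO layout):

```c
#include <assert.h>
#include <stdint.h>

/* Mock register file; indices are invented for this sketch. */
enum { DCTRL, NXLA, CHCFG, CHCTRL, NREGS };
static uint32_t regs[NREGS];

static void ch_writel(int reg, uint32_t val)
{
	regs[reg] = val;
}

/* Re-program a channel for link mode after an error, following the quoted
 * "Setting Example 4" values: set every register again, software-reset,
 * then enable. */
static void restart_link_mode(void)
{
	ch_writel(DCTRL, 0x1);
	ch_writel(NXLA, 0x1000);	/* address of the first link descriptor */
	ch_writel(CHCFG, 0x80000000u);	/* link (descriptor) mode */
	ch_writel(CHCTRL, 0x8);		/* SWRST: software reset */
	ch_writel(CHCTRL, 0x5);		/* enable the channel */
}
```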
Anyway, I'll drop this patch from the next version, as it is not the subject of
cyclic DMA.
Thank you,
Claudiu
^ permalink raw reply [flat|nested] 37+ messages in thread
* RE: [PATCH v4 14/17] dmaengine: sh: rz-dmac: Add suspend to RAM support
2026-04-11 11:43 ` [PATCH v4 14/17] dmaengine: sh: rz-dmac: Add suspend to RAM support Claudiu
@ 2026-04-20 7:42 ` Biju Das
2026-04-20 14:15 ` Claudiu Beznea
2026-04-20 12:52 ` sashiko.dev review (Re: [PATCH v4 14/17] dmaengine: sh: rz-dmac: Add suspend to RAM support) Claudiu Beznea
1 sibling, 1 reply; 37+ messages in thread
From: Biju Das @ 2026-04-20 7:42 UTC (permalink / raw)
To: Claudiu.Beznea, vkoul@kernel.org, Frank.Li@kernel.org,
lgirdwood@gmail.com, broonie@kernel.org, perex@perex.cz,
tiwai@suse.com, Prabhakar Mahadev Lad, p.zabel@pengutronix.de,
geert+renesas@glider.be, Fabrizio Castro, Long Luu
Cc: Claudiu.Beznea, dmaengine@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-sound@vger.kernel.org,
linux-renesas-soc@vger.kernel.org, Claudiu Beznea
Hi Claudiu,
> -----Original Message-----
> From: Claudiu <claudiu.beznea@tuxon.dev>
> Sent: 11 April 2026 12:43
> Subject: [PATCH v4 14/17] dmaengine: sh: rz-dmac: Add suspend to RAM support
>
> From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
>
> The Renesas RZ/G3S SoC supports a power saving mode in which power to most of the SoC components is
> turned off, including the DMA IP. Add suspend to RAM support to save and restore the DMA IP registers.
>
> Cyclic DMA channels require special handling. Since they can be paused and resumed during system
> suspend/resume, the driver restores additional registers for these channels during the system resume
> phase. If a channel was not explicitly paused during suspend, the driver ensures that it is paused and
> resumed as part of the system suspend/resume flow. This can be the case for a serial device used
> with no_console_suspend.
>
> For non-cyclic channels, the dev_pm_ops::prepare callback waits for all the ongoing transfers to
> complete before allowing suspend-to-RAM to proceed.
>
> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> ---
>
> Changes in v4:
> - in rz_dmac_device_synchronize() kept the read_poll_timeout() as
> this doesn't fail anymore with the proper status return from
> ->device_tx_status() API in case the channel is paused; with it
> the patch description was updated
> - keep the cleanup path in rz_dmac_suspend() simpler to avoid
> confusion when using guard()
> - used SYSTEM_SLEEP_PM_OPS() as there is no need for having the
> suspend/resume callbacks being called in NOIRQ phase
>
> Changes in v3:
> - dropped RZ_DMAC_CHAN_STATUS_SYS_SUSPENDED
> - dropped read_poll_timeout() from rz_dmac_device_synchronize() as
> with audio drivers this times out all the time on suspend because
> the audio DMA is already paused when the rz_dmac_device_synchronize()
> is called; updated the commit description to describe this change
> - call rz_dmac_device_pause_internal() only if RZ_DMAC_CHAN_STATUS_PAUSED
> bit is not set or the device is enabled in HW
> - updated rz_dmac_device_resume_set() to have it simpler and cover
> the cases when it is called with the channel enabled or paused;
> updated the comment describing the covered use cases
> - call rz_dmac_device_resume_internal() only if
> RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL bit is set
> - in rz_dmac_chan_is_enabled() return -EAGAIN only if the channel is
> enabled in HW
> - in rz_dmac_suspend_recover() drop the update of
> RZ_DMAC_CHAN_STATUS_SYS_SUSPENDED as this is not available anymore
> - in rz_dmac_suspend() call rz_dmac_device_pause_internal() unconditionally
> as the logic is now handled inside the called function; also, do not
> ignore anymore the failure of internal suspend and abort the suspend
> instead
> - report channel internal resume failures in rz_dmac_resume()
> - use rz_dmac_disable_hw() instead of open coding it in rz_dmac_resume()
> - call rz_dmac_device_resume_internal() unconditionally as the skip
> logic is now handled in the function itself
> - use NOIRQ_SYSTEM_SLEEP_PM_OPS()
> - didn't collect Tommaso's Tb tag as the series was changed a lot since
> v2
>
> Changes in v2:
> - fixed typos in patch description
> - in rz_dmac_suspend_prepare(): return -EAGAIN based on the value returned
> by vchan_issue_pending()
> - in rz_dmac_suspend_recover(): clear RZ_DMAC_CHAN_STATUS_SYS_SUSPENDED for
> non cyclic channels
> - in rz_dmac_resume(): call rz_dmac_set_dma_req_no() only for cyclic channels
>
> drivers/dma/sh/rz-dmac.c | 188 +++++++++++++++++++++++++++++++++++++--
> 1 file changed, 183 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> index 9a10430109e5..00e18d8213ca 100644
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c
> @@ -69,10 +69,12 @@ struct rz_dmac_desc {
> * enum rz_dmac_chan_status: RZ DMAC channel status
> * @RZ_DMAC_CHAN_STATUS_PAUSED: Channel is paused though DMA engine callbacks
> * @RZ_DMAC_CHAN_STATUS_CYCLIC: Channel is cyclic
> + * @RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL: Channel is paused through driver internal logic
> */
> enum rz_dmac_chan_status {
> RZ_DMAC_CHAN_STATUS_PAUSED,
> RZ_DMAC_CHAN_STATUS_CYCLIC,
> + RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL,
> };
>
> struct rz_dmac_chan {
> @@ -92,6 +94,10 @@ struct rz_dmac_chan {
> u32 chctrl;
> int mid_rid;
>
> + struct {
> + u32 nxla;
> + } pm_state;
> +
> struct list_head ld_free;
>
> struct {
> @@ -962,20 +968,57 @@ static int rz_dmac_device_pause(struct dma_chan *chan)
> return rz_dmac_device_pause_set(channel, BIT(RZ_DMAC_CHAN_STATUS_PAUSED)); }
>
> +static int rz_dmac_device_pause_internal(struct rz_dmac_chan *channel)
> +{
> + lockdep_assert_held(&channel->vc.lock);
> +
> + /* Skip channels explicitly paused by consumers or disabled. */
> + if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_PAUSED) ||
> + !rz_dmac_chan_is_enabled(channel))
> + return 0;
> +
> + return rz_dmac_device_pause_set(channel, BIT(RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL));
> +}
> +
> static int rz_dmac_device_resume_set(struct rz_dmac_chan *channel,
> unsigned long clear_bitmask)
> {
> - int ret = 0;
> u32 val;
> + int ret;
>
> lockdep_assert_held(&channel->vc.lock);
>
> - /* Do not check CHSTAT_SUS but rely on HW capabilities. */
> + /*
> + * We can be:
> + *
> + * 1/ after the channel was paused by a consumer and now it
> + * needs to be resumed
> + * 2/ after the channel was paused internally (as a result of
> + * a system suspend with power loss or not)
> + * 3/ after the channel was paused by a consumer, the system
> + * went through a system suspend (with power loss or not)
> + * and the consumer wants to resume the channel
> + *
> + * To cover all the above cases we set both CLRSUS and SETEN.
> + *
> + * In case 1/ setting SETEN while the channel is still enabled
> + * is harmless for the controller.
> + *
> + * In case 2/ the channel is disabled when calling this function
> + * and setting CLRSUS is harmless for the controller as the
> + * channel is disabled anyway.
> + *
> + * In case 3/ the channel is disabled/enabled if the system
> + * went through a suspend with power loss/or not and setting
> + * CLRSUS/SETEN is harmless for the controller as the channel
> + * is enabled/disabled anyway.
> + */
> +
> + rz_dmac_ch_writel(channel, CHCTRL_CLRSUS | CHCTRL_SETEN, CHCTRL, 1);
>
> - rz_dmac_ch_writel(channel, CHCTRL_CLRSUS, CHCTRL, 1);
> ret = read_poll_timeout_atomic(rz_dmac_ch_readl, val,
> - !(val & CHSTAT_SUS), 1, 1024, false,
> - channel, CHSTAT, 1);
> + ((val & (CHSTAT_SUS | CHSTAT_EN)) == CHSTAT_EN),
> + 1, 1024, false, channel, CHSTAT, 1);
>
> channel->status &= ~clear_bitmask;
>
> @@ -994,6 +1037,16 @@ static int rz_dmac_device_resume(struct dma_chan *chan)
> return rz_dmac_device_resume_set(channel, BIT(RZ_DMAC_CHAN_STATUS_PAUSED)); }
>
> +static int rz_dmac_device_resume_internal(struct rz_dmac_chan *channel)
> +{
> + lockdep_assert_held(&channel->vc.lock);
> +
> + if (!(channel->status & BIT(RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL)))
> + return 0;
> +
> + return rz_dmac_device_resume_set(channel, BIT(RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL));
> +}
> +
> /*
> * -----------------------------------------------------------------------------
> * IRQ handling
> @@ -1354,6 +1407,130 @@ static void rz_dmac_remove(struct platform_device *pdev)
> pm_runtime_disable(&pdev->dev);
> }
>
> +static int rz_dmac_suspend_prepare(struct device *dev)
> +{
> + struct rz_dmac *dmac = dev_get_drvdata(dev);
> +
> + for (unsigned int i = 0; i < dmac->n_channels; i++) {
> + struct rz_dmac_chan *channel = &dmac->channels[i];
> +
> + guard(spinlock_irqsave)(&channel->vc.lock);
> +
> + /* Wait for transfer completion, except in cyclic case. */
> + if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC))
> + continue;
> +
> + if (rz_dmac_chan_is_enabled(channel))
> + return -EAGAIN;
> + }
> +
> + return 0;
> +}
> +
> +static void rz_dmac_suspend_recover(struct rz_dmac *dmac)
> +{
> + for (unsigned int i = 0; i < dmac->n_channels; i++) {
> + struct rz_dmac_chan *channel = &dmac->channels[i];
> +
> + guard(spinlock_irqsave)(&channel->vc.lock);
> +
> + if (!(channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)))
> + continue;
> +
> + rz_dmac_device_resume_internal(channel);
> + }
> +}
> +
> +static int rz_dmac_suspend(struct device *dev)
> +{
> + struct rz_dmac *dmac = dev_get_drvdata(dev);
> + int ret = 0;
> +
> + for (unsigned int i = 0; i < dmac->n_channels; i++) {
> + struct rz_dmac_chan *channel = &dmac->channels[i];
> +
> + guard(spinlock_irqsave)(&channel->vc.lock);
> +
> + if (!(channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)))
> + continue;
> +
> + ret = rz_dmac_device_pause_internal(channel);
> + if (ret) {
> + dev_err(dev, "Failed to suspend channel %s\n",
> + dma_chan_name(&channel->vc.chan));
> + break;
> + }
> +
> + channel->pm_state.nxla = rz_dmac_ch_readl(channel, NXLA, 1);
> + }
> +
> + if (ret) {
> + rz_dmac_suspend_recover(dmac);
> + return ret;
> + }
> +
> + pm_runtime_put_sync(dmac->dev);
> +
> + ret = reset_control_assert(dmac->rstc);
> + if (ret) {
> + pm_runtime_resume_and_get(dmac->dev);
> + rz_dmac_suspend_recover(dmac);
> + }
> +
> + return ret;
> +}
> +
> +static int rz_dmac_resume(struct device *dev)
> +{
> + struct rz_dmac *dmac = dev_get_drvdata(dev);
> + int errors = 0, ret;
> +
> + ret = reset_control_deassert(dmac->rstc);
> + if (ret)
> + return ret;
> +
> + ret = pm_runtime_resume_and_get(dmac->dev);
If this fails for any reason, the next suspend will still be called and will decrement the counter, potentially underflowing it.
Consider switching to pm_runtime_get_sync(), which is a better fit here.
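The counter-balance issue can be illustrated with a small userspace mock (heavily simplified; the real runtime PM helpers do far more than bump a counter):

```c
#include <assert.h>

/* Mock of the runtime PM usage counter behavior relevant here. */
static int usage_count;

/* pm_runtime_resume_and_get(): on resume failure it drops the reference it
 * took, leaving the counter unchanged. */
static int mock_resume_and_get(int resume_fails)
{
	usage_count++;
	if (resume_fails) {
		usage_count--;	/* reference dropped on error */
		return -1;
	}
	return 0;
}

/* pm_runtime_get_sync(): increments the counter even when resume fails,
 * so a later pm_runtime_put_sync() stays balanced. */
static int mock_get_sync(int resume_fails)
{
	usage_count++;
	return resume_fails ? -1 : 0;
}

static void mock_put_sync(void)
{
	usage_count--;
}
```

With resume_and_get, a failed resume followed by the next suspend's put leaves the counter at -1; with get_sync it stays balanced at 0.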
Cheers,
Biju
^ permalink raw reply [flat|nested] 37+ messages in thread
* sashiko.dev review (Re: [PATCH v4 01/17] dmaengine: sh: rz-dmac: Move interrupt request after everything is set up)
2026-04-11 11:42 ` [PATCH v4 01/17] dmaengine: sh: rz-dmac: Move interrupt request after everything is set up Claudiu
2026-04-11 12:17 ` Biju Das
@ 2026-04-20 12:33 ` Claudiu Beznea
1 sibling, 0 replies; 37+ messages in thread
From: Claudiu Beznea @ 2026-04-20 12:33 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, long.luu.ur
Cc: dmaengine, linux-kernel, linux-sound, linux-renesas-soc,
Claudiu Beznea, stable
> commit b3dfdd5fbd2463db407b1396ee1ae79801cb96bf
> Author: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> dmaengine: sh: rz-dmac: Move interrupt request after everything is set up
> This patch moves the channel interrupt request to the end of the channel
> probe function to prevent the IRQ handler from accessing uninitialized
> data. It also replaces dma_alloc_coherent() with dmam_alloc_coherent()
> to simplify the error path.
> > diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> > index 625ff29024de0..9f206a33dcc6b 100644
> > --- a/drivers/dma/sh/rz-dmac.c
> > +++ b/drivers/dma/sh/rz-dmac.c
> [ ... ]
> > @@ -1210,16 +1207,6 @@ static int rz_dmac_probe(struct platform_device *pdev)
> > dma_register_err:
> > of_dma_controller_free(pdev->dev.of_node);
> > err:
> > - channel_num = i ? i - 1 : 0;
> > - for (i = 0; i < channel_num; i++) {
> > - struct rz_dmac_chan *channel = &dmac->channels[i];
> > -
> > - dma_free_coherent(&pdev->dev,
> > - sizeof(struct rz_lmdesc) * DMAC_NR_LMDESC,
> > - channel->lmdesc.base,
> > - channel->lmdesc.base_dma);
> > - }
> > -
> > reset_control_assert(dmac->rstc);
> > err_pm_runtime_put:
> > pm_runtime_put(&pdev->dev);
> > @@ -1232,18 +1219,9 @@ static int rz_dmac_probe(struct platform_device *pdev)
> > static void rz_dmac_remove(struct platform_device *pdev)
> > {
> > struct rz_dmac *dmac = platform_get_drvdata(pdev);
> > - unsigned int i;
> >
> > dma_async_device_unregister(&dmac->engine);
> > of_dma_controller_free(pdev->dev.of_node);
> > - for (i = 0; i < dmac->n_channels; i++) {
> > - struct rz_dmac_chan *channel = &dmac->channels[i];
> > -
> > - dma_free_coherent(&pdev->dev,
> > - sizeof(struct rz_lmdesc) * DMAC_NR_LMDESC,
> > - channel->lmdesc.base,
> > - channel->lmdesc.base_dma);
> > - }
> > reset_control_assert(dmac->rstc);
> > pm_runtime_put(&pdev->dev);
> > pm_runtime_disable(&pdev->dev);
> > }
> While this patch fixes the initialization races, does it leave a similar
> vulnerability exposed during teardown?
> Since devm_request_threaded_irq() is used to allocate the interrupts, they
> will remain active until after rz_dmac_remove() and the rz_dmac_probe() error
> paths complete.
> If an interrupt fires during or just after rz_dmac_remove(), could the handler
> attempt to access hardware registers while the device is in reset or powered
> down by pm_runtime_put()?
> Would it be safer to explicitly free or disable the IRQs before asserting the
> hardware reset and disabling runtime PM, or perhaps manage the reset and PM
> states via devm actions to guarantee correct teardown ordering?
On both the failure and the remove paths, the device has had its runtime PM
reference put (clocks disabled) and is held in reset. The controller cannot
generate interrupts from this state.
However, to keep this series simple, I'll address this in a cleanup patch after
the current series is merged.
^ permalink raw reply [flat|nested] 37+ messages in thread
* sashiko.dev review (Re: [PATCH v4 04/17] dmaengine: sh: rz-dmac: Use rz_dmac_disable_hw())
2026-04-11 11:42 ` [PATCH v4 04/17] dmaengine: sh: rz-dmac: Use rz_dmac_disable_hw() Claudiu
@ 2026-04-20 12:34 ` Claudiu Beznea
0 siblings, 0 replies; 37+ messages in thread
From: Claudiu Beznea @ 2026-04-20 12:34 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, long.luu.ur
Cc: dmaengine, linux-kernel, linux-sound, linux-renesas-soc,
Claudiu Beznea
> commit f7b3ac8bceb85f09d10e2c6b3d49ae768415c77c
> Author: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> dmaengine: sh: rz-dmac: Use rz_dmac_disable_hw()
> This patch unifies the hardware disable code by replacing an open-coded
> register write with rz_dmac_disable_hw(), moving it after vchan_init()
> so the channel device is properly initialized.
> > diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> > index 1717b407ab9e9..40ddf534c0948 100644
> > --- a/drivers/dma/sh/rz-dmac.c
> > +++ b/drivers/dma/sh/rz-dmac.c
> [ ... ]
> > @@ -1000,15 +1000,15 @@ static int rz_dmac_chan_probe(struct rz_dmac *dmac,
> > }
> > rz_lmdesc_setup(channel, lmdesc);
> >
> > - /* Initialize register for each channel */
> > - rz_dmac_ch_writel(channel, CHCTRL_DEFAULT, CHCTRL, 1);
> > -
> > channel->vc.desc_free = rz_dmac_virt_desc_free;
> > vchan_init(&channel->vc, &dmac->engine);
> This is not a bug introduced by this patch, but seeing the vchan
> initialization here made me wonder about the driver teardown path.
> When vchan_init() is called, it sets up a background tasklet for the
> channel. During driver removal in rz_dmac_remove(), the DMA device is
> unregistered, but it does not appear that tasklet_kill() is ever called
> for this tasklet.
> Because the channel structures are allocated with devm_kzalloc, their
> memory is automatically freed immediately after rz_dmac_remove() returns.
> If a DMA transfer is active when the driver is forcefully unbound, could
> the IRQ thread schedule the tasklet via vchan_cookie_complete() right
> before the IRQ is freed by devres?
> Since free_irq() waits for the IRQ thread to finish but does not flush
> tasklets, could this scheduled tasklet execute after the device memory
> has been freed, resulting in a use-after-free?
> Should the driver explicitly kill the tasklet before returning from
> rz_dmac_remove()?
> > INIT_LIST_HEAD(&channel->ld_queue);
> > INIT_LIST_HEAD(&channel->ld_free);
> > INIT_LIST_HEAD(&channel->ld_active);
tasklet_setup() only initializes the members of a struct tasklet_struct object. The tasklet
itself is scheduled at runtime when DMA clients request transfers. On
remove, the DMA clients would have already called
rz_dmac_terminate_all() and/or rz_dmac_device_synchronize(), which should kill
any previously scheduled tasklet.
^ permalink raw reply [flat|nested] 37+ messages in thread
* sashiko.dev review (Re: [PATCH v4 07/17] dmaengine: sh: rz-dmac: Save the start LM descriptor)
2026-04-11 11:42 ` [PATCH v4 07/17] dmaengine: sh: rz-dmac: Save the start LM descriptor Claudiu
2026-04-11 12:34 ` Biju Das
@ 2026-04-20 12:37 ` Claudiu Beznea
1 sibling, 0 replies; 37+ messages in thread
From: Claudiu Beznea @ 2026-04-20 12:37 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, long.luu.ur
Cc: dmaengine, linux-kernel, linux-sound, linux-renesas-soc,
Claudiu Beznea
> commit 3d17698f275248947e3d2df42293a89e89cafaa6
> Author: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> dmaengine: sh: rz-dmac: Save the start LM descriptor
> Save the start LM descriptor to avoid looping through the entire
> channel's LM descriptor list when computing the residue. This avoids
> unnecessary iterations.
> > diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> > index 6bea7c8c70537..0f871c0a28bd9 100644
> > --- a/drivers/dma/sh/rz-dmac.c
> > +++ b/drivers/dma/sh/rz-dmac.c
> [ ... ]
> > @@ -377,6 +380,7 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct
rz_dmac_chan *channel)
> > }
> >
> > lmdesc = channel->lmdesc.tail;
> > + d->start_lmdesc = lmdesc;
> >
> > for (i = 0, sg = sgl; i < sg_len; i++, sg = sg_next(sg)) {
> > if (d->direction == DMA_DEV_TO_MEM) {
> This isn't a bug introduced by this patch, but is there a potential buffer
> overflow in the link descriptor ring due to a lack of bounds checking?
That can happen, but it is not in the scope of this series, which is already long
enough. I will address it in a separate series after the current one is
merged.
> The driver allocates a fixed-size ring buffer of 64 link descriptors
> (DMAC_NR_LMDESC). However, rz_dmac_prepare_descs_for_slave_sg() iterates over
> the provided scatterlist and increments the lmdesc pointer without ever
> verifying if the number of scatterlist elements exceeds 64.
> If a client submits a sufficiently large scatterlist, will it silently
> overflow the ring buffer, overwriting the descriptors it just wrote, and
> cause the DMA controller to read a corrupted list of operations?
> [ ... ]
> > @@ -693,9 +697,10 @@ rz_dmac_get_next_lmdesc(struct rz_lmdesc *base, struct
rz_lmdesc *lmdesc)
> > return next;
> > }
> >
> > -static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan
*channel, u32 crla)
> > +static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel,
> > + struct rz_dmac_desc *desc, u32
crla)
> > {
> > - struct rz_lmdesc *lmdesc = channel->lmdesc.head;
> > + struct rz_lmdesc *lmdesc = desc->start_lmdesc;
> > struct dma_chan *chan = &channel->vc.chan;
> > struct rz_dmac *dmac = to_rz_dmac(chan->device);
> > u32 residue = 0, i = 0;
> Can this lead to a NULL pointer dereference?
> In rz_dmac_prep_slave_sg(), a new descriptor is immediately added to
> channel->ld_queue. If a previous transfer completes before the client calls
> dma_async_issue_pending(), rz_dmac_irq_handler_thread() attempts to start the
> next transfer from ld_queue.
> However, vchan_next_desc() returns NULL since the transfer hasn't been
> issued, causing rz_dmac_xfer_desc() to return 0 without executing the hardware
> preparation function. The IRQ handler sees the 0 return value and erroneously
> moves the unprepared descriptor to channel->ld_active.
> If dmaengine_tx_status() is subsequently called, rz_dmac_chan_get_residue()
> fetches this active descriptor. Because it was never prepared, the newly added
> desc->start_lmdesc pointer remains NULL.
> The loop while (lmdesc->nxla != crla) then immediately dereferences NULL.
> Note that this issue was fixed later in the series by commit 'dmaengine: sh:
> rz-dmac: Use virt-dma APIs for channel descriptor processing'.
All of the above could happen, and the scope of the patch "dmaengine: sh:
rz-dmac: Use virt-dma APIs for channel descriptor processing" was to fix all
these subtle issues. The patch here is only a preparation patch to keep things
simple in "dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor
processing".
> Additionally, this isn't a bug introduced by this patch, but does the
> mathematical logic in the residue calculation produce incorrect results?
> The loop while (lmdesc->nxla != crla) in
> rz_dmac_calculate_residue_bytes_in_vd() locates the link descriptor preceding
> the currently executing one. The subsequent loop while (lmdesc->chcfg &
> CHCFG_DEM) then iterates from this preceding descriptor, adding lmdesc->tb to
> the residue.
> If the hardware is executing the second descriptor of a transfer, it adds the
> total size of the first (already completed) descriptor, the total size of the
> second (currently executing) descriptor, and crtb (the remaining bytes of the
> second descriptor). Does this mathematically double-count bytes and include
> already transferred data?
> Conversely, if executing the first descriptor, the first loop wraps around the
> entire ring buffer, encounters the final descriptor of the previous transfer,
> and since its CHCFG_DEM bit is cleared, the second loop immediately
> terminates. Does this completely omit all subsequent descriptors from the
> residue calculation?
I think this is a valid point that I also noticed and thought about. However,
I would like to keep it out of this series, as it is already too big.
> [ ... ]
> This isn't a bug introduced by this patch, but is there a pre-existing race
> condition with dma_async_issue_pending()?
> rz_dmac_issue_pending() unconditionally calls rz_dmac_xfer_desc() if
> vchan_issue_pending() returns true, without checking if a transfer is already
> actively executing in hardware (e.g., if channel->desc is already set).
> For RZ_DMAC_DESC_MEMCPY transfers, the driver does not advance
> channel->lmdesc.tail, meaning the newly issued transfer will silently
> overwrite the exact link descriptor currently being executed by the active
> hardware. Does this lead to memory corruption and undefined hardware behavior?
> Note that this is fixed later in the series by commit 'dmaengine: sh: rz-dmac:
> Use virt-dma APIs for channel descriptor processing' which properly checks
> !channel->desc.
The role of the patch pointed out above was to fix all these subtle issues.
^ permalink raw reply [flat|nested] 37+ messages in thread
* sashiko.dev review (Re: [PATCH v4 10/17] dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor processing)
2026-04-11 11:42 ` [PATCH v4 10/17] dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor processing Claudiu
@ 2026-04-20 12:41 ` Claudiu Beznea
0 siblings, 0 replies; 37+ messages in thread
From: Claudiu Beznea @ 2026-04-20 12:41 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, long.luu.ur
Cc: dmaengine, linux-kernel, linux-sound, linux-renesas-soc,
Claudiu Beznea
> commit c2a00d64377d19d3ae2ce858af0fab6e9fae6f1a
> Author: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> Subject: dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor
processing
> This commit converts the rz-dmac driver to fully use virt-dma APIs for
> descriptor processing. It removes custom queue management (ld_queue,
> ld_active), keeping only ld_free to track pre-allocated descriptors. It also
> simplifies residue calculation and synchronization mechanisms.
> > diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> > index f35ff57397189..04eb1a7f1e627 100644
> > --- a/drivers/dma/sh/rz-dmac.c
> > +++ b/drivers/dma/sh/rz-dmac.c
> [ ... ]
> > @@ -494,8 +485,6 @@ static void rz_dmac_free_chan_resources(struct dma_chan
*chan)
> > rz_lmdesc_setup(channel, channel->lmdesc.base);
> >
> > rz_dmac_disable_hw(channel);
> > - list_splice_tail_init(&channel->ld_active, &channel->ld_free);
> > - list_splice_tail_init(&channel->ld_queue, &channel->ld_free);
> >
> > if (channel->mid_rid >= 0) {
> > clear_bit(channel->mid_rid, dmac->modules);
> > @@ -504,13 +493,19 @@ static void rz_dmac_free_chan_resources(struct
dma_chan *chan)
> >
> > spin_unlock_irqrestore(&channel->vc.lock, flags);
> >
> > + vchan_free_chan_resources(&channel->vc);
> > +
> > + spin_lock_irqsave(&channel->vc.lock, flags);
> Can this result in a memory leak and use-after-free regression?
> The patch removes the old list splicing but does not clear channel->desc.
> If rz_dmac_free_chan_resources() is called without a prior terminate,
> channel->desc is not returned to ld_free or freed by
> vchan_free_chan_resources(), leading to a memory leak of the descriptor.
AFAIK, it is expected that device_terminate_all() is called before
device_free_chan_resources().
Vinod, Frank,
Could you please let me know if my assumption is wrong?
Thank you!
> Additionally, the hardware is disabled and the lock is temporarily dropped
> before calling vchan_free_chan_resources(). During this unlocked window, the
> rz_dmac_irq_handler_thread could execute since channel->desc is not NULL.
> This would complete the descriptor, call rz_dmac_xfer_desc(), fetch the next
> descriptor, and restart the hardware while the channel is being destroyed.
If device_terminate_all() is called before device_free_chan_resources()
this shouldn't happen. channel->desc is set to NULL in device_terminate_all().
> Should vchan_terminate_vdesc() be called to clear channel->desc under the
> lock before dropping it?
This is already done in device_terminate_all().
> [ ... ]
> > @@ -558,22 +553,22 @@ rz_dmac_prep_slave_sg(struct dma_chan *chan, struct
scatterlist *sgl,
> > if (!desc)
> > return NULL;
> >
> > - for_each_sg(sgl, sg, sg_len, i)
> > - dma_length += sg_dma_len(sg);
> > + list_del(&desc->node);
> > + }
> >
> > - desc->type = RZ_DMAC_DESC_SLAVE_SG;
> > - desc->sg = sgl;
> > - desc->sgcount = sg_len;
> This issue is not introduced by this commit, but could this assignment cause a
> buffer overflow?
> The variable sg_len is assigned to desc->sgcount without validating that it is
> less than or equal to the hardware limit (DMAC_NR_LMDESC, which is 64). Later
> in rz_dmac_prepare_descs_for_slave_sg(), the code iterates sgcount times to
> populate the hardware descriptor ring. If an SG list with more than 64 entries
> is provided, the loop will wrap around and overwrite the beginning of the DMA
> ring, corrupting the DMA descriptors.
As mentioned in a previous review comment, this can happen but, as it is not
introduced by this series, I'll prepare a cleanup patch later, after this
series is integrated, as it is already big enough.
> [ ... ]
> > @@ -692,6 +681,8 @@ static void rz_dmac_device_synchronize(struct dma_chan
*chan)
> > u32 chstat;
> > int ret;
> >
> > + vchan_synchronize(&channel->vc);
> > +
> > ret = read_poll_timeout(rz_dmac_ch_readl, chstat, !(chstat & CHSTAT_EN),
> > 100, 100000, false, channel, CHSTAT, 1);
> Could calling vchan_synchronize() before polling the hardware lead to a
> use-after-free?
> If the hardware is actively processing a transfer, vchan_synchronize() will
> return immediately because no completion tasklet has been scheduled yet.
> The hardware is then polled until it finishes, at which point it asserts an
> IRQ and schedules the virt-dma tasklet.
> However, the caller of rz_dmac_device_synchronize() would have already
> resumed assuming all activity is finished, potentially freeing the transfer
> context before the delayed tasklet runs.
> Should vchan_synchronize() be called after read_poll_timeout() ensures the
> hardware has actually stopped?
AFAIK, it is expected at this point that the user already called
device_terminate_all(). That frees the active transfer and sets channel->desc =
NULL, which doesn't allow scheduling any tasklet.
Vinod, Frank,
Could you please let me know if I'm wrong here?
> [ ... ]
> > @@ -739,58 +730,22 @@ static u32
rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel,
> > static u32 rz_dmac_chan_get_residue(struct rz_dmac_chan *channel,
> > dma_cookie_t cookie)
> > {
> > - struct rz_dmac_desc *current_desc, *desc;
> > - enum dma_status status;
> > + struct rz_dmac_desc *desc = NULL;
> > + struct virt_dma_desc *vd;
> > u32 crla, crtb, i;
> >
> > - /* Get current processing virtual descriptor */
> > - current_desc = list_first_entry_or_null(&channel->ld_active,
> > - struct rz_dmac_desc, node);
> > - if (!current_desc)
> > - return 0;
> [ ... ]
> > + vd = vchan_find_desc(&channel->vc, cookie);
> > + if (vd) {
> > + /* Descriptor has been issued but not yet processed. */
> > + desc = to_rz_dmac_desc(vd);
> > + return desc->len;
> > + } else if (channel->desc && channel->desc->vd.tx.cookie == cookie) {
> > + /* Descriptor is currently processed. */
> > + desc = channel->desc;
> > + }
> >
> > - /*
> > - * No descriptor found for the cookie, there's thus no residue.
> > - * This shouldn't happen if the calling driver passes a correct
> > - * cookie value.
> > - */
> > - WARN(1, "No descriptor for cookie!");
> > + if (!desc) {
> > + /* Descriptor was not found. May be already completed by now. */
> > return 0;
> > }
> Does this logic correctly handle descriptors that have been submitted but not
> yet issued?
> The vchan_find_desc() function only searches the desc_issued list. If a client
> has submitted a transfer using vchan_tx_submit() but has not yet called
> dma_async_issue_pending(), the descriptor will be in the desc_submitted list.
> As a result, vd will be NULL, and the function will fall through to return
> 0 bytes instead of the descriptor's true length. This changes the behavior
> from the previous implementation and may break clients that rely on
> dmaengine_tx_status() to check the progress of submitted transfers.
Investigating the existing drivers that use the pattern proposed in this
set, I found that this is how the checked drivers implement the
tx_status API. Based on that, I've adjusted it like this.
Vinod, Frank,
Could you please let me know what is expected here?
Thank you!
> [ ... ]
> > @@ -914,28 +865,18 @@ static irqreturn_t rz_dmac_irq_handler(int irq, void
*dev_id)
> > static irqreturn_t rz_dmac_irq_handler_thread(int irq, void *dev_id)
> > {
> > struct rz_dmac_chan *channel = dev_id;
> > - struct rz_dmac_desc *desc = NULL;
> > - unsigned long flags;
> > + struct rz_dmac_desc *desc;
> >
> > - spin_lock_irqsave(&channel->vc.lock, flags);
> > + guard(spinlock_irqsave)(&channel->vc.lock);
> >
> > - if (list_empty(&channel->ld_active)) {
> > - /* Someone might have called terminate all */
> > - goto out;
> > - }
> > + desc = channel->desc;
> > + if (!desc)
> > + return IRQ_HANDLED;
> >
> > - desc = list_first_entry(&channel->ld_active, struct rz_dmac_desc, node);
> > vchan_cookie_complete(&desc->vd);
> > - list_move_tail(channel->ld_active.next, &channel->ld_free);
> This issue is not introduced by this commit, but can a race condition here
> lead to premature completion of new transfers?
> If a transfer completes and asserts the hardirq, the threaded handler is
> scheduled. Before the thread runs, the client could call
> dmaengine_terminate_all() and immediately issue a new transfer. The
> rz_dmac_issue_pending() function would start the new transfer and assign it
> to channel->desc.
> When the delayed IRQ thread finally runs, it takes the lock and sees
> channel->desc pointing to the new transfer. It then unconditionally completes
> it via vchan_cookie_complete(). This could result in a transfer being
> prematurely reported as complete while it is still running.
I think this can happen in theory but, as it is a behaviour not introduced by
this commit, I will keep the series as is.
^ permalink raw reply [flat|nested] 37+ messages in thread
* sashiko.dev review (Re: [PATCH v4 11/17] dmaengine: sh: rz-dmac: Refactor pause/resume code)
2026-04-11 11:42 ` [PATCH v4 11/17] dmaengine: sh: rz-dmac: Refactor pause/resume code Claudiu
@ 2026-04-20 12:42 ` Claudiu Beznea
0 siblings, 0 replies; 37+ messages in thread
From: Claudiu Beznea @ 2026-04-20 12:42 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, long.luu.ur
Cc: dmaengine, linux-kernel, linux-sound, linux-renesas-soc,
Claudiu Beznea
> commit c08d50f3400ecf3dbc032455be7a589baf899258
> Author: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> dmaengine: sh: rz-dmac: Refactor pause/resume code
> This commit refactors the pause/resume support to use a bitmask for the
> channel status. This prepares the driver for upcoming system suspend and
> cyclic DMA support where multiple independent pause states might be active.
> > diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> > index 04eb1a7f1e627..d009b7607d447 100644
> > --- a/drivers/dma/sh/rz-dmac.c
> > +++ b/drivers/dma/sh/rz-dmac.c
> [ ... ]
> > +static int rz_dmac_device_resume_set(struct rz_dmac_chan *channel,
> > + unsigned long clear_bitmask)
> > +{
> > + int ret = 0;
> > + u32 val;
> > +
> > + lockdep_assert_held(&channel->vc.lock);
> > +
> > + /* Do not check CHSTAT_SUS but rely on HW capabilities. */
> > +
> > + rz_dmac_ch_writel(channel, CHCTRL_CLRSUS, CHCTRL, 1);
> > + ret = read_poll_timeout_atomic(rz_dmac_ch_readl, val,
> > + !(val & CHSTAT_SUS), 1, 1024, false,
> > + channel, CHSTAT, 1);
> > +
> > + channel->status &= ~clear_bitmask;
> > +
> > + return ret;
> > +}
> If multiple pause bits are set in channel->status, will this unconditionally
> resume the hardware?
> If a channel is paused explicitly by a dmaengine client using
> RZ_DMAC_CHAN_STATUS_PAUSED, and is subsequently paused by the upcoming system
> suspend functionality,
The upcoming system suspend functionality doesn't call rz_dmac_device_pause_set()
if RZ_DMAC_CHAN_STATUS_PAUSED is set or the channel is not enabled; it just
returns. The following could be checked:

static int rz_dmac_device_pause_internal(struct rz_dmac_chan *channel)
{
	lockdep_assert_held(&channel->vc.lock);

	/* Skip channels explicitly paused by consumers or disabled. */
	if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_PAUSED) ||
	    !rz_dmac_chan_is_enabled(channel))
		return 0;

	return rz_dmac_device_pause_set(channel, BIT(RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL));
}
> it appears this code will write CHCTRL_CLRSUS to the
> hardware as soon as the first of those conditions is cleared.
> This would silently override the other pause request, causing the hardware to
> resume prematurely while another client expects it to be paused.
> Should this update channel->status first, and only write CHCTRL_CLRSUS
> to the hardware if no other pause bits remain active (channel->status == 0)?
^ permalink raw reply [flat|nested] 37+ messages in thread
* sashiko.dev review (Re: [PATCH v4 13/17] dmaengine: sh: rz-dmac: Add cyclic DMA support)
2026-04-11 11:42 ` [PATCH v4 13/17] dmaengine: sh: rz-dmac: Add cyclic DMA support Claudiu
@ 2026-04-20 12:51 ` Claudiu Beznea
0 siblings, 0 replies; 37+ messages in thread
From: Claudiu Beznea @ 2026-04-20 12:51 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, long.luu.ur
Cc: dmaengine, linux-kernel, linux-sound, linux-renesas-soc,
Claudiu Beznea
> commit cb66d54ec04fe2598560135c66410142d7295633
> Author: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> dmaengine: sh: rz-dmac: Add cyclic DMA support
> This commit introduces cyclic DMA support to the RZ DMAC driver. It adds a
> per-channel status bit to mark cyclic channels and sets it during the DMA
> prepare callback. The IRQ handler then uses this bit to determine whether
> to invoke the cyclic callback function.
> > diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> > index 958ee45abc700..9a10430109e59 100644
> > --- a/drivers/dma/sh/rz-dmac.c
> > +++ b/drivers/dma/sh/rz-dmac.c
> [ ... ]
> > @@ -586,6 +645,55 @@ rz_dmac_prep_slave_sg(struct dma_chan *chan, struct
scatterlist *sgl,
> [ ... ]
> > + scoped_guard(spinlock_irqsave, &channel->vc.lock) {
> > + if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC))
> > + return NULL;
> > +
> > + desc = list_first_entry_or_null(&channel->ld_free, struct
rz_dmac_desc, node);
> > + if (!desc)
> > + return NULL;
> > +
> > + list_del(&desc->node);
> > +
> > + channel->status |= BIT(RZ_DMAC_CHAN_STATUS_CYCLIC);
> > + }
> > +
> > + desc->type = RZ_DMAC_DESC_CYCLIC;
> > + desc->sgcount = period_len;
> Can tracking cyclic state via the global channel->status flag lead to
> cross-descriptor state corruption?
> If a prepared cyclic descriptor is freed via dmaengine_desc_free(), the
> global flag might not be cleared during virt-dma desc release, leaving
> the channel poisoned.
dmaengine_desc_free() works only for reusable descriptors. This functionality is
not implemented by the rz-dmac driver.
> Also, because the lock is dropped when exiting the scoped_guard before
> vchan_tx_prep() is called, a concurrent terminate_all() could potentially
> clear the flag before the descriptor is queued.
> Could the driver rely on the desc->type field instead of a global state
> flag to identify cyclic transfers?
I'm going to explore that, but I think it will complicate the code further.
Vinod, Frank,
Could you please let me know if this is a valid expected usecase?
> [ ... ]
> > @@ -733,9 +841,18 @@ static u32
rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel,
> > }
> >
> > /* Calculate residue from next lmdesc to end of virtual desc */
> > - while (lmdesc->chcfg & CHCFG_DEM) {
> > - residue += lmdesc->tb;
> > - lmdesc = rz_dmac_get_next_lmdesc(channel->lmdesc.base, lmdesc);
> > + if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)) {
> > + u32 start_lmdesc_addr = rz_dmac_lmdesc_addr(channel,
desc->start_lmdesc);
> > +
> > + while (lmdesc->nxla != start_lmdesc_addr) {
> > + residue += lmdesc->tb;
> > + lmdesc = rz_dmac_get_next_lmdesc(channel->lmdesc.base,
lmdesc);
> > + }
> Will this loop calculate a monotonically decreasing residue for cyclic
> transfers?
Yes, according to my testing.
> ALSA requires the residue to be the exact, monotonically decreasing number
> of bytes remaining in the buffer. Iterating until the end of the descriptor
> ring and adding the lengths of subsequent periods might cause the returned
> residue to jump wildly depending on which period is currently executing.
> For example, in a 3-period buffer (A, B, C), executing period B might
> return a much larger residue than executing period A, since it would add
> the remaining bytes in the ring differently.
> Could this non-monotonic behavior trigger 'PCM position is invalid'
> errors in the ALSA framework, breaking audio playback?
I never saw this in my testing.
> [ ... ]
> > @@ -924,10 +1041,14 @@ static irqreturn_t rz_dmac_irq_handler_thread(int
irq, void *dev_id)
> > if (!desc)
> > return IRQ_HANDLED;
> >
> > - vchan_cookie_complete(&desc->vd);
> > - channel->desc = NULL;
> > + if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)) {
> > + vchan_cyclic_callback(&desc->vd);
> > + } else {
> > + vchan_cookie_complete(&desc->vd);
> > + channel->desc = NULL;
> If a non-cyclic transfer completes while the global channel->status flag
> is set, could the IRQ handler erroneously call vchan_cyclic_callback()
> instead of vchan_cookie_complete(), permanently hanging the channel?
> Checking desc->type == RZ_DMAC_DESC_CYCLIC here instead might prevent
> misidentifying non-cyclic transfers as cyclic.
Once the channel is configured as cyclic it cannot be used otherwise. The user
has to call terminate_all() and synchronize(), then re-get and re-prepare the
transfers. So, this exact scenario is invalid according to the current use case.
If the status is set it means channel->desc was NULL, which means the IRQ
handler is not executed up to this point.
^ permalink raw reply [flat|nested] 37+ messages in thread
* sashiko.dev review (Re: [PATCH v4 14/17] dmaengine: sh: rz-dmac: Add suspend to RAM support)
2026-04-11 11:43 ` [PATCH v4 14/17] dmaengine: sh: rz-dmac: Add suspend to RAM support Claudiu
2026-04-20 7:42 ` Biju Das
@ 2026-04-20 12:52 ` Claudiu Beznea
1 sibling, 0 replies; 37+ messages in thread
From: Claudiu Beznea @ 2026-04-20 12:52 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, long.luu.ur
Cc: dmaengine, linux-kernel, linux-sound, linux-renesas-soc,
Claudiu Beznea
> commit e7af08b0f2dc569cc63b17e07dd4e506e50fee28
> Author: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> dmaengine: sh: rz-dmac: Add suspend to RAM support
> The Renesas RZ/G3S SoC supports a power saving mode in which power to most
> of the SoC components is turned off, including the DMA IP. Add suspend to
> RAM support to save and restore the DMA IP registers.
> > diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> > --- a/drivers/dma/sh/rz-dmac.c
> > +++ b/drivers/dma/sh/rz-dmac.c
> [ ... ]
> > @@ -1354,6 +1407,130 @@ static void rz_dmac_remove(struct platform_device
*pdev)
> > pm_runtime_disable(&pdev->dev);
> > }
> >
> > +static int rz_dmac_suspend_prepare(struct device *dev)
> > +{
> > + struct rz_dmac *dmac = dev_get_drvdata(dev);
> > +
> > + for (unsigned int i = 0; i < dmac->n_channels; i++) {
> > + struct rz_dmac_chan *channel = &dmac->channels[i];
> > +
> > + guard(spinlock_irqsave)(&channel->vc.lock);
> > +
> > + /* Wait for transfer completion, except in cyclic case. */
> > + if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC))
> > + continue;
> > +
> > + if (rz_dmac_chan_is_enabled(channel))
> > + return -EAGAIN;
> Is it intended that this aborts the system suspend instead of waiting?
Returning -EAGAIN here makes the core suspend code keep retrying this driver's
prepare API. That's why the commit message mentions waiting.
> The commit message states the .prepare callback "waits for all the ongoing
> transfers to complete before allowing suspend-to-RAM to proceed."
> However, returning -EAGAIN from a system sleep .prepare callback immediately
> aborts the system suspend process instead of waiting.
> Additionally, since the PM core executes the .prepare phase for all devices
> before executing any .suspend callbacks, consumer devices haven't yet run
> their own .suspend callbacks to stop their DMA transfers.
> If any normal background DMA activity is ongoing, won't this immediately
> abort the entire system suspend transition?
If DMA activity is ongoing, the expectation is that it will finish and
suspend_prepare() will then go forward.
> > + }
> > +
> > + return 0;
> > +}
> [ ... ]
> > +static int rz_dmac_suspend(struct device *dev)
> > +{
> > + struct rz_dmac *dmac = dev_get_drvdata(dev);
> > + int ret;
> Can ret be used uninitialized here?
Yes, this is a typo. I'll initialize it properly here.
> If the DMA controller has no cyclic channels configured, the loop below will
> skip every iteration due to the continue statement, leaving ret
> uninitialized.
> > +
> > + for (unsigned int i = 0; i < dmac->n_channels; i++) {
> > + struct rz_dmac_chan *channel = &dmac->channels[i];
> > +
> > + guard(spinlock_irqsave)(&channel->vc.lock);
> > +
> > + if (!(channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)))
> > + continue;
> > +
> > + ret = rz_dmac_device_pause_internal(channel);
> > + if (ret) {
> > + dev_err(dev, "Failed to suspend channel %s\n",
> > + dma_chan_name(&channel->vc.chan));
> > + break;
> > + }
> > +
> > + channel->pm_state.nxla = rz_dmac_ch_readl(channel, NXLA, 1);
> > + }
> > +
> > + if (ret) {
> If ret contains uninitialized stack memory that evaluates to true, could this
> trigger the error recovery path and silently abort the system suspend
> transition?
> > + rz_dmac_suspend_recover(dmac);
> > + return ret;
> > + }
> > +
> > + pm_runtime_put_sync(dmac->dev);
> > +
> > + ret = reset_control_assert(dmac->rstc);
> Is the ordering of runtime PM and reset control inverted here compared to
> rz_dmac_remove()?
Yes, I'll adjust it.
> In rz_dmac_remove(), reset_control_assert() is called before pm_runtime_put().
> Operating on the reset control after the device's power domain is suspended
> might cause the operations to be silently ignored by the hardware or lead to
> system bus lockups.
> > + if (ret) {
> > + pm_runtime_resume_and_get(dmac->dev);
> > + rz_dmac_suspend_recover(dmac);
> > + }
> > +
> > + return ret;
> > +}
> > +
> > +static int rz_dmac_resume(struct device *dev)
> > +{
> > + struct rz_dmac *dmac = dev_get_drvdata(dev);
> > + int errors = 0, ret;
> > +
> > + ret = reset_control_deassert(dmac->rstc);
> Similarly, is the ordering of reset control and runtime PM inverted here
> compared to rz_dmac_probe()?
Yes, I'll adjust it.
> In rz_dmac_probe(), pm_runtime_resume_and_get() is called before
> reset_control_deassert(). Doing this in the reverse order here means the
> reset is deasserted before the hardware power domain is enabled.
> > + if (ret)
> > + return ret;
> > +
> > + ret = pm_runtime_resume_and_get(dmac->dev);
> > + if (ret) {
> > + reset_control_assert(dmac->rstc);
> > + return ret;
> > + }
> [ ... ]
> > @@ -1380,6 +1557,7 @@ static struct platform_driver rz_dmac_driver = {
> > .driver = {
> > .name = "rz-dmac",
> > .of_match_table = of_rz_dmac_match,
> > + .pm = pm_sleep_ptr(&rz_dmac_pm_ops),
> Should this use pm_ptr() instead of pm_sleep_ptr()?
> If CONFIG_PM_SLEEP is disabled, pm_sleep_ptr() evaluates to NULL, but the
> struct rz_dmac_pm_ops itself and its unconditional callbacks like
> rz_dmac_suspend_prepare will remain fully compiled into the binary as unused
> dead code, potentially leading to compiler warnings.
> Using pm_ptr() for the dev_pm_ops structure pointer and pm_sleep_ptr()
> around the sleep callbacks inside the structure might resolve this.
> > },
> > .probe = rz_dmac_probe,
> > .remove = rz_dmac_remove,
I think pm_sleep_ptr() fits better here, as CONFIG_PM_SLEEP depends on
CONFIG_SUSPEND, which is the flag under which the system suspend/resume code
resides. Also, CONFIG_PM_SLEEP selects CONFIG_PM, so CONFIG_PM_SLEEP cannot be
enabled w/o CONFIG_PM.
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v4 14/17] dmaengine: sh: rz-dmac: Add suspend to RAM support
2026-04-20 7:42 ` Biju Das
@ 2026-04-20 14:15 ` Claudiu Beznea
2026-04-20 14:21 ` Biju Das
0 siblings, 1 reply; 37+ messages in thread
From: Claudiu Beznea @ 2026-04-20 14:15 UTC (permalink / raw)
To: Biju Das, vkoul@kernel.org, Frank.Li@kernel.org,
lgirdwood@gmail.com, broonie@kernel.org, perex@perex.cz,
tiwai@suse.com, Prabhakar Mahadev Lad, p.zabel@pengutronix.de,
geert+renesas@glider.be, Fabrizio Castro, Long Luu
Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-sound@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
Claudiu Beznea
On 4/20/26 10:42, Biju Das wrote:
>> +static int rz_dmac_suspend(struct device *dev) {
>> + struct rz_dmac *dmac = dev_get_drvdata(dev);
>> + int ret;
>> +
>> + for (unsigned int i = 0; i < dmac->n_channels; i++) {
>> + struct rz_dmac_chan *channel = &dmac->channels[i];
>> +
>> + guard(spinlock_irqsave)(&channel->vc.lock);
>> +
>> + if (!(channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)))
>> + continue;
>> +
>> + ret = rz_dmac_device_pause_internal(channel);
>> + if (ret) {
>> + dev_err(dev, "Failed to suspend channel %s\n",
>> + dma_chan_name(&channel->vc.chan));
>> + break;
>> + }
>> +
>> + channel->pm_state.nxla = rz_dmac_ch_readl(channel, NXLA, 1);
>> + }
>> +
>> + if (ret) {
>> + rz_dmac_suspend_recover(dmac);
>> + return ret;
>> + }
>> +
>> + pm_runtime_put_sync(dmac->dev);
>> +
>> + ret = reset_control_assert(dmac->rstc);
>> + if (ret) {
>> + pm_runtime_resume_and_get(dmac->dev);
>> + rz_dmac_suspend_recover(dmac);
>> + }
>> +
>> + return ret;
>> +}
>> +
>> +static int rz_dmac_resume(struct device *dev) {
>> + struct rz_dmac *dmac = dev_get_drvdata(dev);
>> + int errors = 0, ret;
>> +
>> + ret = reset_control_deassert(dmac->rstc);
>> + if (ret)
>> + return ret;
>> +
>> + ret = pm_runtime_resume_and_get(dmac->dev);
>
> If this fails for any reason, the next suspend will still be called and it will decrement the counter, potentially underflowing it.
> Consider switching to pm_runtime_get_sync(), which suits better here
I think a runtime PM usage counter underflow will be the least significant
problem in case runtime PM fails.
Anyhow, could you please provide the code pattern you consider better
for both suspend and resume?
Thank you,
Claudiu
^ permalink raw reply [flat|nested] 37+ messages in thread
* RE: [PATCH v4 14/17] dmaengine: sh: rz-dmac: Add suspend to RAM support
2026-04-20 14:15 ` Claudiu Beznea
@ 2026-04-20 14:21 ` Biju Das
2026-04-20 14:37 ` Claudiu Beznea
0 siblings, 1 reply; 37+ messages in thread
From: Biju Das @ 2026-04-20 14:21 UTC (permalink / raw)
To: Claudiu.Beznea, vkoul@kernel.org, Frank.Li@kernel.org,
lgirdwood@gmail.com, broonie@kernel.org, perex@perex.cz,
tiwai@suse.com, Prabhakar Mahadev Lad, p.zabel@pengutronix.de,
geert+renesas@glider.be, Fabrizio Castro, Long Luu
Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-sound@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
Claudiu Beznea
Hi Claudiu,
> -----Original Message-----
> From: Claudiu Beznea <claudiu.beznea@tuxon.dev>
> Sent: 20 April 2026 15:15
> Subject: Re: [PATCH v4 14/17] dmaengine: sh: rz-dmac: Add suspend to RAM support
>
>
>
> On 4/20/26 10:42, Biju Das wrote:
> >> +static int rz_dmac_suspend(struct device *dev) {
> >> + struct rz_dmac *dmac = dev_get_drvdata(dev);
> >> + int ret;
> >> +
> >> + for (unsigned int i = 0; i < dmac->n_channels; i++) {
> >> + struct rz_dmac_chan *channel = &dmac->channels[i];
> >> +
> >> + guard(spinlock_irqsave)(&channel->vc.lock);
> >> +
> >> + if (!(channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)))
> >> + continue;
> >> +
> >> + ret = rz_dmac_device_pause_internal(channel);
> >> + if (ret) {
> >> + dev_err(dev, "Failed to suspend channel %s\n",
> >> + dma_chan_name(&channel->vc.chan));
> >> + break;
> >> + }
> >> +
> >> + channel->pm_state.nxla = rz_dmac_ch_readl(channel, NXLA, 1);
> >> + }
> >> +
> >> + if (ret) {
> >> + rz_dmac_suspend_recover(dmac);
> >> + return ret;
> >> + }
> >> +
> >> + pm_runtime_put_sync(dmac->dev);
> >> +
> >> + ret = reset_control_assert(dmac->rstc);
> >> + if (ret) {
> >> + pm_runtime_resume_and_get(dmac->dev);
> >> + rz_dmac_suspend_recover(dmac);
> >> + }
> >> +
> >> + return ret;
> >> +}
> >> +
> >> +static int rz_dmac_resume(struct device *dev) {
> >> + struct rz_dmac *dmac = dev_get_drvdata(dev);
> >> + int errors = 0, ret;
> >> +
> >> + ret = reset_control_deassert(dmac->rstc);
> >> + if (ret)
> >> + return ret;
> >> +
> >> + ret = pm_runtime_resume_and_get(dmac->dev);
> >
> > If this fails for any reason, the next suspend still be called and it will decrement the counter,
> potentially undeflowing it.
> > Consider switching to pm_runtime_get_sync(), which suits better here
>
>
> I think runtime PM usage counter underflow will be the less significant problem in case runtime PM
> fails.
>
> Anyhow, could you please provide the code pattern you consider would be better for both suspend and
> resume?
system_resume()
{
	pm_runtime_resume_and_get() --> PM counter is not incremented in case of error
}

system_suspend()
{
	pm_runtime_put() --> counter is decremented and prints a noisy WARN message
}
Just replace pm_runtime_resume_and_get() with pm_runtime_get_sync();
this will return the error to the caller as before and also increment the
counter, which avoids the warning on the subsequent suspend().
Cheers,
Biju
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v4 14/17] dmaengine: sh: rz-dmac: Add suspend to RAM support
2026-04-20 14:21 ` Biju Das
@ 2026-04-20 14:37 ` Claudiu Beznea
2026-04-20 14:53 ` Biju Das
0 siblings, 1 reply; 37+ messages in thread
From: Claudiu Beznea @ 2026-04-20 14:37 UTC (permalink / raw)
To: Biju Das, vkoul@kernel.org, Frank.Li@kernel.org,
lgirdwood@gmail.com, broonie@kernel.org, perex@perex.cz,
tiwai@suse.com, Prabhakar Mahadev Lad, p.zabel@pengutronix.de,
geert+renesas@glider.be, Fabrizio Castro, Long Luu
Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-sound@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
Claudiu Beznea
On 4/20/26 17:21, Biju Das wrote:
> Hi Claudiu,
>
>> -----Original Message-----
>> From: Claudiu Beznea <claudiu.beznea@tuxon.dev>
>> Sent: 20 April 2026 15:15
>> Subject: Re: [PATCH v4 14/17] dmaengine: sh: rz-dmac: Add suspend to RAM support
>>
>>
>>
>> On 4/20/26 10:42, Biju Das wrote:
>>>> +static int rz_dmac_suspend(struct device *dev) {
>>>> + struct rz_dmac *dmac = dev_get_drvdata(dev);
>>>> + int ret;
>>>> +
>>>> + for (unsigned int i = 0; i < dmac->n_channels; i++) {
>>>> + struct rz_dmac_chan *channel = &dmac->channels[i];
>>>> +
>>>> + guard(spinlock_irqsave)(&channel->vc.lock);
>>>> +
>>>> + if (!(channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)))
>>>> + continue;
>>>> +
>>>> + ret = rz_dmac_device_pause_internal(channel);
>>>> + if (ret) {
>>>> + dev_err(dev, "Failed to suspend channel %s\n",
>>>> + dma_chan_name(&channel->vc.chan));
>>>> + break;
>>>> + }
>>>> +
>>>> + channel->pm_state.nxla = rz_dmac_ch_readl(channel, NXLA, 1);
>>>> + }
>>>> +
>>>> + if (ret) {
>>>> + rz_dmac_suspend_recover(dmac);
>>>> + return ret;
>>>> + }
>>>> +
>>>> + pm_runtime_put_sync(dmac->dev);
>>>> +
>>>> + ret = reset_control_assert(dmac->rstc);
>>>> + if (ret) {
>>>> + pm_runtime_resume_and_get(dmac->dev);
>>>> + rz_dmac_suspend_recover(dmac);
>>>> + }
>>>> +
>>>> + return ret;
>>>> +}
>>>> +
>>>> +static int rz_dmac_resume(struct device *dev) {
>>>> + struct rz_dmac *dmac = dev_get_drvdata(dev);
>>>> + int errors = 0, ret;
>>>> +
>>>> + ret = reset_control_deassert(dmac->rstc);
>>>> + if (ret)
>>>> + return ret;
>>>> +
>>>> + ret = pm_runtime_resume_and_get(dmac->dev);
>>>
>>> If this fails for any reason, the next suspend still be called and it will decrement the counter,
>> potentially undeflowing it.
>>> Consider switching to pm_runtime_get_sync(), which suits better here
>>
>>
>> I think runtime PM usage counter underflow will be the less significant problem in case runtime PM
>> fails.
>>
>> Anyhow, could you please provide the code pattern you consider would be better for both suspend and
>> resume?
>
>
> system_resume()
> {
> pm_runtime_resume_and_get() --> PM counter is not incremented in case of error
> }
>
> system_suspend()
> {
> pm_runtime_put() --> counter is decremented and prints a noisy WARN message
> }
>
> Just replace pm_runtime_resume_and_get()->pm_runtime_get_sync()
> this will return the error to caller like previously and also increment the counter
> which avoids warning on the subsequent suspend()
This wouldn't solve anything.
If the newly added pm_runtime_get_sync() fails, the next dev_pm_ops::prepare()
call accesses DMA IP registers. That will sync abort (due to MSTOP) even before
any warning (I guess about the runtime PM usage counter underflow) is printed.
If we add runtime PM resumes in dev_pm_ops::prepare() to overcome part of
the sync abort in the next dev_pm_ops::prepare() call, and keep
pm_runtime_get_sync() blindly, w/o dropping the usage counter on failure, that
will still lead to sync aborts, because the runtime PM resumes in
dev_pm_ops::prepare() should only increase the runtime PM ref counter and return
success.
Thank you,
Claudiu
^ permalink raw reply [flat|nested] 37+ messages in thread
* RE: [PATCH v4 14/17] dmaengine: sh: rz-dmac: Add suspend to RAM support
2026-04-20 14:37 ` Claudiu Beznea
@ 2026-04-20 14:53 ` Biju Das
0 siblings, 0 replies; 37+ messages in thread
From: Biju Das @ 2026-04-20 14:53 UTC (permalink / raw)
To: Claudiu.Beznea, vkoul@kernel.org, Frank.Li@kernel.org,
lgirdwood@gmail.com, broonie@kernel.org, perex@perex.cz,
tiwai@suse.com, Prabhakar Mahadev Lad, p.zabel@pengutronix.de,
geert+renesas@glider.be, Fabrizio Castro, Long Luu
Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-sound@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
Claudiu Beznea
> -----Original Message-----
> From: Claudiu Beznea <claudiu.beznea@tuxon.dev>
> Sent: 20 April 2026 15:37
> Subject: Re: [PATCH v4 14/17] dmaengine: sh: rz-dmac: Add suspend to RAM support
>
>
>
> On 4/20/26 17:21, Biju Das wrote:
> > Hi Claudiu,
> >
> >> -----Original Message-----
> >> From: Claudiu Beznea <claudiu.beznea@tuxon.dev>
> >> Sent: 20 April 2026 15:15
> >> Subject: Re: [PATCH v4 14/17] dmaengine: sh: rz-dmac: Add suspend to
> >> RAM support
> >>
> >>
> >>
> >> On 4/20/26 10:42, Biju Das wrote:
> >>>> +static int rz_dmac_suspend(struct device *dev) {
> >>>> + struct rz_dmac *dmac = dev_get_drvdata(dev);
> >>>> +	int ret = 0;
> >>>> +
> >>>> + for (unsigned int i = 0; i < dmac->n_channels; i++) {
> >>>> + struct rz_dmac_chan *channel = &dmac->channels[i];
> >>>> +
> >>>> + guard(spinlock_irqsave)(&channel->vc.lock);
> >>>> +
> >>>> + if (!(channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)))
> >>>> + continue;
> >>>> +
> >>>> + ret = rz_dmac_device_pause_internal(channel);
> >>>> + if (ret) {
> >>>> + dev_err(dev, "Failed to suspend channel %s\n",
> >>>> + dma_chan_name(&channel->vc.chan));
> >>>> + break;
> >>>> + }
> >>>> +
> >>>> + channel->pm_state.nxla = rz_dmac_ch_readl(channel, NXLA, 1);
> >>>> + }
> >>>> +
> >>>> + if (ret) {
> >>>> + rz_dmac_suspend_recover(dmac);
> >>>> + return ret;
> >>>> + }
> >>>> +
> >>>> + pm_runtime_put_sync(dmac->dev);
> >>>> +
> >>>> + ret = reset_control_assert(dmac->rstc);
> >>>> + if (ret) {
> >>>> + pm_runtime_resume_and_get(dmac->dev);
> >>>> + rz_dmac_suspend_recover(dmac);
> >>>> + }
> >>>> +
> >>>> + return ret;
> >>>> +}
> >>>> +
> >>>> +static int rz_dmac_resume(struct device *dev)
> >>>> +{
> >>>> + struct rz_dmac *dmac = dev_get_drvdata(dev);
> >>>> + int errors = 0, ret;
> >>>> +
> >>>> + ret = reset_control_deassert(dmac->rstc);
> >>>> + if (ret)
> >>>> + return ret;
> >>>> +
> >>>> + ret = pm_runtime_resume_and_get(dmac->dev);
> >>>
> >>> If this fails for any reason, the next suspend will still be called
> >>> and it will decrement the counter, potentially underflowing it.
> >>> Consider switching to pm_runtime_get_sync(), which suits this case better.
> >>
> >>
> >> I think a runtime PM usage counter underflow will be the least
> >> significant problem in case runtime PM fails.
> >>
> >> Anyhow, could you please provide the code pattern you consider would
> >> be better for both suspend and resume?
> >
> >
> > system_resume()
> > {
> >     pm_runtime_resume_and_get() --> PM counter is not incremented in case of error
> > }
> >
> > system_suspend()
> > {
> >     pm_runtime_put() --> counter is decremented and prints a noisy WARN message
> > }
> >
> > Just replace pm_runtime_resume_and_get() with pm_runtime_get_sync():
> > this will return the error to the caller as before and also increment
> > the counter, which avoids the warning on the subsequent suspend().
>
> This wouldn't solve anything.
It basically avoids printing the message from [1]:
dev_warn(dev, "Runtime PM usage count underflow!\n");
[1] https://elixir.bootlin.com/linux/v7.0-rc7/source/drivers/base/power/runtime.c#L1094
>
> If the newly added pm_runtime_get_sync() fails, the next dev_pm_ops::prepare() call accesses DMA IP
> registers. That will cause a sync abort (due to MSTOP) even before any warning (I guess about the
> runtime PM usage counter underflow) is printed.
This can happen with the current patch as well, as both APIs report the same
error to the caller; the clock is not turned on by the clock framework, and
accessing the clock registers will lead to undesired effects.
>
> If we add runtime PM resume calls in dev_pm_ops::prepare() to overcome part of the sync abort in the
> next dev_pm_ops::prepare() call, and keep pm_runtime_get_sync() as-is, without dropping the usage
> counter on failure, that will still lead to sync aborts, because the runtime PM resume calls in
> dev_pm_ops::prepare() should only increase the runtime PM ref counter and return success.
How is this behaviour different from the current patch? See [2].
[2] https://elixir.bootlin.com/linux/v7.0-rc7/source/drivers/base/power/runtime.c#L1093
Cheers,
Biju
^ permalink raw reply [flat|nested] 37+ messages in thread
end of thread, other threads:[~2026-04-20 14:53 UTC | newest]
Thread overview: 37+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-11 11:42 [PATCH v4 00/17] Renesas: dmaengine and ASoC fixes Claudiu
2026-04-11 11:42 ` [PATCH v4 01/17] dmaengine: sh: rz-dmac: Move interrupt request after everything is set up Claudiu
2026-04-11 12:17 ` Biju Das
2026-04-11 12:34 ` Claudiu Beznea
2026-04-20 12:33 ` sashiko.dev review (Re: [PATCH v4 01/17] dmaengine: sh: rz-dmac: Move interrupt request after everything is set up) Claudiu Beznea
2026-04-11 11:42 ` [PATCH v4 02/17] dmaengine: sh: rz-dmac: Fix incorrect NULL check on list_first_entry() Claudiu
2026-04-11 11:42 ` [PATCH v4 03/17] dmaengine: sh: rz-dmac: Use list_first_entry_or_null() Claudiu
2026-04-11 11:42 ` [PATCH v4 04/17] dmaengine: sh: rz-dmac: Use rz_dmac_disable_hw() Claudiu
2026-04-20 12:34 ` sashiko.dev review (Re: [PATCH v4 04/17] dmaengine: sh: rz-dmac: Use rz_dmac_disable_hw()) Claudiu Beznea
2026-04-11 11:42 ` [PATCH v4 05/17] dmaengine: sh: rz-dmac: Do not disable the channel on error Claudiu
2026-04-11 12:30 ` Biju Das
2026-04-14 8:28 ` Claudiu Beznea
2026-04-11 11:42 ` [PATCH v4 06/17] dmaengine: sh: rz-dmac: Add helper to compute the lmdesc address Claudiu
2026-04-11 11:42 ` [PATCH v4 07/17] dmaengine: sh: rz-dmac: Save the start LM descriptor Claudiu
2026-04-11 12:34 ` Biju Das
2026-04-11 12:38 ` Claudiu Beznea
2026-04-11 12:50 ` Biju Das
2026-04-20 12:37 ` sashiko.dev review (Re: [PATCH v4 07/17] dmaengine: sh: rz-dmac: Save the start LM descriptor) Claudiu Beznea
2026-04-11 11:42 ` [PATCH v4 08/17] dmaengine: sh: rz-dmac: Add helper to check if the channel is enabled Claudiu
2026-04-11 11:42 ` [PATCH v4 09/17] dmaengine: sh: rz-dmac: Add helper to check if the channel is paused Claudiu
2026-04-11 11:42 ` [PATCH v4 10/17] dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor processing Claudiu
2026-04-20 12:41 ` sashiko.dev review (Re: [PATCH v4 10/17] dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor processing) Claudiu Beznea
2026-04-11 11:42 ` [PATCH v4 11/17] dmaengine: sh: rz-dmac: Refactor pause/resume code Claudiu
2026-04-20 12:42 ` sashiko.dev review (Re: [PATCH v4 11/17] dmaengine: sh: rz-dmac: Refactor pause/resume code) Claudiu Beznea
2026-04-11 11:42 ` [PATCH v4 12/17] dmaengine: sh: rz-dmac: Drop the update of channel->chctrl with CHCTRL_SETEN Claudiu
2026-04-11 11:42 ` [PATCH v4 13/17] dmaengine: sh: rz-dmac: Add cyclic DMA support Claudiu
2026-04-20 12:51 ` sashiko.dev review (Re: [PATCH v4 13/17] dmaengine: sh: rz-dmac: Add cyclic DMA support) Claudiu Beznea
2026-04-11 11:43 ` [PATCH v4 14/17] dmaengine: sh: rz-dmac: Add suspend to RAM support Claudiu
2026-04-20 7:42 ` Biju Das
2026-04-20 14:15 ` Claudiu Beznea
2026-04-20 14:21 ` Biju Das
2026-04-20 14:37 ` Claudiu Beznea
2026-04-20 14:53 ` Biju Das
2026-04-20 12:52 ` sashiko.dev review (Re: [PATCH v4 14/17] dmaengine: sh: rz-dmac: Add suspend to RAM support) Claudiu Beznea
2026-04-11 11:43 ` [PATCH v4 15/17] ASoC: renesas: rz-ssi: Add pause support Claudiu
2026-04-11 11:43 ` [PATCH v4 16/17] ASoC: renesas: rz-ssi: Use generic PCM dmaengine APIs Claudiu
2026-04-11 11:43 ` [PATCH v4 17/17] dmaengine: sh: rz-dmac: Set the Link End (LE) bit on the last descriptor Claudiu
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox