* [PATCH v3 01/15] dmaengine: sh: rz-dmac: Use list_first_entry_or_null()
2026-04-07 13:34 [PATCH v3 00/15] Renesas: dmaengine and ASoC fixes Claudiu
@ 2026-04-07 13:34 ` Claudiu
2026-04-07 13:34 ` [PATCH v3 02/15] dmaengine: sh: rz-dmac: Use rz_dmac_disable_hw() Claudiu
` (13 subsequent siblings)
14 siblings, 0 replies; 21+ messages in thread
From: Claudiu @ 2026-04-07 13:34 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
Use list_first_entry_or_null() instead of open-coding it with a
list_empty() check and list_first_entry(). This simplifies the code.
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v3:
- none, this patch is new
drivers/dma/sh/rz-dmac.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 625ff29024de..3d383afebecd 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -503,11 +503,10 @@ rz_dmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
__func__, channel->index, &src, &dest, len);
scoped_guard(spinlock_irqsave, &channel->vc.lock) {
- if (list_empty(&channel->ld_free))
+ desc = list_first_entry_or_null(&channel->ld_free, struct rz_dmac_desc, node);
+ if (!desc)
return NULL;
- desc = list_first_entry(&channel->ld_free, struct rz_dmac_desc, node);
-
desc->type = RZ_DMAC_DESC_MEMCPY;
desc->src = src;
desc->dest = dest;
@@ -533,11 +532,10 @@ rz_dmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
int i = 0;
scoped_guard(spinlock_irqsave, &channel->vc.lock) {
- if (list_empty(&channel->ld_free))
+ desc = list_first_entry_or_null(&channel->ld_free, struct rz_dmac_desc, node);
+ if (!desc)
return NULL;
- desc = list_first_entry(&channel->ld_free, struct rz_dmac_desc, node);
-
for_each_sg(sgl, sg, sg_len, i)
dma_length += sg_dma_len(sg);
--
2.43.0
* [PATCH v3 02/15] dmaengine: sh: rz-dmac: Use rz_dmac_disable_hw()
2026-04-07 13:34 [PATCH v3 00/15] Renesas: dmaengine and ASoC fixes Claudiu
2026-04-07 13:34 ` [PATCH v3 01/15] dmaengine: sh: rz-dmac: Use list_first_entry_or_null() Claudiu
@ 2026-04-07 13:34 ` Claudiu
2026-04-07 13:34 ` [PATCH v3 03/15] dmaengine: sh: rz-dmac: Do not disable the channel on error Claudiu
` (12 subsequent siblings)
14 siblings, 0 replies; 21+ messages in thread
From: Claudiu @ 2026-04-07 13:34 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
Use rz_dmac_disable_hw() instead of open coding it. This unifies the
code and prepares it for the addition of suspend to RAM and cyclic DMA.
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v3:
- none, this patch is new
drivers/dma/sh/rz-dmac.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 3d383afebecd..12c1163cb6ef 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -873,7 +873,7 @@ static void rz_dmac_irq_handle_channel(struct rz_dmac_chan *channel)
channel->index, chstat);
scoped_guard(spinlock_irqsave, &channel->vc.lock)
- rz_dmac_ch_writel(channel, CHCTRL_DEFAULT, CHCTRL, 1);
+ rz_dmac_disable_hw(channel);
return;
}
@@ -1020,7 +1020,7 @@ static int rz_dmac_chan_probe(struct rz_dmac *dmac,
rz_lmdesc_setup(channel, lmdesc);
/* Initialize register for each channel */
- rz_dmac_ch_writel(channel, CHCTRL_DEFAULT, CHCTRL, 1);
+ rz_dmac_disable_hw(channel);
channel->vc.desc_free = rz_dmac_virt_desc_free;
vchan_init(&channel->vc, &dmac->engine);
--
2.43.0
* [PATCH v3 03/15] dmaengine: sh: rz-dmac: Do not disable the channel on error
2026-04-07 13:34 [PATCH v3 00/15] Renesas: dmaengine and ASoC fixes Claudiu
2026-04-07 13:34 ` [PATCH v3 01/15] dmaengine: sh: rz-dmac: Use list_first_entry_or_null() Claudiu
2026-04-07 13:34 ` [PATCH v3 02/15] dmaengine: sh: rz-dmac: Use rz_dmac_disable_hw() Claudiu
@ 2026-04-07 13:34 ` Claudiu
2026-04-07 13:34 ` [PATCH v3 04/15] dmaengine: sh: rz-dmac: Add helper to compute the lmdesc address Claudiu
` (11 subsequent siblings)
14 siblings, 0 replies; 21+ messages in thread
From: Claudiu @ 2026-04-07 13:34 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
Disabling the channel on error is pointless: if other transfers are
queued, the IRQ thread will be woken up and will execute them anyway by
calling rz_dmac_xfer_desc().
rz_dmac_xfer_desc() re-enables the transfer. Before doing so, it sets
CHCTRL.SWRST, which clears CHSTAT.DER and CHSTAT.END anyway.
Skip disabling the DMA channel and just log the error instead.
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v3:
- none, this patch is new
drivers/dma/sh/rz-dmac.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 12c1163cb6ef..34c00f3ffd4c 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -871,10 +871,6 @@ static void rz_dmac_irq_handle_channel(struct rz_dmac_chan *channel)
if (chstat & CHSTAT_ER) {
dev_err(dmac->dev, "DMAC err CHSTAT_%d = %08X\n",
channel->index, chstat);
-
- scoped_guard(spinlock_irqsave, &channel->vc.lock)
- rz_dmac_disable_hw(channel);
- return;
}
/*
--
2.43.0
* [PATCH v3 04/15] dmaengine: sh: rz-dmac: Add helper to compute the lmdesc address
2026-04-07 13:34 [PATCH v3 00/15] Renesas: dmaengine and ASoC fixes Claudiu
` (2 preceding siblings ...)
2026-04-07 13:34 ` [PATCH v3 03/15] dmaengine: sh: rz-dmac: Do not disable the channel on error Claudiu
@ 2026-04-07 13:34 ` Claudiu
2026-04-07 13:34 ` [PATCH v3 05/15] dmaengine: sh: rz-dmac: Save the start LM descriptor Claudiu
` (10 subsequent siblings)
14 siblings, 0 replies; 21+ messages in thread
From: Claudiu @ 2026-04-07 13:34 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
Add a helper function to compute the lmdesc address. This makes the
code easier to understand, and the helper will be used in subsequent
patches.
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v3:
- none, this patch is new
drivers/dma/sh/rz-dmac.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 34c00f3ffd4c..ef775ffa1099 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -272,6 +272,12 @@ static void rz_dmac_lmdesc_recycle(struct rz_dmac_chan *channel)
channel->lmdesc.head = lmdesc;
}
+static u32 rz_dmac_lmdesc_addr(struct rz_dmac_chan *channel, struct rz_lmdesc *lmdesc)
+{
+ return channel->lmdesc.base_dma +
+ (sizeof(struct rz_lmdesc) * (lmdesc - channel->lmdesc.base));
+}
+
static void rz_dmac_enable_hw(struct rz_dmac_chan *channel)
{
struct dma_chan *chan = &channel->vc.chan;
@@ -284,9 +290,7 @@ static void rz_dmac_enable_hw(struct rz_dmac_chan *channel)
rz_dmac_lmdesc_recycle(channel);
- nxla = channel->lmdesc.base_dma +
- (sizeof(struct rz_lmdesc) * (channel->lmdesc.head -
- channel->lmdesc.base));
+ nxla = rz_dmac_lmdesc_addr(channel, channel->lmdesc.head);
chstat = rz_dmac_ch_readl(channel, CHSTAT, 1);
if (!(chstat & CHSTAT_EN)) {
--
2.43.0
* [PATCH v3 05/15] dmaengine: sh: rz-dmac: Save the start LM descriptor
2026-04-07 13:34 [PATCH v3 00/15] Renesas: dmaengine and ASoC fixes Claudiu
` (3 preceding siblings ...)
2026-04-07 13:34 ` [PATCH v3 04/15] dmaengine: sh: rz-dmac: Add helper to compute the lmdesc address Claudiu
@ 2026-04-07 13:34 ` Claudiu
2026-04-07 13:34 ` [PATCH v3 06/15] dmaengine: sh: rz-dmac: Add helper to check if the channel is enabled Claudiu
` (9 subsequent siblings)
14 siblings, 0 replies; 21+ messages in thread
From: Claudiu @ 2026-04-07 13:34 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
Save the start LM descriptor to avoid looping through the entire
channel's LM descriptor list when computing the residue.
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v3:
- none, this patch is new
drivers/dma/sh/rz-dmac.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index ef775ffa1099..cd639aa9186a 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -58,6 +58,7 @@ struct rz_dmac_desc {
/* For slave sg */
struct scatterlist *sg;
unsigned int sgcount;
+ struct rz_lmdesc *start_lmdesc;
};
#define to_rz_dmac_desc(d) container_of(d, struct rz_dmac_desc, vd)
@@ -343,6 +344,8 @@ static void rz_dmac_prepare_desc_for_memcpy(struct rz_dmac_chan *channel)
struct rz_dmac_desc *d = channel->desc;
u32 chcfg = CHCFG_MEM_COPY;
+ d->start_lmdesc = lmdesc;
+
/* prepare descriptor */
lmdesc->sa = d->src;
lmdesc->da = d->dest;
@@ -377,6 +380,7 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
}
lmdesc = channel->lmdesc.tail;
+ d->start_lmdesc = lmdesc;
for (i = 0, sg = sgl; i < sg_len; i++, sg = sg_next(sg)) {
if (d->direction == DMA_DEV_TO_MEM) {
@@ -693,9 +697,10 @@ rz_dmac_get_next_lmdesc(struct rz_lmdesc *base, struct rz_lmdesc *lmdesc)
return next;
}
-static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel, u32 crla)
+static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel,
+ struct rz_dmac_desc *desc, u32 crla)
{
- struct rz_lmdesc *lmdesc = channel->lmdesc.head;
+ struct rz_lmdesc *lmdesc = desc->start_lmdesc;
struct dma_chan *chan = &channel->vc.chan;
struct rz_dmac *dmac = to_rz_dmac(chan->device);
u32 residue = 0, i = 0;
@@ -794,7 +799,7 @@ static u32 rz_dmac_chan_get_residue(struct rz_dmac_chan *channel,
* Calculate number of bytes transferred in processing virtual descriptor.
* One virtual descriptor can have many lmdesc.
*/
- return crtb + rz_dmac_calculate_residue_bytes_in_vd(channel, crla);
+ return crtb + rz_dmac_calculate_residue_bytes_in_vd(channel, current_desc, crla);
}
static enum dma_status rz_dmac_tx_status(struct dma_chan *chan,
--
2.43.0
* [PATCH v3 06/15] dmaengine: sh: rz-dmac: Add helper to check if the channel is enabled
2026-04-07 13:34 [PATCH v3 00/15] Renesas: dmaengine and ASoC fixes Claudiu
` (4 preceding siblings ...)
2026-04-07 13:34 ` [PATCH v3 05/15] dmaengine: sh: rz-dmac: Save the start LM descriptor Claudiu
@ 2026-04-07 13:34 ` Claudiu
2026-04-07 13:34 ` [PATCH v3 07/15] dmaengine: sh: rz-dmac: Add helper to check if the channel is paused Claudiu
` (8 subsequent siblings)
14 siblings, 0 replies; 21+ messages in thread
From: Claudiu @ 2026-04-07 13:34 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
Add a helper to check if the channel is enabled. This will be reused in
subsequent patches.
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v3:
- none, this patch is new
drivers/dma/sh/rz-dmac.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index cd639aa9186a..083e81c07aff 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -279,6 +279,13 @@ static u32 rz_dmac_lmdesc_addr(struct rz_dmac_chan *channel, struct rz_lmdesc *l
(sizeof(struct rz_lmdesc) * (lmdesc - channel->lmdesc.base));
}
+static bool rz_dmac_chan_is_enabled(struct rz_dmac_chan *chan)
+{
+ u32 val = rz_dmac_ch_readl(chan, CHSTAT, 1);
+
+ return !!(val & CHSTAT_EN);
+}
+
static void rz_dmac_enable_hw(struct rz_dmac_chan *channel)
{
struct dma_chan *chan = &channel->vc.chan;
@@ -840,8 +847,7 @@ static int rz_dmac_device_pause(struct dma_chan *chan)
guard(spinlock_irqsave)(&channel->vc.lock);
- val = rz_dmac_ch_readl(channel, CHSTAT, 1);
- if (!(val & CHSTAT_EN))
+ if (!rz_dmac_chan_is_enabled(channel))
return 0;
rz_dmac_ch_writel(channel, CHCTRL_SETSUS, CHCTRL, 1);
--
2.43.0
* [PATCH v3 07/15] dmaengine: sh: rz-dmac: Add helper to check if the channel is paused
2026-04-07 13:34 [PATCH v3 00/15] Renesas: dmaengine and ASoC fixes Claudiu
` (5 preceding siblings ...)
2026-04-07 13:34 ` [PATCH v3 06/15] dmaengine: sh: rz-dmac: Add helper to check if the channel is enabled Claudiu
@ 2026-04-07 13:34 ` Claudiu
2026-04-07 13:35 ` [PATCH v3 08/15] dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor processing Claudiu
` (7 subsequent siblings)
14 siblings, 0 replies; 21+ messages in thread
From: Claudiu @ 2026-04-07 13:34 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
Add a helper to check if the channel is paused. This will be reused in
subsequent patches.
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v3:
- none, this patch is new
drivers/dma/sh/rz-dmac.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 083e81c07aff..bfc217e8f873 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -286,6 +286,13 @@ static bool rz_dmac_chan_is_enabled(struct rz_dmac_chan *chan)
return !!(val & CHSTAT_EN);
}
+static bool rz_dmac_chan_is_paused(struct rz_dmac_chan *chan)
+{
+ u32 val = rz_dmac_ch_readl(chan, CHSTAT, 1);
+
+ return !!(val & CHSTAT_SUS);
+}
+
static void rz_dmac_enable_hw(struct rz_dmac_chan *channel)
{
struct dma_chan *chan = &channel->vc.chan;
@@ -822,12 +829,9 @@ static enum dma_status rz_dmac_tx_status(struct dma_chan *chan,
return status;
scoped_guard(spinlock_irqsave, &channel->vc.lock) {
- u32 val;
-
residue = rz_dmac_chan_get_residue(channel, cookie);
- val = rz_dmac_ch_readl(channel, CHSTAT, 1);
- if (val & CHSTAT_SUS)
+ if (rz_dmac_chan_is_paused(channel))
status = DMA_PAUSED;
}
--
2.43.0
* [PATCH v3 08/15] dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor processing
2026-04-07 13:34 [PATCH v3 00/15] Renesas: dmaengine and ASoC fixes Claudiu
` (6 preceding siblings ...)
2026-04-07 13:34 ` [PATCH v3 07/15] dmaengine: sh: rz-dmac: Add helper to check if the channel is paused Claudiu
@ 2026-04-07 13:35 ` Claudiu
2026-04-07 13:35 ` [PATCH v3 09/15] dmaengine: sh: rz-dmac: Refactor pause/resume code Claudiu
` (6 subsequent siblings)
14 siblings, 0 replies; 21+ messages in thread
From: Claudiu @ 2026-04-07 13:35 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
The driver used a mix of virt-dma APIs and driver-specific logic to
process descriptors. It maintained three internal queues: ld_free,
ld_queue, and ld_active as follows:
- ld_free: stores the descriptors pre-allocated at probe time
- ld_queue: stores descriptors after they are taken from ld_free and
prepared. At the same time, vchan_tx_prep() queues them to
vc->desc_allocated. The vc->desc_allocated list is then checked in
rz_dmac_issue_pending() and rz_dmac_irq_handler_thread() before
starting a new transfer via rz_dmac_xfer_desc(). In turn,
rz_dmac_xfer_desc() grabs the next descriptor from vc->desc_issued and
submits it for transfer
- ld_active: stores the descriptors currently being transferred
The interrupt handler moved a completed descriptor to ld_free before
invoking its completion callback. Once returned to ld_free, the
descriptor can be reused to prepare a new transfer. In theory, this
means the descriptor could be re-prepared before its completion
callback is called.
Fully back the driver with the virt-dma APIs. With this, only ld_free
needs to be kept, to track how many free descriptors are available. This
is now done as follows:
- the prepare stage removes the first descriptor from the ld_free and
prepares it
- the completion path calls vc->desc_free() (rz_dmac_virt_desc_free())
for it, which re-adds the descriptor to the end of ld_free
With this, the critical sections in the prepare callbacks are reduced to
just taking the descriptor from the ld_free list.
This change introduces struct rz_dmac_chan::desc to keep track of the
currently transferred descriptor. It is cleared in
rz_dmac_terminate_all(), referenced from rz_dmac_issue_pending() to
determine whether a new transfer can be started, and from
rz_dmac_irq_handler_thread() once a descriptor has completed. Finally,
rz_dmac_device_synchronize() was updated with a vchan_synchronize() call
to ensure the terminated descriptor is freed and the tasklet is killed.
With this, residue computation is also simplified, as it can now be
handled entirely through the virt-dma APIs.
The spin_lock/unlock operations in rz_dmac_irq_handler_thread() were
replaced with guard(), as the final code is simpler this way after the
rework.
As subsequent commits will set the Link End bit on the last descriptor
of a transfer, rz_dmac_enable_hw() is also adjusted as part of the full
conversion to virt-dma APIs. It no longer checks the channel enable
status itself; instead, its callers verify whether the channel is
enabled and whether the previous transfer has completed before starting
a new one.
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v3:
- none, this patch is new
drivers/dma/sh/rz-dmac.c | 234 ++++++++++++++-------------------------
1 file changed, 85 insertions(+), 149 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index bfc217e8f873..d47c7601907f 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -79,8 +79,6 @@ struct rz_dmac_chan {
int mid_rid;
struct list_head ld_free;
- struct list_head ld_queue;
- struct list_head ld_active;
struct {
struct rz_lmdesc *base;
@@ -299,7 +297,6 @@ static void rz_dmac_enable_hw(struct rz_dmac_chan *channel)
struct rz_dmac *dmac = to_rz_dmac(chan->device);
u32 nxla;
u32 chctrl;
- u32 chstat;
dev_dbg(dmac->dev, "%s channel %d\n", __func__, channel->index);
@@ -307,14 +304,11 @@ static void rz_dmac_enable_hw(struct rz_dmac_chan *channel)
nxla = rz_dmac_lmdesc_addr(channel, channel->lmdesc.head);
- chstat = rz_dmac_ch_readl(channel, CHSTAT, 1);
- if (!(chstat & CHSTAT_EN)) {
- chctrl = (channel->chctrl | CHCTRL_SETEN);
- rz_dmac_ch_writel(channel, nxla, NXLA, 1);
- rz_dmac_ch_writel(channel, channel->chcfg, CHCFG, 1);
- rz_dmac_ch_writel(channel, CHCTRL_SWRST, CHCTRL, 1);
- rz_dmac_ch_writel(channel, chctrl, CHCTRL, 1);
- }
+ chctrl = (channel->chctrl | CHCTRL_SETEN);
+ rz_dmac_ch_writel(channel, nxla, NXLA, 1);
+ rz_dmac_ch_writel(channel, channel->chcfg, CHCFG, 1);
+ rz_dmac_ch_writel(channel, CHCTRL_SWRST, CHCTRL, 1);
+ rz_dmac_ch_writel(channel, chctrl, CHCTRL, 1);
}
static void rz_dmac_disable_hw(struct rz_dmac_chan *channel)
@@ -426,18 +420,20 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
channel->chctrl = CHCTRL_SETEN;
}
-static int rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
+static void rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
{
- struct rz_dmac_desc *d = chan->desc;
struct virt_dma_desc *vd;
vd = vchan_next_desc(&chan->vc);
- if (!vd)
- return 0;
+ if (!vd) {
+ chan->desc = NULL;
+ return;
+ }
list_del(&vd->node);
+ chan->desc = to_rz_dmac_desc(vd);
- switch (d->type) {
+ switch (chan->desc->type) {
case RZ_DMAC_DESC_MEMCPY:
rz_dmac_prepare_desc_for_memcpy(chan);
break;
@@ -445,14 +441,9 @@ static int rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
case RZ_DMAC_DESC_SLAVE_SG:
rz_dmac_prepare_descs_for_slave_sg(chan);
break;
-
- default:
- return -EINVAL;
}
rz_dmac_enable_hw(chan);
-
- return 0;
}
/*
@@ -494,8 +485,6 @@ static void rz_dmac_free_chan_resources(struct dma_chan *chan)
rz_lmdesc_setup(channel, channel->lmdesc.base);
rz_dmac_disable_hw(channel);
- list_splice_tail_init(&channel->ld_active, &channel->ld_free);
- list_splice_tail_init(&channel->ld_queue, &channel->ld_free);
if (channel->mid_rid >= 0) {
clear_bit(channel->mid_rid, dmac->modules);
@@ -504,13 +493,19 @@ static void rz_dmac_free_chan_resources(struct dma_chan *chan)
spin_unlock_irqrestore(&channel->vc.lock, flags);
+ vchan_free_chan_resources(&channel->vc);
+
+ spin_lock_irqsave(&channel->vc.lock, flags);
+
list_for_each_entry_safe(desc, _desc, &channel->ld_free, node) {
+ list_del(&desc->node);
kfree(desc);
channel->descs_allocated--;
}
INIT_LIST_HEAD(&channel->ld_free);
- vchan_free_chan_resources(&channel->vc);
+
+ spin_unlock_irqrestore(&channel->vc.lock, flags);
}
static struct dma_async_tx_descriptor *
@@ -529,15 +524,15 @@ rz_dmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
if (!desc)
return NULL;
- desc->type = RZ_DMAC_DESC_MEMCPY;
- desc->src = src;
- desc->dest = dest;
- desc->len = len;
- desc->direction = DMA_MEM_TO_MEM;
-
- list_move_tail(channel->ld_free.next, &channel->ld_queue);
+ list_del(&desc->node);
}
+ desc->type = RZ_DMAC_DESC_MEMCPY;
+ desc->src = src;
+ desc->dest = dest;
+ desc->len = len;
+ desc->direction = DMA_MEM_TO_MEM;
+
return vchan_tx_prep(&channel->vc, &desc->vd, flags);
}
@@ -558,22 +553,22 @@ rz_dmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
if (!desc)
return NULL;
- for_each_sg(sgl, sg, sg_len, i)
- dma_length += sg_dma_len(sg);
+ list_del(&desc->node);
+ }
- desc->type = RZ_DMAC_DESC_SLAVE_SG;
- desc->sg = sgl;
- desc->sgcount = sg_len;
- desc->len = dma_length;
- desc->direction = direction;
+ for_each_sg(sgl, sg, sg_len, i)
+ dma_length += sg_dma_len(sg);
- if (direction == DMA_DEV_TO_MEM)
- desc->src = channel->src_per_address;
- else
- desc->dest = channel->dst_per_address;
+ desc->type = RZ_DMAC_DESC_SLAVE_SG;
+ desc->sg = sgl;
+ desc->sgcount = sg_len;
+ desc->len = dma_length;
+ desc->direction = direction;
- list_move_tail(channel->ld_free.next, &channel->ld_queue);
- }
+ if (direction == DMA_DEV_TO_MEM)
+ desc->src = channel->src_per_address;
+ else
+ desc->dest = channel->dst_per_address;
return vchan_tx_prep(&channel->vc, &desc->vd, flags);
}
@@ -588,8 +583,11 @@ static int rz_dmac_terminate_all(struct dma_chan *chan)
rz_dmac_disable_hw(channel);
rz_lmdesc_setup(channel, channel->lmdesc.base);
- list_splice_tail_init(&channel->ld_active, &channel->ld_free);
- list_splice_tail_init(&channel->ld_queue, &channel->ld_free);
+ if (channel->desc) {
+ vchan_terminate_vdesc(&channel->desc->vd);
+ channel->desc = NULL;
+ }
+
vchan_get_all_descriptors(&channel->vc, &head);
spin_unlock_irqrestore(&channel->vc.lock, flags);
vchan_dma_desc_free_list(&channel->vc, &head);
@@ -600,25 +598,16 @@ static int rz_dmac_terminate_all(struct dma_chan *chan)
static void rz_dmac_issue_pending(struct dma_chan *chan)
{
struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
- struct rz_dmac *dmac = to_rz_dmac(chan->device);
- struct rz_dmac_desc *desc;
unsigned long flags;
spin_lock_irqsave(&channel->vc.lock, flags);
- if (!list_empty(&channel->ld_queue)) {
- desc = list_first_entry(&channel->ld_queue,
- struct rz_dmac_desc, node);
- channel->desc = desc;
- if (vchan_issue_pending(&channel->vc)) {
- if (rz_dmac_xfer_desc(channel) < 0)
- dev_warn(dmac->dev, "ch: %d couldn't issue DMA xfer\n",
- channel->index);
- else
- list_move_tail(channel->ld_queue.next,
- &channel->ld_active);
- }
- }
+ /*
+ * Issue the descriptor. If another transfer is already in progress, the
+ * issued descriptor will be handled after the current transfer finishes.
+ */
+ if (vchan_issue_pending(&channel->vc) && !channel->desc)
+ rz_dmac_xfer_desc(channel);
spin_unlock_irqrestore(&channel->vc.lock, flags);
}
@@ -676,13 +665,13 @@ static int rz_dmac_config(struct dma_chan *chan,
static void rz_dmac_virt_desc_free(struct virt_dma_desc *vd)
{
- /*
- * Place holder
- * Descriptor allocation is done during alloc_chan_resources and
- * get freed during free_chan_resources.
- * list is used to manage the descriptors and avoid any memory
- * allocation/free during DMA read/write.
- */
+ struct rz_dmac_chan *channel = to_rz_dmac_chan(vd->tx.chan);
+ struct virt_dma_chan *vc = to_virt_chan(vd->tx.chan);
+ struct rz_dmac_desc *desc = to_rz_dmac_desc(vd);
+
+ guard(spinlock_irqsave)(&vc->lock);
+
+ list_add_tail(&desc->node, &channel->ld_free);
}
static void rz_dmac_device_synchronize(struct dma_chan *chan)
@@ -692,6 +681,8 @@ static void rz_dmac_device_synchronize(struct dma_chan *chan)
u32 chstat;
int ret;
+ vchan_synchronize(&channel->vc);
+
ret = read_poll_timeout(rz_dmac_ch_readl, chstat, !(chstat & CHSTAT_EN),
100, 100000, false, channel, CHSTAT, 1);
if (ret < 0)
@@ -739,58 +730,22 @@ static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel,
static u32 rz_dmac_chan_get_residue(struct rz_dmac_chan *channel,
dma_cookie_t cookie)
{
- struct rz_dmac_desc *current_desc, *desc;
- enum dma_status status;
+ struct rz_dmac_desc *desc = NULL;
+ struct virt_dma_desc *vd;
u32 crla, crtb, i;
- /* Get current processing virtual descriptor */
- current_desc = list_first_entry(&channel->ld_active,
- struct rz_dmac_desc, node);
- if (!current_desc)
- return 0;
-
- /*
- * If the cookie corresponds to a descriptor that has been completed
- * there is no residue. The same check has already been performed by the
- * caller but without holding the channel lock, so the descriptor could
- * now be complete.
- */
- status = dma_cookie_status(&channel->vc.chan, cookie, NULL);
- if (status == DMA_COMPLETE)
- return 0;
-
- /*
- * If the cookie doesn't correspond to the currently processing virtual
- * descriptor then the descriptor hasn't been processed yet, and the
- * residue is equal to the full descriptor size. Also, a client driver
- * is possible to call this function before rz_dmac_irq_handler_thread()
- * runs. In this case, the running descriptor will be the next
- * descriptor, and will appear in the done list. So, if the argument
- * cookie matches the done list's cookie, we can assume the residue is
- * zero.
- */
- if (cookie != current_desc->vd.tx.cookie) {
- list_for_each_entry(desc, &channel->ld_free, node) {
- if (cookie == desc->vd.tx.cookie)
- return 0;
- }
-
- list_for_each_entry(desc, &channel->ld_queue, node) {
- if (cookie == desc->vd.tx.cookie)
- return desc->len;
- }
-
- list_for_each_entry(desc, &channel->ld_active, node) {
- if (cookie == desc->vd.tx.cookie)
- return desc->len;
- }
+ vd = vchan_find_desc(&channel->vc, cookie);
+ if (vd) {
+ /* Descriptor has been issued but not yet processed. */
+ desc = to_rz_dmac_desc(vd);
+ return desc->len;
+ } else if (channel->desc && channel->desc->vd.tx.cookie == cookie) {
+ /* Descriptor is currently processed. */
+ desc = channel->desc;
+ }
- /*
- * No descriptor found for the cookie, there's thus no residue.
- * This shouldn't happen if the calling driver passes a correct
- * cookie value.
- */
- WARN(1, "No descriptor for cookie!");
+ if (!desc) {
+ /* Descriptor was not found. May be already completed by now. */
return 0;
}
@@ -813,7 +768,7 @@ static u32 rz_dmac_chan_get_residue(struct rz_dmac_chan *channel,
* Calculate number of bytes transferred in processing virtual descriptor.
* One virtual descriptor can have many lmdesc.
*/
- return crtb + rz_dmac_calculate_residue_bytes_in_vd(channel, current_desc, crla);
+ return crtb + rz_dmac_calculate_residue_bytes_in_vd(channel, desc, crla);
}
static enum dma_status rz_dmac_tx_status(struct dma_chan *chan,
@@ -824,21 +779,14 @@ static enum dma_status rz_dmac_tx_status(struct dma_chan *chan,
enum dma_status status;
u32 residue;
- status = dma_cookie_status(chan, cookie, txstate);
- if (status == DMA_COMPLETE || !txstate)
- return status;
-
scoped_guard(spinlock_irqsave, &channel->vc.lock) {
- residue = rz_dmac_chan_get_residue(channel, cookie);
+ status = dma_cookie_status(chan, cookie, txstate);
+ if (status == DMA_COMPLETE || !txstate)
+ return status;
- if (rz_dmac_chan_is_paused(channel))
- status = DMA_PAUSED;
+ residue = rz_dmac_chan_get_residue(channel, cookie);
}
- /* if there's no residue and no paused, the cookie is complete */
- if (!residue && status != DMA_PAUSED)
- return DMA_COMPLETE;
-
dma_set_residue(txstate, residue);
return status;
@@ -914,28 +862,18 @@ static irqreturn_t rz_dmac_irq_handler(int irq, void *dev_id)
static irqreturn_t rz_dmac_irq_handler_thread(int irq, void *dev_id)
{
struct rz_dmac_chan *channel = dev_id;
- struct rz_dmac_desc *desc = NULL;
- unsigned long flags;
+ struct rz_dmac_desc *desc;
- spin_lock_irqsave(&channel->vc.lock, flags);
+ guard(spinlock_irqsave)(&channel->vc.lock);
- if (list_empty(&channel->ld_active)) {
- /* Someone might have called terminate all */
- goto out;
- }
+ desc = channel->desc;
+ if (!desc)
+ return IRQ_HANDLED;
- desc = list_first_entry(&channel->ld_active, struct rz_dmac_desc, node);
vchan_cookie_complete(&desc->vd);
- list_move_tail(channel->ld_active.next, &channel->ld_free);
- if (!list_empty(&channel->ld_queue)) {
- desc = list_first_entry(&channel->ld_queue, struct rz_dmac_desc,
- node);
- channel->desc = desc;
- if (rz_dmac_xfer_desc(channel) == 0)
- list_move_tail(channel->ld_queue.next, &channel->ld_active);
- }
-out:
- spin_unlock_irqrestore(&channel->vc.lock, flags);
+ channel->desc = NULL;
+
+ rz_dmac_xfer_desc(channel);
return IRQ_HANDLED;
}
@@ -1039,9 +977,7 @@ static int rz_dmac_chan_probe(struct rz_dmac *dmac,
channel->vc.desc_free = rz_dmac_virt_desc_free;
vchan_init(&channel->vc, &dmac->engine);
- INIT_LIST_HEAD(&channel->ld_queue);
INIT_LIST_HEAD(&channel->ld_free);
- INIT_LIST_HEAD(&channel->ld_active);
return 0;
}
--
2.43.0
* [PATCH v3 09/15] dmaengine: sh: rz-dmac: Refactor pause/resume code
2026-04-07 13:34 [PATCH v3 00/15] Renesas: dmaengine and ASoC fixes Claudiu
` (7 preceding siblings ...)
2026-04-07 13:35 ` [PATCH v3 08/15] dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor processing Claudiu
@ 2026-04-07 13:35 ` Claudiu
2026-04-07 13:35 ` [PATCH v3 10/15] dmaengine: sh: rz-dmac: Drop the update of channel->chctrl with CHCTRL_SETEN Claudiu
` (5 subsequent siblings)
14 siblings, 0 replies; 21+ messages in thread
From: Claudiu @ 2026-04-07 13:35 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
Subsequent patches will add suspend/resume and cyclic DMA support to the
rz-dmac driver. This support needs to work on SoCs where power to most
components (including DMA) is turned off during system suspend. For this,
some channels (for example cyclic ones) may need to be paused and resumed
manually by the DMA driver during system suspend/resume.
Refactor the pause/resume support so the same code can be reused in the
system suspend/resume path.
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v3:
- none, this patch is new
drivers/dma/sh/rz-dmac.c | 68 +++++++++++++++++++++++++++++++++-------
1 file changed, 57 insertions(+), 11 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index d47c7601907f..bacde5e28616 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -18,6 +18,7 @@
#include <linux/irqchip/irq-renesas-rzv2h.h>
#include <linux/irqchip/irq-renesas-rzt2h.h>
#include <linux/list.h>
+#include <linux/lockdep.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_dma.h>
@@ -63,6 +64,14 @@ struct rz_dmac_desc {
#define to_rz_dmac_desc(d) container_of(d, struct rz_dmac_desc, vd)
+/**
+ * enum rz_dmac_chan_status - RZ DMAC channel status
+ * @RZ_DMAC_CHAN_STATUS_PAUSED: Channel is paused through DMA engine callbacks
+ */
+enum rz_dmac_chan_status {
+ RZ_DMAC_CHAN_STATUS_PAUSED,
+};
+
struct rz_dmac_chan {
struct virt_dma_chan vc;
void __iomem *ch_base;
@@ -74,6 +83,8 @@ struct rz_dmac_chan {
dma_addr_t src_per_address;
dma_addr_t dst_per_address;
+ unsigned long status;
+
u32 chcfg;
u32 chctrl;
int mid_rid;
@@ -792,35 +803,70 @@ static enum dma_status rz_dmac_tx_status(struct dma_chan *chan,
return status;
}
-static int rz_dmac_device_pause(struct dma_chan *chan)
+static int rz_dmac_device_pause_set(struct rz_dmac_chan *channel,
+ unsigned long set_bitmask)
{
- struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
+ int ret = 0;
u32 val;
- guard(spinlock_irqsave)(&channel->vc.lock);
+ lockdep_assert_held(&channel->vc.lock);
if (!rz_dmac_chan_is_enabled(channel))
return 0;
+ if (rz_dmac_chan_is_paused(channel))
+ goto set_bit;
+
rz_dmac_ch_writel(channel, CHCTRL_SETSUS, CHCTRL, 1);
- return read_poll_timeout_atomic(rz_dmac_ch_readl, val,
- (val & CHSTAT_SUS), 1, 1024,
- false, channel, CHSTAT, 1);
+ ret = read_poll_timeout_atomic(rz_dmac_ch_readl, val,
+ (val & CHSTAT_SUS), 1, 1024, false,
+ channel, CHSTAT, 1);
+
+set_bit:
+ channel->status |= set_bitmask;
+
+ return ret;
}
-static int rz_dmac_device_resume(struct dma_chan *chan)
+static int rz_dmac_device_pause(struct dma_chan *chan)
{
struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
- u32 val;
guard(spinlock_irqsave)(&channel->vc.lock);
+ return rz_dmac_device_pause_set(channel, BIT(RZ_DMAC_CHAN_STATUS_PAUSED));
+}
+
+static int rz_dmac_device_resume_set(struct rz_dmac_chan *channel,
+ unsigned long clear_bitmask)
+{
+ int ret = 0;
+ u32 val;
+
+ lockdep_assert_held(&channel->vc.lock);
+
/* Do not check CHSTAT_SUS but rely on HW capabilities. */
rz_dmac_ch_writel(channel, CHCTRL_CLRSUS, CHCTRL, 1);
- return read_poll_timeout_atomic(rz_dmac_ch_readl, val,
- !(val & CHSTAT_SUS), 1, 1024,
- false, channel, CHSTAT, 1);
+ ret = read_poll_timeout_atomic(rz_dmac_ch_readl, val,
+ !(val & CHSTAT_SUS), 1, 1024, false,
+ channel, CHSTAT, 1);
+
+ channel->status &= ~clear_bitmask;
+
+ return ret;
+}
+
+static int rz_dmac_device_resume(struct dma_chan *chan)
+{
+ struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
+
+ guard(spinlock_irqsave)(&channel->vc.lock);
+
+ if (!(channel->status & BIT(RZ_DMAC_CHAN_STATUS_PAUSED)))
+ return 0;
+
+ return rz_dmac_device_resume_set(channel, BIT(RZ_DMAC_CHAN_STATUS_PAUSED));
}
/*
--
2.43.0
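To make the refactoring idea concrete, a hedged sketch follows: one pause helper takes a bitmask naming who requested the pause (consumer vs. driver-internal), so the same code can serve both the dmaengine callback and the later system suspend path. The bit names mirror the patch, but the hardware access (CHCTRL/CHSTAT) is stubbed with a flag; this is an illustrative model, not the driver itself.

```c
/* Hypothetical model of the pause/resume refactor: shared helpers
 * parameterized by a status bitmask. HW suspend is faked with a flag. */
#include <assert.h>

enum { PAUSED, PAUSED_INTERNAL };
#define BIT(n) (1UL << (n))

struct chan {
	unsigned long status;
	int hw_suspended;	/* stands in for CHSTAT.SUS */
};

static int chan_is_paused(struct chan *c)
{
	return c->status & (BIT(PAUSED) | BIT(PAUSED_INTERNAL));
}

static int pause_set(struct chan *c, unsigned long set_mask)
{
	/* Only touch the HW once; a second caller just records its bit. */
	if (!chan_is_paused(c))
		c->hw_suspended = 1;	/* CHCTRL_SETSUS + poll CHSTAT_SUS */
	c->status |= set_mask;		/* record who asked for the pause */
	return 0;
}

static int resume_clear(struct chan *c, unsigned long clear_mask)
{
	c->hw_suspended = 0;		/* CHCTRL_CLRSUS */
	c->status &= ~clear_mask;
	return 0;
}
```

The point of the bitmask parameter is that a consumer pause and an internal (suspend-path) pause can coexist without one resume accidentally undoing the other's request.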
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH v3 10/15] dmaengine: sh: rz-dmac: Drop the update of channel->chctrl with CHCTRL_SETEN
2026-04-07 13:34 [PATCH v3 00/15] Renesas: dmaengine and ASoC fixes Claudiu
` (8 preceding siblings ...)
2026-04-07 13:35 ` [PATCH v3 09/15] dmaengine: sh: rz-dmac: Refactor pause/resume code Claudiu
@ 2026-04-07 13:35 ` Claudiu
2026-04-07 13:35 ` [PATCH v3 11/15] dmaengine: sh: rz-dmac: Add cyclic DMA support Claudiu
` (4 subsequent siblings)
14 siblings, 0 replies; 21+ messages in thread
From: Claudiu @ 2026-04-07 13:35 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
The CHCTRL_SETEN bit is explicitly set in rz_dmac_enable_hw(). Updating
struct rz_dmac_chan::chctrl with this bit in
rz_dmac_prepare_desc_for_memcpy() and rz_dmac_prepare_descs_for_slave_sg()
is unnecessary in the current code base. Moreover, it conflicts with the
configuration sequence that will be used for cyclic DMA channels during
suspend to RAM. Cyclic DMA support will be introduced in subsequent
commits.
This is a preparatory commit for cyclic DMA suspend to RAM support.
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v3:
- none
Changes in v2:
- fixed typos in patch title and patch description
drivers/dma/sh/rz-dmac.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index bacde5e28616..8fbccabc94e4 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -377,7 +377,7 @@ static void rz_dmac_prepare_desc_for_memcpy(struct rz_dmac_chan *channel)
rz_dmac_set_dma_req_no(dmac, channel->index, dmac->info->default_dma_req_no);
channel->chcfg = chcfg;
- channel->chctrl = CHCTRL_STG | CHCTRL_SETEN;
+ channel->chctrl = CHCTRL_STG;
}
static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
@@ -427,8 +427,6 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
channel->lmdesc.tail = lmdesc;
rz_dmac_set_dma_req_no(dmac, channel->index, channel->mid_rid);
-
- channel->chctrl = CHCTRL_SETEN;
}
static void rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
--
2.43.0
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH v3 11/15] dmaengine: sh: rz-dmac: Add cyclic DMA support
2026-04-07 13:34 [PATCH v3 00/15] Renesas: dmaengine and ASoC fixes Claudiu
` (9 preceding siblings ...)
2026-04-07 13:35 ` [PATCH v3 10/15] dmaengine: sh: rz-dmac: Drop the update of channel->chctrl with CHCTRL_SETEN Claudiu
@ 2026-04-07 13:35 ` Claudiu
2026-04-07 14:36 ` Biju Das
2026-04-07 13:35 ` [PATCH v3 12/15] dmaengine: sh: rz-dmac: Add suspend to RAM support Claudiu
` (3 subsequent siblings)
14 siblings, 1 reply; 21+ messages in thread
From: Claudiu @ 2026-04-07 13:35 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
Add cyclic DMA support to the RZ DMAC driver. A per-channel status bit is
introduced to mark cyclic channels and is set during the DMA prepare
callback. The IRQ handler checks this status bit and calls
vchan_cyclic_callback() accordingly.
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v3:
- updated rz_dmac_lmdesc_recycle() to restore the lmdesc->nxla
- in rz_dmac_prepare_descs_for_cyclic() update directly the
desc->start_lmdesc with the descriptor pointer instead of the
descriptor address
- used rz_dmac_lmdesc_addr() to compute the descriptor address
- set channel->status = 0 in rz_dmac_free_chan_resources()
- in rz_dmac_prep_dma_cyclic() check for invalid periods or buffer len
and limit the critical area protected by spinlock
- set channel->status = 0 in rz_dmac_terminate_all()
- updated rz_dmac_calculate_residue_bytes_in_vd() to use
rz_dmac_lmdesc_addr()
- dropped goto in rz_dmac_irq_handler_thread() as it is not needed
anymore; dropped also the local variable desc
Changes in v2:
- none
drivers/dma/sh/rz-dmac.c | 144 +++++++++++++++++++++++++++++++++++++--
1 file changed, 138 insertions(+), 6 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 8fbccabc94e4..f7133ac6af60 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -35,6 +35,7 @@
enum rz_dmac_prep_type {
RZ_DMAC_DESC_MEMCPY,
RZ_DMAC_DESC_SLAVE_SG,
+ RZ_DMAC_DESC_CYCLIC,
};
struct rz_lmdesc {
@@ -67,9 +68,11 @@ struct rz_dmac_desc {
/**
* enum rz_dmac_chan_status - RZ DMAC channel status
* @RZ_DMAC_CHAN_STATUS_PAUSED: Channel is paused through DMA engine callbacks
+ * @RZ_DMAC_CHAN_STATUS_CYCLIC: Channel is cyclic
*/
enum rz_dmac_chan_status {
RZ_DMAC_CHAN_STATUS_PAUSED,
+ RZ_DMAC_CHAN_STATUS_CYCLIC,
};
struct rz_dmac_chan {
@@ -191,6 +194,7 @@ struct rz_dmac {
/* LINK MODE DESCRIPTOR */
#define HEADER_LV BIT(0)
+#define HEADER_WBD BIT(2)
#define RZ_DMAC_MAX_CHAN_DESCRIPTORS 16
#define RZ_DMAC_MAX_CHANNELS 16
@@ -272,9 +276,12 @@ static void rz_lmdesc_setup(struct rz_dmac_chan *channel,
static void rz_dmac_lmdesc_recycle(struct rz_dmac_chan *channel)
{
struct rz_lmdesc *lmdesc = channel->lmdesc.head;
+ u32 nxla = channel->lmdesc.base_dma;
while (!(lmdesc->header & HEADER_LV)) {
lmdesc->header = 0;
+ nxla += sizeof(*lmdesc);
+ lmdesc->nxla = nxla;
lmdesc++;
if (lmdesc >= (channel->lmdesc.base + DMAC_NR_LMDESC))
lmdesc = channel->lmdesc.base;
@@ -429,6 +436,57 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
rz_dmac_set_dma_req_no(dmac, channel->index, channel->mid_rid);
}
+static void rz_dmac_prepare_descs_for_cyclic(struct rz_dmac_chan *channel)
+{
+ struct dma_chan *chan = &channel->vc.chan;
+ struct rz_dmac *dmac = to_rz_dmac(chan->device);
+ struct rz_dmac_desc *d = channel->desc;
+ size_t period_len = d->sgcount;
+ struct rz_lmdesc *lmdesc;
+ size_t buf_len = d->len;
+ size_t periods = buf_len / period_len;
+
+ lockdep_assert_held(&channel->vc.lock);
+
+ channel->chcfg |= CHCFG_SEL(channel->index) | CHCFG_DMS;
+
+ if (d->direction == DMA_DEV_TO_MEM) {
+ channel->chcfg |= CHCFG_SAD;
+ channel->chcfg &= ~CHCFG_REQD;
+ } else {
+ channel->chcfg |= CHCFG_DAD | CHCFG_REQD;
+ }
+
+ lmdesc = channel->lmdesc.tail;
+ d->start_lmdesc = lmdesc;
+
+ for (size_t i = 0; i < periods; i++) {
+ if (d->direction == DMA_DEV_TO_MEM) {
+ lmdesc->sa = d->src;
+ lmdesc->da = d->dest + (i * period_len);
+ } else {
+ lmdesc->sa = d->src + (i * period_len);
+ lmdesc->da = d->dest;
+ }
+
+ lmdesc->tb = period_len;
+ lmdesc->chitvl = 0;
+ lmdesc->chext = 0;
+ lmdesc->chcfg = channel->chcfg;
+ lmdesc->header = HEADER_LV | HEADER_WBD;
+
+ if (i == periods - 1)
+ lmdesc->nxla = rz_dmac_lmdesc_addr(channel, d->start_lmdesc);
+
+ if (++lmdesc >= (channel->lmdesc.base + DMAC_NR_LMDESC))
+ lmdesc = channel->lmdesc.base;
+ }
+
+ channel->lmdesc.tail = lmdesc;
+
+ rz_dmac_set_dma_req_no(dmac, channel->index, channel->mid_rid);
+}
+
static void rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
{
struct virt_dma_desc *vd;
@@ -450,6 +508,10 @@ static void rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
case RZ_DMAC_DESC_SLAVE_SG:
rz_dmac_prepare_descs_for_slave_sg(chan);
break;
+
+ case RZ_DMAC_DESC_CYCLIC:
+ rz_dmac_prepare_descs_for_cyclic(chan);
+ break;
}
rz_dmac_enable_hw(chan);
@@ -500,6 +562,8 @@ static void rz_dmac_free_chan_resources(struct dma_chan *chan)
channel->mid_rid = -EINVAL;
}
+ channel->status = 0;
+
spin_unlock_irqrestore(&channel->vc.lock, flags);
vchan_free_chan_resources(&channel->vc);
@@ -582,6 +646,55 @@ rz_dmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
return vchan_tx_prep(&channel->vc, &desc->vd, flags);
}
+static struct dma_async_tx_descriptor *
+rz_dmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
+ size_t buf_len, size_t period_len,
+ enum dma_transfer_direction direction,
+ unsigned long flags)
+{
+ struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
+ struct rz_dmac_desc *desc;
+ size_t periods;
+
+ if (!is_slave_direction(direction))
+ return NULL;
+
+ if (!period_len || !buf_len)
+ return NULL;
+
+ periods = buf_len / period_len;
+ if (!periods || periods > DMAC_NR_LMDESC)
+ return NULL;
+
+ scoped_guard(spinlock_irqsave, &channel->vc.lock) {
+ if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC))
+ return NULL;
+
+ desc = list_first_entry_or_null(&channel->ld_free, struct rz_dmac_desc, node);
+ if (!desc)
+ return NULL;
+
+ list_del(&desc->node);
+
+ channel->status |= BIT(RZ_DMAC_CHAN_STATUS_CYCLIC);
+ }
+
+ desc->type = RZ_DMAC_DESC_CYCLIC;
+ desc->sgcount = period_len;
+ desc->len = buf_len;
+ desc->direction = direction;
+
+ if (direction == DMA_DEV_TO_MEM) {
+ desc->src = channel->src_per_address;
+ desc->dest = buf_addr;
+ } else {
+ desc->src = buf_addr;
+ desc->dest = channel->dst_per_address;
+ }
+
+ return vchan_tx_prep(&channel->vc, &desc->vd, flags);
+}
+
static int rz_dmac_terminate_all(struct dma_chan *chan)
{
struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
@@ -598,6 +711,9 @@ static int rz_dmac_terminate_all(struct dma_chan *chan)
}
vchan_get_all_descriptors(&channel->vc, &head);
+
+ channel->status = 0;
+
spin_unlock_irqrestore(&channel->vc.lock, flags);
vchan_dma_desc_free_list(&channel->vc, &head);
@@ -726,9 +842,18 @@ static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel,
}
/* Calculate residue from next lmdesc to end of virtual desc */
- while (lmdesc->chcfg & CHCFG_DEM) {
- residue += lmdesc->tb;
- lmdesc = rz_dmac_get_next_lmdesc(channel->lmdesc.base, lmdesc);
+ if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)) {
+ u32 start_lmdesc_addr = rz_dmac_lmdesc_addr(channel, desc->start_lmdesc);
+
+ while (lmdesc->nxla != start_lmdesc_addr) {
+ residue += lmdesc->tb;
+ lmdesc = rz_dmac_get_next_lmdesc(channel->lmdesc.base, lmdesc);
+ }
+ } else {
+ while (lmdesc->chcfg & CHCFG_DEM) {
+ residue += lmdesc->tb;
+ lmdesc = rz_dmac_get_next_lmdesc(channel->lmdesc.base, lmdesc);
+ }
}
dev_dbg(dmac->dev, "%s: VD residue is %u\n", __func__, residue);
@@ -914,10 +1039,14 @@ static irqreturn_t rz_dmac_irq_handler_thread(int irq, void *dev_id)
if (!desc)
return IRQ_HANDLED;
- vchan_cookie_complete(&desc->vd);
- channel->desc = NULL;
+ if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)) {
+ vchan_cyclic_callback(&desc->vd);
+ } else {
+ vchan_cookie_complete(&desc->vd);
+ channel->desc = NULL;
- rz_dmac_xfer_desc(channel);
+ rz_dmac_xfer_desc(channel);
+ }
return IRQ_HANDLED;
}
@@ -1172,6 +1301,8 @@ static int rz_dmac_probe(struct platform_device *pdev)
engine = &dmac->engine;
dma_cap_set(DMA_SLAVE, engine->cap_mask);
dma_cap_set(DMA_MEMCPY, engine->cap_mask);
+ dma_cap_set(DMA_CYCLIC, engine->cap_mask);
+ engine->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
engine->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
rz_dmac_writel(dmac, DCTRL_DEFAULT, CHANNEL_0_7_COMMON_BASE + DCTRL);
rz_dmac_writel(dmac, DCTRL_DEFAULT, CHANNEL_8_15_COMMON_BASE + DCTRL);
@@ -1183,6 +1314,7 @@ static int rz_dmac_probe(struct platform_device *pdev)
engine->device_tx_status = rz_dmac_tx_status;
engine->device_prep_slave_sg = rz_dmac_prep_slave_sg;
engine->device_prep_dma_memcpy = rz_dmac_prep_dma_memcpy;
+ engine->device_prep_dma_cyclic = rz_dmac_prep_dma_cyclic;
engine->device_config = rz_dmac_config;
engine->device_terminate_all = rz_dmac_terminate_all;
engine->device_issue_pending = rz_dmac_issue_pending;
--
2.43.0
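The descriptor-ring construction in rz_dmac_prepare_descs_for_cyclic() and the cyclic branch of the residue walk can be modeled as follows. This is a hedged sketch: the "address" in `nxla` is an array index here rather than a DMA address, and `build_ring`/`residue_from` are invented names, but the shape matches the patch — one link-mode descriptor per period, the last one linking back to the first, and the residue loop stopping when a descriptor's next-link equals the ring start instead of relying on CHCFG_DEM as the scatter-gather path does.

```c
/* Hypothetical model of the cyclic lmdesc ring and residue walk. */
#include <assert.h>
#include <stdint.h>

#define NR_LMDESC 8

struct lmdesc {
	uint32_t tb;	/* transfer byte count for this period */
	uint32_t nxla;	/* "address" (index here) of the next descriptor */
};

/* build a ring of 'periods' descriptors of period_len bytes each */
static void build_ring(struct lmdesc *ring, unsigned int periods,
		       uint32_t period_len)
{
	for (unsigned int i = 0; i < periods; i++) {
		ring[i].tb = period_len;
		ring[i].nxla = (i + 1) % periods; /* last links to first */
	}
}

/* residue from descriptor 'cur' to the end of the current ring pass */
static uint32_t residue_from(struct lmdesc *ring, unsigned int cur,
			     unsigned int start)
{
	uint32_t residue = 0;
	struct lmdesc *d = &ring[cur];

	/* Walk until the link points back at the ring start. */
	while (d->nxla != start) {
		residue += d->tb;
		d = &ring[d->nxla];
	}
	return residue;
}
```

With four 100-byte periods, the walk from descriptor 1 accumulates descriptors 1 and 2 (descriptor 3 links back to 0), i.e. 200 bytes of residue in this model.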
^ permalink raw reply related [flat|nested] 21+ messages in thread
* RE: [PATCH v3 11/15] dmaengine: sh: rz-dmac: Add cyclic DMA support
2026-04-07 13:35 ` [PATCH v3 11/15] dmaengine: sh: rz-dmac: Add cyclic DMA support Claudiu
@ 2026-04-07 14:36 ` Biju Das
2026-04-07 15:12 ` Claudiu Beznea
0 siblings, 1 reply; 21+ messages in thread
From: Biju Das @ 2026-04-07 14:36 UTC (permalink / raw)
To: Claudiu.Beznea, vkoul@kernel.org, Frank.Li@kernel.org,
lgirdwood@gmail.com, broonie@kernel.org, perex@perex.cz,
tiwai@suse.com, Prabhakar Mahadev Lad, p.zabel@pengutronix.de,
geert+renesas@glider.be, Fabrizio Castro
Cc: Claudiu.Beznea, dmaengine@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-sound@vger.kernel.org,
linux-renesas-soc@vger.kernel.org, Claudiu Beznea
Hi Claudiu,
Thanks for the patch.
> -----Original Message-----
> From: Claudiu <claudiu.beznea@tuxon.dev>
> Sent: 07 April 2026 14:35
> Subject: [PATCH v3 11/15] dmaengine: sh: rz-dmac: Add cyclic DMA support
>
> From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
>
> Add cyclic DMA support to the RZ DMAC driver. A per-channel status bit is introduced to mark cyclic
> channels and is set during the DMA prepare callback. The IRQ handler checks this status bit and calls
> vchan_cyclic_callback() accordingly.
>
> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> ---
>
> Changes in v3:
> - updated rz_dmac_lmdesc_recycle() to restore the lmdesc->nxla
> - in rz_dmac_prepare_descs_for_cyclic() update directly the
> desc->start_lmdesc with the descriptor pointer instead of the
> descriptor address
> - used rz_dmac_lmdesc_addr() to compute the descriptor address
> - set channel->status = 0 in rz_dmac_free_chan_resources()
> - in rz_dmac_prep_dma_cyclic() check for invalid periods or buffer len
> and limit the critical area protected by spinlock
> - set channel->status = 0 in rz_dmac_terminate_all()
> - updated rz_dmac_calculate_residue_bytes_in_vd() to use
> rz_dmac_lmdesc_addr()
> - dropped goto in rz_dmac_irq_handler_thread() as it is not needed
> anymore; dropped also the local variable desc
>
> Changes in v2:
> - none
>
> drivers/dma/sh/rz-dmac.c | 144 +++++++++++++++++++++++++++++++++++++--
> 1 file changed, 138 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c index 8fbccabc94e4..f7133ac6af60
> 100644
> --- a/drivers/dma/sh/rz-dmac.c
> +++ b/drivers/dma/sh/rz-dmac.c
> @@ -35,6 +35,7 @@
> enum rz_dmac_prep_type {
> RZ_DMAC_DESC_MEMCPY,
> RZ_DMAC_DESC_SLAVE_SG,
> + RZ_DMAC_DESC_CYCLIC,
> };
>
> struct rz_lmdesc {
> @@ -67,9 +68,11 @@ struct rz_dmac_desc {
> /**
> * enum rz_dmac_chan_status - RZ DMAC channel status
> * @RZ_DMAC_CHAN_STATUS_PAUSED: Channel is paused through DMA engine callbacks
> + * @RZ_DMAC_CHAN_STATUS_CYCLIC: Channel is cyclic
> */
> enum rz_dmac_chan_status {
> RZ_DMAC_CHAN_STATUS_PAUSED,
> + RZ_DMAC_CHAN_STATUS_CYCLIC,
> };
>
> struct rz_dmac_chan {
> @@ -191,6 +194,7 @@ struct rz_dmac {
>
> /* LINK MODE DESCRIPTOR */
> #define HEADER_LV BIT(0)
> +#define HEADER_WBD BIT(2)
>
> #define RZ_DMAC_MAX_CHAN_DESCRIPTORS 16
> #define RZ_DMAC_MAX_CHANNELS 16
> @@ -272,9 +276,12 @@ static void rz_lmdesc_setup(struct rz_dmac_chan *channel, static void
> rz_dmac_lmdesc_recycle(struct rz_dmac_chan *channel) {
> struct rz_lmdesc *lmdesc = channel->lmdesc.head;
> + u32 nxla = channel->lmdesc.base_dma;
>
> while (!(lmdesc->header & HEADER_LV)) {
> lmdesc->header = 0;
> + nxla += sizeof(*lmdesc);
> + lmdesc->nxla = nxla;
> lmdesc++;
> if (lmdesc >= (channel->lmdesc.base + DMAC_NR_LMDESC))
> lmdesc = channel->lmdesc.base;
> @@ -429,6 +436,57 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
> rz_dmac_set_dma_req_no(dmac, channel->index, channel->mid_rid); }
>
> +static void rz_dmac_prepare_descs_for_cyclic(struct rz_dmac_chan
> +*channel) {
> + struct dma_chan *chan = &channel->vc.chan;
> + struct rz_dmac *dmac = to_rz_dmac(chan->device);
> + struct rz_dmac_desc *d = channel->desc;
> + size_t period_len = d->sgcount;
> + struct rz_lmdesc *lmdesc;
> + size_t buf_len = d->len;
> + size_t periods = buf_len / period_len;
> +
> + lockdep_assert_held(&channel->vc.lock);
> +
> + channel->chcfg |= CHCFG_SEL(channel->index) | CHCFG_DMS;
> +
> + if (d->direction == DMA_DEV_TO_MEM) {
> + channel->chcfg |= CHCFG_SAD;
> + channel->chcfg &= ~CHCFG_REQD;
> + } else {
> + channel->chcfg |= CHCFG_DAD | CHCFG_REQD;
> + }
> +
> + lmdesc = channel->lmdesc.tail;
> + d->start_lmdesc = lmdesc;
> +
> + for (size_t i = 0; i < periods; i++) {
> + if (d->direction == DMA_DEV_TO_MEM) {
> + lmdesc->sa = d->src;
> + lmdesc->da = d->dest + (i * period_len);
> + } else {
> + lmdesc->sa = d->src + (i * period_len);
> + lmdesc->da = d->dest;
> + }
> +
> + lmdesc->tb = period_len;
> + lmdesc->chitvl = 0;
> + lmdesc->chext = 0;
> + lmdesc->chcfg = channel->chcfg;
> + lmdesc->header = HEADER_LV | HEADER_WBD;
> +
> + if (i == periods - 1)
> + lmdesc->nxla = rz_dmac_lmdesc_addr(channel, d->start_lmdesc);
> +
> + if (++lmdesc >= (channel->lmdesc.base + DMAC_NR_LMDESC))
> + lmdesc = channel->lmdesc.base;
> + }
> +
> + channel->lmdesc.tail = lmdesc;
> +
> + rz_dmac_set_dma_req_no(dmac, channel->index, channel->mid_rid); }
> +
> static void rz_dmac_xfer_desc(struct rz_dmac_chan *chan) {
> struct virt_dma_desc *vd;
> @@ -450,6 +508,10 @@ static void rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
> case RZ_DMAC_DESC_SLAVE_SG:
> rz_dmac_prepare_descs_for_slave_sg(chan);
> break;
> +
> + case RZ_DMAC_DESC_CYCLIC:
> + rz_dmac_prepare_descs_for_cyclic(chan);
> + break;
> }
>
> rz_dmac_enable_hw(chan);
> @@ -500,6 +562,8 @@ static void rz_dmac_free_chan_resources(struct dma_chan *chan)
> channel->mid_rid = -EINVAL;
> }
>
> + channel->status = 0;
> +
> spin_unlock_irqrestore(&channel->vc.lock, flags);
Maybe create a patch to convert all the spin_{lock,unlock}() calls to
guard() in this driver.
Cheers,
Biju
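As context for the guard() suggestion, here is a hedged illustration of what guard()-style scoped locking buys: the unlock runs automatically when the guard variable goes out of scope, so early returns cannot leak the lock. This is modeled with the GCC/Clang `cleanup` attribute and a fake lock counter; the kernel's real implementation lives in linux/cleanup.h, and `scoped_lock`/`fake_lock` are invented names for this sketch.

```c
/* Hypothetical model of scoped_guard()-style locking using the
 * GCC/Clang cleanup attribute (GNU C extension). */
#include <assert.h>

static int lock_depth;

static void fake_lock(void) { lock_depth++; }
static void fake_unlock(int *token) { (void)token; lock_depth--; }

/* Runs the body once; fake_unlock() fires when __once leaves scope,
 * including on an early return out of the body. */
#define scoped_lock() \
	for (int __once __attribute__((cleanup(fake_unlock))) = \
		     (fake_lock(), 1); \
	     __once; __once = 0)

static int find_first_positive(const int *v, int n)
{
	scoped_lock() {
		for (int i = 0; i < n; i++)
			if (v[i] > 0)
				return v[i]; /* unlock still runs here */
	}
	return -1;
}
```

The early `return` inside the locked region is exactly the pattern that makes open-coded spin_lock/spin_unlock pairs error-prone and scoped guards attractive.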
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH v3 11/15] dmaengine: sh: rz-dmac: Add cyclic DMA support
2026-04-07 14:36 ` Biju Das
@ 2026-04-07 15:12 ` Claudiu Beznea
2026-04-07 15:16 ` Biju Das
0 siblings, 1 reply; 21+ messages in thread
From: Claudiu Beznea @ 2026-04-07 15:12 UTC (permalink / raw)
To: Biju Das, vkoul@kernel.org, Frank.Li@kernel.org,
lgirdwood@gmail.com, broonie@kernel.org, perex@perex.cz,
tiwai@suse.com, Prabhakar Mahadev Lad, p.zabel@pengutronix.de,
geert+renesas@glider.be, Fabrizio Castro
Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-sound@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
Claudiu Beznea
Hi, Biju,
On 4/7/26 17:36, Biju Das wrote:
>
> Hi Claudiu,
>
> Thanks for the patch.
>
>> -----Original Message-----
>> From: Claudiu <claudiu.beznea@tuxon.dev>
>> Sent: 07 April 2026 14:35
>> Subject: [PATCH v3 11/15] dmaengine: sh: rz-dmac: Add cyclic DMA support
>>
>> From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
>>
>> Add cyclic DMA support to the RZ DMAC driver. A per-channel status bit is introduced to mark cyclic
>> channels and is set during the DMA prepare callback. The IRQ handler checks this status bit and calls
>> vchan_cyclic_callback() accordingly.
>>
>> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
>> ---
>>
>> Changes in v3:
>> - updated rz_dmac_lmdesc_recycle() to restore the lmdesc->nxla
>> - in rz_dmac_prepare_descs_for_cyclic() update directly the
>> desc->start_lmdesc with the descriptor pointer instead of the
>> descriptor address
>> - used rz_dmac_lmdesc_addr() to compute the descriptor address
>> - set channel->status = 0 in rz_dmac_free_chan_resources()
>> - in rz_dmac_prep_dma_cyclic() check for invalid periods or buffer len
>> and limit the critical area protected by spinlock
>> - set channel->status = 0 in rz_dmac_terminate_all()
>> - updated rz_dmac_calculate_residue_bytes_in_vd() to use
>> rz_dmac_lmdesc_addr()
>> - dropped goto in rz_dmac_irq_handler_thread() as it is not needed
>> anymore; dropped also the local variable desc
>>
>> Changes in v2:
>> - none
>>
>> drivers/dma/sh/rz-dmac.c | 144 +++++++++++++++++++++++++++++++++++++--
>> 1 file changed, 138 insertions(+), 6 deletions(-)
>>
[ ... ]
>> @@ -500,6 +562,8 @@ static void rz_dmac_free_chan_resources(struct dma_chan *chan)
>> channel->mid_rid = -EINVAL;
>> }
>>
>> + channel->status = 0;
>> +
>> spin_unlock_irqrestore(&channel->vc.lock, flags);
>
> Maybe create a patch to convert all the spin_{lock,unlock} with guard()
> in this driver.
This series already has too many patches and I want to keep only what is
necessary for the cyclic support. My plan is to do the guard conversion after
cyclic support gets merged.
Thank you,
Claudiu
^ permalink raw reply [flat|nested] 21+ messages in thread
* RE: [PATCH v3 11/15] dmaengine: sh: rz-dmac: Add cyclic DMA support
2026-04-07 15:12 ` Claudiu Beznea
@ 2026-04-07 15:16 ` Biju Das
0 siblings, 0 replies; 21+ messages in thread
From: Biju Das @ 2026-04-07 15:16 UTC (permalink / raw)
To: Claudiu.Beznea, vkoul@kernel.org, Frank.Li@kernel.org,
lgirdwood@gmail.com, broonie@kernel.org, perex@perex.cz,
tiwai@suse.com, Prabhakar Mahadev Lad, p.zabel@pengutronix.de,
geert+renesas@glider.be, Fabrizio Castro
Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-sound@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
Claudiu Beznea
Hi Claudiu Beznea,
> -----Original Message-----
> From: Claudiu Beznea <claudiu.beznea@tuxon.dev>
> Sent: 07 April 2026 16:13
> Subject: Re: [PATCH v3 11/15] dmaengine: sh: rz-dmac: Add cyclic DMA support
>
> Hi, Biju,
>
> On 4/7/26 17:36, Biju Das wrote:
> >
> > Hi Claudiu,
> >
> > Thanks for the patch.
> >
> >> -----Original Message-----
> >> From: Claudiu <claudiu.beznea@tuxon.dev>
> >> Sent: 07 April 2026 14:35
> >> Subject: [PATCH v3 11/15] dmaengine: sh: rz-dmac: Add cyclic DMA
> >> support
> >>
> >> From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> >>
> >> Add cyclic DMA support to the RZ DMAC driver. A per-channel status
> >> bit is introduced to mark cyclic channels and is set during the DMA
> >> prepare callback. The IRQ handler checks this status bit and calls
> >> vchan_cyclic_callback() accordingly.
> >>
> >> Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> >> ---
> >>
> >> Changes in v3:
> >> - updated rz_dmac_lmdesc_recycle() to restore the lmdesc->nxla
> >> - in rz_dmac_prepare_descs_for_cyclic() update directly the
> >> desc->start_lmdesc with the descriptor pointer instead of the
> >> descriptor address
> >> - used rz_dmac_lmdesc_addr() to compute the descriptor address
> >> - set channel->status = 0 in rz_dmac_free_chan_resources()
> >> - in rz_dmac_prep_dma_cyclic() check for invalid periods or buffer len
> >> and limit the critical area protected by spinlock
> >> - set channel->status = 0 in rz_dmac_terminate_all()
> >> - updated rz_dmac_calculate_residue_bytes_in_vd() to use
> >> rz_dmac_lmdesc_addr()
> >> - dropped goto in rz_dmac_irq_handler_thread() as it is not needed
> >> anymore; dropped also the local variable desc
> >>
> >> Changes in v2:
> >> - none
> >>
> >> drivers/dma/sh/rz-dmac.c | 144 +++++++++++++++++++++++++++++++++++++--
> >> 1 file changed, 138 insertions(+), 6 deletions(-)
> >>
>
> [ ... ]
>
> >> @@ -500,6 +562,8 @@ static void rz_dmac_free_chan_resources(struct dma_chan *chan)
> >> channel->mid_rid = -EINVAL;
> >> }
> >>
> >> + channel->status = 0;
> >> +
> >> spin_unlock_irqrestore(&channel->vc.lock, flags);
> >
> > Maybe create a patch to convert all the spin_{lock,unlock}() calls to
> > guard() in this driver.
>
> This series already has too many patches and I want to keep only what is necessary for the cyclic
> support. My plan is to do the guard conversion after cyclic support gets merged.
The driver has a mix of guard() and spin_lock/unlock variants with this series.
That is the reason for the suggestion.
Yes, it can be done later.
Cheers,
Biju
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH v3 12/15] dmaengine: sh: rz-dmac: Add suspend to RAM support
2026-04-07 13:34 [PATCH v3 00/15] Renesas: dmaengine and ASoC fixes Claudiu
` (10 preceding siblings ...)
2026-04-07 13:35 ` [PATCH v3 11/15] dmaengine: sh: rz-dmac: Add cyclic DMA support Claudiu
@ 2026-04-07 13:35 ` Claudiu
2026-04-07 13:35 ` [PATCH v3 13/15] ASoC: renesas: rz-ssi: Add pause support Claudiu
` (2 subsequent siblings)
14 siblings, 0 replies; 21+ messages in thread
From: Claudiu @ 2026-04-07 13:35 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
The Renesas RZ/G3S SoC supports a power saving mode in which power to most
of the SoC components is turned off, including the DMA IP. Add suspend to
RAM support to save and restore the DMA IP registers.
Cyclic DMA channels require special handling. Since they can be paused and
resumed during system suspend/resume, the driver restores additional
registers for these channels during the system resume phase. If a channel
was not explicitly paused during suspend, the driver ensures that it is
paused and resumed as part of the system suspend/resume flow. This can be
the case, for example, for a serial device used with no_console_suspend.
The cyclic DMA channels used by the sound IPs may be paused during system
suspend. In this case, since rz_dmac_device_synchronize() is called
through the ASoC PCM dmaengine APIs after the channel has been paused,
the CHSTAT.EN bit never goes to zero because the channel remains paused
and enabled.
As a result, the read_poll_timeout() call in rz_dmac_device_synchronize()
times out during system suspend. Since vchan_synchronize() is called to
complete any ongoing transfers and stop descriptor queuing, it should be
safe to drop the read_poll_timeout() from rz_dmac_device_synchronize().
For non-cyclic channels, the dev_pm_ops::prepare callback waits for all
the ongoing transfers to complete before allowing suspend-to-RAM to
proceed.
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v3:
- dropped RZ_DMAC_CHAN_STATUS_SYS_SUSPENDED
- dropped read_poll_timeout() from rz_dmac_device_synchronize() as
with audio drivers this times out all the time on suspend because
the audio DMA is already paused when the rz_dmac_device_synchronize()
is called; updated the commit description to describe this change
- call rz_dmac_device_pause_internal() only if RZ_DMAC_CHAN_STATUS_PAUSED
bit is not set or the device is enabled in HW
- updated rz_dmac_device_resume_set() to have it simpler and cover
the cases when it is called with the channel enabled or paused;
updated the comment describing the covered use cases
- call rz_dmac_device_resume_internal() only if
RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL bit is set
- in rz_dmac_chan_is_enabled() return -EAGAIN only if the channel is
enabled in HW
- in rz_dmac_suspend_recover() drop the update of
RZ_DMAC_CHAN_STATUS_SYS_SUSPENDED as this is not available anymore
- in rz_dmac_suspend() call rz_dmac_device_pause_internal() unconditionally
as the logic is now handled inside the called function; also, do not
ignore anymore the failure of internal suspend and abort the suspend
instead
- report channel internal resume failures in rz_dmac_resume()
- use rz_dmac_disable_hw() instead of open coding it in rz_dmac_resume()
- call rz_dmac_device_resume_internal() unconditionally as the skip
logic is now handled in the function itself
- use NOIRQ_SYSTEM_SLEEP_PM_OPS()
- didn't collect Tommaso's Tb tag as the series was changed a lot since
v2
Changes in v2:
- fixed typos in patch description
- in rz_dmac_suspend_prepare(): return -EAGAIN based on the value returned
by vchan_issue_pending()
- in rz_dmac_suspend_recover(): clear RZ_DMAC_CHAN_STATUS_SYS_SUSPENDED for
non cyclic channels
- in rz_dmac_resume(): call rz_dmac_set_dma_req_no() only for cyclic channels
drivers/dma/sh/rz-dmac.c | 191 ++++++++++++++++++++++++++++++++++++---
1 file changed, 179 insertions(+), 12 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index f7133ac6af60..3265c7b3ab83 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -69,10 +69,12 @@ struct rz_dmac_desc {
* enum rz_dmac_chan_status: RZ DMAC channel status
* @RZ_DMAC_CHAN_STATUS_PAUSED: Channel is paused through DMA engine callbacks
* @RZ_DMAC_CHAN_STATUS_CYCLIC: Channel is cyclic
+ * @RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL: Channel is paused through driver internal logic
*/
enum rz_dmac_chan_status {
RZ_DMAC_CHAN_STATUS_PAUSED,
RZ_DMAC_CHAN_STATUS_CYCLIC,
+ RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL,
};
struct rz_dmac_chan {
@@ -92,6 +94,10 @@ struct rz_dmac_chan {
u32 chctrl;
int mid_rid;
+ struct {
+ u32 nxla;
+ } pm_state;
+
struct list_head ld_free;
struct {
@@ -803,16 +809,9 @@ static void rz_dmac_device_synchronize(struct dma_chan *chan)
{
struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
struct rz_dmac *dmac = to_rz_dmac(chan->device);
- u32 chstat;
- int ret;
vchan_synchronize(&channel->vc);
- ret = read_poll_timeout(rz_dmac_ch_readl, chstat, !(chstat & CHSTAT_EN),
- 100, 100000, false, channel, CHSTAT, 1);
- if (ret < 0)
- dev_warn(dmac->dev, "DMA Timeout");
-
rz_dmac_set_dma_req_no(dmac, channel->index, dmac->info->default_dma_req_no);
}
@@ -960,20 +959,57 @@ static int rz_dmac_device_pause(struct dma_chan *chan)
return rz_dmac_device_pause_set(channel, BIT(RZ_DMAC_CHAN_STATUS_PAUSED));
}
+static int rz_dmac_device_pause_internal(struct rz_dmac_chan *channel)
+{
+ lockdep_assert_held(&channel->vc.lock);
+
+ /* Skip channels explicitly paused by consumers or disabled. */
+ if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_PAUSED) ||
+ !rz_dmac_chan_is_enabled(channel))
+ return 0;
+
+ return rz_dmac_device_pause_set(channel, BIT(RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL));
+}
+
static int rz_dmac_device_resume_set(struct rz_dmac_chan *channel,
unsigned long clear_bitmask)
{
- int ret = 0;
u32 val;
+ int ret;
lockdep_assert_held(&channel->vc.lock);
- /* Do not check CHSTAT_SUS but rely on HW capabilities. */
+ /*
+ * We can be:
+ *
+ * 1/ after the channel was paused by a consumer and now it
+ * needs to be resumed
+ * 2/ after the channel was paused internally (as a result of
+ * a system suspend with power loss or not)
+ * 3/ after the channel was paused by a consumer, the system
+ * went through a system suspend (with power loss or not)
+ * and the consumer wants to resume the channel
+ *
+ * To cover all the above cases we set both CLRSUS and SETEN.
+ *
+ * In case 1/ setting SETEN while the channel is still enabled
+ * is harmless for the controller.
+ *
+ * In case 2/ the channel is disabled when calling this function
+ * and setting CLRSUS is harmless for the controller as the
+ * channel is disabled anyway.
+ *
+ * In case 3/ the channel is disabled/enabled if the system
+ * went through a suspend with power loss or not and setting
+ * CLRSUS/SETEN is harmless for the controller as the channel
+ * is enabled/disabled anyway.
+ */
+
+ rz_dmac_ch_writel(channel, CHCTRL_CLRSUS | CHCTRL_SETEN, CHCTRL, 1);
- rz_dmac_ch_writel(channel, CHCTRL_CLRSUS, CHCTRL, 1);
ret = read_poll_timeout_atomic(rz_dmac_ch_readl, val,
- !(val & CHSTAT_SUS), 1, 1024, false,
- channel, CHSTAT, 1);
+ ((val & (CHSTAT_SUS | CHSTAT_EN)) == CHSTAT_EN),
+ 1, 1024, false, channel, CHSTAT, 1);
channel->status &= ~clear_bitmask;
@@ -992,6 +1028,16 @@ static int rz_dmac_device_resume(struct dma_chan *chan)
return rz_dmac_device_resume_set(channel, BIT(RZ_DMAC_CHAN_STATUS_PAUSED));
}
+static int rz_dmac_device_resume_internal(struct rz_dmac_chan *channel)
+{
+ lockdep_assert_held(&channel->vc.lock);
+
+ if (!(channel->status & BIT(RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL)))
+ return 0;
+
+ return rz_dmac_device_resume_set(channel, BIT(RZ_DMAC_CHAN_STATUS_PAUSED_INTERNAL));
+}
+
/*
* -----------------------------------------------------------------------------
* IRQ handling
@@ -1374,6 +1420,126 @@ static void rz_dmac_remove(struct platform_device *pdev)
pm_runtime_disable(&pdev->dev);
}
+static int rz_dmac_suspend_prepare(struct device *dev)
+{
+ struct rz_dmac *dmac = dev_get_drvdata(dev);
+
+ for (unsigned int i = 0; i < dmac->n_channels; i++) {
+ struct rz_dmac_chan *channel = &dmac->channels[i];
+
+ guard(spinlock_irqsave)(&channel->vc.lock);
+
+ /* Wait for transfer completion, except in cyclic case. */
+ if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC))
+ continue;
+
+ if (rz_dmac_chan_is_enabled(channel))
+ return -EAGAIN;
+ }
+
+ return 0;
+}
+
+static void rz_dmac_suspend_recover(struct rz_dmac *dmac)
+{
+ for (unsigned int i = 0; i < dmac->n_channels; i++) {
+ struct rz_dmac_chan *channel = &dmac->channels[i];
+
+ guard(spinlock_irqsave)(&channel->vc.lock);
+
+ if (!(channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)))
+ continue;
+
+ rz_dmac_device_resume_internal(channel);
+ }
+}
+
+static int rz_dmac_suspend(struct device *dev)
+{
+ struct rz_dmac *dmac = dev_get_drvdata(dev);
+ int ret;
+
+ for (unsigned int i = 0; i < dmac->n_channels; i++) {
+ struct rz_dmac_chan *channel = &dmac->channels[i];
+
+ guard(spinlock_irqsave)(&channel->vc.lock);
+
+ if (!(channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)))
+ continue;
+
+ ret = rz_dmac_device_pause_internal(channel);
+ if (ret) {
+ dev_err(dev, "Failed to suspend channel %s\n",
+ dma_chan_name(&channel->vc.chan));
+ goto recover;
+ }
+
+ channel->pm_state.nxla = rz_dmac_ch_readl(channel, NXLA, 1);
+ }
+
+ pm_runtime_put_sync(dmac->dev);
+
+ ret = reset_control_assert(dmac->rstc);
+ if (ret) {
+ pm_runtime_resume_and_get(dmac->dev);
+recover:
+ rz_dmac_suspend_recover(dmac);
+ }
+
+ return ret;
+}
+
+static int rz_dmac_resume(struct device *dev)
+{
+ struct rz_dmac *dmac = dev_get_drvdata(dev);
+ int errors = 0, ret;
+
+ ret = reset_control_deassert(dmac->rstc);
+ if (ret)
+ return ret;
+
+ ret = pm_runtime_resume_and_get(dmac->dev);
+ if (ret) {
+ reset_control_assert(dmac->rstc);
+ return ret;
+ }
+
+ rz_dmac_writel(dmac, DCTRL_DEFAULT, CHANNEL_0_7_COMMON_BASE + DCTRL);
+ rz_dmac_writel(dmac, DCTRL_DEFAULT, CHANNEL_8_15_COMMON_BASE + DCTRL);
+
+ for (unsigned int i = 0; i < dmac->n_channels; i++) {
+ struct rz_dmac_chan *channel = &dmac->channels[i];
+
+ guard(spinlock_irqsave)(&channel->vc.lock);
+
+ rz_dmac_disable_hw(&dmac->channels[i]);
+
+ if (!(channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)))
+ continue;
+
+ rz_dmac_set_dma_req_no(dmac, channel->index, channel->mid_rid);
+
+ rz_dmac_ch_writel(channel, channel->pm_state.nxla, NXLA, 1);
+ rz_dmac_ch_writel(channel, channel->chcfg, CHCFG, 1);
+ rz_dmac_ch_writel(channel, CHCTRL_SWRST, CHCTRL, 1);
+ rz_dmac_ch_writel(channel, channel->chctrl, CHCTRL, 1);
+
+ ret = rz_dmac_device_resume_internal(channel);
+ if (ret) {
+ errors = ret;
+ dev_err(dev, "Failed to resume channel %s\n",
+ dma_chan_name(&channel->vc.chan));
+ }
+ }
+
+ return errors ? : ret;
+}
+
+static const struct dev_pm_ops rz_dmac_pm_ops = {
+ .prepare = rz_dmac_suspend_prepare,
+ NOIRQ_SYSTEM_SLEEP_PM_OPS(rz_dmac_suspend, rz_dmac_resume)
+};
+
static const struct rz_dmac_info rz_dmac_v2h_info = {
.icu_register_dma_req = rzv2h_icu_register_dma_req,
.default_dma_req_no = RZV2H_ICU_DMAC_REQ_NO_DEFAULT,
@@ -1400,6 +1566,7 @@ static struct platform_driver rz_dmac_driver = {
.driver = {
.name = "rz-dmac",
.of_match_table = of_rz_dmac_match,
+ .pm = pm_sleep_ptr(&rz_dmac_pm_ops),
},
.probe = rz_dmac_probe,
.remove = rz_dmac_remove,
--
2.43.0
* [PATCH v3 13/15] ASoC: renesas: rz-ssi: Add pause support
2026-04-07 13:34 [PATCH v3 00/15] Renesas: dmaengine and ASoC fixes Claudiu
` (11 preceding siblings ...)
2026-04-07 13:35 ` [PATCH v3 12/15] dmaengine: sh: rz-dmac: Add suspend to RAM support Claudiu
@ 2026-04-07 13:35 ` Claudiu
2026-04-08 17:33 ` Mark Brown
2026-04-07 13:35 ` [PATCH v3 14/15] ASoC: renesas: rz-ssi: Use generic PCM dmaengine APIs Claudiu
2026-04-07 13:35 ` [PATCH v3 15/15] dmaengine: sh: rz-dmac: Set the Link End (LE) bit on the last descriptor Claudiu
14 siblings, 1 reply; 21+ messages in thread
From: Claudiu @ 2026-04-07 13:35 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
Add pause support as a preparatory step to switch to PCM dmaengine APIs.
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v3:
- none, this patch is new
sound/soc/renesas/rz-ssi.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/sound/soc/renesas/rz-ssi.c b/sound/soc/renesas/rz-ssi.c
index 71e434cfe07b..d4e1dded3a9c 100644
--- a/sound/soc/renesas/rz-ssi.c
+++ b/sound/soc/renesas/rz-ssi.c
@@ -847,6 +847,7 @@ static int rz_ssi_dai_trigger(struct snd_pcm_substream *substream, int cmd,
switch (cmd) {
case SNDRV_PCM_TRIGGER_RESUME:
+ case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
ret = rz_ssi_trigger_resume(ssi, strm);
if (ret)
return ret;
@@ -888,6 +889,7 @@ static int rz_ssi_dai_trigger(struct snd_pcm_substream *substream, int cmd,
break;
case SNDRV_PCM_TRIGGER_SUSPEND:
+ case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
rz_ssi_stop(ssi, strm);
break;
@@ -1054,7 +1056,8 @@ static const struct snd_pcm_hardware rz_ssi_pcm_hardware = {
.info = SNDRV_PCM_INFO_INTERLEAVED |
SNDRV_PCM_INFO_MMAP |
SNDRV_PCM_INFO_MMAP_VALID |
- SNDRV_PCM_INFO_RESUME,
+ SNDRV_PCM_INFO_RESUME |
+ SNDRV_PCM_INFO_PAUSE,
.buffer_bytes_max = PREALLOC_BUFFER,
.period_bytes_min = 32,
.period_bytes_max = 8192,
--
2.43.0
* Re: [PATCH v3 13/15] ASoC: renesas: rz-ssi: Add pause support
2026-04-07 13:35 ` [PATCH v3 13/15] ASoC: renesas: rz-ssi: Add pause support Claudiu
@ 2026-04-08 17:33 ` Mark Brown
0 siblings, 0 replies; 21+ messages in thread
From: Mark Brown @ 2026-04-08 17:33 UTC (permalink / raw)
To: Claudiu
Cc: vkoul, Frank.Li, lgirdwood, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
On Tue, Apr 07, 2026 at 04:35:05PM +0300, Claudiu wrote:
> From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
>
> Add pause support as a preparatory step to switch to PCM dmaengine APIs.
Acked-by: Mark Brown <broonie@kernel.org>
* [PATCH v3 14/15] ASoC: renesas: rz-ssi: Use generic PCM dmaengine APIs
2026-04-07 13:34 [PATCH v3 00/15] Renesas: dmaengine and ASoC fixes Claudiu
` (12 preceding siblings ...)
2026-04-07 13:35 ` [PATCH v3 13/15] ASoC: renesas: rz-ssi: Add pause support Claudiu
@ 2026-04-07 13:35 ` Claudiu
2026-04-08 17:34 ` Mark Brown
2026-04-07 13:35 ` [PATCH v3 15/15] dmaengine: sh: rz-dmac: Set the Link End (LE) bit on the last descriptor Claudiu
14 siblings, 1 reply; 21+ messages in thread
From: Claudiu @ 2026-04-07 13:35 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
On Renesas RZ/G2L and RZ/G3S SoCs (where this was tested), captured audio
files occasionally contained random spikes when viewed with a tool such
as Audacity. These spikes were also audible as popping noises.
Using cyclic DMA resolves this issue. The driver was reworked to use the
existing support provided by the generic PCM dmaengine APIs. In addition
to eliminating the random spikes, the following issues were addressed:
- blank periods at the beginning of recorded files, which occurred
intermittently, are no longer present
- no overruns or underruns were observed when continuously recording
short audio files (e.g. 5 seconds long) in a loop
- concurrency issues in the SSI driver when enqueuing DMA requests were
eliminated; previously, DMA requests could be prepared and submitted
both from the DMA completion callback and the interrupt handler, which
led to crashes after several hours of testing
- the SSI driver logic is simplified
- the number of generated interrupts is reduced by a factor of
approximately 2.5
In the SSI platform driver probe function, the following changes were
made:
- the driver-specific DMA configuration was removed in favor of the
generic PCM dmaengine APIs. As a result, explicit cleanup goto labels
are no longer required and the driver remove callback was dropped,
since resource management is now handled via devres helpers
- special handling was added for IP variants operating in half-duplex
mode, where the DMA channel name in the device tree is "rt"; this DMA
channel name is taken into account and passed to the generic PCM
dmaengine configuration data
All code previously responsible for preparing and completing DMA
transfers was removed, as this functionality is now handled entirely by
the generic PCM dmaengine APIs.
Since DMA channels must be paused and resumed during recovery paths
(overruns and underruns reported by the hardware), the DMA channel
references are stored in rz_ssi_hw_params().
The logic in rz_ssi_is_dma_enabled() was updated to reflect that the
driver no longer manages DMA transfers directly.
To avoid software-reported underruns (e.g. when running aplay during
consecutive suspend/resume cycles, or when the CPU is nearly 100%
loaded), rz_ssi_pcm_hardware.buffer_bytes_max was increased to 192K.
At the same time, rz_ssi_pcm_hardware.period_bytes_max was set to 48K
to reduce interrupt overhead.
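As a rough sanity check on those sizes, take an assumed 48 kHz stereo 16-bit stream (192000 bytes/s; an example format, not a value the driver hard-codes): a 48 KiB period yields a period interrupt roughly every 256 ms versus about 42 ms with the old 8 KiB maximum, and the 192 KiB buffer holds four such periods:

```c
#include <assert.h>

/* Assumed example stream: 48 kHz * 2 channels * 2 bytes/sample. */
#define ASSUMED_BYTES_PER_SEC 192000u

/* Milliseconds between period interrupts for a given period size. */
static unsigned int period_interval_ms(unsigned int period_bytes,
				       unsigned int bytes_per_sec)
{
	return (period_bytes * 1000u) / bytes_per_sec;
}
```

The larger period roughly divides the interrupt rate by six for this stream, at the cost of more preallocated buffer memory.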
Finally, rz_ssi_stream_is_play() was removed, as it had only a single
remaining user after this rework, and its logic was inlined at the call
site.
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v3:
- s/CONFIG_SND_SOC_GENERIC_DMAENGINE_PCM/SND_SOC_GENERIC_DMAENGINE_PCM
in Kconfig
- in rz_ssi_clk_setup(): drop the update of dma_dai->maxburst
- in rz_ssi_interrupt(): pause the DMA channels in case of HW over/underruns
- add different open APIs for rz_ssi_soc_component_pio and
rz_ssi_soc_component_dma
- set rz_ssi_pcm_hardware as rz_ssi_dmaengine_pcm_conf.pcm_hardware
and increased buffer_bytes_max to avoid underruns detected by
applications just before suspending; along with it, raised
period_bytes_max to lower the interrupt overhead and updated the
patch description accordingly; also updated the
snd_pcm_set_managed_buffer_all() arguments to use rz_ssi_pcm_hardware
- added back rz_ssi_soc_component_pio.pcm_new instantiation as the
PIO mode was broken without it
- use specific rz_ssi_soc_component_dma.open implementation for DMA
- updated rz_ssi_dmaengine_pcm_conf.chan_names[] for both the full-
and half-duplex instantiations and moved the flags variable local
to the code block that uses it
- check devm_snd_dmaengine_pcm_register() for defer errors
Changes in v2:
- fixed typos in patch description
- select CONFIG_SND_SOC_GENERIC_DMAENGINE_PCM for rz-ssi driver
- in rz_ssi_dai_hw_params() check if DMA is enabled before calling
snd_dmaengine_pcm_get_chan() to avoid failures for PIO mode
- do not drop rz_ssi_pcm_pointer() and rz_ssi_pcm_new() as these
are necessary for PIO mode
- added 2 struct snd_soc_component_driver, one for PIO mode, one for
DMA and updated probe() to register the proper
snd_soc_component_driver based on the working mode
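The "dma-names" handling described above (half-duplex "rt" versus separate "tx"/"rx" channels) can be modelled in userspace. `pick_dma_channels()` and the `dt_has_rt` flag are hypothetical stand-ins for the `of_property_match_string()` check, kept only to make the selection logic testable:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical model of the PCM dmaengine channel-name configuration. */
struct pcm_conf_model {
	const char *tx_chan;
	const char *rx_chan;
	bool half_duplex;
};

/*
 * dt_has_rt stands in for of_property_match_string(np, "dma-names",
 * "rt") == 0: one "rt" channel serves both directions in half-duplex
 * mode, otherwise dedicated "tx" and "rx" channels are used.
 */
static void pick_dma_channels(struct pcm_conf_model *conf, bool dt_has_rt)
{
	if (dt_has_rt) {
		conf->tx_chan = "rt";
		conf->rx_chan = NULL;
		conf->half_duplex = true;
	} else {
		conf->tx_chan = "tx";
		conf->rx_chan = "rx";
		conf->half_duplex = false;
	}
}
```

This matches the shape of the probe change in the diff below, where the half-duplex case also sets SND_DMAENGINE_PCM_FLAG_HALF_DUPLEX before registering the generic PCM.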
sound/soc/renesas/Kconfig | 1 +
sound/soc/renesas/rz-ssi.c | 370 +++++++++++--------------------------
2 files changed, 112 insertions(+), 259 deletions(-)
diff --git a/sound/soc/renesas/Kconfig b/sound/soc/renesas/Kconfig
index 11c2027c88a7..6520217e7407 100644
--- a/sound/soc/renesas/Kconfig
+++ b/sound/soc/renesas/Kconfig
@@ -56,6 +56,7 @@ config SND_SOC_MSIOF
config SND_SOC_RZ
tristate "RZ/G2L series SSIF-2 support"
depends on ARCH_RZG2L || COMPILE_TEST
+ select SND_SOC_GENERIC_DMAENGINE_PCM
help
This option enables RZ/G2L SSIF-2 sound support.
diff --git a/sound/soc/renesas/rz-ssi.c b/sound/soc/renesas/rz-ssi.c
index d4e1dded3a9c..400e79054ef3 100644
--- a/sound/soc/renesas/rz-ssi.c
+++ b/sound/soc/renesas/rz-ssi.c
@@ -13,6 +13,8 @@
#include <linux/module.h>
#include <linux/pm_runtime.h>
#include <linux/reset.h>
+#include <sound/dmaengine_pcm.h>
+#include <sound/pcm.h>
#include <sound/pcm_params.h>
#include <sound/soc.h>
@@ -87,8 +89,6 @@ struct rz_ssi_stream {
struct rz_ssi_priv *priv;
struct snd_pcm_substream *substream;
int fifo_sample_size; /* sample capacity of SSI FIFO */
- int dma_buffer_pos; /* The address for the next DMA descriptor */
- int completed_dma_buf_pos; /* The address of the last completed DMA descriptor. */
int period_counter; /* for keeping track of periods transferred */
int buffer_pos; /* current frame position in the buffer */
int running; /* 0=stopped, 1=running */
@@ -96,8 +96,6 @@ struct rz_ssi_stream {
int uerr_num;
int oerr_num;
- struct dma_chan *dma_ch;
-
int (*transfer)(struct rz_ssi_priv *ssi, struct rz_ssi_stream *strm);
};
@@ -108,7 +106,6 @@ struct rz_ssi_priv {
struct clk *sfr_clk;
struct clk *clk;
- phys_addr_t phys;
int irq_int;
int irq_tx;
int irq_rx;
@@ -148,9 +145,10 @@ struct rz_ssi_priv {
unsigned int sample_width;
unsigned int sample_bits;
} hw_params_cache;
-};
-static void rz_ssi_dma_complete(void *data);
+ struct snd_dmaengine_dai_dma_data dma_dais[SNDRV_PCM_STREAM_LAST + 1];
+ struct dma_chan *dmas[SNDRV_PCM_STREAM_LAST + 1];
+};
static void rz_ssi_reg_writel(struct rz_ssi_priv *priv, uint reg, u32 data)
{
@@ -172,11 +170,6 @@ static void rz_ssi_reg_mask_setl(struct rz_ssi_priv *priv, uint reg,
writel(val, (priv->base + reg));
}
-static inline bool rz_ssi_stream_is_play(struct snd_pcm_substream *substream)
-{
- return substream->stream == SNDRV_PCM_STREAM_PLAYBACK;
-}
-
static inline struct rz_ssi_stream *
rz_ssi_stream_get(struct rz_ssi_priv *ssi, struct snd_pcm_substream *substream)
{
@@ -185,7 +178,7 @@ rz_ssi_stream_get(struct rz_ssi_priv *ssi, struct snd_pcm_substream *substream)
static inline bool rz_ssi_is_dma_enabled(struct rz_ssi_priv *ssi)
{
- return (ssi->playback.dma_ch && (ssi->dma_rt || ssi->capture.dma_ch));
+ return !ssi->playback.transfer && !ssi->capture.transfer;
}
static void rz_ssi_set_substream(struct rz_ssi_stream *strm,
@@ -215,8 +208,6 @@ static void rz_ssi_stream_init(struct rz_ssi_stream *strm,
struct snd_pcm_substream *substream)
{
rz_ssi_set_substream(strm, substream);
- strm->dma_buffer_pos = 0;
- strm->completed_dma_buf_pos = 0;
strm->period_counter = 0;
strm->buffer_pos = 0;
@@ -242,12 +233,13 @@ static void rz_ssi_stream_quit(struct rz_ssi_priv *ssi,
dev_info(dev, "underrun = %d\n", strm->uerr_num);
}
-static int rz_ssi_clk_setup(struct rz_ssi_priv *ssi, unsigned int rate,
- unsigned int channels)
+static int rz_ssi_clk_setup(struct rz_ssi_priv *ssi, struct snd_pcm_substream *substream,
+ unsigned int rate, unsigned int channels)
{
static u8 ckdv[] = { 1, 2, 4, 8, 16, 32, 64, 128, 6, 12, 24, 48, 96 };
unsigned int channel_bits = 32; /* System Word Length */
unsigned long bclk_rate = rate * channels * channel_bits;
+ struct snd_dmaengine_dai_dma_data *dma_dai;
unsigned int div;
unsigned int i;
u32 ssicr = 0;
@@ -290,6 +282,8 @@ static int rz_ssi_clk_setup(struct rz_ssi_priv *ssi, unsigned int rate,
return -EINVAL;
}
+ dma_dai = &ssi->dma_dais[substream->stream];
+
/*
* DWL: Data Word Length = {16, 24, 32} bits
* SWL: System Word Length = 32 bits
@@ -298,12 +292,15 @@ static int rz_ssi_clk_setup(struct rz_ssi_priv *ssi, unsigned int rate,
switch (ssi->hw_params_cache.sample_width) {
case 16:
ssicr |= SSICR_DWL(1);
+ dma_dai->addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES;
break;
case 24:
ssicr |= SSICR_DWL(5) | SSICR_PDTA;
+ dma_dai->addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
break;
case 32:
ssicr |= SSICR_DWL(6);
+ dma_dai->addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
break;
default:
dev_err(ssi->dev, "Not support %u data width",
@@ -344,7 +341,7 @@ static void rz_ssi_set_idle(struct rz_ssi_priv *ssi)
static int rz_ssi_start(struct rz_ssi_priv *ssi, struct rz_ssi_stream *strm)
{
- bool is_play = rz_ssi_stream_is_play(strm->substream);
+ bool is_play = strm->substream->stream == SNDRV_PCM_STREAM_PLAYBACK;
bool is_full_duplex;
u32 ssicr, ssifcr;
@@ -423,14 +420,6 @@ static int rz_ssi_stop(struct rz_ssi_priv *ssi, struct rz_ssi_stream *strm)
/* Disable TX/RX */
rz_ssi_reg_mask_setl(ssi, SSICR, SSICR_TEN | SSICR_REN, 0);
- /* Cancel all remaining DMA transactions */
- if (rz_ssi_is_dma_enabled(ssi)) {
- if (ssi->playback.dma_ch)
- dmaengine_terminate_async(ssi->playback.dma_ch);
- if (ssi->capture.dma_ch)
- dmaengine_terminate_async(ssi->capture.dma_ch);
- }
-
rz_ssi_set_idle(ssi);
return 0;
@@ -458,10 +447,6 @@ static void rz_ssi_pointer_update(struct rz_ssi_stream *strm, int frames)
snd_pcm_period_elapsed(strm->substream);
strm->period_counter = current_period;
}
-
- strm->completed_dma_buf_pos += runtime->period_size;
- if (strm->completed_dma_buf_pos >= runtime->buffer_size)
- strm->completed_dma_buf_pos = 0;
}
static int rz_ssi_pio_recv(struct rz_ssi_priv *ssi, struct rz_ssi_stream *strm)
@@ -606,12 +591,6 @@ static irqreturn_t rz_ssi_interrupt(int irq, void *data)
if (irq == ssi->irq_int) { /* error or idle */
bool is_stopped = !!(ssisr & (SSISR_RUIRQ | SSISR_ROIRQ |
SSISR_TUIRQ | SSISR_TOIRQ));
- int i, count;
-
- if (rz_ssi_is_dma_enabled(ssi))
- count = 4;
- else
- count = 1;
if (ssi->capture.substream && is_stopped) {
if (ssisr & SSISR_RUIRQ)
@@ -631,18 +610,29 @@ static irqreturn_t rz_ssi_interrupt(int irq, void *data)
rz_ssi_stop(ssi, strm_playback);
}
+ if (!rz_ssi_is_stream_running(&ssi->playback) &&
+ !rz_ssi_is_stream_running(&ssi->capture) &&
+ rz_ssi_is_dma_enabled(ssi)) {
+ dmaengine_pause(ssi->dmas[SNDRV_PCM_STREAM_PLAYBACK]);
+ dmaengine_pause(ssi->dmas[SNDRV_PCM_STREAM_CAPTURE]);
+ }
+
/* Clear all flags */
rz_ssi_reg_mask_setl(ssi, SSISR, SSISR_TOIRQ | SSISR_TUIRQ |
SSISR_ROIRQ | SSISR_RUIRQ, 0);
/* Add/remove more data */
if (ssi->capture.substream && is_stopped) {
- for (i = 0; i < count; i++)
+ if (rz_ssi_is_dma_enabled(ssi))
+ dmaengine_resume(ssi->dmas[SNDRV_PCM_STREAM_CAPTURE]);
+ else
strm_capture->transfer(ssi, strm_capture);
}
if (ssi->playback.substream && is_stopped) {
- for (i = 0; i < count; i++)
+ if (rz_ssi_is_dma_enabled(ssi))
+ dmaengine_resume(ssi->dmas[SNDRV_PCM_STREAM_PLAYBACK]);
+ else
strm_playback->transfer(ssi, strm_playback);
}
@@ -679,153 +669,11 @@ static irqreturn_t rz_ssi_interrupt(int irq, void *data)
return IRQ_HANDLED;
}
-static int rz_ssi_dma_slave_config(struct rz_ssi_priv *ssi,
- struct dma_chan *dma_ch, bool is_play)
-{
- struct dma_slave_config cfg;
-
- memset(&cfg, 0, sizeof(cfg));
-
- cfg.direction = is_play ? DMA_MEM_TO_DEV : DMA_DEV_TO_MEM;
- cfg.dst_addr = ssi->phys + SSIFTDR;
- cfg.src_addr = ssi->phys + SSIFRDR;
- if (ssi->hw_params_cache.sample_width == 16) {
- cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES;
- cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES;
- } else {
- cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
- cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
- }
-
- return dmaengine_slave_config(dma_ch, &cfg);
-}
-
-static int rz_ssi_dma_transfer(struct rz_ssi_priv *ssi,
- struct rz_ssi_stream *strm)
-{
- struct snd_pcm_substream *substream = strm->substream;
- struct dma_async_tx_descriptor *desc;
- struct snd_pcm_runtime *runtime;
- enum dma_transfer_direction dir;
- u32 dma_paddr, dma_size;
- int amount;
-
- if (!rz_ssi_stream_is_valid(ssi, strm))
- return -EINVAL;
-
- runtime = substream->runtime;
- if (runtime->state == SNDRV_PCM_STATE_DRAINING)
- /*
- * Stream is ending, so do not queue up any more DMA
- * transfers otherwise we play partial sound clips
- * because we can't shut off the DMA quick enough.
- */
- return 0;
-
- dir = rz_ssi_stream_is_play(substream) ? DMA_MEM_TO_DEV : DMA_DEV_TO_MEM;
-
- /* Always transfer 1 period */
- amount = runtime->period_size;
-
- /* DMA physical address and size */
- dma_paddr = runtime->dma_addr + frames_to_bytes(runtime,
- strm->dma_buffer_pos);
- dma_size = frames_to_bytes(runtime, amount);
- desc = dmaengine_prep_slave_single(strm->dma_ch, dma_paddr, dma_size,
- dir,
- DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
- if (!desc) {
- dev_err(ssi->dev, "dmaengine_prep_slave_single() fail\n");
- return -ENOMEM;
- }
-
- desc->callback = rz_ssi_dma_complete;
- desc->callback_param = strm;
-
- if (dmaengine_submit(desc) < 0) {
- dev_err(ssi->dev, "dmaengine_submit() fail\n");
- return -EIO;
- }
-
- /* Update DMA pointer */
- strm->dma_buffer_pos += amount;
- if (strm->dma_buffer_pos >= runtime->buffer_size)
- strm->dma_buffer_pos = 0;
-
- /* Start DMA */
- dma_async_issue_pending(strm->dma_ch);
-
- return 0;
-}
-
-static void rz_ssi_dma_complete(void *data)
-{
- struct rz_ssi_stream *strm = (struct rz_ssi_stream *)data;
-
- if (!strm->running || !strm->substream || !strm->substream->runtime)
- return;
-
- /* Note that next DMA transaction has probably already started */
- rz_ssi_pointer_update(strm, strm->substream->runtime->period_size);
-
- /* Queue up another DMA transaction */
- rz_ssi_dma_transfer(strm->priv, strm);
-}
-
-static void rz_ssi_release_dma_channels(struct rz_ssi_priv *ssi)
-{
- if (ssi->playback.dma_ch) {
- dma_release_channel(ssi->playback.dma_ch);
- ssi->playback.dma_ch = NULL;
- if (ssi->dma_rt)
- ssi->dma_rt = false;
- }
-
- if (ssi->capture.dma_ch) {
- dma_release_channel(ssi->capture.dma_ch);
- ssi->capture.dma_ch = NULL;
- }
-}
-
-static int rz_ssi_dma_request(struct rz_ssi_priv *ssi, struct device *dev)
-{
- ssi->playback.dma_ch = dma_request_chan(dev, "tx");
- if (IS_ERR(ssi->playback.dma_ch))
- ssi->playback.dma_ch = NULL;
-
- ssi->capture.dma_ch = dma_request_chan(dev, "rx");
- if (IS_ERR(ssi->capture.dma_ch))
- ssi->capture.dma_ch = NULL;
-
- if (!ssi->playback.dma_ch && !ssi->capture.dma_ch) {
- ssi->playback.dma_ch = dma_request_chan(dev, "rt");
- if (IS_ERR(ssi->playback.dma_ch)) {
- ssi->playback.dma_ch = NULL;
- goto no_dma;
- }
-
- ssi->dma_rt = true;
- }
-
- if (!rz_ssi_is_dma_enabled(ssi))
- goto no_dma;
-
- return 0;
-
-no_dma:
- rz_ssi_release_dma_channels(ssi);
-
- return -ENODEV;
-}
-
static int rz_ssi_trigger_resume(struct rz_ssi_priv *ssi, struct rz_ssi_stream *strm)
{
struct snd_pcm_substream *substream = strm->substream;
- struct snd_pcm_runtime *runtime = substream->runtime;
int ret;
- strm->dma_buffer_pos = strm->completed_dma_buf_pos + runtime->period_size;
-
if (rz_ssi_is_stream_running(&ssi->playback) ||
rz_ssi_is_stream_running(&ssi->capture))
return 0;
@@ -834,7 +682,7 @@ static int rz_ssi_trigger_resume(struct rz_ssi_priv *ssi, struct rz_ssi_stream *
if (ret)
return ret;
- return rz_ssi_clk_setup(ssi, ssi->hw_params_cache.rate,
+ return rz_ssi_clk_setup(ssi, substream, ssi->hw_params_cache.rate,
ssi->hw_params_cache.channels);
}
@@ -843,7 +691,7 @@ static int rz_ssi_dai_trigger(struct snd_pcm_substream *substream, int cmd,
{
struct rz_ssi_priv *ssi = snd_soc_dai_get_drvdata(dai);
struct rz_ssi_stream *strm = rz_ssi_stream_get(ssi, substream);
- int ret = 0, i, num_transfer = 1;
+ int ret = 0;
switch (cmd) {
case SNDRV_PCM_TRIGGER_RESUME:
@@ -858,28 +706,7 @@ static int rz_ssi_dai_trigger(struct snd_pcm_substream *substream, int cmd,
if (cmd == SNDRV_PCM_TRIGGER_START)
rz_ssi_stream_init(strm, substream);
- if (rz_ssi_is_dma_enabled(ssi)) {
- bool is_playback = rz_ssi_stream_is_play(substream);
-
- if (ssi->dma_rt)
- ret = rz_ssi_dma_slave_config(ssi, ssi->playback.dma_ch,
- is_playback);
- else
- ret = rz_ssi_dma_slave_config(ssi, strm->dma_ch,
- is_playback);
-
- /* Fallback to pio */
- if (ret < 0) {
- ssi->playback.transfer = rz_ssi_pio_send;
- ssi->capture.transfer = rz_ssi_pio_recv;
- rz_ssi_release_dma_channels(ssi);
- } else {
- /* For DMA, queue up multiple DMA descriptors */
- num_transfer = 4;
- }
- }
-
- for (i = 0; i < num_transfer; i++) {
+ if (!rz_ssi_is_dma_enabled(ssi)) {
ret = strm->transfer(ssi, strm);
if (ret)
return ret;
@@ -1026,6 +853,10 @@ static int rz_ssi_dai_hw_params(struct snd_pcm_substream *substream,
return -EINVAL;
}
+ /* Save the DMA channels for recovery. */
+ if (rz_ssi_is_dma_enabled(ssi))
+ ssi->dmas[substream->stream] = snd_dmaengine_pcm_get_chan(substream);
+
if (rz_ssi_is_stream_running(&ssi->playback) ||
rz_ssi_is_stream_running(&ssi->capture)) {
if (rz_ssi_is_valid_hw_params(ssi, rate, channels, sample_width, sample_bits))
@@ -1041,10 +872,21 @@ static int rz_ssi_dai_hw_params(struct snd_pcm_substream *substream,
if (ret)
return ret;
- return rz_ssi_clk_setup(ssi, rate, channels);
+ return rz_ssi_clk_setup(ssi, substream, rate, channels);
+}
+
+static int rz_ssi_dai_probe(struct snd_soc_dai *dai)
+{
+ struct rz_ssi_priv *ssi = snd_soc_dai_get_drvdata(dai);
+
+ snd_soc_dai_init_dma_data(dai, &ssi->dma_dais[SNDRV_PCM_STREAM_PLAYBACK],
+ &ssi->dma_dais[SNDRV_PCM_STREAM_CAPTURE]);
+
+ return 0;
}
static const struct snd_soc_dai_ops rz_ssi_dai_ops = {
+ .probe = rz_ssi_dai_probe,
.startup = rz_ssi_startup,
.shutdown = rz_ssi_shutdown,
.trigger = rz_ssi_dai_trigger,
@@ -1058,9 +900,9 @@ static const struct snd_pcm_hardware rz_ssi_pcm_hardware = {
SNDRV_PCM_INFO_MMAP_VALID |
SNDRV_PCM_INFO_RESUME |
SNDRV_PCM_INFO_PAUSE,
- .buffer_bytes_max = PREALLOC_BUFFER,
+ .buffer_bytes_max = 192 * 1024,
.period_bytes_min = 32,
- .period_bytes_max = 8192,
+ .period_bytes_max = 48 * 1024,
.channels_min = SSI_CHAN_MIN,
.channels_max = SSI_CHAN_MAX,
.periods_min = 1,
@@ -1068,8 +910,8 @@ static const struct snd_pcm_hardware rz_ssi_pcm_hardware = {
.fifo_size = 32 * 2,
};
-static int rz_ssi_pcm_open(struct snd_soc_component *component,
- struct snd_pcm_substream *substream)
+static int rz_ssi_pcm_open_pio(struct snd_soc_component *component,
+ struct snd_pcm_substream *substream)
{
snd_soc_set_runtime_hwparams(substream, &rz_ssi_pcm_hardware);
@@ -1077,6 +919,13 @@ static int rz_ssi_pcm_open(struct snd_soc_component *component,
SNDRV_PCM_HW_PARAM_PERIODS);
}
+static int rz_ssi_pcm_open_dma(struct snd_soc_component *component,
+ struct snd_pcm_substream *substream)
+{
+ return snd_pcm_hw_constraint_integer(substream->runtime,
+ SNDRV_PCM_HW_PARAM_PERIODS);
+}
+
static snd_pcm_uframes_t rz_ssi_pcm_pointer(struct snd_soc_component *component,
struct snd_pcm_substream *substream)
{
@@ -1093,7 +942,8 @@ static int rz_ssi_pcm_new(struct snd_soc_component *component,
{
snd_pcm_set_managed_buffer_all(rtd->pcm, SNDRV_DMA_TYPE_DEV,
rtd->card->snd_card->dev,
- PREALLOC_BUFFER, PREALLOC_BUFFER_MAX);
+ rz_ssi_pcm_hardware.buffer_bytes_max,
+ rz_ssi_pcm_hardware.buffer_bytes_max);
return 0;
}
@@ -1116,16 +966,29 @@ static struct snd_soc_dai_driver rz_ssi_soc_dai[] = {
},
};
-static const struct snd_soc_component_driver rz_ssi_soc_component = {
+static const struct snd_soc_component_driver rz_ssi_soc_component_pio = {
.name = "rz-ssi",
- .open = rz_ssi_pcm_open,
+ .open = rz_ssi_pcm_open_pio,
.pointer = rz_ssi_pcm_pointer,
.pcm_new = rz_ssi_pcm_new,
.legacy_dai_naming = 1,
};
+static const struct snd_soc_component_driver rz_ssi_soc_component_dma = {
+ .name = "rz-ssi",
+ .open = rz_ssi_pcm_open_dma,
+ .legacy_dai_naming = 1,
+};
+
+static struct snd_dmaengine_pcm_config rz_ssi_dmaengine_pcm_conf = {
+ .pcm_hardware = &rz_ssi_pcm_hardware,
+ .prepare_slave_config = snd_dmaengine_pcm_prepare_slave_config,
+};
+
static int rz_ssi_probe(struct platform_device *pdev)
{
+ const struct snd_soc_component_driver *component_driver;
+ struct device_node *np = pdev->dev.of_node;
struct device *dev = &pdev->dev;
struct rz_ssi_priv *ssi;
struct clk *audio_clk;
@@ -1141,7 +1004,6 @@ static int rz_ssi_probe(struct platform_device *pdev)
if (IS_ERR(ssi->base))
return PTR_ERR(ssi->base);
- ssi->phys = res->start;
ssi->clk = devm_clk_get(dev, "ssi");
if (IS_ERR(ssi->clk))
return PTR_ERR(ssi->clk);
@@ -1165,16 +1027,35 @@ static int rz_ssi_probe(struct platform_device *pdev)
ssi->audio_mck = ssi->audio_clk_1 ? ssi->audio_clk_1 : ssi->audio_clk_2;
- /* Detect DMA support */
- ret = rz_ssi_dma_request(ssi, dev);
- if (ret < 0) {
+ ssi->dma_dais[SNDRV_PCM_STREAM_PLAYBACK].addr = (dma_addr_t)res->start + SSIFTDR;
+ ssi->dma_dais[SNDRV_PCM_STREAM_CAPTURE].addr = (dma_addr_t)res->start + SSIFRDR;
+
+ if (of_property_present(np, "dma-names")) {
+ unsigned int flags = 0;
+
+ if (of_property_match_string(np, "dma-names", "rt") == 0) {
+ flags = SND_DMAENGINE_PCM_FLAG_HALF_DUPLEX;
+ rz_ssi_dmaengine_pcm_conf.chan_names[SNDRV_PCM_STREAM_PLAYBACK] = "rt";
+ } else {
+ rz_ssi_dmaengine_pcm_conf.chan_names[SNDRV_PCM_STREAM_PLAYBACK] = "tx";
+ rz_ssi_dmaengine_pcm_conf.chan_names[SNDRV_PCM_STREAM_CAPTURE] = "rx";
+ }
+ ret = devm_snd_dmaengine_pcm_register(&pdev->dev, &rz_ssi_dmaengine_pcm_conf,
+ flags);
+ } else {
+ ret = -ENODEV;
+ }
+
+ if (ret == -EPROBE_DEFER) {
+ return ret;
+ } else if (ret) {
dev_warn(dev, "DMA not available, using PIO\n");
ssi->playback.transfer = rz_ssi_pio_send;
ssi->capture.transfer = rz_ssi_pio_recv;
+ component_driver = &rz_ssi_soc_component_pio;
} else {
- dev_info(dev, "DMA enabled");
- ssi->playback.transfer = rz_ssi_dma_transfer;
- ssi->capture.transfer = rz_ssi_dma_transfer;
+ dev_info(dev, "DMA enabled\n");
+ component_driver = &rz_ssi_soc_component_dma;
}
ssi->playback.priv = ssi;
@@ -1185,17 +1066,13 @@ static int rz_ssi_probe(struct platform_device *pdev)
/* Error Interrupt */
ssi->irq_int = platform_get_irq_byname(pdev, "int_req");
- if (ssi->irq_int < 0) {
- ret = ssi->irq_int;
- goto err_release_dma_chs;
- }
+ if (ssi->irq_int < 0)
+ return ssi->irq_int;
ret = devm_request_irq(dev, ssi->irq_int, rz_ssi_interrupt,
0, dev_name(dev), ssi);
- if (ret < 0) {
- dev_err_probe(dev, ret, "irq request error (int_req)\n");
- goto err_release_dma_chs;
- }
+ if (ret < 0)
+ return dev_err_probe(dev, ret, "irq request error (int_req)\n");
if (!rz_ssi_is_dma_enabled(ssi)) {
/* Tx and Rx interrupts (pio only) */
@@ -1236,43 +1113,19 @@ static int rz_ssi_probe(struct platform_device *pdev)
}
ssi->rstc = devm_reset_control_get_exclusive(dev, NULL);
- if (IS_ERR(ssi->rstc)) {
- ret = PTR_ERR(ssi->rstc);
- goto err_release_dma_chs;
- }
+ if (IS_ERR(ssi->rstc))
+ return dev_err_probe(dev, PTR_ERR(ssi->rstc), "Failed to get reset\n");
/* Default 0 for power saving. Can be overridden via sysfs. */
pm_runtime_set_autosuspend_delay(dev, 0);
pm_runtime_use_autosuspend(dev);
ret = devm_pm_runtime_enable(dev);
- if (ret < 0) {
- dev_err(dev, "Failed to enable runtime PM!\n");
- goto err_release_dma_chs;
- }
-
- ret = devm_snd_soc_register_component(dev, &rz_ssi_soc_component,
- rz_ssi_soc_dai,
- ARRAY_SIZE(rz_ssi_soc_dai));
- if (ret < 0) {
- dev_err(dev, "failed to register snd component\n");
- goto err_release_dma_chs;
- }
-
- return 0;
-
-err_release_dma_chs:
- rz_ssi_release_dma_channels(ssi);
-
- return ret;
-}
-
-static void rz_ssi_remove(struct platform_device *pdev)
-{
- struct rz_ssi_priv *ssi = dev_get_drvdata(&pdev->dev);
-
- rz_ssi_release_dma_channels(ssi);
+ if (ret < 0)
+ return dev_err_probe(dev, ret, "Failed to enable runtime PM!\n");
- reset_control_assert(ssi->rstc);
+ return devm_snd_soc_register_component(dev, component_driver,
+ rz_ssi_soc_dai,
+ ARRAY_SIZE(rz_ssi_soc_dai));
}
static const struct of_device_id rz_ssi_of_match[] = {
@@ -1307,7 +1160,6 @@ static struct platform_driver rz_ssi_driver = {
.pm = pm_ptr(&rz_ssi_pm_ops),
},
.probe = rz_ssi_probe,
- .remove = rz_ssi_remove,
};
module_platform_driver(rz_ssi_driver);
--
2.43.0
^ permalink raw reply related [flat|nested] 21+ messages in thread
* Re: [PATCH v3 14/15] ASoC: renesas: rz-ssi: Use generic PCM dmaengine APIs
2026-04-07 13:35 ` [PATCH v3 14/15] ASoC: renesas: rz-ssi: Use generic PCM dmaengine APIs Claudiu
@ 2026-04-08 17:34 ` Mark Brown
0 siblings, 0 replies; 21+ messages in thread
From: Mark Brown @ 2026-04-08 17:34 UTC (permalink / raw)
To: Claudiu
Cc: vkoul, Frank.Li, lgirdwood, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
On Tue, Apr 07, 2026 at 04:35:06PM +0300, Claudiu wrote:
> From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
>
> On Renesas RZ/G2L and RZ/G3S SoCs (where this was tested), captured audio
> files occasionally contained random spikes when viewed with a tool such
> as Audacity. These spikes were also audible as popping noises.
Acked-by: Mark Brown <broonie@kernel.org>
* [PATCH v3 15/15] dmaengine: sh: rz-dmac: Set the Link End (LE) bit on the last descriptor
2026-04-07 13:34 [PATCH v3 00/15] Renesas: dmaengine and ASoC fixes Claudiu
` (13 preceding siblings ...)
2026-04-07 13:35 ` [PATCH v3 14/15] ASoC: renesas: rz-ssi: Use generic PCM dmaengine APIs Claudiu
@ 2026-04-07 13:35 ` Claudiu
14 siblings, 0 replies; 21+ messages in thread
From: Claudiu @ 2026-04-07 13:35 UTC (permalink / raw)
To: vkoul, Frank.Li, lgirdwood, broonie, perex, tiwai, biju.das.jz,
prabhakar.mahadev-lad.rj, p.zabel, geert+renesas,
fabrizio.castro.jz
Cc: claudiu.beznea, dmaengine, linux-kernel, linux-sound,
linux-renesas-soc, Claudiu Beznea
From: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
On an RZ/G2L-based system, it has been observed that when the DMA channels
for all enabled IPs are active (TX and RX for one serial IP, TX and RX for
one audio IP, and TX and RX for one SPI IP), shortly after all of them are
started, the system can become irrecoverably blocked. In one debug session
the system did not block, and the DMA HW registers were inspected. It was
found that the DER (Descriptor Error) bit in the CHSTAT register for one of
the SPI DMA channels was set.
According to the RZ/G2L HW Manual, Rev. 1.30, chapter 14.4.7 Channel
Status Register n/nS (CHSTAT_n/nS), description of the DER bit, the DER
bit is set when the LV (Link Valid) value loaded with a descriptor in link
mode is 0. This means that the DMA engine has loaded an invalid
descriptor (as defined in Table 14.14, Header Area, of the same manual).
The same chapter states that when a descriptor error occurs, the transfer
is stopped, but no DMA error interrupt is generated.
Set the LE bit on the last descriptor of a transfer. This informs the DMA
engine that this is the final descriptor for the transfer.
Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
---
Changes in v3:
- none
Changes in v2:
- none
drivers/dma/sh/rz-dmac.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 3265c7b3ab83..ac388e7607df 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -200,6 +200,7 @@ struct rz_dmac {
/* LINK MODE DESCRIPTOR */
#define HEADER_LV BIT(0)
+#define HEADER_LE BIT(1)
#define HEADER_WBD BIT(2)
#define RZ_DMAC_MAX_CHAN_DESCRIPTORS 16
@@ -385,7 +386,7 @@ static void rz_dmac_prepare_desc_for_memcpy(struct rz_dmac_chan *channel)
lmdesc->chcfg = chcfg;
lmdesc->chitvl = 0;
lmdesc->chext = 0;
- lmdesc->header = HEADER_LV;
+ lmdesc->header = HEADER_LV | HEADER_LE;
rz_dmac_set_dma_req_no(dmac, channel->index, dmac->info->default_dma_req_no);
@@ -428,7 +429,7 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
lmdesc->chext = 0;
if (i == (sg_len - 1)) {
lmdesc->chcfg = (channel->chcfg & ~CHCFG_DEM);
- lmdesc->header = HEADER_LV;
+ lmdesc->header = HEADER_LV | HEADER_LE;
} else {
lmdesc->chcfg = channel->chcfg;
lmdesc->header = HEADER_LV;
--
2.43.0