From: Claudiu Beznea <claudiu.beznea@tuxon.dev>
To: vkoul@kernel.org, Frank.Li@kernel.org, lgirdwood@gmail.com,
broonie@kernel.org, perex@perex.cz, tiwai@suse.com,
biju.das.jz@bp.renesas.com,
prabhakar.mahadev-lad.rj@bp.renesas.com, p.zabel@pengutronix.de,
geert+renesas@glider.be, fabrizio.castro.jz@renesas.com,
long.luu.ur@renesas.com
Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-sound@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
Subject: sashiko.dev review (Re: [PATCH v4 10/17] dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor processing)
Date: Mon, 20 Apr 2026 15:41:50 +0300 [thread overview]
Message-ID: <51d92b2a-152e-4b7a-a066-3bb43687ac5f@tuxon.dev> (raw)
In-Reply-To: <20260411114303.2814115-11-claudiu.beznea.uj@bp.renesas.com>
> commit c2a00d64377d19d3ae2ce858af0fab6e9fae6f1a
> Author: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
> Subject: dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor processing
> This commit converts the rz-dmac driver to fully use virt-dma APIs for
> descriptor processing. It removes custom queue management (ld_queue,
> ld_active), keeping only ld_free to track pre-allocated descriptors. It also
> simplifies residue calculation and synchronization mechanisms.
> > diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
> > index f35ff57397189..04eb1a7f1e627 100644
> > --- a/drivers/dma/sh/rz-dmac.c
> > +++ b/drivers/dma/sh/rz-dmac.c
> [ ... ]
> > @@ -494,8 +485,6 @@ static void rz_dmac_free_chan_resources(struct dma_chan *chan)
> > rz_lmdesc_setup(channel, channel->lmdesc.base);
> >
> > rz_dmac_disable_hw(channel);
> > - list_splice_tail_init(&channel->ld_active, &channel->ld_free);
> > - list_splice_tail_init(&channel->ld_queue, &channel->ld_free);
> >
> > if (channel->mid_rid >= 0) {
> > clear_bit(channel->mid_rid, dmac->modules);
> > @@ -504,13 +493,19 @@ static void rz_dmac_free_chan_resources(struct dma_chan *chan)
> >
> > spin_unlock_irqrestore(&channel->vc.lock, flags);
> >
> > + vchan_free_chan_resources(&channel->vc);
> > +
> > + spin_lock_irqsave(&channel->vc.lock, flags);
> Can this result in a memory leak and use-after-free regression?
> The patch removes the old list splicing but does not clear channel->desc.
> If rz_dmac_free_chan_resources() is called without a prior terminate,
> channel->desc is not returned to ld_free or freed by
> vchan_free_chan_resources(), leading to a memory leak of the descriptor.
AFAIK, it is expected that device_terminate_all() is called before
device_free_chan_resources().
Vinod, Frank,
Could you please let me know if my assumption is wrong?
Thank you!
> Additionally, the hardware is disabled and the lock is temporarily dropped
> before calling vchan_free_chan_resources(). During this unlocked window, the
> rz_dmac_irq_handler_thread could execute since channel->desc is not NULL.
> This would complete the descriptor, call rz_dmac_xfer_desc(), fetch the next
> descriptor, and restart the hardware while the channel is being destroyed.
If device_terminate_all() is called before device_free_chan_resources()
this shouldn't happen. channel->desc is set to NULL in device_terminate_all().
> Should vchan_terminate_vdesc() be called to clear channel->desc under the
> lock before dropping it?
This is already done in device_terminate_all().
> [ ... ]
> > @@ -558,22 +553,22 @@ rz_dmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
> > if (!desc)
> > return NULL;
> >
> > - for_each_sg(sgl, sg, sg_len, i)
> > - dma_length += sg_dma_len(sg);
> > + list_del(&desc->node);
> > + }
> >
> > - desc->type = RZ_DMAC_DESC_SLAVE_SG;
> > - desc->sg = sgl;
> > - desc->sgcount = sg_len;
> This issue is not introduced by this commit, but could this assignment cause a
> buffer overflow?
> The variable sg_len is assigned to desc->sgcount without validating that it is
> less than or equal to the hardware limit (DMAC_NR_LMDESC, which is 64). Later
> in rz_dmac_prepare_descs_for_slave_sg(), the code iterates sgcount times to
> populate the hardware descriptor ring. If an SG list with more than 64 entries
> is provided, the loop will wrap around and overwrite the beginning of the DMA
> ring, corrupting the DMA descriptors.
As mentioned in a previous review comment, this can happen, but, as it is not
introduced by this series, I'll prepare a cleanup patch later, after this series
is integrated, as it is already big enough.
> [ ... ]
> > @@ -692,6 +681,8 @@ static void rz_dmac_device_synchronize(struct dma_chan *chan)
> > u32 chstat;
> > int ret;
> >
> > + vchan_synchronize(&channel->vc);
> > +
> > ret = read_poll_timeout(rz_dmac_ch_readl, chstat, !(chstat & CHSTAT_EN),
> > 100, 100000, false, channel, CHSTAT, 1);
> Could calling vchan_synchronize() before polling the hardware lead to a
> use-after-free?
> If the hardware is actively processing a transfer, vchan_synchronize() will
> return immediately because no completion tasklet has been scheduled yet.
> The hardware is then polled until it finishes, at which point it asserts an
> IRQ and schedules the virt-dma tasklet.
> However, the caller of rz_dmac_device_synchronize() would have already
> resumed assuming all activity is finished, potentially freeing the transfer
> context before the delayed tasklet runs.
> Should vchan_synchronize() be called after read_poll_timeout() ensures the
> hardware has actually stopped?
AFAIK, it is expected that at this point the user has already called
device_terminate_all(). That frees the active transfer and sets channel->desc =
NULL, which doesn't allow scheduling any tasklet.
Vinod, Frank,
Could you please let me know if I'm wrong here?
> [ ... ]
> > @@ -739,58 +730,22 @@ static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel,
> > static u32 rz_dmac_chan_get_residue(struct rz_dmac_chan *channel,
> > dma_cookie_t cookie)
> > {
> > - struct rz_dmac_desc *current_desc, *desc;
> > - enum dma_status status;
> > + struct rz_dmac_desc *desc = NULL;
> > + struct virt_dma_desc *vd;
> > u32 crla, crtb, i;
> >
> > - /* Get current processing virtual descriptor */
> > - current_desc = list_first_entry_or_null(&channel->ld_active,
> > - struct rz_dmac_desc, node);
> > - if (!current_desc)
> > - return 0;
> [ ... ]
> > + vd = vchan_find_desc(&channel->vc, cookie);
> > + if (vd) {
> > + /* Descriptor has been issued but not yet processed. */
> > + desc = to_rz_dmac_desc(vd);
> > + return desc->len;
> > + } else if (channel->desc && channel->desc->vd.tx.cookie == cookie) {
> > + /* Descriptor is currently processed. */
> > + desc = channel->desc;
> > + }
> >
> > - /*
> > - * No descriptor found for the cookie, there's thus no residue.
> > - * This shouldn't happen if the calling driver passes a correct
> > - * cookie value.
> > - */
> > - WARN(1, "No descriptor for cookie!");
> > + if (!desc) {
> > + /* Descriptor was not found. May be already completed by now. */
> > return 0;
> > }
> Does this logic correctly handle descriptors that have been submitted but not
> yet issued?
> The vchan_find_desc() function only searches the desc_issued list. If a client
> has submitted a transfer using vchan_tx_submit() but has not yet called
> dma_async_issue_pending(), the descriptor will be in the desc_submitted list.
> As a result, vd will be NULL, and the function will fall through to return
> 0 bytes instead of the descriptor's true length. This changes the behavior
> from the previous implementation and may break clients that rely on
> dmaengine_tx_status() to check the progress of submitted transfers.
Investigating existing drivers that use the pattern proposed in this set, I
found that this is how they implement the tx_status API. Based on that, I've
adjusted it accordingly.
Vinod, Frank,
Could you please let me know what is expected here?
Thank you!
> [ ... ]
> > @@ -914,28 +865,18 @@ static irqreturn_t rz_dmac_irq_handler(int irq, void *dev_id)
> > static irqreturn_t rz_dmac_irq_handler_thread(int irq, void *dev_id)
> > {
> > struct rz_dmac_chan *channel = dev_id;
> > - struct rz_dmac_desc *desc = NULL;
> > - unsigned long flags;
> > + struct rz_dmac_desc *desc;
> >
> > - spin_lock_irqsave(&channel->vc.lock, flags);
> > + guard(spinlock_irqsave)(&channel->vc.lock);
> >
> > - if (list_empty(&channel->ld_active)) {
> > - /* Someone might have called terminate all */
> > - goto out;
> > - }
> > + desc = channel->desc;
> > + if (!desc)
> > + return IRQ_HANDLED;
> >
> > - desc = list_first_entry(&channel->ld_active, struct rz_dmac_desc, node);
> > vchan_cookie_complete(&desc->vd);
> > - list_move_tail(channel->ld_active.next, &channel->ld_free);
> This issue is not introduced by this commit, but can a race condition here
> lead to premature completion of new transfers?
> If a transfer completes and asserts the hardirq, the threaded handler is
> scheduled. Before the thread runs, the client could call
> dmaengine_terminate_all() and immediately issue a new transfer. The
> rz_dmac_issue_pending() function would start the new transfer and assign it
> to channel->desc.
> When the delayed IRQ thread finally runs, it takes the lock and sees
> channel->desc pointing to the new transfer. It then unconditionally completes
> it via vchan_cookie_complete(). This could result in a transfer being
> prematurely reported as complete while it is still running.
I think this can happen in theory but, as the behaviour is not introduced by
this commit, I will keep the series as is.