* [PATCH v2 0/2] Xilinx DPDMA fixes and cyclic dma mode support
@ 2024-02-28 4:21 Vishal Sagar
2024-02-28 4:21 ` [PATCH v2 1/2] dmaengine: xilinx: dpdma: Fix race condition in vsync IRQ Vishal Sagar
2024-02-28 4:21 ` [PATCH v2 2/2] dmaengine: xilinx: dpdma: Add support for cyclic dma mode Vishal Sagar
0 siblings, 2 replies; 9+ messages in thread
From: Vishal Sagar @ 2024-02-28 4:21 UTC (permalink / raw)
To: laurent.pinchart, vkoul
Cc: michal.simek, dmaengine, linux-arm-kernel, linux-kernel,
varunkumar.allagadapa
This patch series fixes a race condition in the vsync IRQ handler and
adds support for cyclic DMA mode. Cyclic DMA mode is added in
preparation for audio streaming support.
v2:
- 1/2:
  - Fix as per Laurent's comment in
    https://lore.kernel.org/lkml/YftWeLCpAfN%2FJnFj@pendragon.ideasonboard.com/
  - Take the lock in one additional place
Neel Gandhi (1):
dmaengine: xilinx: dpdma: Fix race condition in vsync IRQ
Rohit Visavalia (1):
dmaengine: xilinx: dpdma: Add support for cyclic dma mode
drivers/dma/xilinx/xilinx_dpdma.c | 107 +++++++++++++++++++++++++++++-
1 file changed, 105 insertions(+), 2 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH v2 1/2] dmaengine: xilinx: dpdma: Fix race condition in vsync IRQ
2024-02-28 4:21 [PATCH v2 0/2] Xilinx DPDMA fixes and cyclic dma mode support Vishal Sagar
@ 2024-02-28 4:21 ` Vishal Sagar
2024-03-12 8:33 ` Tomi Valkeinen
` (2 more replies)
2024-02-28 4:21 ` [PATCH v2 2/2] dmaengine: xilinx: dpdma: Add support for cyclic dma mode Vishal Sagar
1 sibling, 3 replies; 9+ messages in thread
From: Vishal Sagar @ 2024-02-28 4:21 UTC (permalink / raw)
To: laurent.pinchart, vkoul
Cc: michal.simek, dmaengine, linux-arm-kernel, linux-kernel,
varunkumar.allagadapa
From: Neel Gandhi <neel.gandhi@xilinx.com>
The vchan_next_desc() function, called from
xilinx_dpdma_chan_queue_transfer(), must be called with
virt_dma_chan.lock held. This isn't correctly handled in all code paths,
resulting in a race condition between the .device_issue_pending()
handler and the IRQ handler which causes DMA to randomly stop. Fix it by
taking the lock around xilinx_dpdma_chan_queue_transfer() calls that are
missing it.
Signed-off-by: Neel Gandhi <neel.gandhi@amd.com>
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
Signed-off-by: Vishal Sagar <vishal.sagar@amd.com>
Link: https://lore.kernel.org/all/20220122121407.11467-1-neel.gandhi@xilinx.com
---
drivers/dma/xilinx/xilinx_dpdma.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/dma/xilinx/xilinx_dpdma.c b/drivers/dma/xilinx/xilinx_dpdma.c
index b82815e64d24..28d9af8f00f0 100644
--- a/drivers/dma/xilinx/xilinx_dpdma.c
+++ b/drivers/dma/xilinx/xilinx_dpdma.c
@@ -1097,12 +1097,14 @@ static void xilinx_dpdma_chan_vsync_irq(struct xilinx_dpdma_chan *chan)
* Complete the active descriptor, if any, promote the pending
* descriptor to active, and queue the next transfer, if any.
*/
+ spin_lock(&chan->vchan.lock);
if (chan->desc.active)
vchan_cookie_complete(&chan->desc.active->vdesc);
chan->desc.active = pending;
chan->desc.pending = NULL;
xilinx_dpdma_chan_queue_transfer(chan);
+ spin_unlock(&chan->vchan.lock);
out:
spin_unlock_irqrestore(&chan->lock, flags);
@@ -1264,10 +1266,12 @@ static void xilinx_dpdma_issue_pending(struct dma_chan *dchan)
struct xilinx_dpdma_chan *chan = to_xilinx_chan(dchan);
unsigned long flags;
- spin_lock_irqsave(&chan->vchan.lock, flags);
+ spin_lock_irqsave(&chan->lock, flags);
+ spin_lock(&chan->vchan.lock);
if (vchan_issue_pending(&chan->vchan))
xilinx_dpdma_chan_queue_transfer(chan);
- spin_unlock_irqrestore(&chan->vchan.lock, flags);
+ spin_unlock(&chan->vchan.lock);
+ spin_unlock_irqrestore(&chan->lock, flags);
}
static int xilinx_dpdma_config(struct dma_chan *dchan,
@@ -1495,7 +1499,9 @@ static void xilinx_dpdma_chan_err_task(struct tasklet_struct *t)
XILINX_DPDMA_EINTR_CHAN_ERR_MASK << chan->id);
spin_lock_irqsave(&chan->lock, flags);
+ spin_lock(&chan->vchan.lock);
xilinx_dpdma_chan_queue_transfer(chan);
+ spin_unlock(&chan->vchan.lock);
spin_unlock_irqrestore(&chan->lock, flags);
}
--
2.25.1
^ permalink raw reply related [flat|nested] 9+ messages in thread
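[Editorial note: the lock nesting that the patch above introduces (the channel's `chan->lock` taken first, then the inner `chan->vchan.lock` around `xilinx_dpdma_chan_queue_transfer()`) can be modeled in userspace with pthreads. This is an illustrative sketch of the ordering only; the struct and function names are invented for the example and are not the driver's actual code.]

```c
/* Userspace model of the lock nesting in xilinx_dpdma_issue_pending():
 * the outer lock models chan->lock (channel/descriptor state), the
 * inner lock models chan->vchan.lock (virt-dma queues). Every caller
 * of queue_transfer() must hold both, in this order, which is exactly
 * what the patch enforces in all three code paths.
 */
#include <assert.h>
#include <pthread.h>

struct model_chan {
	pthread_mutex_t lock;       /* models chan->lock (outer) */
	pthread_mutex_t vchan_lock; /* models chan->vchan.lock (inner) */
	int pending;                /* descriptors waiting to be queued */
	int queued;                 /* descriptors handed to "hardware" */
};

/* Must be called with both locks held, as the patch requires. */
static void queue_transfer(struct model_chan *c)
{
	c->queued += c->pending;
	c->pending = 0;
}

static void issue_pending(struct model_chan *c)
{
	pthread_mutex_lock(&c->lock);       /* outer lock first */
	pthread_mutex_lock(&c->vchan_lock); /* then the inner vchan lock */
	if (c->pending)
		queue_transfer(c);
	pthread_mutex_unlock(&c->vchan_lock);
	pthread_mutex_unlock(&c->lock);
}
```

Taking the locks in a fixed outer/inner order in both the `.device_issue_pending()` path and the IRQ/tasklet paths is what removes the race: the two contexts can no longer interleave around the unprotected `vchan_next_desc()` call.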
* [PATCH v2 2/2] dmaengine: xilinx: dpdma: Add support for cyclic dma mode
2024-02-28 4:21 [PATCH v2 0/2] Xilinx DPDMA fixes and cyclic dma mode support Vishal Sagar
2024-02-28 4:21 ` [PATCH v2 1/2] dmaengine: xilinx: dpdma: Fix race condition in vsync IRQ Vishal Sagar
@ 2024-02-28 4:21 ` Vishal Sagar
2024-03-13 7:19 ` Tomi Valkeinen
2024-03-27 12:53 ` Tomi Valkeinen
1 sibling, 2 replies; 9+ messages in thread
From: Vishal Sagar @ 2024-02-28 4:21 UTC (permalink / raw)
To: laurent.pinchart, vkoul
Cc: michal.simek, dmaengine, linux-arm-kernel, linux-kernel,
varunkumar.allagadapa
From: Rohit Visavalia <rohit.visavalia@xilinx.com>
This patch adds support for DPDMA cyclic DMA mode. Cyclic DMA
transfers are required for audio streaming.
Signed-off-by: Rohit Visavalia <rohit.visavalia@amd.com>
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
Signed-off-by: Vishal Sagar <vishal.sagar@amd.com>
---
drivers/dma/xilinx/xilinx_dpdma.c | 97 +++++++++++++++++++++++++++++++
1 file changed, 97 insertions(+)
diff --git a/drivers/dma/xilinx/xilinx_dpdma.c b/drivers/dma/xilinx/xilinx_dpdma.c
index 28d9af8f00f0..88ad2f35538a 100644
--- a/drivers/dma/xilinx/xilinx_dpdma.c
+++ b/drivers/dma/xilinx/xilinx_dpdma.c
@@ -669,6 +669,84 @@ static void xilinx_dpdma_chan_free_tx_desc(struct virt_dma_desc *vdesc)
kfree(desc);
}
+/**
+ * xilinx_dpdma_chan_prep_cyclic - Prepare a cyclic dma descriptor
+ * @chan: DPDMA channel
+ * @buf_addr: buffer address
+ * @buf_len: buffer length
+ * @period_len: length of each period in bytes
+ * @flags: tx flags argument passed in to prepare function
+ *
+ * Prepare a tx descriptor including internal software/hardware descriptors
+ * for the given cyclic transaction.
+ *
+ * Return: A dma async tx descriptor on success, or NULL.
+ */
+static struct dma_async_tx_descriptor *
+xilinx_dpdma_chan_prep_cyclic(struct xilinx_dpdma_chan *chan,
+ dma_addr_t buf_addr, size_t buf_len,
+ size_t period_len, unsigned long flags)
+{
+ struct xilinx_dpdma_tx_desc *tx_desc;
+ struct xilinx_dpdma_sw_desc *sw_desc, *last = NULL;
+ unsigned int periods = buf_len / period_len;
+ unsigned int i;
+
+ tx_desc = xilinx_dpdma_chan_alloc_tx_desc(chan);
+ if (!tx_desc)
+ return (void *)tx_desc;
+
+ for (i = 0; i < periods; i++) {
+ struct xilinx_dpdma_hw_desc *hw_desc;
+
+ if (!IS_ALIGNED(buf_addr, XILINX_DPDMA_ALIGN_BYTES)) {
+ dev_err(chan->xdev->dev,
+ "buffer should be aligned at %d B\n",
+ XILINX_DPDMA_ALIGN_BYTES);
+ goto error;
+ }
+
+ sw_desc = xilinx_dpdma_chan_alloc_sw_desc(chan);
+ if (!sw_desc)
+ goto error;
+
+ xilinx_dpdma_sw_desc_set_dma_addrs(chan->xdev, sw_desc, last,
+ &buf_addr, 1);
+ hw_desc = &sw_desc->hw;
+ hw_desc->xfer_size = period_len;
+ hw_desc->hsize_stride =
+ FIELD_PREP(XILINX_DPDMA_DESC_HSIZE_STRIDE_HSIZE_MASK,
+ period_len) |
+ FIELD_PREP(XILINX_DPDMA_DESC_HSIZE_STRIDE_STRIDE_MASK,
+ period_len);
+ hw_desc->control |= XILINX_DPDMA_DESC_CONTROL_PREEMBLE;
+ hw_desc->control |= XILINX_DPDMA_DESC_CONTROL_IGNORE_DONE;
+ hw_desc->control |= XILINX_DPDMA_DESC_CONTROL_COMPLETE_INTR;
+
+ list_add_tail(&sw_desc->node, &tx_desc->descriptors);
+
+ buf_addr += period_len;
+ last = sw_desc;
+ }
+
+ sw_desc = list_first_entry(&tx_desc->descriptors,
+ struct xilinx_dpdma_sw_desc, node);
+ last->hw.next_desc = lower_32_bits(sw_desc->dma_addr);
+ if (chan->xdev->ext_addr)
+ last->hw.addr_ext |=
+ FIELD_PREP(XILINX_DPDMA_DESC_ADDR_EXT_NEXT_ADDR_MASK,
+ upper_32_bits(sw_desc->dma_addr));
+
+ last->hw.control |= XILINX_DPDMA_DESC_CONTROL_LAST_OF_FRAME;
+
+ return vchan_tx_prep(&chan->vchan, &tx_desc->vdesc, flags);
+
+error:
+ xilinx_dpdma_chan_free_tx_desc(&tx_desc->vdesc);
+
+ return NULL;
+}
+
/**
* xilinx_dpdma_chan_prep_interleaved_dma - Prepare an interleaved dma
* descriptor
@@ -1190,6 +1268,23 @@ static void xilinx_dpdma_chan_handle_err(struct xilinx_dpdma_chan *chan)
/* -----------------------------------------------------------------------------
* DMA Engine Operations
*/
+static struct dma_async_tx_descriptor *
+xilinx_dpdma_prep_dma_cyclic(struct dma_chan *dchan, dma_addr_t buf_addr,
+ size_t buf_len, size_t period_len,
+ enum dma_transfer_direction direction,
+ unsigned long flags)
+{
+ struct xilinx_dpdma_chan *chan = to_xilinx_chan(dchan);
+
+ if (direction != DMA_MEM_TO_DEV)
+ return NULL;
+
+ if (buf_len % period_len)
+ return NULL;
+
+ return xilinx_dpdma_chan_prep_cyclic(chan, buf_addr, buf_len,
+ period_len, flags);
+}
static struct dma_async_tx_descriptor *
xilinx_dpdma_prep_interleaved_dma(struct dma_chan *dchan,
@@ -1673,6 +1768,7 @@ static int xilinx_dpdma_probe(struct platform_device *pdev)
dma_cap_set(DMA_SLAVE, ddev->cap_mask);
dma_cap_set(DMA_PRIVATE, ddev->cap_mask);
+ dma_cap_set(DMA_CYCLIC, ddev->cap_mask);
dma_cap_set(DMA_INTERLEAVE, ddev->cap_mask);
dma_cap_set(DMA_REPEAT, ddev->cap_mask);
dma_cap_set(DMA_LOAD_EOT, ddev->cap_mask);
@@ -1680,6 +1776,7 @@ static int xilinx_dpdma_probe(struct platform_device *pdev)
ddev->device_alloc_chan_resources = xilinx_dpdma_alloc_chan_resources;
ddev->device_free_chan_resources = xilinx_dpdma_free_chan_resources;
+ ddev->device_prep_dma_cyclic = xilinx_dpdma_prep_dma_cyclic;
ddev->device_prep_interleaved_dma = xilinx_dpdma_prep_interleaved_dma;
/* TODO: Can we achieve better granularity ? */
ddev->device_tx_status = dma_cookie_status;
--
2.25.1
^ permalink raw reply related [flat|nested] 9+ messages in thread
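[Editorial note: the descriptor ring that `xilinx_dpdma_chan_prep_cyclic()` builds above can be modeled in userspace. The sketch below splits a buffer into `buf_len / period_len` descriptors and links the last one back to the first, mirroring the `next_desc` wrap-around and the validation done in `xilinx_dpdma_prep_dma_cyclic()`. The struct and function names are invented for illustration and are not the driver's.]

```c
/* Userspace model of the cyclic descriptor ring: one descriptor per
 * period, each pointing at the next, with the last wrapping back to
 * the first so the hardware loops over the buffer forever.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

struct sw_desc {
	uint64_t dma_addr;    /* start address of this period */
	size_t xfer_size;     /* period length in bytes */
	struct sw_desc *next; /* models the hw.next_desc linkage */
};

/* Returns the head of the ring, or NULL on invalid arguments. */
static struct sw_desc *prep_cyclic(uint64_t buf_addr, size_t buf_len,
				   size_t period_len)
{
	struct sw_desc *descs;
	size_t periods, i;

	/* Same check as in xilinx_dpdma_prep_dma_cyclic(): the buffer
	 * must divide evenly into periods.
	 */
	if (!period_len || buf_len % period_len)
		return NULL;

	periods = buf_len / period_len;
	descs = calloc(periods, sizeof(*descs));
	if (!descs)
		return NULL;

	for (i = 0; i < periods; i++) {
		descs[i].dma_addr = buf_addr + i * period_len;
		descs[i].xfer_size = period_len;
		/* The last descriptor wraps back to the first. */
		descs[i].next = &descs[(i + 1) % periods];
	}
	return descs;
}
```

The wrap-around link is what distinguishes cyclic mode from a one-shot transfer: instead of the last descriptor terminating the chain, it re-points at the head, so each completed period raises a completion interrupt (the `COMPLETE_INTR` control bit in the patch) that the audio layer uses for period accounting.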
* Re: [PATCH v2 1/2] dmaengine: xilinx: dpdma: Fix race condition in vsync IRQ
2024-02-28 4:21 ` [PATCH v2 1/2] dmaengine: xilinx: dpdma: Fix race condition in vsync IRQ Vishal Sagar
@ 2024-03-12 8:33 ` Tomi Valkeinen
2024-03-12 15:56 ` Sagar, Vishal
2024-03-12 17:46 ` Sean Anderson
2024-03-27 12:32 ` Tomi Valkeinen
2 siblings, 1 reply; 9+ messages in thread
From: Tomi Valkeinen @ 2024-03-12 8:33 UTC (permalink / raw)
To: Vishal Sagar, laurent.pinchart, vkoul
Cc: michal.simek, dmaengine, linux-arm-kernel, linux-kernel,
varunkumar.allagadapa
Hi,
On 28/02/2024 06:21, Vishal Sagar wrote:
> From: Neel Gandhi <neel.gandhi@xilinx.com>
>
> The vchan_next_desc() function, called from
> xilinx_dpdma_chan_queue_transfer(), must be called with
> virt_dma_chan.lock held. This isn't correctly handled in all code paths,
> resulting in a race condition between the .device_issue_pending()
> handler and the IRQ handler which causes DMA to randomly stop. Fix it by
> taking the lock around xilinx_dpdma_chan_queue_transfer() calls that are
> missing it.
>
> Signed-off-by: Neel Gandhi <neel.gandhi@amd.com>
> Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
> Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
> Signed-off-by: Vishal Sagar <vishal.sagar@amd.com>
>
> Link: https://lore.kernel.org/all/20220122121407.11467-1-neel.gandhi@xilinx.com
> ---
> drivers/dma/xilinx/xilinx_dpdma.c | 10 ++++++++--
> 1 file changed, 8 insertions(+), 2 deletions(-)
This fixes a lockdep warning:
WARNING: CPU: 1 PID: 466 at drivers/dma/xilinx/xilinx_dpdma.c:834
Afaics, this issue has been around since the initial commit, in v5.10,
and the fix applies on top of v5.10. I have tested this on v6.2, which
is where the DP support was added to the board I have.
So I think you can add:
Fixes: 7cbb0c63de3f ("dmaengine: xilinx: dpdma: Add the Xilinx
DisplayPort DMA engine driver")
Tomi
> diff --git a/drivers/dma/xilinx/xilinx_dpdma.c b/drivers/dma/xilinx/xilinx_dpdma.c
> index b82815e64d24..28d9af8f00f0 100644
> --- a/drivers/dma/xilinx/xilinx_dpdma.c
> +++ b/drivers/dma/xilinx/xilinx_dpdma.c
> @@ -1097,12 +1097,14 @@ static void xilinx_dpdma_chan_vsync_irq(struct xilinx_dpdma_chan *chan)
> * Complete the active descriptor, if any, promote the pending
> * descriptor to active, and queue the next transfer, if any.
> */
> + spin_lock(&chan->vchan.lock);
> if (chan->desc.active)
> vchan_cookie_complete(&chan->desc.active->vdesc);
> chan->desc.active = pending;
> chan->desc.pending = NULL;
>
> xilinx_dpdma_chan_queue_transfer(chan);
> + spin_unlock(&chan->vchan.lock);
>
> out:
> spin_unlock_irqrestore(&chan->lock, flags);
> @@ -1264,10 +1266,12 @@ static void xilinx_dpdma_issue_pending(struct dma_chan *dchan)
> struct xilinx_dpdma_chan *chan = to_xilinx_chan(dchan);
> unsigned long flags;
>
> - spin_lock_irqsave(&chan->vchan.lock, flags);
> + spin_lock_irqsave(&chan->lock, flags);
> + spin_lock(&chan->vchan.lock);
> if (vchan_issue_pending(&chan->vchan))
> xilinx_dpdma_chan_queue_transfer(chan);
> - spin_unlock_irqrestore(&chan->vchan.lock, flags);
> + spin_unlock(&chan->vchan.lock);
> + spin_unlock_irqrestore(&chan->lock, flags);
> }
>
> static int xilinx_dpdma_config(struct dma_chan *dchan,
> @@ -1495,7 +1499,9 @@ static void xilinx_dpdma_chan_err_task(struct tasklet_struct *t)
> XILINX_DPDMA_EINTR_CHAN_ERR_MASK << chan->id);
>
> spin_lock_irqsave(&chan->lock, flags);
> + spin_lock(&chan->vchan.lock);
> xilinx_dpdma_chan_queue_transfer(chan);
> + spin_unlock(&chan->vchan.lock);
> spin_unlock_irqrestore(&chan->lock, flags);
> }
>
^ permalink raw reply [flat|nested] 9+ messages in thread
* RE: [PATCH v2 1/2] dmaengine: xilinx: dpdma: Fix race condition in vsync IRQ
2024-03-12 8:33 ` Tomi Valkeinen
@ 2024-03-12 15:56 ` Sagar, Vishal
0 siblings, 0 replies; 9+ messages in thread
From: Sagar, Vishal @ 2024-03-12 15:56 UTC (permalink / raw)
To: Tomi Valkeinen, laurent.pinchart@ideasonboard.com,
vkoul@kernel.org
Cc: Simek, Michal, dmaengine@vger.kernel.org,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, Allagadapa, Varunkumar
Hi Tomi,
> -----Original Message-----
> From: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
> Sent: Tuesday, March 12, 2024 1:34 AM
> To: Sagar, Vishal <vishal.sagar@amd.com>;
> laurent.pinchart@ideasonboard.com; vkoul@kernel.org
> Cc: Simek, Michal <michal.simek@amd.com>; dmaengine@vger.kernel.org;
> linux-arm-kernel@lists.infradead.org; linux-kernel@vger.kernel.org;
> Allagadapa, Varunkumar <varunkumar.allagadapa@amd.com>
> Subject: Re: [PATCH v2 1/2] dmaengine: xilinx: dpdma: Fix race condition in
> vsync IRQ
>
> Hi,
>
> On 28/02/2024 06:21, Vishal Sagar wrote:
> > From: Neel Gandhi <neel.gandhi@xilinx.com>
> >
> > The vchan_next_desc() function, called from
> > xilinx_dpdma_chan_queue_transfer(), must be called with
> > virt_dma_chan.lock held. This isn't correctly handled in all code paths,
> > resulting in a race condition between the .device_issue_pending()
> > handler and the IRQ handler which causes DMA to randomly stop. Fix it by
> > taking the lock around xilinx_dpdma_chan_queue_transfer() calls that are
> > missing it.
> >
> > Signed-off-by: Neel Gandhi <neel.gandhi@amd.com>
> > Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
> > Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
> > Signed-off-by: Vishal Sagar <vishal.sagar@amd.com>
> >
> > Link: https://lore.kernel.org/all/20220122121407.11467-1-
> neel.gandhi@xilinx.com
> > ---
> > drivers/dma/xilinx/xilinx_dpdma.c | 10 ++++++++--
> > 1 file changed, 8 insertions(+), 2 deletions(-)
>
> This fixes a lockdep warning:
>
> WARNING: CPU: 1 PID: 466 at drivers/dma/xilinx/xilinx_dpdma.c:834
>
> Afaics, this issue has been around since the initial commit, in v5.10,
> and the fix applies on top of v5.10. I have tested this on v6.2, which
> is where the DP support was added to the board I have.
>
> So I think you can add:
>
> Fixes: 7cbb0c63de3f ("dmaengine: xilinx: dpdma: Add the Xilinx
> DisplayPort DMA engine driver")
>
> Tomi
>
<snip>
Thanks for going through the patch.
I will add this to the commit message and resend v3.
I am still waiting for more reviews to happen.
Regards
Vishal Sagar
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH v2 1/2] dmaengine: xilinx: dpdma: Fix race condition in vsync IRQ
2024-02-28 4:21 ` [PATCH v2 1/2] dmaengine: xilinx: dpdma: Fix race condition in vsync IRQ Vishal Sagar
2024-03-12 8:33 ` Tomi Valkeinen
@ 2024-03-12 17:46 ` Sean Anderson
2024-03-27 12:32 ` Tomi Valkeinen
2 siblings, 0 replies; 9+ messages in thread
From: Sean Anderson @ 2024-03-12 17:46 UTC (permalink / raw)
To: Vishal Sagar, laurent.pinchart, vkoul
Cc: michal.simek, dmaengine, linux-arm-kernel, linux-kernel,
varunkumar.allagadapa
Hi Vishal,
On 2/27/24 23:21, Vishal Sagar wrote:
> From: Neel Gandhi <neel.gandhi@xilinx.com>
>
> The vchan_next_desc() function, called from
> xilinx_dpdma_chan_queue_transfer(), must be called with
> virt_dma_chan.lock held. This isn't correctly handled in all code paths,
> resulting in a race condition between the .device_issue_pending()
> handler and the IRQ handler which causes DMA to randomly stop. Fix it by
> taking the lock around xilinx_dpdma_chan_queue_transfer() calls that are
> missing it.
>
> Signed-off-by: Neel Gandhi <neel.gandhi@amd.com>
> Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
> Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
> Signed-off-by: Vishal Sagar <vishal.sagar@amd.com>
>
> Link: https://lore.kernel.org/all/20220122121407.11467-1-neel.gandhi@xilinx.com
> ---
> drivers/dma/xilinx/xilinx_dpdma.c | 10 ++++++++--
> 1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/dma/xilinx/xilinx_dpdma.c b/drivers/dma/xilinx/xilinx_dpdma.c
> index b82815e64d24..28d9af8f00f0 100644
> --- a/drivers/dma/xilinx/xilinx_dpdma.c
> +++ b/drivers/dma/xilinx/xilinx_dpdma.c
> @@ -1097,12 +1097,14 @@ static void xilinx_dpdma_chan_vsync_irq(struct xilinx_dpdma_chan *chan)
> * Complete the active descriptor, if any, promote the pending
> * descriptor to active, and queue the next transfer, if any.
> */
> + spin_lock(&chan->vchan.lock);
> if (chan->desc.active)
> vchan_cookie_complete(&chan->desc.active->vdesc);
> chan->desc.active = pending;
> chan->desc.pending = NULL;
>
> xilinx_dpdma_chan_queue_transfer(chan);
> + spin_unlock(&chan->vchan.lock);
>
> out:
> spin_unlock_irqrestore(&chan->lock, flags);
> @@ -1264,10 +1266,12 @@ static void xilinx_dpdma_issue_pending(struct dma_chan *dchan)
> struct xilinx_dpdma_chan *chan = to_xilinx_chan(dchan);
> unsigned long flags;
>
> - spin_lock_irqsave(&chan->vchan.lock, flags);
> + spin_lock_irqsave(&chan->lock, flags);
> + spin_lock(&chan->vchan.lock);
> if (vchan_issue_pending(&chan->vchan))
> xilinx_dpdma_chan_queue_transfer(chan);
> - spin_unlock_irqrestore(&chan->vchan.lock, flags);
> + spin_unlock(&chan->vchan.lock);
> + spin_unlock_irqrestore(&chan->lock, flags);
> }
>
> static int xilinx_dpdma_config(struct dma_chan *dchan,
> @@ -1495,7 +1499,9 @@ static void xilinx_dpdma_chan_err_task(struct tasklet_struct *t)
> XILINX_DPDMA_EINTR_CHAN_ERR_MASK << chan->id);
>
> spin_lock_irqsave(&chan->lock, flags);
> + spin_lock(&chan->vchan.lock);
> xilinx_dpdma_chan_queue_transfer(chan);
> + spin_unlock(&chan->vchan.lock);
> spin_unlock_irqrestore(&chan->lock, flags);
> }
I also ran into this issue and came up with the same fix [1].
Reviewed-by: Sean Anderson <sean.anderson@linux.dev>
[1] https://lore.kernel.org/dmaengine/20240308210034.3634938-2-sean.anderson@linux.dev/
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH v2 2/2] dmaengine: xilinx: dpdma: Add support for cyclic dma mode
2024-02-28 4:21 ` [PATCH v2 2/2] dmaengine: xilinx: dpdma: Add support for cyclic dma mode Vishal Sagar
@ 2024-03-13 7:19 ` Tomi Valkeinen
2024-03-27 12:53 ` Tomi Valkeinen
1 sibling, 0 replies; 9+ messages in thread
From: Tomi Valkeinen @ 2024-03-13 7:19 UTC (permalink / raw)
To: Vishal Sagar, laurent.pinchart, vkoul
Cc: michal.simek, dmaengine, linux-arm-kernel, linux-kernel,
varunkumar.allagadapa
On 28/02/2024 06:21, Vishal Sagar wrote:
> From: Rohit Visavalia <rohit.visavalia@xilinx.com>
>
> This patch adds support for DPDMA cyclic DMA mode. Cyclic DMA
> transfers are required for audio streaming.
>
> Signed-off-by: Rohit Visavalia <rohit.visavalia@amd.com>
> Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
> Signed-off-by: Vishal Sagar <vishal.sagar@amd.com>
>
> ---
> drivers/dma/xilinx/xilinx_dpdma.c | 97 +++++++++++++++++++++++++++++++
> 1 file changed, 97 insertions(+)
This works fine with the DP audio series I posted, so:
Tested-by: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
Tomi
> diff --git a/drivers/dma/xilinx/xilinx_dpdma.c b/drivers/dma/xilinx/xilinx_dpdma.c
> index 28d9af8f00f0..88ad2f35538a 100644
> --- a/drivers/dma/xilinx/xilinx_dpdma.c
> +++ b/drivers/dma/xilinx/xilinx_dpdma.c
> @@ -669,6 +669,84 @@ static void xilinx_dpdma_chan_free_tx_desc(struct virt_dma_desc *vdesc)
> kfree(desc);
> }
>
> +/**
> + * xilinx_dpdma_chan_prep_cyclic - Prepare a cyclic dma descriptor
> + * @chan: DPDMA channel
> + * @buf_addr: buffer address
> + * @buf_len: buffer length
> + * @period_len: length of each period in bytes
> + * @flags: tx flags argument passed in to prepare function
> + *
> + * Prepare a tx descriptor including internal software/hardware descriptors
> + * for the given cyclic transaction.
> + *
> + * Return: A dma async tx descriptor on success, or NULL.
> + */
> +static struct dma_async_tx_descriptor *
> +xilinx_dpdma_chan_prep_cyclic(struct xilinx_dpdma_chan *chan,
> + dma_addr_t buf_addr, size_t buf_len,
> + size_t period_len, unsigned long flags)
> +{
> + struct xilinx_dpdma_tx_desc *tx_desc;
> + struct xilinx_dpdma_sw_desc *sw_desc, *last = NULL;
> + unsigned int periods = buf_len / period_len;
> + unsigned int i;
> +
> + tx_desc = xilinx_dpdma_chan_alloc_tx_desc(chan);
> + if (!tx_desc)
> + return (void *)tx_desc;
> +
> + for (i = 0; i < periods; i++) {
> + struct xilinx_dpdma_hw_desc *hw_desc;
> +
> + if (!IS_ALIGNED(buf_addr, XILINX_DPDMA_ALIGN_BYTES)) {
> + dev_err(chan->xdev->dev,
> + "buffer should be aligned at %d B\n",
> + XILINX_DPDMA_ALIGN_BYTES);
> + goto error;
> + }
> +
> + sw_desc = xilinx_dpdma_chan_alloc_sw_desc(chan);
> + if (!sw_desc)
> + goto error;
> +
> + xilinx_dpdma_sw_desc_set_dma_addrs(chan->xdev, sw_desc, last,
> + &buf_addr, 1);
> + hw_desc = &sw_desc->hw;
> + hw_desc->xfer_size = period_len;
> + hw_desc->hsize_stride =
> + FIELD_PREP(XILINX_DPDMA_DESC_HSIZE_STRIDE_HSIZE_MASK,
> + period_len) |
> + FIELD_PREP(XILINX_DPDMA_DESC_HSIZE_STRIDE_STRIDE_MASK,
> + period_len);
> + hw_desc->control |= XILINX_DPDMA_DESC_CONTROL_PREEMBLE;
> + hw_desc->control |= XILINX_DPDMA_DESC_CONTROL_IGNORE_DONE;
> + hw_desc->control |= XILINX_DPDMA_DESC_CONTROL_COMPLETE_INTR;
> +
> + list_add_tail(&sw_desc->node, &tx_desc->descriptors);
> +
> + buf_addr += period_len;
> + last = sw_desc;
> + }
> +
> + sw_desc = list_first_entry(&tx_desc->descriptors,
> + struct xilinx_dpdma_sw_desc, node);
> + last->hw.next_desc = lower_32_bits(sw_desc->dma_addr);
> + if (chan->xdev->ext_addr)
> + last->hw.addr_ext |=
> + FIELD_PREP(XILINX_DPDMA_DESC_ADDR_EXT_NEXT_ADDR_MASK,
> + upper_32_bits(sw_desc->dma_addr));
> +
> + last->hw.control |= XILINX_DPDMA_DESC_CONTROL_LAST_OF_FRAME;
> +
> + return vchan_tx_prep(&chan->vchan, &tx_desc->vdesc, flags);
> +
> +error:
> + xilinx_dpdma_chan_free_tx_desc(&tx_desc->vdesc);
> +
> + return NULL;
> +}
> +
> /**
> * xilinx_dpdma_chan_prep_interleaved_dma - Prepare an interleaved dma
> * descriptor
> @@ -1190,6 +1268,23 @@ static void xilinx_dpdma_chan_handle_err(struct xilinx_dpdma_chan *chan)
> /* -----------------------------------------------------------------------------
> * DMA Engine Operations
> */
> +static struct dma_async_tx_descriptor *
> +xilinx_dpdma_prep_dma_cyclic(struct dma_chan *dchan, dma_addr_t buf_addr,
> + size_t buf_len, size_t period_len,
> + enum dma_transfer_direction direction,
> + unsigned long flags)
> +{
> + struct xilinx_dpdma_chan *chan = to_xilinx_chan(dchan);
> +
> + if (direction != DMA_MEM_TO_DEV)
> + return NULL;
> +
> + if (buf_len % period_len)
> + return NULL;
> +
> + return xilinx_dpdma_chan_prep_cyclic(chan, buf_addr, buf_len,
> + period_len, flags);
> +}
>
> static struct dma_async_tx_descriptor *
> xilinx_dpdma_prep_interleaved_dma(struct dma_chan *dchan,
> @@ -1673,6 +1768,7 @@ static int xilinx_dpdma_probe(struct platform_device *pdev)
>
> dma_cap_set(DMA_SLAVE, ddev->cap_mask);
> dma_cap_set(DMA_PRIVATE, ddev->cap_mask);
> + dma_cap_set(DMA_CYCLIC, ddev->cap_mask);
> dma_cap_set(DMA_INTERLEAVE, ddev->cap_mask);
> dma_cap_set(DMA_REPEAT, ddev->cap_mask);
> dma_cap_set(DMA_LOAD_EOT, ddev->cap_mask);
> @@ -1680,6 +1776,7 @@ static int xilinx_dpdma_probe(struct platform_device *pdev)
>
> ddev->device_alloc_chan_resources = xilinx_dpdma_alloc_chan_resources;
> ddev->device_free_chan_resources = xilinx_dpdma_free_chan_resources;
> + ddev->device_prep_dma_cyclic = xilinx_dpdma_prep_dma_cyclic;
> ddev->device_prep_interleaved_dma = xilinx_dpdma_prep_interleaved_dma;
> /* TODO: Can we achieve better granularity ? */
> ddev->device_tx_status = dma_cookie_status;
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH v2 1/2] dmaengine: xilinx: dpdma: Fix race condition in vsync IRQ
2024-02-28 4:21 ` [PATCH v2 1/2] dmaengine: xilinx: dpdma: Fix race condition in vsync IRQ Vishal Sagar
2024-03-12 8:33 ` Tomi Valkeinen
2024-03-12 17:46 ` Sean Anderson
@ 2024-03-27 12:32 ` Tomi Valkeinen
2 siblings, 0 replies; 9+ messages in thread
From: Tomi Valkeinen @ 2024-03-27 12:32 UTC (permalink / raw)
To: Vishal Sagar
Cc: michal.simek, dmaengine, linux-arm-kernel, linux-kernel,
varunkumar.allagadapa, laurent.pinchart, vkoul, Sean Anderson
On 28/02/2024 06:21, Vishal Sagar wrote:
> From: Neel Gandhi <neel.gandhi@xilinx.com>
>
> The vchan_next_desc() function, called from
> xilinx_dpdma_chan_queue_transfer(), must be called with
> virt_dma_chan.lock held. This isn't correctly handled in all code paths,
> resulting in a race condition between the .device_issue_pending()
> handler and the IRQ handler which causes DMA to randomly stop. Fix it by
> taking the lock around xilinx_dpdma_chan_queue_transfer() calls that are
> missing it.
>
> Signed-off-by: Neel Gandhi <neel.gandhi@amd.com>
> Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
> Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
> Signed-off-by: Vishal Sagar <vishal.sagar@amd.com>
Sean posted an almost identical, but slightly better, patch for this,
so I think we can pick that one instead.
Tomi
>
> Link: https://lore.kernel.org/all/20220122121407.11467-1-neel.gandhi@xilinx.com
> ---
> drivers/dma/xilinx/xilinx_dpdma.c | 10 ++++++++--
> 1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/dma/xilinx/xilinx_dpdma.c b/drivers/dma/xilinx/xilinx_dpdma.c
> index b82815e64d24..28d9af8f00f0 100644
> --- a/drivers/dma/xilinx/xilinx_dpdma.c
> +++ b/drivers/dma/xilinx/xilinx_dpdma.c
> @@ -1097,12 +1097,14 @@ static void xilinx_dpdma_chan_vsync_irq(struct xilinx_dpdma_chan *chan)
> * Complete the active descriptor, if any, promote the pending
> * descriptor to active, and queue the next transfer, if any.
> */
> + spin_lock(&chan->vchan.lock);
> if (chan->desc.active)
> vchan_cookie_complete(&chan->desc.active->vdesc);
> chan->desc.active = pending;
> chan->desc.pending = NULL;
>
> xilinx_dpdma_chan_queue_transfer(chan);
> + spin_unlock(&chan->vchan.lock);
>
> out:
> spin_unlock_irqrestore(&chan->lock, flags);
> @@ -1264,10 +1266,12 @@ static void xilinx_dpdma_issue_pending(struct dma_chan *dchan)
> struct xilinx_dpdma_chan *chan = to_xilinx_chan(dchan);
> unsigned long flags;
>
> - spin_lock_irqsave(&chan->vchan.lock, flags);
> + spin_lock_irqsave(&chan->lock, flags);
> + spin_lock(&chan->vchan.lock);
> if (vchan_issue_pending(&chan->vchan))
> xilinx_dpdma_chan_queue_transfer(chan);
> - spin_unlock_irqrestore(&chan->vchan.lock, flags);
> + spin_unlock(&chan->vchan.lock);
> + spin_unlock_irqrestore(&chan->lock, flags);
> }
>
> static int xilinx_dpdma_config(struct dma_chan *dchan,
> @@ -1495,7 +1499,9 @@ static void xilinx_dpdma_chan_err_task(struct tasklet_struct *t)
> XILINX_DPDMA_EINTR_CHAN_ERR_MASK << chan->id);
>
> spin_lock_irqsave(&chan->lock, flags);
> + spin_lock(&chan->vchan.lock);
> xilinx_dpdma_chan_queue_transfer(chan);
> + spin_unlock(&chan->vchan.lock);
> spin_unlock_irqrestore(&chan->lock, flags);
> }
>
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH v2 2/2] dmaengine: xilinx: dpdma: Add support for cyclic dma mode
2024-02-28 4:21 ` [PATCH v2 2/2] dmaengine: xilinx: dpdma: Add support for cyclic dma mode Vishal Sagar
2024-03-13 7:19 ` Tomi Valkeinen
@ 2024-03-27 12:53 ` Tomi Valkeinen
1 sibling, 0 replies; 9+ messages in thread
From: Tomi Valkeinen @ 2024-03-27 12:53 UTC (permalink / raw)
To: Vishal Sagar
Cc: michal.simek, dmaengine, linux-arm-kernel, linux-kernel,
varunkumar.allagadapa, laurent.pinchart, vkoul
On 28/02/2024 06:21, Vishal Sagar wrote:
> From: Rohit Visavalia <rohit.visavalia@xilinx.com>
>
> This patch adds support for DPDMA cyclic DMA mode. Cyclic DMA
> transfers are required for audio streaming.
>
> Signed-off-by: Rohit Visavalia <rohit.visavalia@amd.com>
> Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
> Signed-off-by: Vishal Sagar <vishal.sagar@amd.com>
>
> ---
> drivers/dma/xilinx/xilinx_dpdma.c | 97 +++++++++++++++++++++++++++++++
> 1 file changed, 97 insertions(+)
>
> diff --git a/drivers/dma/xilinx/xilinx_dpdma.c b/drivers/dma/xilinx/xilinx_dpdma.c
> index 28d9af8f00f0..88ad2f35538a 100644
> --- a/drivers/dma/xilinx/xilinx_dpdma.c
> +++ b/drivers/dma/xilinx/xilinx_dpdma.c
> @@ -669,6 +669,84 @@ static void xilinx_dpdma_chan_free_tx_desc(struct virt_dma_desc *vdesc)
> kfree(desc);
> }
>
> +/**
> + * xilinx_dpdma_chan_prep_cyclic - Prepare a cyclic dma descriptor
> + * @chan: DPDMA channel
> + * @buf_addr: buffer address
> + * @buf_len: buffer length
> + * @period_len: length of each period in bytes
> + * @flags: tx flags argument passed in to prepare function
> + *
> + * Prepare a tx descriptor including internal software/hardware descriptors
> + * for the given cyclic transaction.
> + *
> + * Return: A dma async tx descriptor on success, or NULL.
> + */
> +static struct dma_async_tx_descriptor *
> +xilinx_dpdma_chan_prep_cyclic(struct xilinx_dpdma_chan *chan,
> + dma_addr_t buf_addr, size_t buf_len,
> + size_t period_len, unsigned long flags)
> +{
> + struct xilinx_dpdma_tx_desc *tx_desc;
> + struct xilinx_dpdma_sw_desc *sw_desc, *last = NULL;
> + unsigned int periods = buf_len / period_len;
> + unsigned int i;
> +
> + tx_desc = xilinx_dpdma_chan_alloc_tx_desc(chan);
> + if (!tx_desc)
> + return (void *)tx_desc;
Just return NULL here?
> +
> + for (i = 0; i < periods; i++) {
> + struct xilinx_dpdma_hw_desc *hw_desc;
> +
> + if (!IS_ALIGNED(buf_addr, XILINX_DPDMA_ALIGN_BYTES)) {
> + dev_err(chan->xdev->dev,
> + "buffer should be aligned at %d B\n",
> + XILINX_DPDMA_ALIGN_BYTES);
> + goto error;
> + }
> +
> + sw_desc = xilinx_dpdma_chan_alloc_sw_desc(chan);
> + if (!sw_desc)
> + goto error;
> +
> + xilinx_dpdma_sw_desc_set_dma_addrs(chan->xdev, sw_desc, last,
> + &buf_addr, 1);
> + hw_desc = &sw_desc->hw;
> + hw_desc->xfer_size = period_len;
> + hw_desc->hsize_stride =
> + FIELD_PREP(XILINX_DPDMA_DESC_HSIZE_STRIDE_HSIZE_MASK,
> + period_len) |
> + FIELD_PREP(XILINX_DPDMA_DESC_HSIZE_STRIDE_STRIDE_MASK,
> + period_len);
> + hw_desc->control |= XILINX_DPDMA_DESC_CONTROL_PREEMBLE;
> + hw_desc->control |= XILINX_DPDMA_DESC_CONTROL_IGNORE_DONE;
> + hw_desc->control |= XILINX_DPDMA_DESC_CONTROL_COMPLETE_INTR;
You could:
hw_desc->control |= XILINX_DPDMA_DESC_CONTROL_PREEMBLE |
XILINX_DPDMA_DESC_CONTROL_IGNORE_DONE |
XILINX_DPDMA_DESC_CONTROL_COMPLETE_INTR;
Although... Shouldn't control always be 0 here, so you can just
hw_desc->control = ...;
> +
> + list_add_tail(&sw_desc->node, &tx_desc->descriptors);
> +
> + buf_addr += period_len;
> + last = sw_desc;
> + }
> +
> + sw_desc = list_first_entry(&tx_desc->descriptors,
> + struct xilinx_dpdma_sw_desc, node);
> + last->hw.next_desc = lower_32_bits(sw_desc->dma_addr);
> + if (chan->xdev->ext_addr)
> + last->hw.addr_ext |=
> + FIELD_PREP(XILINX_DPDMA_DESC_ADDR_EXT_NEXT_ADDR_MASK,
> + upper_32_bits(sw_desc->dma_addr));
> +
> + last->hw.control |= XILINX_DPDMA_DESC_CONTROL_LAST_OF_FRAME;
> +
> + return vchan_tx_prep(&chan->vchan, &tx_desc->vdesc, flags);
> +
> +error:
> + xilinx_dpdma_chan_free_tx_desc(&tx_desc->vdesc);
> +
> + return NULL;
> +}
> +
> /**
> * xilinx_dpdma_chan_prep_interleaved_dma - Prepare an interleaved dma
> * descriptor
> @@ -1190,6 +1268,23 @@ static void xilinx_dpdma_chan_handle_err(struct xilinx_dpdma_chan *chan)
> /* -----------------------------------------------------------------------------
> * DMA Engine Operations
> */
> +static struct dma_async_tx_descriptor *
> +xilinx_dpdma_prep_dma_cyclic(struct dma_chan *dchan, dma_addr_t buf_addr,
> + size_t buf_len, size_t period_len,
> + enum dma_transfer_direction direction,
> + unsigned long flags)
> +{
> + struct xilinx_dpdma_chan *chan = to_xilinx_chan(dchan);
> +
> + if (direction != DMA_MEM_TO_DEV)
> + return NULL;
> +
> + if (buf_len % period_len)
> + return NULL;
> +
> + return xilinx_dpdma_chan_prep_cyclic(chan, buf_addr, buf_len,
> + period_len, flags);
The parameters should be aligned above.
> +}
>
> static struct dma_async_tx_descriptor *
> xilinx_dpdma_prep_interleaved_dma(struct dma_chan *dchan,
> @@ -1673,6 +1768,7 @@ static int xilinx_dpdma_probe(struct platform_device *pdev)
>
> dma_cap_set(DMA_SLAVE, ddev->cap_mask);
> dma_cap_set(DMA_PRIVATE, ddev->cap_mask);
> + dma_cap_set(DMA_CYCLIC, ddev->cap_mask);
> dma_cap_set(DMA_INTERLEAVE, ddev->cap_mask);
> dma_cap_set(DMA_REPEAT, ddev->cap_mask);
> dma_cap_set(DMA_LOAD_EOT, ddev->cap_mask);
> @@ -1680,6 +1776,7 @@ static int xilinx_dpdma_probe(struct platform_device *pdev)
>
> ddev->device_alloc_chan_resources = xilinx_dpdma_alloc_chan_resources;
> ddev->device_free_chan_resources = xilinx_dpdma_free_chan_resources;
> + ddev->device_prep_dma_cyclic = xilinx_dpdma_prep_dma_cyclic;
> ddev->device_prep_interleaved_dma = xilinx_dpdma_prep_interleaved_dma;
> /* TODO: Can we achieve better granularity ? */
> ddev->device_tx_status = dma_cookie_status;
While I'm not too familiar with dma engines, this looks fine to me. So,
other than the few cosmetics comments:
Reviewed-by: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
Tomi
^ permalink raw reply [flat|nested] 9+ messages in thread