public inbox for dmaengine@vger.kernel.org
 help / color / mirror / Atom feed
* [PATCH v2 0/4] dmaengine: xilinx: Misc (cyclic) transfers fixes
@ 2023-11-30 11:13 Miquel Raynal
  2023-11-30 11:13 ` [PATCH v2 1/4] dmaengine: xilinx: xdma: Fix the count of elapsed periods in cyclic mode Miquel Raynal
                   ` (4 more replies)
  0 siblings, 5 replies; 15+ messages in thread
From: Miquel Raynal @ 2023-11-30 11:13 UTC (permalink / raw)
  To: Lizhi Hou, Brian Xu, Raj Kumar Rampelli, Vinod Koul
  Cc: Thomas Petazzoni, Michal Simek, dmaengine, Miquel Raynal

Hello,

So far, all my testing was performed by looping the playback output
back to the recording input and comparing the files using FFTs.
Unfortunately, when the DMA engine suffers from the same issue on both
sides, some issues may go unnoticed. I now have the proper hardware to
really listen to the actual sound, so here are a couple of fixes and
improvements.

Cheers,
Miquèl

Changes in v2:
* Added a patch to clarify the logic in the interrupt handler between
  cyclic and sg transfers.
* Fixed the count of elapsed periods without breaking SG.

Miquel Raynal (4):
  dmaengine: xilinx: xdma: Fix the count of elapsed periods in cyclic
    mode
  dmaengine: xilinx: xdma: Clarify the logic between cyclic/sg modes
  dmaengine: xilinx: xdma: Better handling of the busy variable
  dmaengine: xilinx: xdma: Add terminate_all/synchronize callbacks

 drivers/dma/xilinx/xdma.c | 103 +++++++++++++++++++++++++++++++-------
 1 file changed, 85 insertions(+), 18 deletions(-)

-- 
2.34.1


^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH v2 1/4] dmaengine: xilinx: xdma: Fix the count of elapsed periods in cyclic mode
  2023-11-30 11:13 [PATCH v2 0/4] dmaengine: xilinx: Misc (cyclic) transfers fixes Miquel Raynal
@ 2023-11-30 11:13 ` Miquel Raynal
  2023-11-30 11:13 ` [PATCH v2 2/4] dmaengine: xilinx: xdma: Clarify the logic between cyclic/sg modes Miquel Raynal
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 15+ messages in thread
From: Miquel Raynal @ 2023-11-30 11:13 UTC (permalink / raw)
  To: Lizhi Hou, Brian Xu, Raj Kumar Rampelli, Vinod Koul
  Cc: Thomas Petazzoni, Michal Simek, dmaengine, Miquel Raynal

The Xilinx DMA engine keeps track of the number of elapsed periods on
its own: this is an increasing 32-bit counter which is only reset when
the engine is turned off. There is no need to add this value to our
local counter.

Fixes: cd8c732ce1a5 ("dmaengine: xilinx: xdma: Support cyclic transfers")
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
---

Hello, so far all my testing was performed by looping the playback
output to the recording input and comparing the files using FFTs.
Unfortunately, when the DMA engine suffers from the same issue on both
sides, some issues may go unnoticed, which is likely what happened
here: the tooling did not report any issue while analyzing the output,
and I only noticed the bug by listening to real audio once I had the
relevant hardware/connectors in my hands.
---
 drivers/dma/xilinx/xdma.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/dma/xilinx/xdma.c b/drivers/dma/xilinx/xdma.c
index 84a88029226f..2c9c72d4b5a2 100644
--- a/drivers/dma/xilinx/xdma.c
+++ b/drivers/dma/xilinx/xdma.c
@@ -754,9 +754,9 @@ static irqreturn_t xdma_channel_isr(int irq, void *dev_id)
 	if (ret)
 		goto out;
 
-	desc->completed_desc_num += complete_desc_num;
-
 	if (desc->cyclic) {
+		desc->completed_desc_num = complete_desc_num;
+
 		ret = regmap_read(xdev->rmap, xchan->base + XDMA_CHAN_STATUS,
 				  &st);
 		if (ret)
@@ -768,6 +768,8 @@ static irqreturn_t xdma_channel_isr(int irq, void *dev_id)
 		goto out;
 	}
 
+	desc->completed_desc_num += complete_desc_num;
+
 	/*
 	 * if all data blocks are transferred, remove and complete the request
 	 */
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH v2 2/4] dmaengine: xilinx: xdma: Clarify the logic between cyclic/sg modes
  2023-11-30 11:13 [PATCH v2 0/4] dmaengine: xilinx: Misc (cyclic) transfers fixes Miquel Raynal
  2023-11-30 11:13 ` [PATCH v2 1/4] dmaengine: xilinx: xdma: Fix the count of elapsed periods in cyclic mode Miquel Raynal
@ 2023-11-30 11:13 ` Miquel Raynal
  2023-11-30 11:13 ` [PATCH v2 3/4] dmaengine: xilinx: xdma: Better handling of the busy variable Miquel Raynal
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 15+ messages in thread
From: Miquel Raynal @ 2023-11-30 11:13 UTC (permalink / raw)
  To: Lizhi Hou, Brian Xu, Raj Kumar Rampelli, Vinod Koul
  Cc: Thomas Petazzoni, Michal Simek, dmaengine, Miquel Raynal

We support both modes, but they involve totally different tasks in the
interrupt handler. Clarify what shall be done in each case.

Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
---
 drivers/dma/xilinx/xdma.c | 31 ++++++++++++++-----------------
 1 file changed, 14 insertions(+), 17 deletions(-)

diff --git a/drivers/dma/xilinx/xdma.c b/drivers/dma/xilinx/xdma.c
index 2c9c72d4b5a2..4efef1b5f89c 100644
--- a/drivers/dma/xilinx/xdma.c
+++ b/drivers/dma/xilinx/xdma.c
@@ -765,26 +765,23 @@ static irqreturn_t xdma_channel_isr(int irq, void *dev_id)
 		regmap_write(xdev->rmap, xchan->base + XDMA_CHAN_STATUS, st);
 
 		vchan_cyclic_callback(vd);
-		goto out;
-	}
-
-	desc->completed_desc_num += complete_desc_num;
+	} else {
+		desc->completed_desc_num += complete_desc_num;
 
-	/*
-	 * if all data blocks are transferred, remove and complete the request
-	 */
-	if (desc->completed_desc_num == desc->desc_num) {
-		list_del(&vd->node);
-		vchan_cookie_complete(vd);
-		goto out;
-	}
+		/* if all data blocks are transferred, remove and complete the request */
+		if (desc->completed_desc_num == desc->desc_num) {
+			list_del(&vd->node);
+			vchan_cookie_complete(vd);
+			goto out;
+		}
 
-	if (desc->completed_desc_num > desc->desc_num ||
-	    complete_desc_num != XDMA_DESC_BLOCK_NUM * XDMA_DESC_ADJACENT)
-		goto out;
+		if (desc->completed_desc_num > desc->desc_num ||
+		    complete_desc_num != XDMA_DESC_BLOCK_NUM * XDMA_DESC_ADJACENT)
+			goto out;
 
-	/* transfer the rest of data (SG only) */
-	xdma_xfer_start(xchan);
+		/* transfer the rest of data */
+		xdma_xfer_start(xchan);
+	}
 
 out:
 	spin_unlock(&xchan->vchan.lock);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH v2 3/4] dmaengine: xilinx: xdma: Better handling of the busy variable
  2023-11-30 11:13 [PATCH v2 0/4] dmaengine: xilinx: Misc (cyclic) transfers fixes Miquel Raynal
  2023-11-30 11:13 ` [PATCH v2 1/4] dmaengine: xilinx: xdma: Fix the count of elapsed periods in cyclic mode Miquel Raynal
  2023-11-30 11:13 ` [PATCH v2 2/4] dmaengine: xilinx: xdma: Clarify the logic between cyclic/sg modes Miquel Raynal
@ 2023-11-30 11:13 ` Miquel Raynal
  2023-11-30 11:13 ` [PATCH v2 4/4] dmaengine: xilinx: xdma: Add terminate_all/synchronize callbacks Miquel Raynal
  2023-12-21 16:30 ` [PATCH v2 0/4] dmaengine: xilinx: Misc (cyclic) transfers fixes Vinod Koul
  4 siblings, 0 replies; 15+ messages in thread
From: Miquel Raynal @ 2023-11-30 11:13 UTC (permalink / raw)
  To: Lizhi Hou, Brian Xu, Raj Kumar Rampelli, Vinod Koul
  Cc: Thomas Petazzoni, Michal Simek, dmaengine, Miquel Raynal

The driver internal scatter-gather logic is:
* set busy to true
* start transfer
<irq>
  * set busy to false
  * trigger next transfer if any
    * set busy to true
</irq>

Setting busy to false during cyclic transfers does not make any sense
and is conceptually wrong. In order to ease the integration of
additional callbacks, let's move this assignment to the scatter-gather
path.

Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
---
---
 drivers/dma/xilinx/xdma.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/dma/xilinx/xdma.c b/drivers/dma/xilinx/xdma.c
index 4efef1b5f89c..e931ff42209c 100644
--- a/drivers/dma/xilinx/xdma.c
+++ b/drivers/dma/xilinx/xdma.c
@@ -745,7 +745,6 @@ static irqreturn_t xdma_channel_isr(int irq, void *dev_id)
 	if (!vd)
 		goto out;
 
-	xchan->busy = false;
 	desc = to_xdma_desc(vd);
 	xdev = xchan->xdev_hdl;
 
@@ -766,6 +765,7 @@ static irqreturn_t xdma_channel_isr(int irq, void *dev_id)
 
 		vchan_cyclic_callback(vd);
 	} else {
+		xchan->busy = false;
 		desc->completed_desc_num += complete_desc_num;
 
 		/* if all data blocks are transferred, remove and complete the request */
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH v2 4/4] dmaengine: xilinx: xdma: Add terminate_all/synchronize callbacks
  2023-11-30 11:13 [PATCH v2 0/4] dmaengine: xilinx: Misc (cyclic) transfers fixes Miquel Raynal
                   ` (2 preceding siblings ...)
  2023-11-30 11:13 ` [PATCH v2 3/4] dmaengine: xilinx: xdma: Better handling of the busy variable Miquel Raynal
@ 2023-11-30 11:13 ` Miquel Raynal
  2023-11-30 17:28   ` Lizhi Hou
  2023-12-21 16:30 ` [PATCH v2 0/4] dmaengine: xilinx: Misc (cyclic) transfers fixes Vinod Koul
  4 siblings, 1 reply; 15+ messages in thread
From: Miquel Raynal @ 2023-11-30 11:13 UTC (permalink / raw)
  To: Lizhi Hou, Brian Xu, Raj Kumar Rampelli, Vinod Koul
  Cc: Thomas Petazzoni, Michal Simek, dmaengine, Miquel Raynal

The driver is capable of starting scatter-gather transfers and waiting
for their end, as well as starting cyclic transfers, which will only be
"reset" the next time the channel is reused. In practice, most of the
time we hear no audio glitch because the sound card stops the flow on
its side, so the DMA transfers are just discarded. There are however
some cases (e.g. when playing with the number of frames and with a
discontinuous sound file) where the sound card seems to be slightly too
slow at stopping the flow, leading to an audible glitch.

In all cases, we need to gain better control of the DMA engine, and
adding proper ->device_terminate_all() and ->device_synchronize()
callbacks is totally relevant here. With these two callbacks, no glitch
can be heard anymore.

Fixes: cd8c732ce1a5 ("dmaengine: xilinx: xdma: Support cyclic transfers")
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
---

This was only tested with cyclic transfers.
---
 drivers/dma/xilinx/xdma.c | 68 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 68 insertions(+)

diff --git a/drivers/dma/xilinx/xdma.c b/drivers/dma/xilinx/xdma.c
index e931ff42209c..290bb5d2d1e2 100644
--- a/drivers/dma/xilinx/xdma.c
+++ b/drivers/dma/xilinx/xdma.c
@@ -371,6 +371,31 @@ static int xdma_xfer_start(struct xdma_chan *xchan)
 		return ret;
 
 	xchan->busy = true;
+
+	return 0;
+}
+
+/**
+ * xdma_xfer_stop - Stop DMA transfer
+ * @xchan: DMA channel pointer
+ */
+static int xdma_xfer_stop(struct xdma_chan *xchan)
+{
+	struct virt_dma_desc *vd = vchan_next_desc(&xchan->vchan);
+	struct xdma_device *xdev = xchan->xdev_hdl;
+	int ret;
+
+	if (!vd || !xchan->busy)
+		return -EINVAL;
+
+	/* clear run stop bit to prevent any further auto-triggering */
+	ret = regmap_write(xdev->rmap, xchan->base + XDMA_CHAN_CONTROL_W1C,
+			   CHAN_CTRL_RUN_STOP);
+	if (ret)
+		return ret;
+
+	xchan->busy = false;
+
 	return 0;
 }
 
@@ -475,6 +500,47 @@ static void xdma_issue_pending(struct dma_chan *chan)
 	spin_unlock_irqrestore(&xdma_chan->vchan.lock, flags);
 }
 
+/**
+ * xdma_terminate_all - Terminate all transactions
+ * @chan: DMA channel pointer
+ */
+static int xdma_terminate_all(struct dma_chan *chan)
+{
+	struct xdma_chan *xdma_chan = to_xdma_chan(chan);
+	struct xdma_desc *desc = NULL;
+	struct virt_dma_desc *vd;
+	unsigned long flags;
+	LIST_HEAD(head);
+
+	spin_lock_irqsave(&xdma_chan->vchan.lock, flags);
+	xdma_xfer_stop(xdma_chan);
+
+	vd = vchan_next_desc(&xdma_chan->vchan);
+	if (vd)
+		desc = to_xdma_desc(vd);
+	if (desc) {
+		dma_cookie_complete(&desc->vdesc.tx);
+		vchan_terminate_vdesc(&desc->vdesc);
+	}
+
+	vchan_get_all_descriptors(&xdma_chan->vchan, &head);
+	spin_unlock_irqrestore(&xdma_chan->vchan.lock, flags);
+	vchan_dma_desc_free_list(&xdma_chan->vchan, &head);
+
+	return 0;
+}
+
+/**
+ * xdma_synchronize - Synchronize terminated transactions
+ * @chan: DMA channel pointer
+ */
+static void xdma_synchronize(struct dma_chan *chan)
+{
+	struct xdma_chan *xdma_chan = to_xdma_chan(chan);
+
+	vchan_synchronize(&xdma_chan->vchan);
+}
+
 /**
  * xdma_prep_device_sg - prepare a descriptor for a DMA transaction
  * @chan: DMA channel pointer
@@ -1088,6 +1154,8 @@ static int xdma_probe(struct platform_device *pdev)
 	xdev->dma_dev.device_prep_slave_sg = xdma_prep_device_sg;
 	xdev->dma_dev.device_config = xdma_device_config;
 	xdev->dma_dev.device_issue_pending = xdma_issue_pending;
+	xdev->dma_dev.device_terminate_all = xdma_terminate_all;
+	xdev->dma_dev.device_synchronize = xdma_synchronize;
 	xdev->dma_dev.filter.map = pdata->device_map;
 	xdev->dma_dev.filter.mapcnt = pdata->device_map_cnt;
 	xdev->dma_dev.filter.fn = xdma_filter_fn;
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 4/4] dmaengine: xilinx: xdma: Add terminate_all/synchronize callbacks
  2023-11-30 11:13 ` [PATCH v2 4/4] dmaengine: xilinx: xdma: Add terminate_all/synchronize callbacks Miquel Raynal
@ 2023-11-30 17:28   ` Lizhi Hou
  2023-11-30 19:06     ` Jan Kuliga
  0 siblings, 1 reply; 15+ messages in thread
From: Lizhi Hou @ 2023-11-30 17:28 UTC (permalink / raw)
  To: Miquel Raynal, Brian Xu, Raj Kumar Rampelli, Vinod Koul,
	Jan Kuliga
  Cc: Thomas Petazzoni, Michal Simek, dmaengine

Added Jan Kuliga who submitted a similar change.

https://lore.kernel.org/dmaengine/20231124192524.134989-1-jankul@alatek.krakow.pl/T/#m20c1ca4bba291f6ca07a8e5fbcaeed9fd0a6f008


Thanks,

Lizhi

On 11/30/23 03:13, Miquel Raynal wrote:
> The driver is capable of starting scatter-gather transfers and needs to
> wait until their end. It is also capable of starting cyclic transfers
> and will only be "reset" next time the channel will be reused. In
> practice most of the time we hear no audio glitch because the sound card
> stops the flow on its side so the DMA transfers are just
> discarded. There are however some cases (when playing a bit with a
> number of frames and with a discontinuous sound file) when the sound
> card seems to be slightly too slow at stopping the flow, leading to a
> glitch that can be heard.
>
> In all cases, we need to earn better control of the DMA engine and
> adding proper ->device_terminate_all() and ->device_synchronize()
> callbacks feels totally relevant. With these two callbacks, no glitch
> can be heard anymore.
>
> Fixes: cd8c732ce1a5 ("dmaengine: xilinx: xdma: Support cyclic transfers")
> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
> ---
>
> This was only tested with cyclic transfers.
> ---
>   drivers/dma/xilinx/xdma.c | 68 +++++++++++++++++++++++++++++++++++++++
>   1 file changed, 68 insertions(+)
>
> diff --git a/drivers/dma/xilinx/xdma.c b/drivers/dma/xilinx/xdma.c
> index e931ff42209c..290bb5d2d1e2 100644
> --- a/drivers/dma/xilinx/xdma.c
> +++ b/drivers/dma/xilinx/xdma.c
> @@ -371,6 +371,31 @@ static int xdma_xfer_start(struct xdma_chan *xchan)
>   		return ret;
>   
>   	xchan->busy = true;
> +
> +	return 0;
> +}
> +
> +/**
> + * xdma_xfer_stop - Stop DMA transfer
> + * @xchan: DMA channel pointer
> + */
> +static int xdma_xfer_stop(struct xdma_chan *xchan)
> +{
> +	struct virt_dma_desc *vd = vchan_next_desc(&xchan->vchan);
> +	struct xdma_device *xdev = xchan->xdev_hdl;
> +	int ret;
> +
> +	if (!vd || !xchan->busy)
> +		return -EINVAL;
> +
> +	/* clear run stop bit to prevent any further auto-triggering */
> +	ret = regmap_write(xdev->rmap, xchan->base + XDMA_CHAN_CONTROL_W1C,
> +			   CHAN_CTRL_RUN_STOP);
> +	if (ret)
> +		return ret;
> +
> +	xchan->busy = false;
> +
>   	return 0;
>   }
>   
> @@ -475,6 +500,47 @@ static void xdma_issue_pending(struct dma_chan *chan)
>   	spin_unlock_irqrestore(&xdma_chan->vchan.lock, flags);
>   }
>   
> +/**
> + * xdma_terminate_all - Terminate all transactions
> + * @chan: DMA channel pointer
> + */
> +static int xdma_terminate_all(struct dma_chan *chan)
> +{
> +	struct xdma_chan *xdma_chan = to_xdma_chan(chan);
> +	struct xdma_desc *desc = NULL;
> +	struct virt_dma_desc *vd;
> +	unsigned long flags;
> +	LIST_HEAD(head);
> +
> +	spin_lock_irqsave(&xdma_chan->vchan.lock, flags);
> +	xdma_xfer_stop(xdma_chan);
> +
> +	vd = vchan_next_desc(&xdma_chan->vchan);
> +	if (vd)
> +		desc = to_xdma_desc(vd);
> +	if (desc) {
> +		dma_cookie_complete(&desc->vdesc.tx);
> +		vchan_terminate_vdesc(&desc->vdesc);
> +	}
> +
> +	vchan_get_all_descriptors(&xdma_chan->vchan, &head);
> +	spin_unlock_irqrestore(&xdma_chan->vchan.lock, flags);
> +	vchan_dma_desc_free_list(&xdma_chan->vchan, &head);
> +
> +	return 0;
> +}
> +
> +/**
> + * xdma_synchronize - Synchronize terminated transactions
> + * @chan: DMA channel pointer
> + */
> +static void xdma_synchronize(struct dma_chan *chan)
> +{
> +	struct xdma_chan *xdma_chan = to_xdma_chan(chan);
> +
> +	vchan_synchronize(&xdma_chan->vchan);
> +}
> +
>   /**
>    * xdma_prep_device_sg - prepare a descriptor for a DMA transaction
>    * @chan: DMA channel pointer
> @@ -1088,6 +1154,8 @@ static int xdma_probe(struct platform_device *pdev)
>   	xdev->dma_dev.device_prep_slave_sg = xdma_prep_device_sg;
>   	xdev->dma_dev.device_config = xdma_device_config;
>   	xdev->dma_dev.device_issue_pending = xdma_issue_pending;
> +	xdev->dma_dev.device_terminate_all = xdma_terminate_all;
> +	xdev->dma_dev.device_synchronize = xdma_synchronize;
>   	xdev->dma_dev.filter.map = pdata->device_map;
>   	xdev->dma_dev.filter.mapcnt = pdata->device_map_cnt;
>   	xdev->dma_dev.filter.fn = xdma_filter_fn;

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 4/4] dmaengine: xilinx: xdma: Add terminate_all/synchronize callbacks
  2023-11-30 17:28   ` Lizhi Hou
@ 2023-11-30 19:06     ` Jan Kuliga
  2023-11-30 19:23       ` Miquel Raynal
  0 siblings, 1 reply; 15+ messages in thread
From: Jan Kuliga @ 2023-11-30 19:06 UTC (permalink / raw)
  To: Lizhi Hou, Miquel Raynal, Brian Xu, Raj Kumar Rampelli,
	Vinod Koul
  Cc: Thomas Petazzoni, Michal Simek, dmaengine

Hi,

On 30.11.2023 18:28, Lizhi Hou wrote:
> Added Jan Kuliga who submitted a similar change.
>
Thanks for CC'ing me on the other patchset. I'm currently working on
an interleaved DMA transfers implementation for XDMA. While testing it,
I've come across a flaw in my patch that you mentioned here (and it
also exists in Miquel's patch).

> https://lore.kernel.org/dmaengine/20231124192524.134989-1-jankul@alatek.krakow.pl/T/#m20c1ca4bba291f6ca07a8e5fbcaeed9fd0a6f008
>
> Thanks,
> 
> Lizhi
> 
> On 11/30/23 03:13, Miquel Raynal wrote:
>> The driver is capable of starting scatter-gather transfers and needs to
>> wait until their end. It is also capable of starting cyclic transfers
>> and will only be "reset" next time the channel will be reused. In
>> practice most of the time we hear no audio glitch because the sound card
>> stops the flow on its side so the DMA transfers are just
>> discarded. There are however some cases (when playing a bit with a
>> number of frames and with a discontinuous sound file) when the sound
>> card seems to be slightly too slow at stopping the flow, leading to a
>> glitch that can be heard.
>>
>> In all cases, we need to earn better control of the DMA engine and
>> adding proper ->device_terminate_all() and ->device_synchronize()
>> callbacks feels totally relevant. With these two callbacks, no glitch
>> can be heard anymore.
>>
>> Fixes: cd8c732ce1a5 ("dmaengine: xilinx: xdma: Support cyclic transfers")
>> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
>> ---
>>
>> This was only tested with cyclic transfers.
>> ---
>>   drivers/dma/xilinx/xdma.c | 68 +++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 68 insertions(+)
>>
>> diff --git a/drivers/dma/xilinx/xdma.c b/drivers/dma/xilinx/xdma.c
>> index e931ff42209c..290bb5d2d1e2 100644
>> --- a/drivers/dma/xilinx/xdma.c
>> +++ b/drivers/dma/xilinx/xdma.c
>> @@ -371,6 +371,31 @@ static int xdma_xfer_start(struct xdma_chan *xchan)
>>           return ret;
>>       xchan->busy = true;
>> +
>> +    return 0;
>> +}
>> +
>> +/**
>> + * xdma_xfer_stop - Stop DMA transfer
>> + * @xchan: DMA channel pointer
>> + */
>> +static int xdma_xfer_stop(struct xdma_chan *xchan)
>> +{
>> +    struct virt_dma_desc *vd = vchan_next_desc(&xchan->vchan);
>> +    struct xdma_device *xdev = xchan->xdev_hdl;
>> +    int ret;
>> +
>> +    if (!vd || !xchan->busy)
>> +        return -EINVAL;
>> +
>> +    /* clear run stop bit to prevent any further auto-triggering */
>> +    ret = regmap_write(xdev->rmap, xchan->base + XDMA_CHAN_CONTROL_W1C,
>> +               CHAN_CTRL_RUN_STOP);
>> +    if (ret)
>> +        return ret;

Shouldn't the status register be cleared prior to using it next time?
It can be cleared on read by reading from a separate register (offset
0x44).
>> +
>> +    xchan->busy = false;
>> +
>>       return 0;
>>   }
>> @@ -475,6 +500,47 @@ static void xdma_issue_pending(struct dma_chan 
>> *chan)
>>       spin_unlock_irqrestore(&xdma_chan->vchan.lock, flags);
>>   }
>> +/**
>> + * xdma_terminate_all - Terminate all transactions
>> + * @chan: DMA channel pointer
>> + */
>> +static int xdma_terminate_all(struct dma_chan *chan)
>> +{
>> +    struct xdma_chan *xdma_chan = to_xdma_chan(chan);
>> +    struct xdma_desc *desc = NULL;
>> +    struct virt_dma_desc *vd;
>> +    unsigned long flags;
>> +    LIST_HEAD(head);
>> +
>> +    spin_lock_irqsave(&xdma_chan->vchan.lock, flags);
>> +    xdma_xfer_stop(xdma_chan);
>> +
>> +    vd = vchan_next_desc(&xdma_chan->vchan);
>> +    if (vd)
>> +        desc = to_xdma_desc(vd);
>> +    if (desc) {
>> +        dma_cookie_complete(&desc->vdesc.tx);
Prior to a call to vchan_terminate_vdesc(), the vd node has to be
deleted from the vc.desc_issued list. Otherwise, if there is more than
one descriptor present on that list, its link with the list's head is
going to be lost and freeing the resources associated with it will
become impossible (doing so results in a dma_pool_destroy() failure). I
noticed it when I was playing with a large number of interleaved DMA
TXs.
>> +        vchan_terminate_vdesc(&desc->vdesc);
>> +    }
>> +
>> +    vchan_get_all_descriptors(&xdma_chan->vchan, &head);
>> +    spin_unlock_irqrestore(&xdma_chan->vchan.lock, flags);
>> +    vchan_dma_desc_free_list(&xdma_chan->vchan, &head);
>> +
>> +    return 0;
>> +}
>> +
>> +/**
>> + * xdma_synchronize - Synchronize terminated transactions
>> + * @chan: DMA channel pointer
>> + */
>> +static void xdma_synchronize(struct dma_chan *chan)
>> +{
>> +    struct xdma_chan *xdma_chan = to_xdma_chan(chan);
>> +
>> +    vchan_synchronize(&xdma_chan->vchan);
>> +}
>> +
>>   /**
>>    * xdma_prep_device_sg - prepare a descriptor for a DMA transaction
>>    * @chan: DMA channel pointer
>> @@ -1088,6 +1154,8 @@ static int xdma_probe(struct platform_device *pdev)
>>       xdev->dma_dev.device_prep_slave_sg = xdma_prep_device_sg;
>>       xdev->dma_dev.device_config = xdma_device_config;
>>       xdev->dma_dev.device_issue_pending = xdma_issue_pending;
>> +    xdev->dma_dev.device_terminate_all = xdma_terminate_all;
>> +    xdev->dma_dev.device_synchronize = xdma_synchronize;
>>       xdev->dma_dev.filter.map = pdata->device_map;
>>       xdev->dma_dev.filter.mapcnt = pdata->device_map_cnt;
>>       xdev->dma_dev.filter.fn = xdma_filter_fn;

I have already prepared a patch with an appropriate fix, which I'm
going to submit with the whole patch series once I have interleaved DMA
transfers properly sorted out (hopefully soon). Or should I rather post
the fix immediately, as a reply to the already-sent patch? What do you
prefer?

Thanks,
Jan

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 4/4] dmaengine: xilinx: xdma: Add terminate_all/synchronize callbacks
  2023-11-30 19:06     ` Jan Kuliga
@ 2023-11-30 19:23       ` Miquel Raynal
  2023-12-04  9:34         ` Jan Kuliga
  0 siblings, 1 reply; 15+ messages in thread
From: Miquel Raynal @ 2023-11-30 19:23 UTC (permalink / raw)
  To: Jan Kuliga
  Cc: Lizhi Hou, Brian Xu, Raj Kumar Rampelli, Vinod Koul,
	Thomas Petazzoni, Michal Simek, dmaengine

Hi Jan,

jankul@alatek.krakow.pl wrote on Thu, 30 Nov 2023 20:06:51 +0100:

> Hi,
> 
> On 30.11.2023 18:28, Lizhi Hou wrote:
> > Added Jan Kuliga who submitted a similar change.
> >  
> Thanks for CC'ing me to the other patchset. I'm currently working on interleaved-DMA transfers implementation for XDMA. While testing it, I've come across a flaw in mine patch you mentioned here (and it also exists in the Miquel's patch).
> 
> > https://lore.kernel.org/dmaengine/20231124192524.134989-1-jankul@alatek.krakow.pl/T/#m20c1ca4bba291f6ca07a8e5fbcaeed9fd0a6f008
> >
> > Thanks,
> > 
> > Lizhi
> > 
> > On 11/30/23 03:13, Miquel Raynal wrote:
> >> The driver is capable of starting scatter-gather transfers and needs to
> >> wait until their end. It is also capable of starting cyclic transfers
> >> and will only be "reset" next time the channel will be reused. In
> >> practice most of the time we hear no audio glitch because the sound card
> >> stops the flow on its side so the DMA transfers are just
> >> discarded. There are however some cases (when playing a bit with a
> >> number of frames and with a discontinuous sound file) when the sound
> >> card seems to be slightly too slow at stopping the flow, leading to a
> >> glitch that can be heard.
> >>
> >> In all cases, we need to earn better control of the DMA engine and
> >> adding proper ->device_terminate_all() and ->device_synchronize()
> >> callbacks feels totally relevant. With these two callbacks, no glitch
> >> can be heard anymore.
> >>
> >> Fixes: cd8c732ce1a5 ("dmaengine: xilinx: xdma: Support cyclic transfers")
> >> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
> >> ---
> >>
> >> This was only tested with cyclic transfers.
> >> ---
> >>   drivers/dma/xilinx/xdma.c | 68 +++++++++++++++++++++++++++++++++++++++
> >>   1 file changed, 68 insertions(+)
> >>
> >> diff --git a/drivers/dma/xilinx/xdma.c b/drivers/dma/xilinx/xdma.c
> >> index e931ff42209c..290bb5d2d1e2 100644
> >> --- a/drivers/dma/xilinx/xdma.c
> >> +++ b/drivers/dma/xilinx/xdma.c
> >> @@ -371,6 +371,31 @@ static int xdma_xfer_start(struct xdma_chan *xchan)
> >>           return ret;
> >>       xchan->busy = true;
> >> +
> >> +    return 0;
> >> +}
> >> +
> >> +/**
> >> + * xdma_xfer_stop - Stop DMA transfer
> >> + * @xchan: DMA channel pointer
> >> + */
> >> +static int xdma_xfer_stop(struct xdma_chan *xchan)
> >> +{
> >> +    struct virt_dma_desc *vd = vchan_next_desc(&xchan->vchan);
> >> +    struct xdma_device *xdev = xchan->xdev_hdl;
> >> +    int ret;
> >> +
> >> +    if (!vd || !xchan->busy)
> >> +        return -EINVAL;
> >> +
> >> +    /* clear run stop bit to prevent any further auto-triggering */
> >> +    ret = regmap_write(xdev->rmap, xchan->base + XDMA_CHAN_CONTROL_W1C,
> >> +               CHAN_CTRL_RUN_STOP);
> >> +    if (ret)
> >> +        return ret;
> 
> Shouldn't status register be cleared prior to using it next time? It can be cleared-on-read by doing a read from a separate register (offset 0x44).
> 
> >> +
> >> +    xchan->busy = false;
> >> +
> >>       return 0;
> >>   }
> >> @@ -475,6 +500,47 @@ static void xdma_issue_pending(struct dma_chan *chan)
> >>       spin_unlock_irqrestore(&xdma_chan->vchan.lock, flags);
> >>   }
> >> +/**
> >> + * xdma_terminate_all - Terminate all transactions
> >> + * @chan: DMA channel pointer
> >> + */
> >> +static int xdma_terminate_all(struct dma_chan *chan)
> >> +{
> >> +    struct xdma_chan *xdma_chan = to_xdma_chan(chan);
> >> +    struct xdma_desc *desc = NULL;
> >> +    struct virt_dma_desc *vd;
> >> +    unsigned long flags;
> >> +    LIST_HEAD(head);
> >> +
> >> +    spin_lock_irqsave(&xdma_chan->vchan.lock, flags);
> >> +    xdma_xfer_stop(xdma_chan);
> >> +
> >> +    vd = vchan_next_desc(&xdma_chan->vchan);
> >> +    if (vd)
> >> +        desc = to_xdma_desc(vd);
> >> +    if (desc) {
> >> +        dma_cookie_complete(&desc->vdesc.tx);
> 
> Prior to a call to vchan_terminate_vdesc(), the vd node has to be deleted from vc.desc_issued list. Otherwise, if there is more than one descriptor present on that list, its link with list's head is going to be lost and freeing resources associated with it will become impossible (doing so results in dma_pool_destroy() failure). I noticed it when I was playing with a large number of interleaved DMA TXs.
> 
> >> +        vchan_terminate_vdesc(&desc->vdesc);
> >> +    }
> >> +
> >> +    vchan_get_all_descriptors(&xdma_chan->vchan, &head);
> >> +    spin_unlock_irqrestore(&xdma_chan->vchan.lock, flags);
> >> +    vchan_dma_desc_free_list(&xdma_chan->vchan, &head);
> >> +
> >> +    return 0;
> >> +}
> >> +
> >> +/**
> >> + * xdma_synchronize - Synchronize terminated transactions
> >> + * @chan: DMA channel pointer
> >> + */
> >> +static void xdma_synchronize(struct dma_chan *chan)
> >> +{
> >> +    struct xdma_chan *xdma_chan = to_xdma_chan(chan);
> >> +
> >> +    vchan_synchronize(&xdma_chan->vchan);
> >> +}
> >> +
> >>   /**
> >>    * xdma_prep_device_sg - prepare a descriptor for a DMA transaction
> >>    * @chan: DMA channel pointer
> >> @@ -1088,6 +1154,8 @@ static int xdma_probe(struct platform_device *pdev)
> >>       xdev->dma_dev.device_prep_slave_sg = xdma_prep_device_sg;
> >>       xdev->dma_dev.device_config = xdma_device_config;
> >>       xdev->dma_dev.device_issue_pending = xdma_issue_pending;
> >> +    xdev->dma_dev.device_terminate_all = xdma_terminate_all;
> >> +    xdev->dma_dev.device_synchronize = xdma_synchronize;
> >>       xdev->dma_dev.filter.map = pdata->device_map;
> >>       xdev->dma_dev.filter.mapcnt = pdata->device_map_cnt;
> >>       xdev->dma_dev.filter.fn = xdma_filter_fn;
> 
> I have already prepared a patch with an appropriate fix, which I'm going to submit with the whole patch series, once I have interleaved DMA transfers properly sorted out (hopefully soon). Or maybe should I post this patch with fix, immediately as a reply to the already sent one? What do you prefer?
> I have already prepared a patch with an appropriate fix, which I'm going to submit with the whole patch series, once I have interleaved DMA transfers properly sorted out (hopefully soon). Or maybe should I post this patch with fix, immediately as a reply to the already sent one? What do you prefer?

I see. Well, in the case of cyclic transfers it looks like this is
enough (I don't have any way to test interleaved/SG transfers), so
maybe the maintainers can take this now, as it is ready and fixes
cyclic transfers. Then, when the interleaved transfers are ready, you
can improve these functions with a series on top of it?

Thanks,
Miquèl

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 4/4] dmaengine: xilinx: xdma: Add terminate_all/synchronize callbacks
  2023-11-30 19:23       ` Miquel Raynal
@ 2023-12-04  9:34         ` Jan Kuliga
  2023-12-04 11:02           ` Miquel Raynal
  0 siblings, 1 reply; 15+ messages in thread
From: Jan Kuliga @ 2023-12-04  9:34 UTC (permalink / raw)
  To: Miquel Raynal
  Cc: Lizhi Hou, Brian Xu, Raj Kumar Rampelli, Vinod Koul,
	Thomas Petazzoni, Michal Simek, dmaengine

Hi Miquel,

On 30.11.2023 20:23, Miquel Raynal wrote:
> Hi Jan,
> 
> jankul@alatek.krakow.pl wrote on Thu, 30 Nov 2023 20:06:51 +0100:
> 
>> Hi,
>>
>> On 30.11.2023 18:28, Lizhi Hou wrote:
>>> Added Jan Kuliga who submitted a similar change.
>>>   
>> Thanks for CC'ing me to the other patchset. I'm currently working on interleaved-DMA transfers implementation for XDMA. While testing it, I've come across a flaw in mine patch you mentioned here (and it also exists in the Miquel's patch).
>>
>>> https://lore.kernel.org/dmaengine/20231124192524.134989-1-jankul@alatek
>> .krakow.pl/T/#m20c1ca4bba291f6ca07a8e5fbcaeed9fd0a6f008 >
>>> Thanks,
>>>
>>> Lizhi
>>>
>>> On 11/30/23 03:13, Miquel Raynal wrote:
>>>> The driver is capable of starting scatter-gather transfers and needs to
>>>> wait until their end. It is also capable of starting cyclic transfers
>>>> and will only be "reset" next time the channel will be reused. In
>>>> practice most of the time we hear no audio glitch because the sound card
>>>> stops the flow on its side so the DMA transfers are just
>>>> discarded. There are however some cases (when playing a bit with a
>>>> number of frames and with a discontinuous sound file) when the sound
>>>> card seems to be slightly too slow at stopping the flow, leading to a
>>>> glitch that can be heard.
>>>>
>>>> In all cases, we need to earn better control of the DMA engine and
>>>> adding proper ->device_terminate_all() and ->device_synchronize()
>>>> callbacks feels totally relevant. With these two callbacks, no glitch
>>>> can be heard anymore.
>>>>
>>>> Fixes: cd8c732ce1a5 ("dmaengine: xilinx: xdma: Support cyclic transfers")
>>>> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
>>>> ---
>>>>
>>>> This was only tested with cyclic transfers.
>>>> ---
>>>>    drivers/dma/xilinx/xdma.c | 68 +++++++++++++++++++++++++++++++++++++++
>>>>    1 file changed, 68 insertions(+)
>>>>
>>>> diff --git a/drivers/dma/xilinx/xdma.c b/drivers/dma/xilinx/xdma.c
>>>> index e931ff42209c..290bb5d2d1e2 100644
>>>> --- a/drivers/dma/xilinx/xdma.c
>>>> +++ b/drivers/dma/xilinx/xdma.c
>>>> @@ -371,6 +371,31 @@ static int xdma_xfer_start(struct xdma_chan *xchan)
>>>>            return ret;
>>>>        xchan->busy = true;
>>>> +
>>>> +    return 0;
>>>> +}
>>>> +
>>>> +/**
>>>> + * xdma_xfer_stop - Stop DMA transfer
>>>> + * @xchan: DMA channel pointer
>>>> + */
>>>> +static int xdma_xfer_stop(struct xdma_chan *xchan)
>>>> +{
>>>> +    struct virt_dma_desc *vd = vchan_next_desc(&xchan->vchan);
>>>> +    struct xdma_device *xdev = xchan->xdev_hdl;
>>>> +    int ret;
>>>> +
>>>> +    if (!vd || !xchan->busy)
>>>> +        return -EINVAL;
>>>> +
>>>> +    /* clear run stop bit to prevent any further auto-triggering */
>>>> +    ret = regmap_write(xdev->rmap, xchan->base + XDMA_CHAN_CONTROL_W1C,
>>>> +                       CHAN_CTRL_RUN_STOP);
>>>> +    if (ret)
>>>> +        return ret;
>>
>> Shouldn't the status register be cleared prior to using it next time? It can be cleared-on-read by doing a read from a separate register (offset 0x44).
>>>> +
>>>> +    xchan->busy = false;
>>>> +
>>>>        return 0;
>>>>    }
>>>> @@ -475,6 +500,47 @@ static void xdma_issue_pending(struct dma_chan *chan)
>>>>        spin_unlock_irqrestore(&xdma_chan->vchan.lock, flags);
>>>>    }
>>>> +/**
>>>> + * xdma_terminate_all - Terminate all transactions
>>>> + * @chan: DMA channel pointer
>>>> + */
>>>> +static int xdma_terminate_all(struct dma_chan *chan)
>>>> +{
>>>> +    struct xdma_chan *xdma_chan = to_xdma_chan(chan);
>>>> +    struct xdma_desc *desc = NULL;
>>>> +    struct virt_dma_desc *vd;
>>>> +    unsigned long flags;
>>>> +    LIST_HEAD(head);
>>>> +
>>>> +    spin_lock_irqsave(&xdma_chan->vchan.lock, flags);
>>>> +    xdma_xfer_stop(xdma_chan);
>>>> +
>>>> +    vd = vchan_next_desc(&xdma_chan->vchan);
>>>> +    if (vd)
>>>> +        desc = to_xdma_desc(vd);
>>>> +    if (desc) {
>>>> +        dma_cookie_complete(&desc->vdesc.tx);
>> Prior to a call to vchan_terminate_vdesc(), the vd node has to be deleted from the vc.desc_issued list. Otherwise, if there is more than one descriptor present on that list, its link with the list's head is going to be lost and freeing the resources associated with it will become impossible (doing so results in a dma_pool_destroy() failure). I noticed it when I was playing with a large number of interleaved DMA TXs.
>>>> +        vchan_terminate_vdesc(&desc->vdesc);
>>>> +    }
>>>> +
>>>> +    vchan_get_all_descriptors(&xdma_chan->vchan, &head);
>>>> +    spin_unlock_irqrestore(&xdma_chan->vchan.lock, flags);
>>>> +    vchan_dma_desc_free_list(&xdma_chan->vchan, &head);
>>>> +
>>>> +    return 0;
>>>> +}
>>>> +
>>>> +/**
>>>> + * xdma_synchronize - Synchronize terminated transactions
>>>> + * @chan: DMA channel pointer
>>>> + */
>>>> +static void xdma_synchronize(struct dma_chan *chan)
>>>> +{
>>>> +    struct xdma_chan *xdma_chan = to_xdma_chan(chan);
>>>> +
>>>> +    vchan_synchronize(&xdma_chan->vchan);
>>>> +}
>>>> +
>>>>    /**
>>>>     * xdma_prep_device_sg - prepare a descriptor for a DMA transaction
>>>>     * @chan: DMA channel pointer
>>>> @@ -1088,6 +1154,8 @@ static int xdma_probe(struct platform_device *pdev)
>>>>        xdev->dma_dev.device_prep_slave_sg = xdma_prep_device_sg;
>>>>        xdev->dma_dev.device_config = xdma_device_config;
>>>>        xdev->dma_dev.device_issue_pending = xdma_issue_pending;
>>>> +    xdev->dma_dev.device_terminate_all = xdma_terminate_all;
>>>> +    xdev->dma_dev.device_synchronize = xdma_synchronize;
>>>>        xdev->dma_dev.filter.map = pdata->device_map;
>>>>        xdev->dma_dev.filter.mapcnt = pdata->device_map_cnt;
>>>>        xdev->dma_dev.filter.fn = xdma_filter_fn;
>>
>> I have already prepared a patch with an appropriate fix, which I'm going to submit with the whole patch series, once I have interleaved DMA transfers properly sorted out (hopefully soon). Or maybe should I post this patch with fix, immediately as a reply to the already sent one? What do you prefer?
> 
> I see. Well in the case of cyclic transfers it looks like this is enough
> (I don't have any way to test interleaved/SG transfers) so maybe
> maintainers can take this now as it is ready and fixes cyclic
> transfers, so when the interleaved transfers are ready you can
> improve these functions with a series on top of it?
> 
So I decided to base my new patchset on my previous one, as I haven't 
seen any ack from any maintainer yet on either my patchset or yours. I'm 
going to submit it this week.

This specific commit of yours (PATCH 4/4) basically does the same thing 
as my patch, so there will be no difference in its functionality, i.e. 
it will also fix cyclic transfers.

> Thanks,
> Miquèl
Thanks,
Jan

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 4/4] dmaengine: xilinx: xdma: Add terminate_all/synchronize callbacks
  2023-12-04  9:34         ` Jan Kuliga
@ 2023-12-04 11:02           ` Miquel Raynal
  2023-12-04 13:13             ` Jan Kuliga
  0 siblings, 1 reply; 15+ messages in thread
From: Miquel Raynal @ 2023-12-04 11:02 UTC (permalink / raw)
  To: Jan Kuliga
  Cc: Lizhi Hou, Brian Xu, Raj Kumar Rampelli, Vinod Koul,
	Thomas Petazzoni, Michal Simek, dmaengine

Hi Jan,

> [...]

Not related, but if you could fix your mailer, it is a bit hard to
track your answers.

> >>
> >> I have already prepared a patch with an appropriate fix, which I'm going to submit with the whole patch series, once I have interleaved DMA transfers properly sorted out (hopefully soon). Or maybe should I post this patch with fix, immediately as a reply to the already sent one? What do you prefer?
> > 
> > I see. Well in the case of cyclic transfers it looks like this is enough
> > (I don't have any way to test interleaved/SG transfers) so maybe
> > maintainers can take this now as it is ready and fixes cyclic
> > transfers, so when the interleaved transfers are ready you can
> > improve these functions with a series on top of it?
> > 
> So I decided to base my new patchset on my previous one, as I haven't seen any ack from any maintainer yet on both mine and your patchset. I'm going to submit it this week.

Well, the difference between the two approaches is that I am fixing
something upstream, and you're adding a new feature, which is not
ready yet. I don't mind using your patch, though; I just want
upstream to be fixed.

> This specific commit of yours (PATCH 4/4) basically does the same thing as mine patch, so there will be no difference in its functionality, i.e. it will also fix cyclic transfers.

Thanks,
Miquèl

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 4/4] dmaengine: xilinx: xdma: Add terminate_all/synchronize callbacks
  2023-12-04 11:02           ` Miquel Raynal
@ 2023-12-04 13:13             ` Jan Kuliga
  2023-12-04 14:36               ` Miquel Raynal
  0 siblings, 1 reply; 15+ messages in thread
From: Jan Kuliga @ 2023-12-04 13:13 UTC (permalink / raw)
  To: Miquel Raynal
  Cc: Lizhi Hou, Brian Xu, Raj Kumar Rampelli, Vinod Koul,
	Thomas Petazzoni, Michal Simek, dmaengine

Hi Miquel,

On 4.12.2023 12:02, Miquel Raynal wrote:
> Hi Jan,
> 
>> [...]
> 
> Not related, but if you could fix your mailer, it is a bit hard to 
> track your answers.
> 
Thanks for pointing this out, I didn't notice it. From now on it should be okay.

>>>>
>>>> I have already prepared a patch with an appropriate fix, which I'm going
>>>> to submit with the whole patch series, once I have interleaved DMA
>>>> transfers properly sorted out (hopefully soon). Or maybe should I post
>>>> this patch with fix, immediately as a reply to the already sent one?
>>>> What do you prefer?
>>>
>>> I see. Well in the case of cyclic transfers it looks like this is enough
>>> (I don't have any way to test interleaved/SG transfers) so maybe
>>> maintainers can take this now as it is ready and fixes cyclic
>>> transfers, so when the interleaved transfers are ready you can
>>> improve these functions with a series on top of it?
>>>
>> So I decided to base my new patchset on my previous one, as I
>> haven't seen any ack from any maintainer yet on both mine and your
>> patchset. I'm going to submit it this week.
> 
> Well, the difference between the two approaches is that I am fixing 
> something upstream, and you're adding a new feature, which is not 
> ready yet. I don't mind about using your patch though, I just want 
> upstream to be fixed.
> 
>> This specific commit of yours (PATCH 4/4) basically does the same 
>> thing as mine patch, so there will be no difference in its 
>> functionality, i.e. it will also fix cyclic transfers.
> 
Okay, so as far as I understand, you'd like me to submit my patchset based on top of yours.
I guess maintainers will be fine with that (and so am I). If so, what is the proper way to post my next
patch series? Should I post it as a reply to your patchset, or as a completely new thread
with a note that it is based on this patchset? I don't want to hold off my submission,
getting no feedback, until your patches are upstreamed.

> Thanks,
> Miquèl

Thanks,
Jan

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 4/4] dmaengine: xilinx: xdma: Add terminate_all/synchronize callbacks
  2023-12-04 13:13             ` Jan Kuliga
@ 2023-12-04 14:36               ` Miquel Raynal
  2023-12-04 16:41                 ` Lizhi Hou
  0 siblings, 1 reply; 15+ messages in thread
From: Miquel Raynal @ 2023-12-04 14:36 UTC (permalink / raw)
  To: Jan Kuliga
  Cc: Lizhi Hou, Brian Xu, Raj Kumar Rampelli, Vinod Koul,
	Thomas Petazzoni, Michal Simek, dmaengine

Hi Jan,

jankul@alatek.krakow.pl wrote on Mon, 4 Dec 2023 14:13:13 +0100:

> Hi Miquel,                                                                                               
> 
> On 4.12.2023 12:02, Miquel Raynal wrote:
> > Hi Jan,
> >   
> >> [...]
> > 
> > Not related, but if you could fix your mailer, it is a bit hard to 
> > track your answers.
> >   
> Thanks for pointing this out, I didn't notice it. From now on it should be okay.
> 
> >>>>
> >>>> I have already prepared a patch with an appropriate fix, which I'm going
> >>>> to submit with the whole patch series, once I have interleaved DMA
> >>>> transfers properly sorted out (hopefully soon). Or maybe should I post
> >>>> this patch with fix, immediately as a reply to the already sent one?
> >>>> What do you prefer?
> >>>
> >>> I see. Well in the case of cyclic transfers it looks like this is enough
> >>> (I don't have any way to test interleaved/SG transfers) so maybe
> >>> maintainers can take this now as it is ready and fixes cyclic
> >>> transfers, so when the interleaved transfers are ready you can
> >>> improve these functions with a series on top of it?
> >>>
> >> So I decided to base my new patchset on my previous one, as I
> >> haven't seen any ack from any maintainer yet on both mine and your
> >> patchset. I'm going to submit it this week.
> > 
> > Well, the difference between the two approaches is that I am fixing 
> > something upstream, and you're adding a new feature, which is not 
> > ready yet. I don't mind about using your patch though, I just want 
> > upstream to be fixed.
> >   
> >> This specific commit of yours (PATCH 4/4) basically does the same 
> >> thing as mine patch, so there will be no difference in its 
> >> functionality, i.e. it will also fix cyclic transfers.  
> >   
> Okay, so as far as I understand, you'd like me to submit my patchset based on the top of yours.

That would be ideal, unless my series gets postponed for any reason.
I believe the maintainers will soon give their feedback, and we'll do
what they prefer.

I believe Lizhi will also give a Tested-by -or not-.

> I guess maintainers will be fine with that (so do I). If so, what is the proper way to post my next
> patch series? Should I post it as a reply to your patchset, or as a completely new thread
> with a information that it is based on this patchset?

You can definitely send an individual patchset and just point out that
it applies on top of the few fixes I sent.

> I don't want to wait with submission
> without getting any feedback until your patches are going to be upstreamed.

Of course.

Thanks,
Miquèl

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 4/4] dmaengine: xilinx: xdma: Add terminate_all/synchronize callbacks
  2023-12-04 14:36               ` Miquel Raynal
@ 2023-12-04 16:41                 ` Lizhi Hou
  2023-12-08 14:27                   ` Jan Kuliga
  0 siblings, 1 reply; 15+ messages in thread
From: Lizhi Hou @ 2023-12-04 16:41 UTC (permalink / raw)
  To: Miquel Raynal, Jan Kuliga
  Cc: Brian Xu, Raj Kumar Rampelli, Vinod Koul, Thomas Petazzoni,
	Michal Simek, dmaengine


On 12/4/23 06:36, Miquel Raynal wrote:
> Hi Jan,
>
> jankul@alatek.krakow.pl wrote on Mon, 4 Dec 2023 14:13:13 +0100:
>
>> Hi Miquel,
>>
>> On 4.12.2023 12:02, Miquel Raynal wrote:
>>> Hi Jan,
>>>    
>>>> [...]
>>> Not related, but if you could fix your mailer, it is a bit hard to
>>> track your answers.
>>>    
>> Thanks for pointing this out, I didn't notice it. From now on it should be okay.
>>
>>>>>> I have already prepared a patch with an appropriate fix, which I'm going
>>>>>> to submit with the whole patch series, once I have interleaved DMA
>>>>>> transfers properly sorted out (hopefully soon). Or maybe should I post
>>>>>> this patch with fix, immediately as a reply to the already sent one?
>>>>>> What do you prefer?
>>>>> I see. Well in the case of cyclic transfers it looks like this is enough
>>>>> (I don't have any way to test interleaved/SG transfers) so maybe
>>>>> maintainers can take this now as it is ready and fixes cyclic
>>>>> transfers, so when the interleaved transfers are ready you can
>>>>> improve these functions with a series on top of it?
>>>>>    
>>>> So I decided to base my new patchset on my previous one, as I
>>>> haven't seen any ack from any maintainer yet on both mine and your
>>>> patchset. I'm going to submit it this week.
>>> Well, the difference between the two approaches is that I am fixing
>>> something upstream, and you're adding a new feature, which is not
>>> ready yet. I don't mind about using your patch though, I just want
>>> upstream to be fixed.
>>>    
>>>> This specific commit of yours (PATCH 4/4) basically does the same
>>>> thing as mine patch, so there will be no difference in its
>>>> functionality, i.e. it will also fix cyclic transfers.
>>>    
>> Okay, so as far as I understand, you'd like me to submit my patchset based on the top of yours.
> That would be ideal, unless my series get postponed for any reason.
> I believe the maintainers will soon give their feedback, we'll do what
> they prefer.
>
> I believe Lizhi will also give a Tested-by -or not-.

Yes, I verified this patch set with the sg list test and it passed.

Tested-by: Lizhi Hou <lizhi.hou@amd.com>

>
>> I guess maintainers will be fine with that (so do I). If so, what is the proper way to post my next
>> patch series? Should I post it as a reply to your patchset, or as a completely new thread
>> with a information that it is based on this patchset?
> You can definitely send an individual patchset and just point out that
> it applies on top of the few fixes I sent.
>
>> I don't want to wait with submission
>> without getting any feedback until your patches are going to be upstreamed.
> Of course.
>
> Thanks,
> Miquèl

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 4/4] dmaengine: xilinx: xdma: Add terminate_all/synchronize callbacks
  2023-12-04 16:41                 ` Lizhi Hou
@ 2023-12-08 14:27                   ` Jan Kuliga
  0 siblings, 0 replies; 15+ messages in thread
From: Jan Kuliga @ 2023-12-08 14:27 UTC (permalink / raw)
  To: Lizhi Hou, Miquel Raynal
  Cc: Brian Xu, Raj Kumar Rampelli, Vinod Koul, Thomas Petazzoni,
	Michal Simek, dmaengine

Hi,

Here [1] you can find my new patchset, based on Miquel's one, as we agreed earlier this week. As my patches touch some common driver code, I'd appreciate it if you could run them against your testcases and leave some feedback.

Thanks,
Jan

[1] https://lore.kernel.org/dmaengine/20231208134838.49500-1-jankul@alatek.krakow.pl/T/#t 

On 4.12.2023 17:41, Lizhi Hou wrote:
> [...]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 0/4] dmaengine: xilinx: Misc (cyclic) transfers fixes
  2023-11-30 11:13 [PATCH v2 0/4] dmaengine: xilinx: Misc (cyclic) transfers fixes Miquel Raynal
                   ` (3 preceding siblings ...)
  2023-11-30 11:13 ` [PATCH v2 4/4] dmaengine: xilinx: xdma: Add terminate_all/synchronize callbacks Miquel Raynal
@ 2023-12-21 16:30 ` Vinod Koul
  4 siblings, 0 replies; 15+ messages in thread
From: Vinod Koul @ 2023-12-21 16:30 UTC (permalink / raw)
  To: Lizhi Hou, Brian Xu, Raj Kumar Rampelli, Miquel Raynal
  Cc: Thomas Petazzoni, Michal Simek, dmaengine


On Thu, 30 Nov 2023 12:13:11 +0100, Miquel Raynal wrote:
> So far all my testing was performed by looping the playback output to
> the recording input and comparing the files using FFTs. Unfortunately,
> when the DMA engine suffers from the same issue on both sides, some
> issues may appear un-noticed. I now have proper hardware to really
> listen to the actual sound, so here are a couple of fixes and
> improvements.
> 
> [...]

Applied, thanks!

[1/4] dmaengine: xilinx: xdma: Fix the count of elapsed periods in cyclic mode
      commit: 26ee018ff6d1c326ac9b9be36513e35870ed09db
[2/4] dmaengine: xilinx: xdma: Clarify the logic between cyclic/sg modes
      commit: 58b61fc75ba901b1fd63c911b31249f36d17e9c4
[3/4] dmaengine: xilinx: xdma: Better handling of the busy variable
      commit: b3072be7f955e56789a0508c18e9870f45cd9a11
[4/4] dmaengine: xilinx: xdma: Add terminate_all/synchronize callbacks
      commit: f5c392d106e7cc58c7705799ef4c36c3b2f60b31

Best regards,
-- 
~Vinod



^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2023-12-21 16:30 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-11-30 11:13 [PATCH v2 0/4] dmaengine: xilinx: Misc (cyclic) transfers fixes Miquel Raynal
2023-11-30 11:13 ` [PATCH v2 1/4] dmaengine: xilinx: xdma: Fix the count of elapsed periods in cyclic mode Miquel Raynal
2023-11-30 11:13 ` [PATCH v2 2/4] dmaengine: xilinx: xdma: Clarify the logic between cyclic/sg modes Miquel Raynal
2023-11-30 11:13 ` [PATCH v2 3/4] dmaengine: xilinx: xdma: Better handling of the busy variable Miquel Raynal
2023-11-30 11:13 ` [PATCH v2 4/4] dmaengine: xilinx: xdma: Add terminate_all/synchronize callbacks Miquel Raynal
2023-11-30 17:28   ` Lizhi Hou
2023-11-30 19:06     ` Jan Kuliga
2023-11-30 19:23       ` Miquel Raynal
2023-12-04  9:34         ` Jan Kuliga
2023-12-04 11:02           ` Miquel Raynal
2023-12-04 13:13             ` Jan Kuliga
2023-12-04 14:36               ` Miquel Raynal
2023-12-04 16:41                 ` Lizhi Hou
2023-12-08 14:27                   ` Jan Kuliga
2023-12-21 16:30 ` [PATCH v2 0/4] dmaengine: xilinx: Misc (cyclic) transfers fixes Vinod Koul

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox