linux-arm-msm.vger.kernel.org archive mirror

* [PATCH 0/3] dmaengine: qcom: bam_dma: some driver refactoring
@ 2025-11-06 15:44 Bartosz Golaszewski
  2025-11-06 15:44 ` [PATCH 1/3] dmaengine: qcom: bam_dma: order includes alphabetically Bartosz Golaszewski
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Bartosz Golaszewski @ 2025-11-06 15:44 UTC (permalink / raw)
  To: Vinod Koul; +Cc: linux-arm-msm, dmaengine, linux-kernel, Bartosz Golaszewski

Here are some changes I figured would be nice to have while working on
the BAM locking feature.

Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
---
Bartosz Golaszewski (3):
      dmaengine: qcom: bam_dma: order includes alphabetically
      dmaengine: qcom: bam_dma: use lock guards
      dmaengine: qcom: bam_dma: convert tasklet to a workqueue

 drivers/dma/qcom/bam_dma.c | 184 +++++++++++++++++++++------------------------
 1 file changed, 85 insertions(+), 99 deletions(-)
---
base-commit: 0297a4fa6f372fd3ed8fe9b4d49b96ff8708ac71
change-id: 20251106-qcom-bam-dma-refactor-4b36275b8c3d

Best regards,
-- 
Bartosz Golaszewski <bartosz.golaszewski@linaro.org>


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH 1/3] dmaengine: qcom: bam_dma: order includes alphabetically
  2025-11-06 15:44 [PATCH 0/3] dmaengine: qcom: bam_dma: some driver refactoring Bartosz Golaszewski
@ 2025-11-06 15:44 ` Bartosz Golaszewski
  2025-11-12  3:11   ` Dmitry Baryshkov
  2025-11-06 15:44 ` [PATCH 2/3] dmaengine: qcom: bam_dma: use lock guards Bartosz Golaszewski
  2025-11-06 15:44 ` [PATCH 3/3] dmaengine: qcom: bam_dma: convert tasklet to a workqueue Bartosz Golaszewski
  2 siblings, 1 reply; 8+ messages in thread
From: Bartosz Golaszewski @ 2025-11-06 15:44 UTC (permalink / raw)
  To: Vinod Koul; +Cc: linux-arm-msm, dmaengine, linux-kernel, Bartosz Golaszewski

From: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>

For easier maintenance and better readability, order all includes
alphabetically.

Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
---
 drivers/dma/qcom/bam_dma.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/drivers/dma/qcom/bam_dma.c b/drivers/dma/qcom/bam_dma.c
index 2cf060174795fe326abaf053a7a7a10022455586..2f1f295d3e1ff910a5051c20dc09cb1e8077df82 100644
--- a/drivers/dma/qcom/bam_dma.c
+++ b/drivers/dma/qcom/bam_dma.c
@@ -23,24 +23,24 @@
  * indication of where the hardware is currently working.
  */
 
-#include <linux/kernel.h>
-#include <linux/io.h>
-#include <linux/init.h>
-#include <linux/slab.h>
-#include <linux/module.h>
-#include <linux/interrupt.h>
-#include <linux/dma-mapping.h>
-#include <linux/scatterlist.h>
-#include <linux/device.h>
-#include <linux/platform_device.h>
-#include <linux/of.h>
-#include <linux/of_address.h>
-#include <linux/of_irq.h>
-#include <linux/of_dma.h>
 #include <linux/circ_buf.h>
 #include <linux/clk.h>
+#include <linux/device.h>
+#include <linux/dma-mapping.h>
 #include <linux/dmaengine.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of_address.h>
+#include <linux/of_dma.h>
+#include <linux/of_irq.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
 #include <linux/pm_runtime.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
 
 #include "../dmaengine.h"
 #include "../virt-dma.h"

-- 
2.51.0


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH 2/3] dmaengine: qcom: bam_dma: use lock guards
  2025-11-06 15:44 [PATCH 0/3] dmaengine: qcom: bam_dma: some driver refactoring Bartosz Golaszewski
  2025-11-06 15:44 ` [PATCH 1/3] dmaengine: qcom: bam_dma: order includes alphabetically Bartosz Golaszewski
@ 2025-11-06 15:44 ` Bartosz Golaszewski
  2025-11-06 15:44 ` [PATCH 3/3] dmaengine: qcom: bam_dma: convert tasklet to a workqueue Bartosz Golaszewski
  2 siblings, 0 replies; 8+ messages in thread
From: Bartosz Golaszewski @ 2025-11-06 15:44 UTC (permalink / raw)
  To: Vinod Koul; +Cc: linux-arm-msm, dmaengine, linux-kernel, Bartosz Golaszewski

From: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>

Simplify locking across the driver with lock guards from cleanup.h.
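
For reference, these are the two cleanup.h idioms used in the hunks below
(nothing new, just a reminder of their semantics):

        guard(spinlock_irqsave)(&bchan->vc.lock);       /* held until the end of the enclosing scope */

        scoped_guard(spinlock_irqsave, &bchan->vc.lock) {
                /* held only inside this block */
        }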

Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
---
 drivers/dma/qcom/bam_dma.c | 124 ++++++++++++++++++++-------------------------
 1 file changed, 55 insertions(+), 69 deletions(-)

diff --git a/drivers/dma/qcom/bam_dma.c b/drivers/dma/qcom/bam_dma.c
index 2f1f295d3e1ff910a5051c20dc09cb1e8077df82..bcd8de9a9a12621a36b49c31bff96f474468be06 100644
--- a/drivers/dma/qcom/bam_dma.c
+++ b/drivers/dma/qcom/bam_dma.c
@@ -24,6 +24,7 @@
  */
 
 #include <linux/circ_buf.h>
+#include <linux/cleanup.h>
 #include <linux/clk.h>
 #include <linux/device.h>
 #include <linux/dma-mapping.h>
@@ -570,7 +571,6 @@ static void bam_free_chan(struct dma_chan *chan)
 	struct bam_chan *bchan = to_bam_chan(chan);
 	struct bam_device *bdev = bchan->bdev;
 	u32 val;
-	unsigned long flags;
 	int ret;
 
 	ret = pm_runtime_get_sync(bdev->dev);
@@ -584,9 +584,8 @@ static void bam_free_chan(struct dma_chan *chan)
 		goto err;
 	}
 
-	spin_lock_irqsave(&bchan->vc.lock, flags);
-	bam_reset_channel(bchan);
-	spin_unlock_irqrestore(&bchan->vc.lock, flags);
+	scoped_guard(spinlock_irqsave, &bchan->vc.lock)
+		bam_reset_channel(bchan);
 
 	dma_free_wc(bdev->dev, BAM_DESC_FIFO_SIZE, bchan->fifo_virt,
 		    bchan->fifo_phys);
@@ -624,12 +623,11 @@ static int bam_slave_config(struct dma_chan *chan,
 			    struct dma_slave_config *cfg)
 {
 	struct bam_chan *bchan = to_bam_chan(chan);
-	unsigned long flag;
 
-	spin_lock_irqsave(&bchan->vc.lock, flag);
+	guard(spinlock_irqsave)(&bchan->vc.lock);
+
 	memcpy(&bchan->slave, cfg, sizeof(*cfg));
 	bchan->reconfigure = 1;
-	spin_unlock_irqrestore(&bchan->vc.lock, flag);
 
 	return 0;
 }
@@ -726,38 +724,37 @@ static int bam_dma_terminate_all(struct dma_chan *chan)
 {
 	struct bam_chan *bchan = to_bam_chan(chan);
 	struct bam_async_desc *async_desc, *tmp;
-	unsigned long flag;
 	LIST_HEAD(head);
 
 	/* remove all transactions, including active transaction */
-	spin_lock_irqsave(&bchan->vc.lock, flag);
-	/*
-	 * If we have transactions queued, then some might be committed to the
-	 * hardware in the desc fifo.  The only way to reset the desc fifo is
-	 * to do a hardware reset (either by pipe or the entire block).
-	 * bam_chan_init_hw() will trigger a pipe reset, and also reinit the
-	 * pipe.  If the pipe is left disabled (default state after pipe reset)
-	 * and is accessed by a connected hardware engine, a fatal error in
-	 * the BAM will occur.  There is a small window where this could happen
-	 * with bam_chan_init_hw(), but it is assumed that the caller has
-	 * stopped activity on any attached hardware engine.  Make sure to do
-	 * this first so that the BAM hardware doesn't cause memory corruption
-	 * by accessing freed resources.
-	 */
-	if (!list_empty(&bchan->desc_list)) {
-		async_desc = list_first_entry(&bchan->desc_list,
-					      struct bam_async_desc, desc_node);
-		bam_chan_init_hw(bchan, async_desc->dir);
-	}
+	scoped_guard(spinlock_irqsave, &bchan->vc.lock) {
+		/*
+		 * If we have transactions queued, then some might be committed to the
+		 * hardware in the desc fifo.  The only way to reset the desc fifo is
+		 * to do a hardware reset (either by pipe or the entire block).
+		 * bam_chan_init_hw() will trigger a pipe reset, and also reinit the
+		 * pipe.  If the pipe is left disabled (default state after pipe reset)
+		 * and is accessed by a connected hardware engine, a fatal error in
+		 * the BAM will occur.  There is a small window where this could happen
+		 * with bam_chan_init_hw(), but it is assumed that the caller has
+		 * stopped activity on any attached hardware engine.  Make sure to do
+		 * this first so that the BAM hardware doesn't cause memory corruption
+		 * by accessing freed resources.
+		 */
+		if (!list_empty(&bchan->desc_list)) {
+			async_desc = list_first_entry(&bchan->desc_list,
+						      struct bam_async_desc, desc_node);
+			bam_chan_init_hw(bchan, async_desc->dir);
+		}
 
-	list_for_each_entry_safe(async_desc, tmp,
-				 &bchan->desc_list, desc_node) {
-		list_add(&async_desc->vd.node, &bchan->vc.desc_issued);
-		list_del(&async_desc->desc_node);
-	}
+		list_for_each_entry_safe(async_desc, tmp,
+					 &bchan->desc_list, desc_node) {
+			list_add(&async_desc->vd.node, &bchan->vc.desc_issued);
+			list_del(&async_desc->desc_node);
+		}
 
-	vchan_get_all_descriptors(&bchan->vc, &head);
-	spin_unlock_irqrestore(&bchan->vc.lock, flag);
+		vchan_get_all_descriptors(&bchan->vc, &head);
+	}
 
 	vchan_dma_desc_free_list(&bchan->vc, &head);
 
@@ -773,17 +770,16 @@ static int bam_pause(struct dma_chan *chan)
 {
 	struct bam_chan *bchan = to_bam_chan(chan);
 	struct bam_device *bdev = bchan->bdev;
-	unsigned long flag;
 	int ret;
 
 	ret = pm_runtime_get_sync(bdev->dev);
 	if (ret < 0)
 		return ret;
 
-	spin_lock_irqsave(&bchan->vc.lock, flag);
-	writel_relaxed(1, bam_addr(bdev, bchan->id, BAM_P_HALT));
-	bchan->paused = 1;
-	spin_unlock_irqrestore(&bchan->vc.lock, flag);
+	scoped_guard(spinlock_irqsave, &bchan->vc.lock) {
+		writel_relaxed(1, bam_addr(bdev, bchan->id, BAM_P_HALT));
+		bchan->paused = 1;
+	}
 	pm_runtime_mark_last_busy(bdev->dev);
 	pm_runtime_put_autosuspend(bdev->dev);
 
@@ -799,17 +795,16 @@ static int bam_resume(struct dma_chan *chan)
 {
 	struct bam_chan *bchan = to_bam_chan(chan);
 	struct bam_device *bdev = bchan->bdev;
-	unsigned long flag;
 	int ret;
 
 	ret = pm_runtime_get_sync(bdev->dev);
 	if (ret < 0)
 		return ret;
 
-	spin_lock_irqsave(&bchan->vc.lock, flag);
-	writel_relaxed(0, bam_addr(bdev, bchan->id, BAM_P_HALT));
-	bchan->paused = 0;
-	spin_unlock_irqrestore(&bchan->vc.lock, flag);
+	scoped_guard(spinlock_irqsave, &bchan->vc.lock) {
+		writel_relaxed(0, bam_addr(bdev, bchan->id, BAM_P_HALT));
+		bchan->paused = 0;
+	}
 	pm_runtime_mark_last_busy(bdev->dev);
 	pm_runtime_put_autosuspend(bdev->dev);
 
@@ -826,7 +821,6 @@ static int bam_resume(struct dma_chan *chan)
 static u32 process_channel_irqs(struct bam_device *bdev)
 {
 	u32 i, srcs, pipe_stts, offset, avail;
-	unsigned long flags;
 	struct bam_async_desc *async_desc, *tmp;
 
 	srcs = readl_relaxed(bam_addr(bdev, 0, BAM_IRQ_SRCS_EE));
@@ -846,7 +840,7 @@ static u32 process_channel_irqs(struct bam_device *bdev)
 
 		writel_relaxed(pipe_stts, bam_addr(bdev, i, BAM_P_IRQ_CLR));
 
-		spin_lock_irqsave(&bchan->vc.lock, flags);
+		guard(spinlock_irqsave)(&bchan->vc.lock);
 
 		offset = readl_relaxed(bam_addr(bdev, i, BAM_P_SW_OFSTS)) &
 				       P_SW_OFSTS_MASK;
@@ -885,8 +879,6 @@ static u32 process_channel_irqs(struct bam_device *bdev)
 			}
 			list_del(&async_desc->desc_node);
 		}
-
-		spin_unlock_irqrestore(&bchan->vc.lock, flags);
 	}
 
 	return srcs;
@@ -950,7 +942,6 @@ static enum dma_status bam_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
 	int ret;
 	size_t residue = 0;
 	unsigned int i;
-	unsigned long flags;
 
 	ret = dma_cookie_status(chan, cookie, txstate);
 	if (ret == DMA_COMPLETE)
@@ -959,23 +950,22 @@ static enum dma_status bam_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
 	if (!txstate)
 		return bchan->paused ? DMA_PAUSED : ret;
 
-	spin_lock_irqsave(&bchan->vc.lock, flags);
-	vd = vchan_find_desc(&bchan->vc, cookie);
-	if (vd) {
-		residue = container_of(vd, struct bam_async_desc, vd)->length;
-	} else {
-		list_for_each_entry(async_desc, &bchan->desc_list, desc_node) {
-			if (async_desc->vd.tx.cookie != cookie)
-				continue;
+	scoped_guard(spinlock_irqsave, &bchan->vc.lock) {
+		vd = vchan_find_desc(&bchan->vc, cookie);
+		if (vd) {
+			residue = container_of(vd, struct bam_async_desc, vd)->length;
+		} else {
+			list_for_each_entry(async_desc, &bchan->desc_list, desc_node) {
+				if (async_desc->vd.tx.cookie != cookie)
+					continue;
 
-			for (i = 0; i < async_desc->num_desc; i++)
-				residue += le16_to_cpu(
-						async_desc->curr_desc[i].size);
+				for (i = 0; i < async_desc->num_desc; i++)
+					residue += le16_to_cpu(
+							async_desc->curr_desc[i].size);
+			}
 		}
 	}
 
-	spin_unlock_irqrestore(&bchan->vc.lock, flags);
-
 	dma_set_residue(txstate, residue);
 
 	if (ret == DMA_IN_PROGRESS && bchan->paused)
@@ -1116,17 +1106,16 @@ static void dma_tasklet(struct tasklet_struct *t)
 {
 	struct bam_device *bdev = from_tasklet(bdev, t, task);
 	struct bam_chan *bchan;
-	unsigned long flags;
 	unsigned int i;
 
 	/* go through the channels and kick off transactions */
 	for (i = 0; i < bdev->num_channels; i++) {
 		bchan = &bdev->channels[i];
-		spin_lock_irqsave(&bchan->vc.lock, flags);
+
+		guard(spinlock_irqsave)(&bchan->vc.lock);
 
 		if (!list_empty(&bchan->vc.desc_issued) && !IS_BUSY(bchan))
 			bam_start_dma(bchan);
-		spin_unlock_irqrestore(&bchan->vc.lock, flags);
 	}
 
 }
@@ -1140,15 +1129,12 @@ static void dma_tasklet(struct tasklet_struct *t)
 static void bam_issue_pending(struct dma_chan *chan)
 {
 	struct bam_chan *bchan = to_bam_chan(chan);
-	unsigned long flags;
 
-	spin_lock_irqsave(&bchan->vc.lock, flags);
+	guard(spinlock_irqsave)(&bchan->vc.lock);
 
 	/* if work pending and idle, start a transaction */
 	if (vchan_issue_pending(&bchan->vc) && !IS_BUSY(bchan))
 		bam_start_dma(bchan);
-
-	spin_unlock_irqrestore(&bchan->vc.lock, flags);
 }
 
 /**

-- 
2.51.0


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH 3/3] dmaengine: qcom: bam_dma: convert tasklet to a workqueue
  2025-11-06 15:44 [PATCH 0/3] dmaengine: qcom: bam_dma: some driver refactoring Bartosz Golaszewski
  2025-11-06 15:44 ` [PATCH 1/3] dmaengine: qcom: bam_dma: order includes alphabetically Bartosz Golaszewski
  2025-11-06 15:44 ` [PATCH 2/3] dmaengine: qcom: bam_dma: use lock guards Bartosz Golaszewski
@ 2025-11-06 15:44 ` Bartosz Golaszewski
  2025-11-12  3:10   ` Dmitry Baryshkov
  2025-11-12 16:19   ` Bjorn Andersson
  2 siblings, 2 replies; 8+ messages in thread
From: Bartosz Golaszewski @ 2025-11-06 15:44 UTC (permalink / raw)
  To: Vinod Koul; +Cc: linux-arm-msm, dmaengine, linux-kernel, Bartosz Golaszewski

From: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>

There is nothing in the interrupt handling that requires us to run in
atomic context, so convert the tasklet to a workqueue.
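
For reviewers, the conversion boils down to the usual tasklet-to-work
substitutions (the actual hunks follow):

        INIT_WORK(&bdev->work, bam_dma_work);   /* was: tasklet_setup() */
        schedule_work(&bdev->work);             /* was: tasklet_schedule() */
        cancel_work_sync(&bdev->work);          /* was: tasklet_kill() */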

Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
---
 drivers/dma/qcom/bam_dma.c | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/drivers/dma/qcom/bam_dma.c b/drivers/dma/qcom/bam_dma.c
index bcd8de9a9a12621a36b49c31bff96f474468be06..40ad4179177fb7a074776db05b834da012f6a35f 100644
--- a/drivers/dma/qcom/bam_dma.c
+++ b/drivers/dma/qcom/bam_dma.c
@@ -42,6 +42,7 @@
 #include <linux/pm_runtime.h>
 #include <linux/scatterlist.h>
 #include <linux/slab.h>
+#include <linux/workqueue.h>
 
 #include "../dmaengine.h"
 #include "../virt-dma.h"
@@ -397,8 +398,8 @@ struct bam_device {
 	struct clk *bamclk;
 	int irq;
 
-	/* dma start transaction tasklet */
-	struct tasklet_struct task;
+	/* dma start transaction workqueue */
+	struct work_struct work;
 };
 
 /**
@@ -869,7 +870,7 @@ static u32 process_channel_irqs(struct bam_device *bdev)
 			/*
 			 * if complete, process cookie. Otherwise
 			 * push back to front of desc_issued so that
-			 * it gets restarted by the tasklet
+			 * it gets restarted by the work queue.
 			 */
 			if (!async_desc->num_desc) {
 				vchan_cookie_complete(&async_desc->vd);
@@ -899,9 +900,9 @@ static irqreturn_t bam_dma_irq(int irq, void *data)
 
 	srcs |= process_channel_irqs(bdev);
 
-	/* kick off tasklet to start next dma transfer */
+	/* kick off the work queue to start next dma transfer */
 	if (srcs & P_IRQ)
-		tasklet_schedule(&bdev->task);
+		schedule_work(&bdev->work);
 
 	ret = pm_runtime_get_sync(bdev->dev);
 	if (ret < 0)
@@ -1097,14 +1098,14 @@ static void bam_start_dma(struct bam_chan *bchan)
 }
 
 /**
- * dma_tasklet - DMA IRQ tasklet
- * @t: tasklet argument (bam controller structure)
+ * bam_dma_work() - DMA interrupt work queue callback
+ * @work: work queue struct embedded in the BAM controller device struct
  *
  * Sets up next DMA operation and then processes all completed transactions
  */
-static void dma_tasklet(struct tasklet_struct *t)
+static void bam_dma_work(struct work_struct *work)
 {
-	struct bam_device *bdev = from_tasklet(bdev, t, task);
+	struct bam_device *bdev = from_work(bdev, work, work);
 	struct bam_chan *bchan;
 	unsigned int i;
 
@@ -1117,14 +1118,13 @@ static void dma_tasklet(struct tasklet_struct *t)
 		if (!list_empty(&bchan->vc.desc_issued) && !IS_BUSY(bchan))
 			bam_start_dma(bchan);
 	}
-
 }
 
 /**
  * bam_issue_pending - starts pending transactions
  * @chan: dma channel
  *
- * Calls tasklet directly which in turn starts any pending transactions
+ * Calls work queue directly which in turn starts any pending transactions
  */
 static void bam_issue_pending(struct dma_chan *chan)
 {
@@ -1292,14 +1292,14 @@ static int bam_dma_probe(struct platform_device *pdev)
 	if (ret)
 		goto err_disable_clk;
 
-	tasklet_setup(&bdev->task, dma_tasklet);
+	INIT_WORK(&bdev->work, bam_dma_work);
 
 	bdev->channels = devm_kcalloc(bdev->dev, bdev->num_channels,
 				sizeof(*bdev->channels), GFP_KERNEL);
 
 	if (!bdev->channels) {
 		ret = -ENOMEM;
-		goto err_tasklet_kill;
+		goto err_workqueue_cancel;
 	}
 
 	/* allocate and initialize channels */
@@ -1364,8 +1364,8 @@ static int bam_dma_probe(struct platform_device *pdev)
 err_bam_channel_exit:
 	for (i = 0; i < bdev->num_channels; i++)
 		tasklet_kill(&bdev->channels[i].vc.task);
-err_tasklet_kill:
-	tasklet_kill(&bdev->task);
+err_workqueue_cancel:
+	cancel_work_sync(&bdev->work);
 err_disable_clk:
 	clk_disable_unprepare(bdev->bamclk);
 
@@ -1399,7 +1399,7 @@ static void bam_dma_remove(struct platform_device *pdev)
 			    bdev->channels[i].fifo_phys);
 	}
 
-	tasklet_kill(&bdev->task);
+	cancel_work_sync(&bdev->work);
 
 	clk_disable_unprepare(bdev->bamclk);
 }

-- 
2.51.0


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH 3/3] dmaengine: qcom: bam_dma: convert tasklet to a workqueue
  2025-11-06 15:44 ` [PATCH 3/3] dmaengine: qcom: bam_dma: convert tasklet to a workqueue Bartosz Golaszewski
@ 2025-11-12  3:10   ` Dmitry Baryshkov
  2025-11-12 16:19   ` Bjorn Andersson
  1 sibling, 0 replies; 8+ messages in thread
From: Dmitry Baryshkov @ 2025-11-12  3:10 UTC (permalink / raw)
  To: Bartosz Golaszewski
  Cc: Vinod Koul, linux-arm-msm, dmaengine, linux-kernel,
	Bartosz Golaszewski

On Thu, Nov 06, 2025 at 04:44:52PM +0100, Bartosz Golaszewski wrote:
> From: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
> 
> There is nothing in the interrupt handling that requires us to run in
> atomic context, so convert the tasklet to a workqueue.
> 
> Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
> ---
>  drivers/dma/qcom/bam_dma.c | 32 ++++++++++++++++----------------
>  1 file changed, 16 insertions(+), 16 deletions(-)
> 

Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@oss.qualcomm.com>


-- 
With best wishes
Dmitry

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH 1/3] dmaengine: qcom: bam_dma: order includes alphabetically
  2025-11-06 15:44 ` [PATCH 1/3] dmaengine: qcom: bam_dma: order includes alphabetically Bartosz Golaszewski
@ 2025-11-12  3:11   ` Dmitry Baryshkov
  0 siblings, 0 replies; 8+ messages in thread
From: Dmitry Baryshkov @ 2025-11-12  3:11 UTC (permalink / raw)
  To: Bartosz Golaszewski
  Cc: Vinod Koul, linux-arm-msm, dmaengine, linux-kernel,
	Bartosz Golaszewski

On Thu, Nov 06, 2025 at 04:44:50PM +0100, Bartosz Golaszewski wrote:
> From: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
> 
> For easier maintenance and better readability, order all includes
> alphabetically.
> 
> Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
> ---
>  drivers/dma/qcom/bam_dma.c | 28 ++++++++++++++--------------
>  1 file changed, 14 insertions(+), 14 deletions(-)
> 

Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@oss.qualcomm.com>



-- 
With best wishes
Dmitry

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH 3/3] dmaengine: qcom: bam_dma: convert tasklet to a workqueue
  2025-11-06 15:44 ` [PATCH 3/3] dmaengine: qcom: bam_dma: convert tasklet to a workqueue Bartosz Golaszewski
  2025-11-12  3:10   ` Dmitry Baryshkov
@ 2025-11-12 16:19   ` Bjorn Andersson
  2025-11-13 10:26     ` Bartosz Golaszewski
  1 sibling, 1 reply; 8+ messages in thread
From: Bjorn Andersson @ 2025-11-12 16:19 UTC (permalink / raw)
  To: Bartosz Golaszewski
  Cc: Vinod Koul, linux-arm-msm, dmaengine, linux-kernel,
	Bartosz Golaszewski

On Thu, Nov 06, 2025 at 04:44:52PM +0100, Bartosz Golaszewski wrote:
> From: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
> 
> There is nothing in the interrupt handling that requires us to run in
> atomic context, so convert the tasklet to a workqueue.
> 
> Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>

Reviewed-by: Bjorn Andersson <andersson@kernel.org>

I like the patch, getting off the tasklet is nice. But reading the
patch/driver spawned some additional questions (not (necessarily)
feedback to this patch).

> ---
>  drivers/dma/qcom/bam_dma.c | 32 ++++++++++++++++----------------
>  1 file changed, 16 insertions(+), 16 deletions(-)
> 
> diff --git a/drivers/dma/qcom/bam_dma.c b/drivers/dma/qcom/bam_dma.c
> index bcd8de9a9a12621a36b49c31bff96f474468be06..40ad4179177fb7a074776db05b834da012f6a35f 100644
> --- a/drivers/dma/qcom/bam_dma.c
> +++ b/drivers/dma/qcom/bam_dma.c
> @@ -42,6 +42,7 @@
>  #include <linux/pm_runtime.h>
>  #include <linux/scatterlist.h>
>  #include <linux/slab.h>
> +#include <linux/workqueue.h>
>  
>  #include "../dmaengine.h"
>  #include "../virt-dma.h"
> @@ -397,8 +398,8 @@ struct bam_device {
>  	struct clk *bamclk;
>  	int irq;
>  
> -	/* dma start transaction tasklet */
> -	struct tasklet_struct task;
> +	/* dma start transaction workqueue */
> +	struct work_struct work;
>  };
>  
>  /**
> @@ -869,7 +870,7 @@ static u32 process_channel_irqs(struct bam_device *bdev)
>  			/*
>  			 * if complete, process cookie. Otherwise
>  			 * push back to front of desc_issued so that
> -			 * it gets restarted by the tasklet
> +			 * it gets restarted by the work queue.
>  			 */
>  			if (!async_desc->num_desc) {
>  				vchan_cookie_complete(&async_desc->vd);
> @@ -899,9 +900,9 @@ static irqreturn_t bam_dma_irq(int irq, void *data)
>  
>  	srcs |= process_channel_irqs(bdev);
>  
> -	/* kick off tasklet to start next dma transfer */
> +	/* kick off the work queue to start next dma transfer */
>  	if (srcs & P_IRQ)
> -		tasklet_schedule(&bdev->task);
> +		schedule_work(&bdev->work);

I'm not entirely familiar with the BAM driver, but wouldn't it be
preferable to make the interrupt handler threaded and just kick off the
next set of transactions directly from here, to reduce the downtime
between transactions when more are already queued and ready to go?

It seems this might be of concern when we have queued more transfers
than can fit in the hardware, but I don't have any data indicating how
often this happens.
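
Something like this is what I'm picturing (completely untested sketch,
bam_dma_irq_thread is a made-up name and I haven't checked which request
flags the driver would actually need):

        static irqreturn_t bam_dma_irq_thread(int irq, void *data)
        {
                struct bam_device *bdev = data;
                struct bam_chan *bchan;
                unsigned int i;

                /* sleepable context, so kick off the next transfers directly */
                for (i = 0; i < bdev->num_channels; i++) {
                        bchan = &bdev->channels[i];

                        guard(spinlock_irqsave)(&bchan->vc.lock);

                        if (!list_empty(&bchan->vc.desc_issued) && !IS_BUSY(bchan))
                                bam_start_dma(bchan);
                }

                return IRQ_HANDLED;
        }

with bam_dma_irq() returning IRQ_WAKE_THREAD instead of scheduling work,
and the registration switched to devm_request_threaded_irq().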

>  
>  	ret = pm_runtime_get_sync(bdev->dev);

The "bus" clock is tied to the PM runtime state, so I presume this is
here in order to ensure the block is clocked for the following register
accesses(?)

But process_channel_irqs() was just all over the same register space.


Also noteworthy is that none of the pm_runtime_get_sync() calls in this
driver have adequate error handling.
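
Something as simple as switching to pm_runtime_resume_and_get() would
already help (illustrative only, the right way to propagate the error
depends on the call site):

        /* drops its usage count on failure, so no explicit put is needed */
        ret = pm_runtime_resume_and_get(bdev->dev);
        if (ret < 0)
                return ret;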

Regards,
Bjorn

>  	if (ret < 0)
> @@ -1097,14 +1098,14 @@ static void bam_start_dma(struct bam_chan *bchan)
>  }
>  
>  /**
> - * dma_tasklet - DMA IRQ tasklet
> - * @t: tasklet argument (bam controller structure)
> + * bam_dma_work() - DMA interrupt work queue callback
> + * @work: work queue struct embedded in the BAM controller device struct
>   *
>   * Sets up next DMA operation and then processes all completed transactions
>   */
> -static void dma_tasklet(struct tasklet_struct *t)
> +static void bam_dma_work(struct work_struct *work)
>  {
> -	struct bam_device *bdev = from_tasklet(bdev, t, task);
> +	struct bam_device *bdev = from_work(bdev, work, work);
>  	struct bam_chan *bchan;
>  	unsigned int i;
>  
> @@ -1117,14 +1118,13 @@ static void dma_tasklet(struct tasklet_struct *t)
>  		if (!list_empty(&bchan->vc.desc_issued) && !IS_BUSY(bchan))
>  			bam_start_dma(bchan);
>  	}
> -
>  }
>  
>  /**
>   * bam_issue_pending - starts pending transactions
>   * @chan: dma channel
>   *
> - * Calls tasklet directly which in turn starts any pending transactions
> + * Calls work queue directly which in turn starts any pending transactions
>   */
>  static void bam_issue_pending(struct dma_chan *chan)
>  {
> @@ -1292,14 +1292,14 @@ static int bam_dma_probe(struct platform_device *pdev)
>  	if (ret)
>  		goto err_disable_clk;
>  
> -	tasklet_setup(&bdev->task, dma_tasklet);
> +	INIT_WORK(&bdev->work, bam_dma_work);
>  
>  	bdev->channels = devm_kcalloc(bdev->dev, bdev->num_channels,
>  				sizeof(*bdev->channels), GFP_KERNEL);
>  
>  	if (!bdev->channels) {
>  		ret = -ENOMEM;
> -		goto err_tasklet_kill;
> +		goto err_workqueue_cancel;
>  	}
>  
>  	/* allocate and initialize channels */
> @@ -1364,8 +1364,8 @@ static int bam_dma_probe(struct platform_device *pdev)
>  err_bam_channel_exit:
>  	for (i = 0; i < bdev->num_channels; i++)
>  		tasklet_kill(&bdev->channels[i].vc.task);
> -err_tasklet_kill:
> -	tasklet_kill(&bdev->task);
> +err_workqueue_cancel:
> +	cancel_work_sync(&bdev->work);
>  err_disable_clk:
>  	clk_disable_unprepare(bdev->bamclk);
>  
> @@ -1399,7 +1399,7 @@ static void bam_dma_remove(struct platform_device *pdev)
>  			    bdev->channels[i].fifo_phys);
>  	}
>  
> -	tasklet_kill(&bdev->task);
> +	cancel_work_sync(&bdev->work);
>  
>  	clk_disable_unprepare(bdev->bamclk);
>  }
> 
> -- 
> 2.51.0
> 
> 

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH 3/3] dmaengine: qcom: bam_dma: convert tasklet to a workqueue
  2025-11-12 16:19   ` Bjorn Andersson
@ 2025-11-13 10:26     ` Bartosz Golaszewski
  0 siblings, 0 replies; 8+ messages in thread
From: Bartosz Golaszewski @ 2025-11-13 10:26 UTC (permalink / raw)
  To: Bjorn Andersson
  Cc: Vinod Koul, linux-arm-msm, dmaengine, linux-kernel,
	Bartosz Golaszewski

On Wed, Nov 12, 2025 at 5:15 PM Bjorn Andersson <andersson@kernel.org> wrote:
>
> On Thu, Nov 06, 2025 at 04:44:52PM +0100, Bartosz Golaszewski wrote:
> > From: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
> >
> > There is nothing in the interrupt handling that requires us to run in
> > atomic context, so convert the tasklet to a workqueue.
> >
> > Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
>
> Reviewed-by: Bjorn Andersson <andersson@kernel.org>
>
> I like the patch, getting off the tasklet is nice. But reading the
> patch/driver spawned some additional questions (not (necessarily)
> feedback to this patch).
>

Thanks, I can look into it once the locking and these changes are in next.

Bart

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread

Thread overview: 8+ messages
2025-11-06 15:44 [PATCH 0/3] dmaengine: qcom: bam_dma: some driver refactoring Bartosz Golaszewski
2025-11-06 15:44 ` [PATCH 1/3] dmaengine: qcom: bam_dma: order includes alphabetically Bartosz Golaszewski
2025-11-12  3:11   ` Dmitry Baryshkov
2025-11-06 15:44 ` [PATCH 2/3] dmaengine: qcom: bam_dma: use lock guards Bartosz Golaszewski
2025-11-06 15:44 ` [PATCH 3/3] dmaengine: qcom: bam_dma: convert tasklet to a workqueue Bartosz Golaszewski
2025-11-12  3:10   ` Dmitry Baryshkov
2025-11-12 16:19   ` Bjorn Andersson
2025-11-13 10:26     ` Bartosz Golaszewski
