dmaengine.vger.kernel.org archive mirror
* [PATCH v4] dmaengine: xilinx_dma: Support descriptor setup from dma_vecs
@ 2025-08-26 18:21 Folker Schwesinger
  2025-09-02  9:49 ` Vinod Koul
  0 siblings, 1 reply; 2+ messages in thread
From: Folker Schwesinger @ 2025-08-26 18:21 UTC (permalink / raw)
  To: dmaengine, linux-arm-kernel, linux-kernel
  Cc: Vinod Koul, Michal Simek, Jernej Skrabec, Krzysztof Kozlowski,
	Uwe Kleine-König, Marek Vasut, Radhey Shyam Pandey

The DMAEngine provides an interface for obtaining DMA transaction
descriptors from an array of scatter-gather buffers represented by
struct dma_vec. This interface is used in the DMABUF API of the IIO
framework [1][2].
To enable DMABUF support through the IIO framework for the Xilinx DMA,
implement the .device_prep_peripheral_dma_vec() callback of struct
dma_device in the driver.

[1]: 7a86d469983a ("iio: buffer-dmaengine: Support new DMABUF based userspace API")
[2]: 5878853fc938 ("dmaengine: Add API function dmaengine_prep_peripheral_dma_vec()")

Signed-off-by: Folker Schwesinger <dev@folker-schwesinger.de>
Reviewed-by: Suraj Gupta <suraj.gupta2@amd.com>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>

---
Changes in v4:
- Replace external link to relevant source code with commit ids of the
  changes introducing dmaengine_prep_peripheral_dma_vec() and the IIO
  DMABUF-based API (suggested by Radhey Shyam Pandey).
- Collect R-b tags from v3.
- Link to v3: https://lore.kernel.org/dmaengine/DBZUIRI5Q4A3.1OIBMF9Z5EQ0X@folker-schwesinger.de/

Changes in v3:
- Collect R-b tags from v2.
- Rebase onto v6.17-rc1.
- Link to v2: https://lore.kernel.org/dmaengine/DAQB7EU7UXR3.Z07Q6JQ1V67Y@folker-schwesinger.de/

Changes in v2:
- Improve commit message to include reasoning behind the change.
- Rebase onto v6.16-rc1.
- Link to v1: https://lore.kernel.org/dmaengine/D8TV2MP99NTE.1842MMA04VB9N@folker-schwesinger.de/
---
 drivers/dma/xilinx/xilinx_dma.c | 94 +++++++++++++++++++++++++++++++++
 1 file changed, 94 insertions(+)

diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
index a34d8f0ceed8..fabff602065f 100644
--- a/drivers/dma/xilinx/xilinx_dma.c
+++ b/drivers/dma/xilinx/xilinx_dma.c
@@ -2172,6 +2172,99 @@ xilinx_cdma_prep_memcpy(struct dma_chan *dchan, dma_addr_t dma_dst,
 	return NULL;
 }
 
+/**
+ * xilinx_dma_prep_peripheral_dma_vec - prepare descriptors for a DMA_SLAVE
+ *	transaction from DMA vectors
+ * @dchan: DMA channel
+ * @vecs: Array of DMA vectors that should be transferred
+ * @nb: number of entries in @vecs
+ * @direction: DMA direction
+ * @flags: transfer ack flags
+ *
+ * Return: Async transaction descriptor on success and NULL on failure
+ */
+static struct dma_async_tx_descriptor *xilinx_dma_prep_peripheral_dma_vec(
+	struct dma_chan *dchan, const struct dma_vec *vecs, size_t nb,
+	enum dma_transfer_direction direction, unsigned long flags)
+{
+	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+	struct xilinx_dma_tx_descriptor *desc;
+	struct xilinx_axidma_tx_segment *segment, *head, *prev = NULL;
+	size_t copy;
+	size_t sg_used;
+	unsigned int i;
+
+	if (!is_slave_direction(direction) || direction != chan->direction)
+		return NULL;
+
+	desc = xilinx_dma_alloc_tx_descriptor(chan);
+	if (!desc)
+		return NULL;
+
+	dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
+	desc->async_tx.tx_submit = xilinx_dma_tx_submit;
+
+	/* Build transactions using information from DMA vectors */
+	for (i = 0; i < nb; i++) {
+		sg_used = 0;
+
+		/* Loop until the entire dma_vec entry is used */
+		while (sg_used < vecs[i].len) {
+			struct xilinx_axidma_desc_hw *hw;
+
+			/* Get a free segment */
+			segment = xilinx_axidma_alloc_tx_segment(chan);
+			if (!segment)
+				goto error;
+
+			/*
+			 * Calculate the maximum number of bytes to transfer,
+			 * making sure it is less than the hw limit
+			 */
+			copy = xilinx_dma_calc_copysize(chan, vecs[i].len,
+					sg_used);
+			hw = &segment->hw;
+
+			/* Fill in the descriptor */
+			xilinx_axidma_buf(chan, hw, vecs[i].addr, sg_used, 0);
+			hw->control = copy;
+
+			if (prev)
+				prev->hw.next_desc = segment->phys;
+
+			prev = segment;
+			sg_used += copy;
+
+			/*
+			 * Insert the segment into the descriptor segments
+			 * list.
+			 */
+			list_add_tail(&segment->node, &desc->segments);
+		}
+	}
+
+	head = list_first_entry(&desc->segments, struct xilinx_axidma_tx_segment, node);
+	desc->async_tx.phys = head->phys;
+
+	/* For DMA_MEM_TO_DEV, set SOP on the first and EOP on the last segment */
+	if (chan->direction == DMA_MEM_TO_DEV) {
+		head->hw.control |= XILINX_DMA_BD_SOP;
+		segment = list_last_entry(&desc->segments,
+					  struct xilinx_axidma_tx_segment,
+					  node);
+		segment->hw.control |= XILINX_DMA_BD_EOP;
+	}
+
+	if (chan->xdev->has_axistream_connected)
+		desc->async_tx.metadata_ops = &xilinx_dma_metadata_ops;
+
+	return &desc->async_tx;
+
+error:
+	xilinx_dma_free_tx_descriptor(chan, desc);
+	return NULL;
+}
+
 /**
  * xilinx_dma_prep_slave_sg - prepare descriptors for a DMA_SLAVE transaction
  * @dchan: DMA channel
@@ -3180,6 +3273,7 @@ static int xilinx_dma_probe(struct platform_device *pdev)
 	xdev->common.device_config = xilinx_dma_device_config;
 	if (xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) {
 		dma_cap_set(DMA_CYCLIC, xdev->common.cap_mask);
+		xdev->common.device_prep_peripheral_dma_vec = xilinx_dma_prep_peripheral_dma_vec;
 		xdev->common.device_prep_slave_sg = xilinx_dma_prep_slave_sg;
 		xdev->common.device_prep_dma_cyclic =
 					  xilinx_dma_prep_dma_cyclic;

base-commit: 8f5ae30d69d7543eee0d70083daf4de8fe15d585
-- 
2.50.1


* Re: [PATCH v4] dmaengine: xilinx_dma: Support descriptor setup from dma_vecs
  2025-08-26 18:21 [PATCH v4] dmaengine: xilinx_dma: Support descriptor setup from dma_vecs Folker Schwesinger
@ 2025-09-02  9:49 ` Vinod Koul
  0 siblings, 0 replies; 2+ messages in thread
From: Vinod Koul @ 2025-09-02  9:49 UTC (permalink / raw)
  To: dmaengine, linux-arm-kernel, linux-kernel, Folker Schwesinger
  Cc: Michal Simek, Jernej Skrabec, Krzysztof Kozlowski,
	Uwe Kleine-König, Marek Vasut, Radhey Shyam Pandey


On Tue, 26 Aug 2025 20:21:10 +0200, Folker Schwesinger wrote:
> The DMAEngine provides an interface for obtaining DMA transaction
> descriptors from an array of scatter-gather buffers represented by
> struct dma_vec. This interface is used in the DMABUF API of the IIO
> framework [1][2].
> To enable DMABUF support through the IIO framework for the Xilinx DMA,
> implement the .device_prep_peripheral_dma_vec() callback of struct
> dma_device in the driver.
> 
> [...]

Applied, thanks!

[1/1] dmaengine: xilinx_dma: Support descriptor setup from dma_vecs
      commit: 38433a6fdfb75d12a90ffff262705e1ecfe88556

Best regards,
-- 
~Vinod


