* [PATCH v3] dmaengine: xilinx_dma: Support descriptor setup from dma_vecs
@ 2025-08-11 19:22 Folker Schwesinger
2025-08-13 6:43 ` Pandey, Radhey Shyam
0 siblings, 1 reply; 2+ messages in thread
From: Folker Schwesinger @ 2025-08-11 19:22 UTC (permalink / raw)
To: dmaengine, linux-arm-kernel, linux-kernel
Cc: Vinod Koul, Michal Simek, Jernej Skrabec, Krzysztof Kozlowski,
Uwe Kleine-König, Marek Vasut, Radhey Shyam Pandey
The DMAEngine framework provides an interface for obtaining DMA
transaction descriptors from an array of scatter-gather buffers
represented by struct dma_vec. This interface is used by the DMABUF API
of the IIO framework [1].
To enable DMABUF support through the IIO framework for the Xilinx DMA,
implement the .device_prep_peripheral_dma_vec() callback of struct
dma_device in the driver.
[1]: https://elixir.bootlin.com/linux/v6.16-rc1/source/drivers/iio/buffer/industrialio-buffer-dmaengine.c#L104
Signed-off-by: Folker Schwesinger <dev@folker-schwesinger.de>
Reviewed-by: Suraj Gupta <suraj.gupta2@amd.com>
---
Changes in v3:
- Collect R-b tags from v2.
- Rebase onto v6.17-rc1.
- Link to v2: https://lore.kernel.org/dmaengine/DAQB7EU7UXR3.Z07Q6JQ1V67Y@folker-schwesinger.de/
Changes in v2:
- Improve commit message to include reasoning behind the change.
- Rebase onto v6.16-rc1.
- Link to v1: https://lore.kernel.org/dmaengine/D8TV2MP99NTE.1842MMA04VB9N@folker-schwesinger.de/
---
drivers/dma/xilinx/xilinx_dma.c | 94 +++++++++++++++++++++++++++++++++
1 file changed, 94 insertions(+)
diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
index a34d8f0ceed8..fabff602065f 100644
--- a/drivers/dma/xilinx/xilinx_dma.c
+++ b/drivers/dma/xilinx/xilinx_dma.c
@@ -2172,6 +2172,99 @@ xilinx_cdma_prep_memcpy(struct dma_chan *dchan, dma_addr_t dma_dst,
return NULL;
}
+/**
+ * xilinx_dma_prep_peripheral_dma_vec - prepare descriptors for a DMA_SLAVE
+ * transaction from DMA vectors
+ * @dchan: DMA channel
+ * @vecs: Array of DMA vectors that should be transferred
+ * @nb: number of entries in @vecs
+ * @direction: DMA direction
+ * @flags: transfer ack flags
+ *
+ * Return: Async transaction descriptor on success and NULL on failure
+ */
+static struct dma_async_tx_descriptor *xilinx_dma_prep_peripheral_dma_vec(
+ struct dma_chan *dchan, const struct dma_vec *vecs, size_t nb,
+ enum dma_transfer_direction direction, unsigned long flags)
+{
+ struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+ struct xilinx_dma_tx_descriptor *desc;
+ struct xilinx_axidma_tx_segment *segment, *head, *prev = NULL;
+ size_t copy;
+ size_t sg_used;
+ unsigned int i;
+
+ if (!is_slave_direction(direction) || direction != chan->direction)
+ return NULL;
+
+ desc = xilinx_dma_alloc_tx_descriptor(chan);
+ if (!desc)
+ return NULL;
+
+ dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
+ desc->async_tx.tx_submit = xilinx_dma_tx_submit;
+
+ /* Build transactions using information from DMA vectors */
+ for (i = 0; i < nb; i++) {
+ sg_used = 0;
+
+ /* Loop until the entire dma_vec entry is used */
+ while (sg_used < vecs[i].len) {
+ struct xilinx_axidma_desc_hw *hw;
+
+ /* Get a free segment */
+ segment = xilinx_axidma_alloc_tx_segment(chan);
+ if (!segment)
+ goto error;
+
+ /*
+ * Calculate the maximum number of bytes to transfer,
+ * making sure it is less than the hw limit
+ */
+ copy = xilinx_dma_calc_copysize(chan, vecs[i].len,
+ sg_used);
+ hw = &segment->hw;
+
+ /* Fill in the descriptor */
+ xilinx_axidma_buf(chan, hw, vecs[i].addr, sg_used, 0);
+ hw->control = copy;
+
+ if (prev)
+ prev->hw.next_desc = segment->phys;
+
+ prev = segment;
+ sg_used += copy;
+
+ /*
+ * Insert the segment into the descriptor segments
+ * list.
+ */
+ list_add_tail(&segment->node, &desc->segments);
+ }
+ }
+
+ head = list_first_entry(&desc->segments, struct xilinx_axidma_tx_segment, node);
+ desc->async_tx.phys = head->phys;
+
+	/* For DMA_MEM_TO_DEV, set SOP on the first BD and EOP on the last */
+	if (chan->direction == DMA_MEM_TO_DEV) {
+		head->hw.control |= XILINX_DMA_BD_SOP;
+		segment = list_last_entry(&desc->segments,
+					  struct xilinx_axidma_tx_segment,
+					  node);
+		segment->hw.control |= XILINX_DMA_BD_EOP;
+	}
+
+ if (chan->xdev->has_axistream_connected)
+ desc->async_tx.metadata_ops = &xilinx_dma_metadata_ops;
+
+ return &desc->async_tx;
+
+error:
+ xilinx_dma_free_tx_descriptor(chan, desc);
+ return NULL;
+}
+
/**
* xilinx_dma_prep_slave_sg - prepare descriptors for a DMA_SLAVE transaction
* @dchan: DMA channel
@@ -3180,6 +3273,7 @@ static int xilinx_dma_probe(struct platform_device *pdev)
xdev->common.device_config = xilinx_dma_device_config;
if (xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) {
dma_cap_set(DMA_CYCLIC, xdev->common.cap_mask);
+ xdev->common.device_prep_peripheral_dma_vec = xilinx_dma_prep_peripheral_dma_vec;
xdev->common.device_prep_slave_sg = xilinx_dma_prep_slave_sg;
xdev->common.device_prep_dma_cyclic =
xilinx_dma_prep_dma_cyclic;
base-commit: 8f5ae30d69d7543eee0d70083daf4de8fe15d585
--
2.50.1
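As an aside for readers (not part of the patch): the chunking performed by the
inner while loop above can be modeled in a few lines of Python. Here `max_copy`
stands in for the hardware limit that xilinx_dma_calc_copysize() enforces, and
the SOP/EOP flags model XILINX_DMA_BD_SOP/XILINX_DMA_BD_EOP on DMA_MEM_TO_DEV
transfers; names and the flat return shape are illustrative, not driver API.

```python
def build_descriptors(vec_lens, max_copy, mem_to_dev=True):
    """Model the dma_vec -> buffer-descriptor split.

    Returns one (length, flags) tuple per buffer descriptor (BD).
    Each vec is split into BDs no larger than max_copy, mirroring the
    while loop in xilinx_dma_prep_peripheral_dma_vec().
    """
    bds = []
    for length in vec_lens:
        used = 0
        while used < length:                     # consume the whole vec entry
            copy = min(length - used, max_copy)  # respect the hw transfer limit
            bds.append([copy, set()])
            used += copy
    if mem_to_dev and bds:
        bds[0][1].add("SOP")    # start-of-packet on the first BD
        bds[-1][1].add("EOP")   # end-of-packet on the last BD
    return [(length, frozenset(flags)) for length, flags in bds]

# Example: vecs of 10 and 5 bytes with a 4-byte hw limit -> BDs of 4,4,2,4,1
print(build_descriptors([10, 5], 4))
```

A vec longer than the hardware limit thus spans several chained BDs, which is
why the packet markers must go on the first and last BD of the whole
descriptor rather than on any one vec's segments.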
* RE: [PATCH v3] dmaengine: xilinx_dma: Support descriptor setup from dma_vecs
2025-08-11 19:22 [PATCH v3] dmaengine: xilinx_dma: Support descriptor setup from dma_vecs Folker Schwesinger
@ 2025-08-13 6:43 ` Pandey, Radhey Shyam
0 siblings, 0 replies; 2+ messages in thread
From: Pandey, Radhey Shyam @ 2025-08-13 6:43 UTC (permalink / raw)
To: Folker Schwesinger, dmaengine@vger.kernel.org,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org
Cc: Vinod Koul, Simek, Michal, Jernej Skrabec, Krzysztof Kozlowski,
Uwe Kleine-König, Marek Vasut
> -----Original Message-----
> From: Folker Schwesinger <dev@folker-schwesinger.de>
> Sent: Tuesday, August 12, 2025 12:53 AM
> To: dmaengine@vger.kernel.org; linux-arm-kernel@lists.infradead.org; linux-
> kernel@vger.kernel.org
> Cc: Vinod Koul <vkoul@kernel.org>; Simek, Michal <michal.simek@amd.com>;
> Jernej Skrabec <jernej.skrabec@gmail.com>; Krzysztof Kozlowski
> <krzysztof.kozlowski@linaro.org>; Uwe Kleine-König <u.kleine-
> koenig@baylibre.com>; Marek Vasut <marex@denx.de>; Pandey, Radhey Shyam
> <radhey.shyam.pandey@amd.com>
> Subject: [PATCH v3] dmaengine: xilinx_dma: Support descriptor setup from
> dma_vecs
>
> The DMAEngine provides an interface for obtaining DMA transaction descriptors
> from an array of scatter gather buffers represented by struct dma_vec. This interface
> is used in the DMABUF API of the IIO framework [1].
> To enable DMABUF support through the IIO framework for the Xilinx DMA,
> implement callback .device_prep_peripheral_dma_vec() of struct dma_device in the
> driver.
>
> [1]: https://elixir.bootlin.com/linux/v6.16-rc1/source/drivers/iio/buffer/industrialio-
> buffer-dmaengine.c#L104
Nit: avoid links to existing kernel sources; instead, mention the source file
name or the commit ID that introduced the change.
>
> Signed-off-by: Folker Schwesinger <dev@folker-schwesinger.de>
> Reviewed-by: Suraj Gupta <suraj.gupta2@amd.com>
Rest looks fine to me.
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>