From: Jonathan Cameron <jic23@kernel.org>
To: Paul Cercueil <paul@crapouillou.net>
Cc: "Lars-Peter Clausen" <lars@metafoo.de>,
"Sumit Semwal" <sumit.semwal@linaro.org>,
"Christian König" <christian.koenig@amd.com>,
"Vinod Koul" <vkoul@kernel.org>,
"Jonathan Corbet" <corbet@lwn.net>,
linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
dmaengine@vger.kernel.org, linux-iio@vger.kernel.org,
linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org,
linaro-mm-sig@lists.linaro.org, "Nuno Sá" <noname.nuno@gmail.com>,
"Michael Hennerich" <Michael.Hennerich@analog.com>
Subject: Re: [PATCH v5 7/8] iio: buffer-dmaengine: Support new DMABUF based userspace API
Date: Thu, 21 Dec 2023 16:12:58 +0000
Message-ID: <20231221161258.056f5ce4@jic23-huawei>
In-Reply-To: <20231219175009.65482-8-paul@crapouillou.net>
On Tue, 19 Dec 2023 18:50:08 +0100
Paul Cercueil <paul@crapouillou.net> wrote:
> Use the functions provided by the buffer-dma core to implement the
> DMABUF userspace API in the buffer-dmaengine IIO buffer implementation.
>
> Since we want to be able to transfer an arbitrary number of bytes and
> not necessarily the full DMABUF, the associated scatterlist is converted
> to an array of DMA addresses + lengths, which is then passed to
> dmaengine_prep_slave_dma_array().
>
> Signed-off-by: Paul Cercueil <paul@crapouillou.net>
One question inline. Otherwise looks fine to me.
J
>
> ---
> v3: Use the new dmaengine_prep_slave_dma_array(), and adapt the code to
> work with the new functions introduced in industrialio-buffer-dma.c.
>
> v5: - Use the new dmaengine_prep_slave_dma_vec().
> - Restrict to input buffers, since output buffers are not yet
> supported by IIO buffers.
> ---
> .../buffer/industrialio-buffer-dmaengine.c | 52 ++++++++++++++++---
> 1 file changed, 46 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> index 5f85ba38e6f6..825d76a24a67 100644
> --- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> +++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> @@ -64,15 +64,51 @@ static int iio_dmaengine_buffer_submit_block(struct iio_dma_buffer_queue *queue,
> struct dmaengine_buffer *dmaengine_buffer =
> iio_buffer_to_dmaengine_buffer(&queue->buffer);
> struct dma_async_tx_descriptor *desc;
> + unsigned int i, nents;
> + struct scatterlist *sgl;
> + struct dma_vec *vecs;
> + size_t max_size;
> dma_cookie_t cookie;
> + size_t len_total;
>
> - block->bytes_used = min(block->size, dmaengine_buffer->max_size);
> - block->bytes_used = round_down(block->bytes_used,
> - dmaengine_buffer->align);
> + if (queue->buffer.direction != IIO_BUFFER_DIRECTION_IN) {
> + /* We do not yet support output buffers. */
> + return -EINVAL;
> + }
>
> - desc = dmaengine_prep_slave_single(dmaengine_buffer->chan,
> - block->phys_addr, block->bytes_used, DMA_DEV_TO_MEM,
> - DMA_PREP_INTERRUPT);
> + if (block->sg_table) {
> + sgl = block->sg_table->sgl;
> + nents = sg_nents_for_len(sgl, block->bytes_used);
Are we guaranteed that the scatterlist is long enough to cover
block->bytes_used?  If not, sg_nents_for_len() can return a negative
error code, which would be lost in the unsigned nents here.
> +
> + vecs = kmalloc_array(nents, sizeof(*vecs), GFP_KERNEL);
> + if (!vecs)
> + return -ENOMEM;
> +
> + len_total = block->bytes_used;
> +
> + for (i = 0; i < nents; i++) {
> + vecs[i].addr = sg_dma_address(sgl);
> + vecs[i].len = min(sg_dma_len(sgl), len_total);
> + len_total -= vecs[i].len;
> +
> + sgl = sg_next(sgl);
> + }
> +
> + desc = dmaengine_prep_slave_dma_vec(dmaengine_buffer->chan,
> + vecs, nents, DMA_DEV_TO_MEM,
> + DMA_PREP_INTERRUPT);
> + kfree(vecs);
> + } else {
> + max_size = min(block->size, dmaengine_buffer->max_size);
> + max_size = round_down(max_size, dmaengine_buffer->align);
> + block->bytes_used = max_size;
> +
> + desc = dmaengine_prep_slave_single(dmaengine_buffer->chan,
> + block->phys_addr,
> + block->bytes_used,
> + DMA_DEV_TO_MEM,
> + DMA_PREP_INTERRUPT);
> + }
> if (!desc)
> return -ENOMEM;
>