* [PATCH v4 0/6] iio: Add buffer write() support
@ 2023-08-07 11:21 Paul Cercueil
2023-08-07 11:21 ` [PATCH v4 1/6] iio: buffer-dma: Get rid of outgoing queue Paul Cercueil
` (7 more replies)
0 siblings, 8 replies; 11+ messages in thread
From: Paul Cercueil @ 2023-08-07 11:21 UTC (permalink / raw)
To: Jonathan Cameron
Cc: Lars-Peter Clausen, Michael Hennerich, Nuno Sá, linux-iio,
linux-kernel, Paul Cercueil
[V3 was: "iio: new DMABUF based API, v3"][1]
Hi Jonathan,
This is a subset of my patchset that introduced a new interface based on
DMABUF objects [1]. It adds write() support to the IIO buffer
infrastructure.
The reason this is not the full IIO-DMABUF patchset is that you
requested performance benchmarks - and our current numbers are barely
better (~10%) than the fileio interface. There is a good reason for
that: V3 of the patchset switched from having the IIO core creating the
DMABUFs backed by physically contiguous memory, to having the IIO core
being a simple DMABUF importer, and having the DMABUFs created
externally. We now use the udmabuf driver to create those, and they are
allocated from paged memory. While this works perfectly fine, our
buffers are now cut into 4 KiB chunks (pages) that are non-contiguous
in memory, which causes the DMA hardware to raise an IRQ storm: in the
worst case, one interrupt after every 4 KiB transferred.
Anyway, this is not directly a problem of the IIO-DMABUF code - but I
can't really upstream a shiny new interface that I claim is much faster
without giving numbers.
So while we fix this (either by updating the DMA IP and driver to
support scatter-gather, or by hacking something quick to give us
physically contiguous DMABUFs just for the benchmark), I thought it
would make sense to upstream the few patches of the V3 patchset that are
needed for the IIO-DMABUF interface but aren't directly related.
As for write() support, Nuno (Cc'd) said he will work on upstreaming the
DAC counterpart of adc/adi-axi-adc.c in the next few weeks, so there
will be a user for the buffer write() support. I hope you are okay with
this - otherwise, we can just wait until this work is done and submit it
all at once.
Changelog since v3:
- [PATCH 2/6] is new;
- [PATCH 3/6]: Drop iio_dma_buffer_space_available() function, and
update patch description accordingly;
- [PATCH 6/6]: .space_available is now set to iio_dma_buffer_usage
(which is functionally the exact same).
Cheers,
-Paul
[1] https://lore.kernel.org/all/20230403154800.215924-1-paul@crapouillou.net/
Alexandru Ardelean (1):
iio: buffer-dma: split iio_dma_buffer_fileio_free() function
Paul Cercueil (5):
iio: buffer-dma: Get rid of outgoing queue
iio: buffer-dma: Rename iio_dma_buffer_data_available()
iio: buffer-dma: Enable buffer write support
iio: buffer-dmaengine: Support specifying buffer direction
iio: buffer-dmaengine: Enable write support
drivers/iio/adc/adi-axi-adc.c | 3 +-
drivers/iio/buffer/industrialio-buffer-dma.c | 187 ++++++++++++------
.../buffer/industrialio-buffer-dmaengine.c | 28 ++-
include/linux/iio/buffer-dma.h | 11 +-
include/linux/iio/buffer-dmaengine.h | 5 +-
5 files changed, 160 insertions(+), 74 deletions(-)
--
2.40.1
* [PATCH v4 1/6] iio: buffer-dma: Get rid of outgoing queue
2023-08-07 11:21 [PATCH v4 0/6] iio: Add buffer write() support Paul Cercueil
@ 2023-08-07 11:21 ` Paul Cercueil
2023-08-07 11:21 ` [PATCH v4 2/6] iio: buffer-dma: Rename iio_dma_buffer_data_available() Paul Cercueil
` (6 subsequent siblings)
7 siblings, 0 replies; 11+ messages in thread
From: Paul Cercueil @ 2023-08-07 11:21 UTC (permalink / raw)
To: Jonathan Cameron
Cc: Lars-Peter Clausen, Michael Hennerich, Nuno Sá, linux-iio,
linux-kernel, Paul Cercueil
The buffer-dma code was using two queues, incoming and outgoing, to
manage the state of the blocks in use.
While this works, it adds some complexity to the code, especially
since the code only ever manages two blocks. It is much easier to
check each block's state directly and keep a counter pointing at the
next block to dequeue.
Since the new DMABUF based API wouldn't use the outgoing queue anyway,
getting rid of it now makes the upcoming changes simpler.
With this change, the IIO_BLOCK_STATE_DEQUEUED state is now unused and
can be removed.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
---
v2: - Only remove the outgoing queue, and keep the incoming queue, as we
want the buffer to start streaming data as soon as it is enabled.
- Remove IIO_BLOCK_STATE_DEQUEUED, since it is now functionally the
same as IIO_BLOCK_STATE_DONE.
---
drivers/iio/buffer/industrialio-buffer-dma.c | 44 ++++++++++----------
include/linux/iio/buffer-dma.h | 7 ++--
2 files changed, 26 insertions(+), 25 deletions(-)
diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c
index d348af8b9705..1fc91467d1aa 100644
--- a/drivers/iio/buffer/industrialio-buffer-dma.c
+++ b/drivers/iio/buffer/industrialio-buffer-dma.c
@@ -179,7 +179,7 @@ static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(
}
block->size = size;
- block->state = IIO_BLOCK_STATE_DEQUEUED;
+ block->state = IIO_BLOCK_STATE_DONE;
block->queue = queue;
INIT_LIST_HEAD(&block->head);
kref_init(&block->kref);
@@ -191,16 +191,8 @@ static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(
static void _iio_dma_buffer_block_done(struct iio_dma_buffer_block *block)
{
- struct iio_dma_buffer_queue *queue = block->queue;
-
- /*
- * The buffer has already been freed by the application, just drop the
- * reference.
- */
- if (block->state != IIO_BLOCK_STATE_DEAD) {
+ if (block->state != IIO_BLOCK_STATE_DEAD)
block->state = IIO_BLOCK_STATE_DONE;
- list_add_tail(&block->head, &queue->outgoing);
- }
}
/**
@@ -261,7 +253,6 @@ static bool iio_dma_block_reusable(struct iio_dma_buffer_block *block)
* not support abort and has not given back the block yet.
*/
switch (block->state) {
- case IIO_BLOCK_STATE_DEQUEUED:
case IIO_BLOCK_STATE_QUEUED:
case IIO_BLOCK_STATE_DONE:
return true;
@@ -317,7 +308,6 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
* dead. This means we can reset the lists without having to fear
* corrution.
*/
- INIT_LIST_HEAD(&queue->outgoing);
spin_unlock_irq(&queue->list_lock);
INIT_LIST_HEAD(&queue->incoming);
@@ -456,14 +446,20 @@ static struct iio_dma_buffer_block *iio_dma_buffer_dequeue(
struct iio_dma_buffer_queue *queue)
{
struct iio_dma_buffer_block *block;
+ unsigned int idx;
spin_lock_irq(&queue->list_lock);
- block = list_first_entry_or_null(&queue->outgoing, struct
- iio_dma_buffer_block, head);
- if (block != NULL) {
- list_del(&block->head);
- block->state = IIO_BLOCK_STATE_DEQUEUED;
+
+ idx = queue->fileio.next_dequeue;
+ block = queue->fileio.blocks[idx];
+
+ if (block->state == IIO_BLOCK_STATE_DONE) {
+ idx = (idx + 1) % ARRAY_SIZE(queue->fileio.blocks);
+ queue->fileio.next_dequeue = idx;
+ } else {
+ block = NULL;
}
+
spin_unlock_irq(&queue->list_lock);
return block;
@@ -539,6 +535,7 @@ size_t iio_dma_buffer_data_available(struct iio_buffer *buf)
struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buf);
struct iio_dma_buffer_block *block;
size_t data_available = 0;
+ unsigned int i;
/*
* For counting the available bytes we'll use the size of the block not
@@ -552,8 +549,15 @@ size_t iio_dma_buffer_data_available(struct iio_buffer *buf)
data_available += queue->fileio.active_block->size;
spin_lock_irq(&queue->list_lock);
- list_for_each_entry(block, &queue->outgoing, head)
- data_available += block->size;
+
+ for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
+ block = queue->fileio.blocks[i];
+
+ if (block != queue->fileio.active_block
+ && block->state == IIO_BLOCK_STATE_DONE)
+ data_available += block->size;
+ }
+
spin_unlock_irq(&queue->list_lock);
mutex_unlock(&queue->lock);
@@ -617,7 +621,6 @@ int iio_dma_buffer_init(struct iio_dma_buffer_queue *queue,
queue->ops = ops;
INIT_LIST_HEAD(&queue->incoming);
- INIT_LIST_HEAD(&queue->outgoing);
mutex_init(&queue->lock);
spin_lock_init(&queue->list_lock);
@@ -645,7 +648,6 @@ void iio_dma_buffer_exit(struct iio_dma_buffer_queue *queue)
continue;
queue->fileio.blocks[i]->state = IIO_BLOCK_STATE_DEAD;
}
- INIT_LIST_HEAD(&queue->outgoing);
spin_unlock_irq(&queue->list_lock);
INIT_LIST_HEAD(&queue->incoming);
diff --git a/include/linux/iio/buffer-dma.h b/include/linux/iio/buffer-dma.h
index 6564bdcdac66..18d3702fa95d 100644
--- a/include/linux/iio/buffer-dma.h
+++ b/include/linux/iio/buffer-dma.h
@@ -19,14 +19,12 @@ struct device;
/**
* enum iio_block_state - State of a struct iio_dma_buffer_block
- * @IIO_BLOCK_STATE_DEQUEUED: Block is not queued
* @IIO_BLOCK_STATE_QUEUED: Block is on the incoming queue
* @IIO_BLOCK_STATE_ACTIVE: Block is currently being processed by the DMA
* @IIO_BLOCK_STATE_DONE: Block is on the outgoing queue
* @IIO_BLOCK_STATE_DEAD: Block has been marked as to be freed
*/
enum iio_block_state {
- IIO_BLOCK_STATE_DEQUEUED,
IIO_BLOCK_STATE_QUEUED,
IIO_BLOCK_STATE_ACTIVE,
IIO_BLOCK_STATE_DONE,
@@ -73,12 +71,15 @@ struct iio_dma_buffer_block {
* @active_block: Block being used in read()
* @pos: Read offset in the active block
* @block_size: Size of each block
+ * @next_dequeue: index of next block that will be dequeued
*/
struct iio_dma_buffer_queue_fileio {
struct iio_dma_buffer_block *blocks[2];
struct iio_dma_buffer_block *active_block;
size_t pos;
size_t block_size;
+
+ unsigned int next_dequeue;
};
/**
@@ -93,7 +94,6 @@ struct iio_dma_buffer_queue_fileio {
* list and typically also a list of active blocks in the part that handles
* the DMA controller
* @incoming: List of buffers on the incoming queue
- * @outgoing: List of buffers on the outgoing queue
* @active: Whether the buffer is currently active
* @fileio: FileIO state
*/
@@ -105,7 +105,6 @@ struct iio_dma_buffer_queue {
struct mutex lock;
spinlock_t list_lock;
struct list_head incoming;
- struct list_head outgoing;
bool active;
--
2.40.1
* [PATCH v4 2/6] iio: buffer-dma: Rename iio_dma_buffer_data_available()
2023-08-07 11:21 [PATCH v4 0/6] iio: Add buffer write() support Paul Cercueil
2023-08-07 11:21 ` [PATCH v4 1/6] iio: buffer-dma: Get rid of outgoing queue Paul Cercueil
@ 2023-08-07 11:21 ` Paul Cercueil
2023-08-07 11:21 ` [PATCH v4 3/6] iio: buffer-dma: Enable buffer write support Paul Cercueil
` (5 subsequent siblings)
7 siblings, 0 replies; 11+ messages in thread
From: Paul Cercueil @ 2023-08-07 11:21 UTC (permalink / raw)
To: Jonathan Cameron
Cc: Lars-Peter Clausen, Michael Hennerich, Nuno Sá, linux-iio,
linux-kernel, Paul Cercueil
Change its name to iio_dma_buffer_usage(), as this function can be used
both for the .data_available and the .space_available callbacks.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
---
v4: New patch
---
drivers/iio/buffer/industrialio-buffer-dma.c | 11 ++++++-----
drivers/iio/buffer/industrialio-buffer-dmaengine.c | 2 +-
include/linux/iio/buffer-dma.h | 2 +-
3 files changed, 8 insertions(+), 7 deletions(-)
diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c
index 1fc91467d1aa..764f1400a545 100644
--- a/drivers/iio/buffer/industrialio-buffer-dma.c
+++ b/drivers/iio/buffer/industrialio-buffer-dma.c
@@ -524,13 +524,14 @@ int iio_dma_buffer_read(struct iio_buffer *buffer, size_t n,
EXPORT_SYMBOL_GPL(iio_dma_buffer_read);
/**
- * iio_dma_buffer_data_available() - DMA buffer data_available callback
+ * iio_dma_buffer_usage() - DMA buffer data_available and
+ * space_available callback
* @buf: Buffer to check for data availability
*
- * Should be used as the data_available callback for iio_buffer_access_ops
- * struct for DMA buffers.
+ * Should be used as the data_available and space_available callbacks for
+ * iio_buffer_access_ops struct for DMA buffers.
*/
-size_t iio_dma_buffer_data_available(struct iio_buffer *buf)
+size_t iio_dma_buffer_usage(struct iio_buffer *buf)
{
struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buf);
struct iio_dma_buffer_block *block;
@@ -563,7 +564,7 @@ size_t iio_dma_buffer_data_available(struct iio_buffer *buf)
return data_available;
}
-EXPORT_SYMBOL_GPL(iio_dma_buffer_data_available);
+EXPORT_SYMBOL_GPL(iio_dma_buffer_usage);
/**
* iio_dma_buffer_set_bytes_per_datum() - DMA buffer set_bytes_per_datum callback
diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
index 5f85ba38e6f6..7b49f85af064 100644
--- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
@@ -117,7 +117,7 @@ static const struct iio_buffer_access_funcs iio_dmaengine_buffer_ops = {
.request_update = iio_dma_buffer_request_update,
.enable = iio_dma_buffer_enable,
.disable = iio_dma_buffer_disable,
- .data_available = iio_dma_buffer_data_available,
+ .data_available = iio_dma_buffer_usage,
.release = iio_dmaengine_buffer_release,
.modes = INDIO_BUFFER_HARDWARE,
diff --git a/include/linux/iio/buffer-dma.h b/include/linux/iio/buffer-dma.h
index 18d3702fa95d..52a838ec0e57 100644
--- a/include/linux/iio/buffer-dma.h
+++ b/include/linux/iio/buffer-dma.h
@@ -132,7 +132,7 @@ int iio_dma_buffer_disable(struct iio_buffer *buffer,
struct iio_dev *indio_dev);
int iio_dma_buffer_read(struct iio_buffer *buffer, size_t n,
char __user *user_buffer);
-size_t iio_dma_buffer_data_available(struct iio_buffer *buffer);
+size_t iio_dma_buffer_usage(struct iio_buffer *buffer);
int iio_dma_buffer_set_bytes_per_datum(struct iio_buffer *buffer, size_t bpd);
int iio_dma_buffer_set_length(struct iio_buffer *buffer, unsigned int length);
int iio_dma_buffer_request_update(struct iio_buffer *buffer);
--
2.40.1
* [PATCH v4 3/6] iio: buffer-dma: Enable buffer write support
2023-08-07 11:21 [PATCH v4 0/6] iio: Add buffer write() support Paul Cercueil
2023-08-07 11:21 ` [PATCH v4 1/6] iio: buffer-dma: Get rid of outgoing queue Paul Cercueil
2023-08-07 11:21 ` [PATCH v4 2/6] iio: buffer-dma: Rename iio_dma_buffer_data_available() Paul Cercueil
@ 2023-08-07 11:21 ` Paul Cercueil
2023-08-07 11:21 ` [PATCH v4 4/6] iio: buffer-dma: split iio_dma_buffer_fileio_free() function Paul Cercueil
` (4 subsequent siblings)
7 siblings, 0 replies; 11+ messages in thread
From: Paul Cercueil @ 2023-08-07 11:21 UTC (permalink / raw)
To: Jonathan Cameron
Cc: Lars-Peter Clausen, Michael Hennerich, Nuno Sá, linux-iio,
linux-kernel, Paul Cercueil, Alexandru Ardelean
Adding write support to the buffer-dma code is easy - the write()
function basically needs to do the exact same thing as the read()
function: dequeue a block, read or write the data, enqueue the block
when entirely processed.
Therefore, iio_dma_buffer_read() and the new iio_dma_buffer_write()
now both call a common helper, iio_dma_buffer_io(), which performs
this task.
Note that we preemptively reset block->bytes_used to the block's size
in iio_dma_buffer_request_update(), as in the future the
iio_dma_buffer_enqueue() function won't reset it.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Reviewed-by: Alexandru Ardelean <ardeleanalex@gmail.com>
---
v2: - Fix block->state not being reset in
iio_dma_buffer_request_update() for output buffers.
- Only update block->bytes_used once and add a comment about why we
update it.
- Add a comment about why we're setting a different state for output
buffers in iio_dma_buffer_request_update()
- Remove useless cast to bool (!!) in iio_dma_buffer_io()
v3: - Reorganize arguments to iio_dma_buffer_io()
- Change 'is_write' argument to 'is_from_user'
- Change (__force char *) to (__force __user char *), in
iio_dma_buffer_write(), since we only want to drop the "const".
v4: Drop iio_dma_buffer_space_available() function, and update patch
description accordingly.
---
drivers/iio/buffer/industrialio-buffer-dma.c | 89 ++++++++++++++++----
include/linux/iio/buffer-dma.h | 2 +
2 files changed, 75 insertions(+), 16 deletions(-)
diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c
index 764f1400a545..e00b704941b6 100644
--- a/drivers/iio/buffer/industrialio-buffer-dma.c
+++ b/drivers/iio/buffer/industrialio-buffer-dma.c
@@ -195,6 +195,18 @@ static void _iio_dma_buffer_block_done(struct iio_dma_buffer_block *block)
block->state = IIO_BLOCK_STATE_DONE;
}
+static void iio_dma_buffer_queue_wake(struct iio_dma_buffer_queue *queue)
+{
+ __poll_t flags;
+
+ if (queue->buffer.direction == IIO_BUFFER_DIRECTION_IN)
+ flags = EPOLLIN | EPOLLRDNORM;
+ else
+ flags = EPOLLOUT | EPOLLWRNORM;
+
+ wake_up_interruptible_poll(&queue->buffer.pollq, flags);
+}
+
/**
* iio_dma_buffer_block_done() - Indicate that a block has been completed
* @block: The completed block
@@ -212,7 +224,7 @@ void iio_dma_buffer_block_done(struct iio_dma_buffer_block *block)
spin_unlock_irqrestore(&queue->list_lock, flags);
iio_buffer_block_put_atomic(block);
- wake_up_interruptible_poll(&queue->buffer.pollq, EPOLLIN | EPOLLRDNORM);
+ iio_dma_buffer_queue_wake(queue);
}
EXPORT_SYMBOL_GPL(iio_dma_buffer_block_done);
@@ -241,7 +253,7 @@ void iio_dma_buffer_block_list_abort(struct iio_dma_buffer_queue *queue,
}
spin_unlock_irqrestore(&queue->list_lock, flags);
- wake_up_interruptible_poll(&queue->buffer.pollq, EPOLLIN | EPOLLRDNORM);
+ iio_dma_buffer_queue_wake(queue);
}
EXPORT_SYMBOL_GPL(iio_dma_buffer_block_list_abort);
@@ -335,8 +347,24 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
queue->fileio.blocks[i] = block;
}
- block->state = IIO_BLOCK_STATE_QUEUED;
- list_add_tail(&block->head, &queue->incoming);
+ /*
+ * block->bytes_used may have been modified previously, e.g. by
+ * iio_dma_buffer_block_list_abort(). Reset it here to the
+ * block's size so that iio_dma_buffer_io() will work.
+ */
+ block->bytes_used = block->size;
+
+ /*
+ * If it's an input buffer, mark the block as queued, and
+ * iio_dma_buffer_enable() will submit it. Otherwise mark it as
+ * done, which means it's ready to be dequeued.
+ */
+ if (queue->buffer.direction == IIO_BUFFER_DIRECTION_IN) {
+ block->state = IIO_BLOCK_STATE_QUEUED;
+ list_add_tail(&block->head, &queue->incoming);
+ } else {
+ block->state = IIO_BLOCK_STATE_DONE;
+ }
}
out_unlock:
@@ -465,20 +493,12 @@ static struct iio_dma_buffer_block *iio_dma_buffer_dequeue(
return block;
}
-/**
- * iio_dma_buffer_read() - DMA buffer read callback
- * @buffer: Buffer to read form
- * @n: Number of bytes to read
- * @user_buffer: Userspace buffer to copy the data to
- *
- * Should be used as the read callback for iio_buffer_access_ops
- * struct for DMA buffers.
- */
-int iio_dma_buffer_read(struct iio_buffer *buffer, size_t n,
- char __user *user_buffer)
+static int iio_dma_buffer_io(struct iio_buffer *buffer, size_t n,
+ char __user *user_buffer, bool is_from_user)
{
struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
struct iio_dma_buffer_block *block;
+ void *addr;
int ret;
if (n < buffer->bytes_per_datum)
@@ -501,8 +521,13 @@ int iio_dma_buffer_read(struct iio_buffer *buffer, size_t n,
n = rounddown(n, buffer->bytes_per_datum);
if (n > block->bytes_used - queue->fileio.pos)
n = block->bytes_used - queue->fileio.pos;
+ addr = block->vaddr + queue->fileio.pos;
- if (copy_to_user(user_buffer, block->vaddr + queue->fileio.pos, n)) {
+ if (is_from_user)
+ ret = copy_from_user(addr, user_buffer, n);
+ else
+ ret = copy_to_user(user_buffer, addr, n);
+ if (ret) {
ret = -EFAULT;
goto out_unlock;
}
@@ -521,8 +546,40 @@ int iio_dma_buffer_read(struct iio_buffer *buffer, size_t n,
return ret;
}
+
+/**
+ * iio_dma_buffer_read() - DMA buffer read callback
+ * @buffer: Buffer to read from
+ * @n: Number of bytes to read
+ * @user_buffer: Userspace buffer to copy the data to
+ *
+ * Should be used as the read callback for iio_buffer_access_ops
+ * struct for DMA buffers.
+ */
+int iio_dma_buffer_read(struct iio_buffer *buffer, size_t n,
+ char __user *user_buffer)
+{
+ return iio_dma_buffer_io(buffer, n, user_buffer, false);
+}
EXPORT_SYMBOL_GPL(iio_dma_buffer_read);
+/**
+ * iio_dma_buffer_write() - DMA buffer write callback
+ * @buffer: Buffer to write to
+ * @n: Number of bytes to write
+ * @user_buffer: Userspace buffer to copy the data from
+ *
+ * Should be used as the write callback for iio_buffer_access_ops
+ * struct for DMA buffers.
+ */
+int iio_dma_buffer_write(struct iio_buffer *buffer, size_t n,
+ const char __user *user_buffer)
+{
+ return iio_dma_buffer_io(buffer, n,
+ (__force __user char *)user_buffer, true);
+}
+EXPORT_SYMBOL_GPL(iio_dma_buffer_write);
+
/**
* iio_dma_buffer_usage() - DMA buffer data_available and
* space_available callback
diff --git a/include/linux/iio/buffer-dma.h b/include/linux/iio/buffer-dma.h
index 52a838ec0e57..6e27e47077d5 100644
--- a/include/linux/iio/buffer-dma.h
+++ b/include/linux/iio/buffer-dma.h
@@ -132,6 +132,8 @@ int iio_dma_buffer_disable(struct iio_buffer *buffer,
struct iio_dev *indio_dev);
int iio_dma_buffer_read(struct iio_buffer *buffer, size_t n,
char __user *user_buffer);
+int iio_dma_buffer_write(struct iio_buffer *buffer, size_t n,
+ const char __user *user_buffer);
size_t iio_dma_buffer_usage(struct iio_buffer *buffer);
int iio_dma_buffer_set_bytes_per_datum(struct iio_buffer *buffer, size_t bpd);
int iio_dma_buffer_set_length(struct iio_buffer *buffer, unsigned int length);
--
2.40.1
* [PATCH v4 4/6] iio: buffer-dma: split iio_dma_buffer_fileio_free() function
2023-08-07 11:21 [PATCH v4 0/6] iio: Add buffer write() support Paul Cercueil
` (2 preceding siblings ...)
2023-08-07 11:21 ` [PATCH v4 3/6] iio: buffer-dma: Enable buffer write support Paul Cercueil
@ 2023-08-07 11:21 ` Paul Cercueil
2023-08-07 11:21 ` [PATCH v4 5/6] iio: buffer-dmaengine: Support specifying buffer direction Paul Cercueil
` (3 subsequent siblings)
7 siblings, 0 replies; 11+ messages in thread
From: Paul Cercueil @ 2023-08-07 11:21 UTC (permalink / raw)
To: Jonathan Cameron
Cc: Lars-Peter Clausen, Michael Hennerich, Nuno Sá, linux-iio,
linux-kernel, Alexandru Ardelean, Alexandru Ardelean,
Paul Cercueil
From: Alexandru Ardelean <alexandru.ardelean@analog.com>
This change splits the fileio buffer-freeing logic out of
iio_dma_buffer_exit() into a separate iio_dma_buffer_fileio_free()
function, which will be re-used later.
Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
Cc: Alexandru Ardelean <ardeleanalex@gmail.com>
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
---
drivers/iio/buffer/industrialio-buffer-dma.c | 43 +++++++++++---------
1 file changed, 24 insertions(+), 19 deletions(-)
diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c
index e00b704941b6..29cc3083cb7e 100644
--- a/drivers/iio/buffer/industrialio-buffer-dma.c
+++ b/drivers/iio/buffer/industrialio-buffer-dma.c
@@ -374,6 +374,29 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
}
EXPORT_SYMBOL_GPL(iio_dma_buffer_request_update);
+static void iio_dma_buffer_fileio_free(struct iio_dma_buffer_queue *queue)
+{
+ unsigned int i;
+
+ spin_lock_irq(&queue->list_lock);
+ for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
+ if (!queue->fileio.blocks[i])
+ continue;
+ queue->fileio.blocks[i]->state = IIO_BLOCK_STATE_DEAD;
+ }
+ spin_unlock_irq(&queue->list_lock);
+
+ INIT_LIST_HEAD(&queue->incoming);
+
+ for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
+ if (!queue->fileio.blocks[i])
+ continue;
+ iio_buffer_block_put(queue->fileio.blocks[i]);
+ queue->fileio.blocks[i] = NULL;
+ }
+ queue->fileio.active_block = NULL;
+}
+
static void iio_dma_buffer_submit_block(struct iio_dma_buffer_queue *queue,
struct iio_dma_buffer_block *block)
{
@@ -696,27 +719,9 @@ EXPORT_SYMBOL_GPL(iio_dma_buffer_init);
*/
void iio_dma_buffer_exit(struct iio_dma_buffer_queue *queue)
{
- unsigned int i;
-
mutex_lock(&queue->lock);
- spin_lock_irq(&queue->list_lock);
- for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
- if (!queue->fileio.blocks[i])
- continue;
- queue->fileio.blocks[i]->state = IIO_BLOCK_STATE_DEAD;
- }
- spin_unlock_irq(&queue->list_lock);
-
- INIT_LIST_HEAD(&queue->incoming);
-
- for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
- if (!queue->fileio.blocks[i])
- continue;
- iio_buffer_block_put(queue->fileio.blocks[i]);
- queue->fileio.blocks[i] = NULL;
- }
- queue->fileio.active_block = NULL;
+ iio_dma_buffer_fileio_free(queue);
queue->ops = NULL;
mutex_unlock(&queue->lock);
--
2.40.1
* [PATCH v4 5/6] iio: buffer-dmaengine: Support specifying buffer direction
2023-08-07 11:21 [PATCH v4 0/6] iio: Add buffer write() support Paul Cercueil
` (3 preceding siblings ...)
2023-08-07 11:21 ` [PATCH v4 4/6] iio: buffer-dma: split iio_dma_buffer_fileio_free() function Paul Cercueil
@ 2023-08-07 11:21 ` Paul Cercueil
2023-08-07 11:21 ` [PATCH v4 6/6] iio: buffer-dmaengine: Enable write support Paul Cercueil
` (2 subsequent siblings)
7 siblings, 0 replies; 11+ messages in thread
From: Paul Cercueil @ 2023-08-07 11:21 UTC (permalink / raw)
To: Jonathan Cameron
Cc: Lars-Peter Clausen, Michael Hennerich, Nuno Sá, linux-iio,
linux-kernel, Paul Cercueil, Alexandru Ardelean
Update the devm_iio_dmaengine_buffer_setup() function to support
specifying the buffer direction.
Update the iio_dmaengine_buffer_submit_block() function to handle
output buffers as well as input buffers.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Reviewed-by: Alexandru Ardelean <ardeleanalex@gmail.com>
---
drivers/iio/adc/adi-axi-adc.c | 3 ++-
.../buffer/industrialio-buffer-dmaengine.c | 24 +++++++++++++++----
include/linux/iio/buffer-dmaengine.h | 5 +++-
3 files changed, 25 insertions(+), 7 deletions(-)
diff --git a/drivers/iio/adc/adi-axi-adc.c b/drivers/iio/adc/adi-axi-adc.c
index e8a8ea4140f1..d33574b5417a 100644
--- a/drivers/iio/adc/adi-axi-adc.c
+++ b/drivers/iio/adc/adi-axi-adc.c
@@ -114,7 +114,8 @@ static int adi_axi_adc_config_dma_buffer(struct device *dev,
dma_name = "rx";
return devm_iio_dmaengine_buffer_setup(indio_dev->dev.parent,
- indio_dev, dma_name);
+ indio_dev, dma_name,
+ IIO_BUFFER_DIRECTION_IN);
}
static int adi_axi_adc_read_raw(struct iio_dev *indio_dev,
diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
index 7b49f85af064..deae5a4ac03d 100644
--- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
@@ -64,14 +64,25 @@ static int iio_dmaengine_buffer_submit_block(struct iio_dma_buffer_queue *queue,
struct dmaengine_buffer *dmaengine_buffer =
iio_buffer_to_dmaengine_buffer(&queue->buffer);
struct dma_async_tx_descriptor *desc;
+ enum dma_transfer_direction dma_dir;
+ size_t max_size;
dma_cookie_t cookie;
- block->bytes_used = min(block->size, dmaengine_buffer->max_size);
- block->bytes_used = round_down(block->bytes_used,
- dmaengine_buffer->align);
+ max_size = min(block->size, dmaengine_buffer->max_size);
+ max_size = round_down(max_size, dmaengine_buffer->align);
+
+ if (queue->buffer.direction == IIO_BUFFER_DIRECTION_IN) {
+ block->bytes_used = max_size;
+ dma_dir = DMA_DEV_TO_MEM;
+ } else {
+ dma_dir = DMA_MEM_TO_DEV;
+ }
+
+ if (!block->bytes_used || block->bytes_used > max_size)
+ return -EINVAL;
desc = dmaengine_prep_slave_single(dmaengine_buffer->chan,
- block->phys_addr, block->bytes_used, DMA_DEV_TO_MEM,
+ block->phys_addr, block->bytes_used, dma_dir,
DMA_PREP_INTERRUPT);
if (!desc)
return -ENOMEM;
@@ -275,7 +286,8 @@ static struct iio_buffer *devm_iio_dmaengine_buffer_alloc(struct device *dev,
*/
int devm_iio_dmaengine_buffer_setup(struct device *dev,
struct iio_dev *indio_dev,
- const char *channel)
+ const char *channel,
+ enum iio_buffer_direction dir)
{
struct iio_buffer *buffer;
@@ -286,6 +298,8 @@ int devm_iio_dmaengine_buffer_setup(struct device *dev,
indio_dev->modes |= INDIO_BUFFER_HARDWARE;
+ buffer->direction = dir;
+
return iio_device_attach_buffer(indio_dev, buffer);
}
EXPORT_SYMBOL_GPL(devm_iio_dmaengine_buffer_setup);
diff --git a/include/linux/iio/buffer-dmaengine.h b/include/linux/iio/buffer-dmaengine.h
index 5c355be89814..538d0479cdd6 100644
--- a/include/linux/iio/buffer-dmaengine.h
+++ b/include/linux/iio/buffer-dmaengine.h
@@ -7,11 +7,14 @@
#ifndef __IIO_DMAENGINE_H__
#define __IIO_DMAENGINE_H__
+#include <linux/iio/buffer.h>
+
struct iio_dev;
struct device;
int devm_iio_dmaengine_buffer_setup(struct device *dev,
struct iio_dev *indio_dev,
- const char *channel);
+ const char *channel,
+ enum iio_buffer_direction dir);
#endif
--
2.40.1
* [PATCH v4 6/6] iio: buffer-dmaengine: Enable write support
2023-08-07 11:21 [PATCH v4 0/6] iio: Add buffer write() support Paul Cercueil
` (4 preceding siblings ...)
2023-08-07 11:21 ` [PATCH v4 5/6] iio: buffer-dmaengine: Support specifying buffer direction Paul Cercueil
@ 2023-08-07 11:21 ` Paul Cercueil
2023-08-07 14:12 ` [PATCH v4 0/6] iio: Add buffer write() support Nuno Sá
2023-08-30 16:11 ` Jonathan Cameron
7 siblings, 0 replies; 11+ messages in thread
From: Paul Cercueil @ 2023-08-07 11:21 UTC (permalink / raw)
To: Jonathan Cameron
Cc: Lars-Peter Clausen, Michael Hennerich, Nuno Sá, linux-iio,
linux-kernel, Paul Cercueil, Alexandru Ardelean
Use the iio_dma_buffer_write() and iio_dma_buffer_usage() functions
provided by the buffer-dma core to enable write support in the
buffer-dmaengine code.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Reviewed-by: Alexandru Ardelean <ardeleanalex@gmail.com>
---
v4: .space_available is now set to iio_dma_buffer_usage (which is
functionally the exact same).
---
drivers/iio/buffer/industrialio-buffer-dmaengine.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
index deae5a4ac03d..ef9d890ed3c9 100644
--- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
@@ -123,12 +123,14 @@ static void iio_dmaengine_buffer_release(struct iio_buffer *buf)
static const struct iio_buffer_access_funcs iio_dmaengine_buffer_ops = {
.read = iio_dma_buffer_read,
+ .write = iio_dma_buffer_write,
.set_bytes_per_datum = iio_dma_buffer_set_bytes_per_datum,
.set_length = iio_dma_buffer_set_length,
.request_update = iio_dma_buffer_request_update,
.enable = iio_dma_buffer_enable,
.disable = iio_dma_buffer_disable,
.data_available = iio_dma_buffer_usage,
+ .space_available = iio_dma_buffer_usage,
.release = iio_dmaengine_buffer_release,
.modes = INDIO_BUFFER_HARDWARE,
--
2.40.1
* Re: [PATCH v4 0/6] iio: Add buffer write() support
2023-08-07 11:21 [PATCH v4 0/6] iio: Add buffer write() support Paul Cercueil
` (5 preceding siblings ...)
2023-08-07 11:21 ` [PATCH v4 6/6] iio: buffer-dmaengine: Enable write support Paul Cercueil
@ 2023-08-07 14:12 ` Nuno Sá
2023-08-30 16:11 ` Jonathan Cameron
7 siblings, 0 replies; 11+ messages in thread
From: Nuno Sá @ 2023-08-07 14:12 UTC (permalink / raw)
To: Paul Cercueil, Jonathan Cameron
Cc: Lars-Peter Clausen, Michael Hennerich, linux-iio, linux-kernel
On Mon, 2023-08-07 at 13:21 +0200, Paul Cercueil wrote:
> [...]
> As for write() support, Nuno (Cc'd) said he will work on upstreaming the
> DAC counterpart of adc/adi-axi-adc.c in the next few weeks, so there
> will be a user for the buffer write() support. I hope you are okay with
> this - otherwise, we can just wait until this work is done and submit it
> all at once.
>
Yeah, I've started that process last week:
https://lore.kernel.org/linux-iio/20230804145342.1600136-1-nuno.sa@analog.com/
The DAC counterpart is actually missing from the RFC (since the goal of the
RFC is to review the framework, not those drivers), but I do state that my
plan is to include it in the actual series, where I mention we would need
this work :).
- Nuno Sá
* Re: [PATCH v4 0/6] iio: Add buffer write() support
2023-08-07 11:21 [PATCH v4 0/6] iio: Add buffer write() support Paul Cercueil
` (6 preceding siblings ...)
2023-08-07 14:12 ` [PATCH v4 0/6] iio: Add buffer write() support Nuno Sá
@ 2023-08-30 16:11 ` Jonathan Cameron
2023-08-30 16:18 ` Jonathan Cameron
7 siblings, 1 reply; 11+ messages in thread
From: Jonathan Cameron @ 2023-08-30 16:11 UTC (permalink / raw)
To: Paul Cercueil
Cc: Jonathan Cameron, Lars-Peter Clausen, Michael Hennerich,
Nuno Sá, linux-iio, linux-kernel
On Mon, 7 Aug 2023 13:21:07 +0200
Paul Cercueil <paul@crapouillou.net> wrote:
> [V3 was: "iio: new DMABUF based API, v3"][1]
>
> Hi Jonathan,
>
> This is a subset of my patchset that introduced a new interface based on
> DMABUF objects [1]. It adds write() support to the IIO buffer
> infrastructure.
>
> The reason it is not the full IIO-DMABUF patchset, is because you
> requested performance benchmarks - and our current numbers are barely
> better (~ +10%) than the fileio interface. There is a good reason for
> that: V3 of the patchset switched from having the IIO core creating the
> DMABUFs backed by physically contiguous memory, to having the IIO core
> being a simple DMABUF importer, and having the DMABUFs created
> externally. We now use the udmabuf driver to create those, and they are
> allocated from paged memory. While this works perfectly fine, our
> buffers are now cut in 4 KiB chunks (pages), non-contiguous in memory,
> which causes the DMA hardware to create an IRQ storm, as it raises an
> interrupt after each 4 KiB in the worst case scenario.
Interesting. I'm guessing you don't necessarily need contiguous memory
and huge pages would get rid of most of that overhead?
Given an embedded target, those huge pages are hard to get, so you need
hugetlb support to improve the chances of it working. Some quick searching
suggests there is possible support on the way.
https://lore.kernel.org/linux-mm/20230817064623.3424348-1-vivek.kasireddy@intel.com/
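To put a rough number on the IRQ-storm concern: in the worst case the DMA raises one completion interrupt per contiguous chunk, so the interrupt rate scales as throughput divided by chunk size. A back-of-envelope sketch (the 61.44 MSPS, 2-bytes-per-sample stream below is a hypothetical example, not a figure from this thread):

```c
/*
 * Hedged back-of-envelope: worst case, one DMA completion IRQ per
 * contiguous chunk, so IRQs/s = throughput / chunk size. The sample
 * rate and sample size used in the comments are hypothetical.
 */
#include <stdint.h>

#define KIB 1024ULL
#define MIB (1024ULL * KIB)

static uint64_t worst_case_irqs_per_sec(uint64_t bytes_per_sec,
					uint64_t chunk_bytes)
{
	return bytes_per_sec / chunk_bytes;
}
```

With a hypothetical 61.44 MSPS stream of 2-byte samples (122,880,000 B/s), 4 KiB pages give 30,000 IRQs/s worst case, while 2 MiB huge pages would cut that to about 58: the 512x larger contiguous chunk is exactly the overhead reduction huge pages would buy.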
>
> Anyway, this is not directly a problem of the IIO-DMABUF code - but I
> can't really upstream a shiny new interface that I claim is much faster,
> without giving numbers.
>
> So while we fix this (either by updating the DMA IP and driver to
> support scatter-gather)
Long run you almost always end up needing that unless contig requirements
are small and you want a robust solution. I'm guessing no IOMMU to pretend
it's all contiguous...
> or by hacking something quick to give us
> physically contiguous DMABUFs just for the benchmark), I thought it
> would make sense to upstream the few patches of the V3 patchset that are
> needed for the IIO-DMABUF interface but aren't directly related.
Good idea.
>
> As for write() support, Nuno (Cc'd) said he will work on upstreaming the
> DAC counterpart of adc/adi-axi-adc.c in the next few weeks, so there
> will be a user for the buffer write() support. I hope you are okay with
> this - otherwise, we can just wait until this work is done and submit it
> all at once.
Absolutely fine, though I won't pick this up without the user also being
ready to go.
>
> Changelog since v3:
> - [PATCH 2/6] is new;
> - [PATCH 3/6]: Drop iio_dma_buffer_space_available() function, and
> update patch description accordingly;
> - [PATCH 6/6]: .space_available is now set to iio_dma_buffer_usage
> (which is functionally the exact same).
>
> Cheers,
> -Paul
>
> [1] https://lore.kernel.org/all/20230403154800.215924-1-paul@crapouillou.net/
>
> Alexandru Ardelean (1):
> iio: buffer-dma: split iio_dma_buffer_fileio_free() function
>
> Paul Cercueil (5):
> iio: buffer-dma: Get rid of outgoing queue
> iio: buffer-dma: Rename iio_dma_buffer_data_available()
> iio: buffer-dma: Enable buffer write support
> iio: buffer-dmaengine: Support specifying buffer direction
> iio: buffer-dmaengine: Enable write support
>
> drivers/iio/adc/adi-axi-adc.c | 3 +-
> drivers/iio/buffer/industrialio-buffer-dma.c | 187 ++++++++++++------
> .../buffer/industrialio-buffer-dmaengine.c | 28 ++-
> include/linux/iio/buffer-dma.h | 11 +-
> include/linux/iio/buffer-dmaengine.h | 5 +-
> 5 files changed, 160 insertions(+), 74 deletions(-)
>
* Re: [PATCH v4 0/6] iio: Add buffer write() support
2023-08-30 16:11 ` Jonathan Cameron
@ 2023-08-30 16:18 ` Jonathan Cameron
2023-08-31 11:01 ` Nuno Sá
0 siblings, 1 reply; 11+ messages in thread
From: Jonathan Cameron @ 2023-08-30 16:18 UTC (permalink / raw)
To: Paul Cercueil
Cc: Jonathan Cameron, Lars-Peter Clausen, Michael Hennerich,
Nuno Sá, linux-iio, linux-kernel
On Wed, 30 Aug 2023 17:11:18 +0100
Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:
> On Mon, 7 Aug 2023 13:21:07 +0200
> Paul Cercueil <paul@crapouillou.net> wrote:
>
> > [V3 was: "iio: new DMABUF based API, v3"][1]
> >
> > Hi Jonathan,
> >
> > This is a subset of my patchset that introduced a new interface based on
> > DMABUF objects [1]. It adds write() support to the IIO buffer
> > infrastructure.
> >
> > The reason it is not the full IIO-DMABUF patchset, is because you
> > requested performance benchmarks - and our current numbers are barely
> > better (~ +10%) than the fileio interface. There is a good reason for
> > that: V3 of the patchset switched from having the IIO core creating the
> > DMABUFs backed by physically contiguous memory, to having the IIO core
> > being a simple DMABUF importer, and having the DMABUFs created
> > externally. We now use the udmabuf driver to create those, and they are
> > allocated from paged memory. While this works perfectly fine, our
> > buffers are now cut in 4 KiB chunks (pages), non-contiguous in memory,
> > which causes the DMA hardware to create an IRQ storm, as it raises an
> > interrupt after each 4 KiB in the worst case scenario.
>
> Interesting. I'm guessing you don't necessarily need contiguous memory
> and huge pages would get rid of most of that overhead?
>
> Given embedded target those huge pages are hard to get so you need
> hugetlb support to improve the chances of it working. Some quick searching
> suggests there is possible support on the way.
> https://lore.kernel.org/linux-mm/20230817064623.3424348-1-vivek.kasireddy@intel.com/
>
>
> >
> > Anyway, this is not directly a problem of the IIO-DMABUF code - but I
> > can't really upstream a shiny new interface that I claim is much faster,
> > without giving numbers.
> >
> > So while we fix this (either by updating the DMA IP and driver to
> > support scatter-gather)
>
> Long run you almost always end up needing that unless contig requirements
> are small and you want a robust solution. I'm guessing no IOMMU to pretend
> it's all contiguous...
>
> > or by hacking something quick to give us
> > physically contiguous DMABUFs just for the benchmark), I thought it
> > would make sense to upstream the few patches of the V3 patchset that are
> > needed for the IIO-DMABUF interface but aren't directly related.
>
> Good idea.
>
> >
> > As for write() support, Nuno (Cc'd) said he will work on upstreaming the
> > DAC counterpart of adc/adi-axi-adc.c in the next few weeks, so there
> > will be a user for the buffer write() support. I hope you are okay with
> > this - otherwise, we can just wait until this work is done and submit it
> > all at once.
>
> Absolutely fine, though I won't pick this up without the user also being
> ready to go.
Having looked through these again, they are straightforward, so no changes
requested from me. Nuno, if you can add this set at the appropriate
point in your series that will make use of it, that will make my life easier
and ensure any minor rebasing etc. happens without having to bother Paul.
Thanks,
Jonathan
>
> >
> > Changelog since v3:
> > - [PATCH 2/6] is new;
> > - [PATCH 3/6]: Drop iio_dma_buffer_space_available() function, and
> > update patch description accordingly;
> > - [PATCH 6/6]: .space_available is now set to iio_dma_buffer_usage
> > (which is functionally the exact same).
> >
> > Cheers,
> > -Paul
> >
> > [1] https://lore.kernel.org/all/20230403154800.215924-1-paul@crapouillou.net/
> >
> > Alexandru Ardelean (1):
> > iio: buffer-dma: split iio_dma_buffer_fileio_free() function
> >
> > Paul Cercueil (5):
> > iio: buffer-dma: Get rid of outgoing queue
> > iio: buffer-dma: Rename iio_dma_buffer_data_available()
> > iio: buffer-dma: Enable buffer write support
> > iio: buffer-dmaengine: Support specifying buffer direction
> > iio: buffer-dmaengine: Enable write support
> >
> > drivers/iio/adc/adi-axi-adc.c | 3 +-
> > drivers/iio/buffer/industrialio-buffer-dma.c | 187 ++++++++++++------
> > .../buffer/industrialio-buffer-dmaengine.c | 28 ++-
> > include/linux/iio/buffer-dma.h | 11 +-
> > include/linux/iio/buffer-dmaengine.h | 5 +-
> > 5 files changed, 160 insertions(+), 74 deletions(-)
> >
>
* Re: [PATCH v4 0/6] iio: Add buffer write() support
2023-08-30 16:18 ` Jonathan Cameron
@ 2023-08-31 11:01 ` Nuno Sá
0 siblings, 0 replies; 11+ messages in thread
From: Nuno Sá @ 2023-08-31 11:01 UTC (permalink / raw)
To: Jonathan Cameron, Paul Cercueil
Cc: Jonathan Cameron, Lars-Peter Clausen, Michael Hennerich,
linux-iio, linux-kernel
On Wed, 2023-08-30 at 17:18 +0100, Jonathan Cameron wrote:
> On Wed, 30 Aug 2023 17:11:18 +0100
> Jonathan Cameron <Jonathan.Cameron@Huawei.com> wrote:
>
> > On Mon, 7 Aug 2023 13:21:07 +0200
> > Paul Cercueil <paul@crapouillou.net> wrote:
> >
> > > [V3 was: "iio: new DMABUF based API, v3"][1]
> > >
> > > Hi Jonathan,
> > >
> > > This is a subset of my patchset that introduced a new interface based on
> > > DMABUF objects [1]. It adds write() support to the IIO buffer
> > > infrastructure.
> > >
> > > The reason it is not the full IIO-DMABUF patchset, is because you
> > > requested performance benchmarks - and our current numbers are barely
> > > better (~ +10%) than the fileio interface. There is a good reason for
> > > that: V3 of the patchset switched from having the IIO core creating the
> > > DMABUFs backed by physically contiguous memory, to having the IIO core
> > > being a simple DMABUF importer, and having the DMABUFs created
> > > externally. We now use the udmabuf driver to create those, and they are
> > > allocated from paged memory. While this works perfectly fine, our
> > > buffers are now cut in 4 KiB chunks (pages), non-contiguous in memory,
> > > which causes the DMA hardware to create an IRQ storm, as it raises an
> > > interrupt after each 4 KiB in the worst case scenario.
> >
> > Interesting. I'm guessing you don't necessarily need contiguous memory
> > and huge pages would get rid of most of that overhead?
> >
> > Given embedded target those huge pages are hard to get so you need
> > hugetlb support to improve the chances of it working. Some quick searching
> > suggests there is possible support on the way.
> > https://lore.kernel.org/linux-mm/20230817064623.3424348-1-vivek.kasireddy@intel.com/
> >
> >
> > >
> > > Anyway, this is not directly a problem of the IIO-DMABUF code - but I
> > > can't really upstream a shiny new interface that I claim is much faster,
> > > without giving numbers.
> > >
> > > So while we fix this (either by updating the DMA IP and driver to
> > > support scatter-gather)
> >
> > Long run you almost always end up needing that unless contig requirements
> > are small and you want a robust solution. I'm guessing no IOMMU to pretend
> > it's all contiguous...
> >
> > > or by hacking something quick to give us
> > > physically contiguous DMABUFs just for the benchmark), I thought it
> > > would make sense to upstream the few patches of the V3 patchset that are
> > > needed for the IIO-DMABUF interface but aren't directly related.
> >
> > Good idea.
> >
> > >
> > > As for write() support, Nuno (Cc'd) said he will work on upstreaming the
> > > DAC counterpart of adc/adi-axi-adc.c in the next few weeks, so there
> > > will be a user for the buffer write() support. I hope you are okay with
> > > this - otherwise, we can just wait until this work is done and submit it
> > > all at once.
> >
> > Absolutely fine, though I won't pick this up without the user also being
> > ready to go.
>
>
> Having looked through these again, they are straightforward, so no changes
> requested from me. Nuno, if you can add this set at the appropriate
> point in your series that will make use of it, that will make my life easier
> and ensure any minor rebasing etc. happens without having to bother Paul.
>
Sure...
- Nuno Sá
>
Thread overview: 11+ messages
2023-08-07 11:21 [PATCH v4 0/6] iio: Add buffer write() support Paul Cercueil
2023-08-07 11:21 ` [PATCH v4 1/6] iio: buffer-dma: Get rid of outgoing queue Paul Cercueil
2023-08-07 11:21 ` [PATCH v4 2/6] iio: buffer-dma: Rename iio_dma_buffer_data_available() Paul Cercueil
2023-08-07 11:21 ` [PATCH v4 3/6] iio: buffer-dma: Enable buffer write support Paul Cercueil
2023-08-07 11:21 ` [PATCH v4 4/6] iio: buffer-dma: split iio_dma_buffer_fileio_free() function Paul Cercueil
2023-08-07 11:21 ` [PATCH v4 5/6] iio: buffer-dmaengine: Support specifying buffer direction Paul Cercueil
2023-08-07 11:21 ` [PATCH v4 6/6] iio: buffer-dmaengine: Enable write support Paul Cercueil
2023-08-07 14:12 ` [PATCH v4 0/6] iio: Add buffer write() support Nuno Sá
2023-08-30 16:11 ` Jonathan Cameron
2023-08-30 16:18 ` Jonathan Cameron
2023-08-31 11:01 ` Nuno Sá