* [PATCH v2 0/6] iio: buffer-dma: Minor cleanups and improvements
@ 2025-12-19 15:28 Nuno Sá via B4 Relay
2025-12-19 15:28 ` [PATCH v2 1/6] iio: buffer-dma: Use lockdep for locking annotations Nuno Sá via B4 Relay
` (6 more replies)
0 siblings, 7 replies; 12+ messages in thread
From: Nuno Sá via B4 Relay @ 2025-12-19 15:28 UTC (permalink / raw)
To: linux-iio; +Cc: Jonathan Cameron, David Lechner, Andy Shevchenko
Small series with some minor improvements for IIO DMA buffers:
* Use lockdep instead of WARN() + mutex API;
* Use cleanup.h;
* Turn iio_dma_buffer_init() void;
* And I could not resist cleaning up the coding style.
Also note that in some of the coding style cleanups I deliberately went
above the 80-column limit, as I think staying within it would otherwise
hurt readability. If that's not the case for everyone, I can change it.
---
Changes in v2:
- Patch 1
* Updated the commit subject and message (given that lockdep also WARNs())
- Patch 2
* Slight change on the 80 column limit when allocating the block
(Jonathan expressed preference on that form).
- Patch 4
* Updated mutex/spinlock comments according to Andy's feedback.
- Link to v1: https://lore.kernel.org/r/20251203-iio-dmabuf-improvs-v1-0-0e4907ce7322@analog.com
---
Nuno Sá (6):
iio: buffer-dma: Use lockdep for locking annotations
iio: buffer-dma: Use the cleanup.h API
iio: buffer-dma: Turn iio_dma_buffer_init() void
iio: buffer-dma: Fix coding style complaints
iio: buffer-dmaengine: Use the cleanup.h API
iio: buffer-dmaengine: Fix coding style complaints
drivers/iio/buffer/industrialio-buffer-dma.c | 187 +++++++++------------
drivers/iio/buffer/industrialio-buffer-dmaengine.c | 22 +--
include/linux/iio/buffer-dma.h | 20 ++-
3 files changed, 97 insertions(+), 132 deletions(-)
---
base-commit: c5411c8b9ed1caf53604bb1a5be3f487988efc98
change-id: 20251104-iio-dmabuf-improvs-03d942284b86
--
Thanks!
- Nuno Sá
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH v2 1/6] iio: buffer-dma: Use lockdep for locking annotations
2025-12-19 15:28 [PATCH v2 0/6] iio: buffer-dma: Minor cleanups and improvements Nuno Sá via B4 Relay
@ 2025-12-19 15:28 ` Nuno Sá via B4 Relay
2025-12-19 15:28 ` [PATCH v2 2/6] iio: buffer-dma: Use the cleanup.h API Nuno Sá via B4 Relay
` (5 subsequent siblings)
6 siblings, 0 replies; 12+ messages in thread
From: Nuno Sá via B4 Relay @ 2025-12-19 15:28 UTC (permalink / raw)
To: linux-iio; +Cc: Jonathan Cameron, David Lechner, Andy Shevchenko
From: Nuno Sá <nuno.sa@analog.com>
Don't use mutex_is_locked() + WARN_ON() to check whether a specific lock
is held. Instead, use the existing lockdep annotation for this:
lockdep_assert_held().
Signed-off-by: Nuno Sá <nuno.sa@analog.com>
---
drivers/iio/buffer/industrialio-buffer-dma.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c
index ee294a775e8a..617b2d550c2f 100644
--- a/drivers/iio/buffer/industrialio-buffer-dma.c
+++ b/drivers/iio/buffer/industrialio-buffer-dma.c
@@ -6,6 +6,7 @@
#include <linux/atomic.h>
#include <linux/cleanup.h>
+#include <linux/lockdep.h>
#include <linux/slab.h>
#include <linux/kernel.h>
#include <linux/module.h>
@@ -764,7 +765,7 @@ int iio_dma_buffer_enqueue_dmabuf(struct iio_buffer *buffer,
bool cookie;
int ret;
- WARN_ON(!mutex_is_locked(&queue->lock));
+ lockdep_assert_held(&queue->lock);
cookie = dma_fence_begin_signalling();
--
2.52.0
* [PATCH v2 2/6] iio: buffer-dma: Use the cleanup.h API
2025-12-19 15:28 [PATCH v2 0/6] iio: buffer-dma: Minor cleanups and improvements Nuno Sá via B4 Relay
2025-12-19 15:28 ` [PATCH v2 1/6] iio: buffer-dma: Use lockdep for locking annotations Nuno Sá via B4 Relay
@ 2025-12-19 15:28 ` Nuno Sá via B4 Relay
2025-12-19 15:28 ` [PATCH v2 3/6] iio: buffer-dma: Turn iio_dma_buffer_init() void Nuno Sá via B4 Relay
` (4 subsequent siblings)
6 siblings, 0 replies; 12+ messages in thread
From: Nuno Sá via B4 Relay @ 2025-12-19 15:28 UTC (permalink / raw)
To: linux-iio; +Cc: Jonathan Cameron, David Lechner, Andy Shevchenko
From: Nuno Sá <nuno.sa@analog.com>
Make use of the cleanup.h API for locks and memory allocation in order
to simplify some code paths.
Signed-off-by: Nuno Sá <nuno.sa@analog.com>
---
drivers/iio/buffer/industrialio-buffer-dma.c | 155 +++++++++++----------------
1 file changed, 62 insertions(+), 93 deletions(-)
diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c
index 617b2d550c2f..1f918a3e6b93 100644
--- a/drivers/iio/buffer/industrialio-buffer-dma.c
+++ b/drivers/iio/buffer/industrialio-buffer-dma.c
@@ -136,9 +136,8 @@ static void iio_dma_buffer_cleanup_worker(struct work_struct *work)
struct iio_dma_buffer_block *block, *_block;
LIST_HEAD(block_list);
- spin_lock_irq(&iio_dma_buffer_dead_blocks_lock);
- list_splice_tail_init(&iio_dma_buffer_dead_blocks, &block_list);
- spin_unlock_irq(&iio_dma_buffer_dead_blocks_lock);
+ scoped_guard(spinlock_irq, &iio_dma_buffer_dead_blocks_lock)
+ list_splice_tail_init(&iio_dma_buffer_dead_blocks, &block_list);
list_for_each_entry_safe(block, _block, &block_list, head)
iio_buffer_block_release(&block->kref);
@@ -148,13 +147,11 @@ static DECLARE_WORK(iio_dma_buffer_cleanup_work, iio_dma_buffer_cleanup_worker);
static void iio_buffer_block_release_atomic(struct kref *kref)
{
struct iio_dma_buffer_block *block;
- unsigned long flags;
block = container_of(kref, struct iio_dma_buffer_block, kref);
- spin_lock_irqsave(&iio_dma_buffer_dead_blocks_lock, flags);
- list_add_tail(&block->head, &iio_dma_buffer_dead_blocks);
- spin_unlock_irqrestore(&iio_dma_buffer_dead_blocks_lock, flags);
+ scoped_guard(spinlock_irqsave, &iio_dma_buffer_dead_blocks_lock)
+ list_add_tail(&block->head, &iio_dma_buffer_dead_blocks);
schedule_work(&iio_dma_buffer_cleanup_work);
}
@@ -175,19 +172,16 @@ static struct iio_dma_buffer_queue *iio_buffer_to_queue(struct iio_buffer *buf)
static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(
struct iio_dma_buffer_queue *queue, size_t size, bool fileio)
{
- struct iio_dma_buffer_block *block;
-
- block = kzalloc(sizeof(*block), GFP_KERNEL);
+ struct iio_dma_buffer_block *block __free(kfree) =
+ kzalloc(sizeof(*block), GFP_KERNEL);
if (!block)
return NULL;
if (fileio) {
block->vaddr = dma_alloc_coherent(queue->dev, PAGE_ALIGN(size),
&block->phys_addr, GFP_KERNEL);
- if (!block->vaddr) {
- kfree(block);
+ if (!block->vaddr)
return NULL;
- }
}
block->fileio = fileio;
@@ -202,7 +196,7 @@ static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(
if (!fileio)
atomic_inc(&queue->num_dmabufs);
- return block;
+ return_ptr(block);
}
static void _iio_dma_buffer_block_done(struct iio_dma_buffer_block *block)
@@ -233,14 +227,12 @@ static void iio_dma_buffer_queue_wake(struct iio_dma_buffer_queue *queue)
void iio_dma_buffer_block_done(struct iio_dma_buffer_block *block)
{
struct iio_dma_buffer_queue *queue = block->queue;
- unsigned long flags;
bool cookie;
cookie = dma_fence_begin_signalling();
- spin_lock_irqsave(&queue->list_lock, flags);
- _iio_dma_buffer_block_done(block);
- spin_unlock_irqrestore(&queue->list_lock, flags);
+ scoped_guard(spinlock_irqsave, &queue->list_lock)
+ _iio_dma_buffer_block_done(block);
if (!block->fileio)
iio_buffer_signal_dmabuf_done(block->fence, 0);
@@ -265,22 +257,22 @@ void iio_dma_buffer_block_list_abort(struct iio_dma_buffer_queue *queue,
struct list_head *list)
{
struct iio_dma_buffer_block *block, *_block;
- unsigned long flags;
bool cookie;
cookie = dma_fence_begin_signalling();
- spin_lock_irqsave(&queue->list_lock, flags);
- list_for_each_entry_safe(block, _block, list, head) {
- list_del(&block->head);
- block->bytes_used = 0;
- _iio_dma_buffer_block_done(block);
+ scoped_guard(spinlock_irqsave, &queue->list_lock) {
+ list_for_each_entry_safe(block, _block, list, head) {
+ list_del(&block->head);
+ block->bytes_used = 0;
+ _iio_dma_buffer_block_done(block);
- if (!block->fileio)
- iio_buffer_signal_dmabuf_done(block->fence, -EINTR);
- iio_buffer_block_put_atomic(block);
+ if (!block->fileio)
+ iio_buffer_signal_dmabuf_done(block->fence,
+ -EINTR);
+ iio_buffer_block_put_atomic(block);
+ }
}
- spin_unlock_irqrestore(&queue->list_lock, flags);
if (queue->fileio.enabled)
queue->fileio.enabled = false;
@@ -329,7 +321,6 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
struct iio_dma_buffer_block *block;
bool try_reuse = false;
size_t size;
- int ret = 0;
int i;
/*
@@ -340,13 +331,13 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
size = DIV_ROUND_UP(queue->buffer.bytes_per_datum *
queue->buffer.length, 2);
- mutex_lock(&queue->lock);
+ guard(mutex)(&queue->lock);
queue->fileio.enabled = iio_dma_buffer_can_use_fileio(queue);
/* If DMABUFs were created, disable fileio interface */
if (!queue->fileio.enabled)
- goto out_unlock;
+ return 0;
/* Allocations are page aligned */
if (PAGE_ALIGN(queue->fileio.block_size) == PAGE_ALIGN(size))
@@ -355,22 +346,22 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
queue->fileio.block_size = size;
queue->fileio.active_block = NULL;
- spin_lock_irq(&queue->list_lock);
- for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
- block = queue->fileio.blocks[i];
+ scoped_guard(spinlock_irq, &queue->list_lock) {
+ for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
+ block = queue->fileio.blocks[i];
- /* If we can't re-use it free it */
- if (block && (!iio_dma_block_reusable(block) || !try_reuse))
- block->state = IIO_BLOCK_STATE_DEAD;
+ /* If we can't re-use it free it */
+ if (block && (!iio_dma_block_reusable(block) || !try_reuse))
+ block->state = IIO_BLOCK_STATE_DEAD;
+ }
+
+ /*
+ * At this point all blocks are either owned by the core or
+ * marked as dead. This means we can reset the lists without
+ * having to fear corruption.
+ */
}
- /*
- * At this point all blocks are either owned by the core or marked as
- * dead. This means we can reset the lists without having to fear
- * corrution.
- */
- spin_unlock_irq(&queue->list_lock);
-
INIT_LIST_HEAD(&queue->incoming);
for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
@@ -389,10 +380,9 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
if (!block) {
block = iio_dma_buffer_alloc_block(queue, size, true);
- if (!block) {
- ret = -ENOMEM;
- goto out_unlock;
- }
+ if (!block)
+ return -ENOMEM;
+
queue->fileio.blocks[i] = block;
}
@@ -416,10 +406,7 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
}
}
-out_unlock:
- mutex_unlock(&queue->lock);
-
- return ret;
+ return 0;
}
EXPORT_SYMBOL_NS_GPL(iio_dma_buffer_request_update, "IIO_DMA_BUFFER");
@@ -427,13 +414,13 @@ static void iio_dma_buffer_fileio_free(struct iio_dma_buffer_queue *queue)
{
unsigned int i;
- spin_lock_irq(&queue->list_lock);
- for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
- if (!queue->fileio.blocks[i])
- continue;
- queue->fileio.blocks[i]->state = IIO_BLOCK_STATE_DEAD;
+ scoped_guard(spinlock_irq, &queue->list_lock) {
+ for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
+ if (!queue->fileio.blocks[i])
+ continue;
+ queue->fileio.blocks[i]->state = IIO_BLOCK_STATE_DEAD;
+ }
}
- spin_unlock_irq(&queue->list_lock);
INIT_LIST_HEAD(&queue->incoming);
@@ -497,13 +484,12 @@ int iio_dma_buffer_enable(struct iio_buffer *buffer,
struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
struct iio_dma_buffer_block *block, *_block;
- mutex_lock(&queue->lock);
+ guard(mutex)(&queue->lock);
queue->active = true;
list_for_each_entry_safe(block, _block, &queue->incoming, head) {
list_del(&block->head);
iio_dma_buffer_submit_block(queue, block);
}
- mutex_unlock(&queue->lock);
return 0;
}
@@ -522,12 +508,11 @@ int iio_dma_buffer_disable(struct iio_buffer *buffer,
{
struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
- mutex_lock(&queue->lock);
+ guard(mutex)(&queue->lock);
queue->active = false;
if (queue->ops && queue->ops->abort)
queue->ops->abort(queue);
- mutex_unlock(&queue->lock);
return 0;
}
@@ -552,19 +537,16 @@ static struct iio_dma_buffer_block *iio_dma_buffer_dequeue(
struct iio_dma_buffer_block *block;
unsigned int idx;
- spin_lock_irq(&queue->list_lock);
+ guard(spinlock_irq)(&queue->list_lock);
idx = queue->fileio.next_dequeue;
block = queue->fileio.blocks[idx];
- if (block->state == IIO_BLOCK_STATE_DONE) {
- idx = (idx + 1) % ARRAY_SIZE(queue->fileio.blocks);
- queue->fileio.next_dequeue = idx;
- } else {
- block = NULL;
- }
+ if (block->state != IIO_BLOCK_STATE_DONE)
+ return NULL;
- spin_unlock_irq(&queue->list_lock);
+ idx = (idx + 1) % ARRAY_SIZE(queue->fileio.blocks);
+ queue->fileio.next_dequeue = idx;
return block;
}
@@ -580,14 +562,13 @@ static int iio_dma_buffer_io(struct iio_buffer *buffer, size_t n,
if (n < buffer->bytes_per_datum)
return -EINVAL;
- mutex_lock(&queue->lock);
+ guard(mutex)(&queue->lock);
if (!queue->fileio.active_block) {
block = iio_dma_buffer_dequeue(queue);
- if (block == NULL) {
- ret = 0;
- goto out_unlock;
- }
+ if (!block)
+ return 0;
+
queue->fileio.pos = 0;
queue->fileio.active_block = block;
} else {
@@ -603,10 +584,8 @@ static int iio_dma_buffer_io(struct iio_buffer *buffer, size_t n,
ret = copy_from_user(addr, user_buffer, n);
else
ret = copy_to_user(user_buffer, addr, n);
- if (ret) {
- ret = -EFAULT;
- goto out_unlock;
- }
+ if (ret)
+ return -EFAULT;
queue->fileio.pos += n;
@@ -615,12 +594,7 @@ static int iio_dma_buffer_io(struct iio_buffer *buffer, size_t n,
iio_dma_buffer_enqueue(queue, block);
}
- ret = n;
-
-out_unlock:
- mutex_unlock(&queue->lock);
-
- return ret;
+ return n;
}
/**
@@ -678,11 +652,11 @@ size_t iio_dma_buffer_usage(struct iio_buffer *buf)
* but won't increase since all blocks are in use.
*/
- mutex_lock(&queue->lock);
+ guard(mutex)(&queue->lock);
if (queue->fileio.active_block)
data_available += queue->fileio.active_block->size;
- spin_lock_irq(&queue->list_lock);
+ guard(spinlock_irq)(&queue->list_lock);
for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
block = queue->fileio.blocks[i];
@@ -692,9 +666,6 @@ size_t iio_dma_buffer_usage(struct iio_buffer *buf)
data_available += block->size;
}
- spin_unlock_irq(&queue->list_lock);
- mutex_unlock(&queue->lock);
-
return data_available;
}
EXPORT_SYMBOL_NS_GPL(iio_dma_buffer_usage, "IIO_DMA_BUFFER");
@@ -876,12 +847,10 @@ EXPORT_SYMBOL_NS_GPL(iio_dma_buffer_init, "IIO_DMA_BUFFER");
*/
void iio_dma_buffer_exit(struct iio_dma_buffer_queue *queue)
{
- mutex_lock(&queue->lock);
+ guard(mutex)(&queue->lock);
iio_dma_buffer_fileio_free(queue);
queue->ops = NULL;
-
- mutex_unlock(&queue->lock);
}
EXPORT_SYMBOL_NS_GPL(iio_dma_buffer_exit, "IIO_DMA_BUFFER");
--
2.52.0
* [PATCH v2 3/6] iio: buffer-dma: Turn iio_dma_buffer_init() void
2025-12-19 15:28 [PATCH v2 0/6] iio: buffer-dma: Minor cleanups and improvements Nuno Sá via B4 Relay
2025-12-19 15:28 ` [PATCH v2 1/6] iio: buffer-dma: Use lockdep for locking annotations Nuno Sá via B4 Relay
2025-12-19 15:28 ` [PATCH v2 2/6] iio: buffer-dma: Use the cleanup.h API Nuno Sá via B4 Relay
@ 2025-12-19 15:28 ` Nuno Sá via B4 Relay
2025-12-19 15:28 ` [PATCH v2 4/6] iio: buffer-dma: Fix coding style complaints Nuno Sá via B4 Relay
` (3 subsequent siblings)
6 siblings, 0 replies; 12+ messages in thread
From: Nuno Sá via B4 Relay @ 2025-12-19 15:28 UTC (permalink / raw)
To: linux-iio; +Cc: Jonathan Cameron, David Lechner, Andy Shevchenko
From: Nuno Sá <nuno.sa@analog.com>
iio_dma_buffer_init() always returns 0, so there is no point in
returning an int.
While at it, fix a mismatch between the function declaration and definition
regarding the struct device (dma_dev != dev).
Signed-off-by: Nuno Sá <nuno.sa@analog.com>
---
drivers/iio/buffer/industrialio-buffer-dma.c | 6 ++----
include/linux/iio/buffer-dma.h | 4 ++--
2 files changed, 4 insertions(+), 6 deletions(-)
diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c
index 1f918a3e6b93..3ab1349f9ea5 100644
--- a/drivers/iio/buffer/industrialio-buffer-dma.c
+++ b/drivers/iio/buffer/industrialio-buffer-dma.c
@@ -820,8 +820,8 @@ EXPORT_SYMBOL_NS_GPL(iio_dma_buffer_set_length, "IIO_DMA_BUFFER");
* should refer to the device that will perform the DMA to ensure that
* allocations are done from a memory region that can be accessed by the device.
*/
-int iio_dma_buffer_init(struct iio_dma_buffer_queue *queue,
- struct device *dev, const struct iio_dma_buffer_ops *ops)
+void iio_dma_buffer_init(struct iio_dma_buffer_queue *queue, struct device *dev,
+ const struct iio_dma_buffer_ops *ops)
{
iio_buffer_init(&queue->buffer);
queue->buffer.length = PAGE_SIZE;
@@ -833,8 +833,6 @@ int iio_dma_buffer_init(struct iio_dma_buffer_queue *queue,
mutex_init(&queue->lock);
spin_lock_init(&queue->list_lock);
-
- return 0;
}
EXPORT_SYMBOL_NS_GPL(iio_dma_buffer_init, "IIO_DMA_BUFFER");
diff --git a/include/linux/iio/buffer-dma.h b/include/linux/iio/buffer-dma.h
index 5eb66a399002..91f678e5be71 100644
--- a/include/linux/iio/buffer-dma.h
+++ b/include/linux/iio/buffer-dma.h
@@ -157,8 +157,8 @@ int iio_dma_buffer_set_bytes_per_datum(struct iio_buffer *buffer, size_t bpd);
int iio_dma_buffer_set_length(struct iio_buffer *buffer, unsigned int length);
int iio_dma_buffer_request_update(struct iio_buffer *buffer);
-int iio_dma_buffer_init(struct iio_dma_buffer_queue *queue,
- struct device *dma_dev, const struct iio_dma_buffer_ops *ops);
+void iio_dma_buffer_init(struct iio_dma_buffer_queue *queue, struct device *dev,
+ const struct iio_dma_buffer_ops *ops);
void iio_dma_buffer_exit(struct iio_dma_buffer_queue *queue);
void iio_dma_buffer_release(struct iio_dma_buffer_queue *queue);
--
2.52.0
* [PATCH v2 4/6] iio: buffer-dma: Fix coding style complaints
2025-12-19 15:28 [PATCH v2 0/6] iio: buffer-dma: Minor cleanups and improvements Nuno Sá via B4 Relay
` (2 preceding siblings ...)
2025-12-19 15:28 ` [PATCH v2 3/6] iio: buffer-dma: Turn iio_dma_buffer_init() void Nuno Sá via B4 Relay
@ 2025-12-19 15:28 ` Nuno Sá via B4 Relay
2025-12-21 11:59 ` Jonathan Cameron
2025-12-19 15:28 ` [PATCH v2 5/6] iio: buffer-dmaengine: Use the cleanup.h API Nuno Sá via B4 Relay
` (2 subsequent siblings)
6 siblings, 1 reply; 12+ messages in thread
From: Nuno Sá via B4 Relay @ 2025-12-19 15:28 UTC (permalink / raw)
To: linux-iio; +Cc: Jonathan Cameron, David Lechner, Andy Shevchenko
From: Nuno Sá <nuno.sa@analog.com>
Just making sure checkpatch is happy. No functional change intended.
Signed-off-by: Nuno Sá <nuno.sa@analog.com>
---
drivers/iio/buffer/industrialio-buffer-dma.c | 23 ++++++++++-------------
include/linux/iio/buffer-dma.h | 16 ++++++++++------
2 files changed, 20 insertions(+), 19 deletions(-)
diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c
index 3ab1349f9ea5..c5ee58effc92 100644
--- a/drivers/iio/buffer/industrialio-buffer-dma.c
+++ b/drivers/iio/buffer/industrialio-buffer-dma.c
@@ -169,8 +169,9 @@ static struct iio_dma_buffer_queue *iio_buffer_to_queue(struct iio_buffer *buf)
return container_of(buf, struct iio_dma_buffer_queue, buffer);
}
-static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(
- struct iio_dma_buffer_queue *queue, size_t size, bool fileio)
+static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(struct iio_dma_buffer_queue *queue,
+ size_t size,
+ bool fileio)
{
struct iio_dma_buffer_block *block __free(kfree) =
kzalloc(sizeof(*block), GFP_KERNEL);
@@ -254,7 +255,7 @@ EXPORT_SYMBOL_NS_GPL(iio_dma_buffer_block_done, "IIO_DMA_BUFFER");
* hand the blocks back to the queue.
*/
void iio_dma_buffer_block_list_abort(struct iio_dma_buffer_queue *queue,
- struct list_head *list)
+ struct list_head *list)
{
struct iio_dma_buffer_block *block, *_block;
bool cookie;
@@ -434,7 +435,7 @@ static void iio_dma_buffer_fileio_free(struct iio_dma_buffer_queue *queue)
}
static void iio_dma_buffer_submit_block(struct iio_dma_buffer_queue *queue,
- struct iio_dma_buffer_block *block)
+ struct iio_dma_buffer_block *block)
{
int ret;
@@ -478,8 +479,7 @@ static void iio_dma_buffer_submit_block(struct iio_dma_buffer_queue *queue,
*
* This will allocate the DMA buffers and start the DMA transfers.
*/
-int iio_dma_buffer_enable(struct iio_buffer *buffer,
- struct iio_dev *indio_dev)
+int iio_dma_buffer_enable(struct iio_buffer *buffer, struct iio_dev *indio_dev)
{
struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
struct iio_dma_buffer_block *block, *_block;
@@ -503,8 +503,7 @@ EXPORT_SYMBOL_NS_GPL(iio_dma_buffer_enable, "IIO_DMA_BUFFER");
* Needs to be called when the device that the buffer is attached to stops
* sampling. Typically should be the iio_buffer_access_ops disable callback.
*/
-int iio_dma_buffer_disable(struct iio_buffer *buffer,
- struct iio_dev *indio_dev)
+int iio_dma_buffer_disable(struct iio_buffer *buffer, struct iio_dev *indio_dev)
{
struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
@@ -519,7 +518,7 @@ int iio_dma_buffer_disable(struct iio_buffer *buffer,
EXPORT_SYMBOL_NS_GPL(iio_dma_buffer_disable, "IIO_DMA_BUFFER");
static void iio_dma_buffer_enqueue(struct iio_dma_buffer_queue *queue,
- struct iio_dma_buffer_block *block)
+ struct iio_dma_buffer_block *block)
{
if (block->state == IIO_BLOCK_STATE_DEAD) {
iio_buffer_block_put(block);
@@ -531,8 +530,7 @@ static void iio_dma_buffer_enqueue(struct iio_dma_buffer_queue *queue,
}
}
-static struct iio_dma_buffer_block *iio_dma_buffer_dequeue(
- struct iio_dma_buffer_queue *queue)
+static struct iio_dma_buffer_block *iio_dma_buffer_dequeue(struct iio_dma_buffer_queue *queue)
{
struct iio_dma_buffer_block *block;
unsigned int idx;
@@ -661,8 +659,7 @@ size_t iio_dma_buffer_usage(struct iio_buffer *buf)
for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
block = queue->fileio.blocks[i];
- if (block != queue->fileio.active_block
- && block->state == IIO_BLOCK_STATE_DONE)
+ if (block != queue->fileio.active_block && block->state == IIO_BLOCK_STATE_DONE)
data_available += block->size;
}
diff --git a/include/linux/iio/buffer-dma.h b/include/linux/iio/buffer-dma.h
index 91f678e5be71..f794af0970bd 100644
--- a/include/linux/iio/buffer-dma.h
+++ b/include/linux/iio/buffer-dma.h
@@ -119,7 +119,12 @@ struct iio_dma_buffer_queue {
struct device *dev;
const struct iio_dma_buffer_ops *ops;
+ /*
+ * A mutex to protect accessing, configuring (eg: enqueuing DMA blocks)
+ * and do file IO on struct iio_dma_buffer_queue objects.
+ */
struct mutex lock;
+ /* A spin lock to protect adding/removing blocks to the queue list */
spinlock_t list_lock;
struct list_head incoming;
@@ -136,20 +141,19 @@ struct iio_dma_buffer_queue {
*/
struct iio_dma_buffer_ops {
int (*submit)(struct iio_dma_buffer_queue *queue,
- struct iio_dma_buffer_block *block);
+ struct iio_dma_buffer_block *block);
void (*abort)(struct iio_dma_buffer_queue *queue);
};
void iio_dma_buffer_block_done(struct iio_dma_buffer_block *block);
void iio_dma_buffer_block_list_abort(struct iio_dma_buffer_queue *queue,
- struct list_head *list);
+ struct list_head *list);
-int iio_dma_buffer_enable(struct iio_buffer *buffer,
- struct iio_dev *indio_dev);
+int iio_dma_buffer_enable(struct iio_buffer *buffer, struct iio_dev *indio_dev);
int iio_dma_buffer_disable(struct iio_buffer *buffer,
- struct iio_dev *indio_dev);
+ struct iio_dev *indio_dev);
int iio_dma_buffer_read(struct iio_buffer *buffer, size_t n,
- char __user *user_buffer);
+ char __user *user_buffer);
int iio_dma_buffer_write(struct iio_buffer *buffer, size_t n,
const char __user *user_buffer);
size_t iio_dma_buffer_usage(struct iio_buffer *buffer);
--
2.52.0
* [PATCH v2 5/6] iio: buffer-dmaengine: Use the cleanup.h API
2025-12-19 15:28 [PATCH v2 0/6] iio: buffer-dma: Minor cleanups and improvements Nuno Sá via B4 Relay
` (3 preceding siblings ...)
2025-12-19 15:28 ` [PATCH v2 4/6] iio: buffer-dma: Fix coding style complaints Nuno Sá via B4 Relay
@ 2025-12-19 15:28 ` Nuno Sá via B4 Relay
2025-12-21 12:01 ` Jonathan Cameron
2025-12-19 15:28 ` [PATCH v2 6/6] iio: buffer-dmaengine: Fix coding style complaints Nuno Sá via B4 Relay
2025-12-21 12:02 ` [PATCH v2 0/6] iio: buffer-dma: Minor cleanups and improvements Jonathan Cameron
6 siblings, 1 reply; 12+ messages in thread
From: Nuno Sá via B4 Relay @ 2025-12-19 15:28 UTC (permalink / raw)
To: linux-iio; +Cc: Jonathan Cameron, David Lechner, Andy Shevchenko
From: Nuno Sá <nuno.sa@analog.com>
Make use of the cleanup.h API for locks in order to simplify some code
paths.
Signed-off-by: Nuno Sá <nuno.sa@analog.com>
---
drivers/iio/buffer/industrialio-buffer-dmaengine.c | 11 ++++-------
1 file changed, 4 insertions(+), 7 deletions(-)
diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
index e9d9a7d39fe1..a8a4adb5ed3a 100644
--- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
@@ -49,11 +49,9 @@ static void iio_dmaengine_buffer_block_done(void *data,
const struct dmaengine_result *result)
{
struct iio_dma_buffer_block *block = data;
- unsigned long flags;
- spin_lock_irqsave(&block->queue->list_lock, flags);
- list_del(&block->head);
- spin_unlock_irqrestore(&block->queue->list_lock, flags);
+ scoped_guard(spinlock_irqsave, &block->queue->list_lock)
+ list_del(&block->head);
block->bytes_used -= result->residue;
iio_dma_buffer_block_done(block);
}
@@ -131,9 +129,8 @@ static int iio_dmaengine_buffer_submit_block(struct iio_dma_buffer_queue *queue,
if (dma_submit_error(cookie))
return dma_submit_error(cookie);
- spin_lock_irq(&dmaengine_buffer->queue.list_lock);
- list_add_tail(&block->head, &dmaengine_buffer->active);
- spin_unlock_irq(&dmaengine_buffer->queue.list_lock);
+ scoped_guard(spinlock_irq, &dmaengine_buffer->queue.list_lock)
+ list_add_tail(&block->head, &dmaengine_buffer->active);
dma_async_issue_pending(dmaengine_buffer->chan);
--
2.52.0
* [PATCH v2 6/6] iio: buffer-dmaengine: Fix coding style complaints
2025-12-19 15:28 [PATCH v2 0/6] iio: buffer-dma: Minor cleanups and improvements Nuno Sá via B4 Relay
` (4 preceding siblings ...)
2025-12-19 15:28 ` [PATCH v2 5/6] iio: buffer-dmaengine: Use the cleanup.h API Nuno Sá via B4 Relay
@ 2025-12-19 15:28 ` Nuno Sá via B4 Relay
2025-12-21 12:02 ` [PATCH v2 0/6] iio: buffer-dma: Minor cleanups and improvements Jonathan Cameron
6 siblings, 0 replies; 12+ messages in thread
From: Nuno Sá via B4 Relay @ 2025-12-19 15:28 UTC (permalink / raw)
To: linux-iio; +Cc: Jonathan Cameron, David Lechner, Andy Shevchenko
From: Nuno Sá <nuno.sa@analog.com>
Just making sure checkpatch is happy. No functional change intended.
Signed-off-by: Nuno Sá <nuno.sa@analog.com>
---
drivers/iio/buffer/industrialio-buffer-dmaengine.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
index a8a4adb5ed3a..b906ceaff9e1 100644
--- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
@@ -39,14 +39,13 @@ struct dmaengine_buffer {
size_t max_size;
};
-static struct dmaengine_buffer *iio_buffer_to_dmaengine_buffer(
- struct iio_buffer *buffer)
+static struct dmaengine_buffer *iio_buffer_to_dmaengine_buffer(struct iio_buffer *buffer)
{
return container_of(buffer, struct dmaengine_buffer, queue.buffer);
}
static void iio_dmaengine_buffer_block_done(void *data,
- const struct dmaengine_result *result)
+ const struct dmaengine_result *result)
{
struct iio_dma_buffer_block *block = data;
@@ -57,7 +56,7 @@ static void iio_dmaengine_buffer_block_done(void *data,
}
static int iio_dmaengine_buffer_submit_block(struct iio_dma_buffer_queue *queue,
- struct iio_dma_buffer_block *block)
+ struct iio_dma_buffer_block *block)
{
struct dmaengine_buffer *dmaengine_buffer =
iio_buffer_to_dmaengine_buffer(&queue->buffer);
@@ -184,7 +183,7 @@ static const struct iio_dma_buffer_ops iio_dmaengine_default_ops = {
};
static ssize_t iio_dmaengine_buffer_get_length_align(struct device *dev,
- struct device_attribute *attr, char *buf)
+ struct device_attribute *attr, char *buf)
{
struct iio_buffer *buffer = to_iio_dev_attr(attr)->buffer;
struct dmaengine_buffer *dmaengine_buffer =
@@ -243,7 +242,7 @@ static struct iio_buffer *iio_dmaengine_buffer_alloc(struct dma_chan *chan)
dmaengine_buffer->max_size = dma_get_max_seg_size(chan->device->dev);
iio_dma_buffer_init(&dmaengine_buffer->queue, chan->device->dev,
- &iio_dmaengine_default_ops);
+ &iio_dmaengine_default_ops);
dmaengine_buffer->queue.buffer.attrs = iio_dmaengine_buffer_attrs;
dmaengine_buffer->queue.buffer.access = &iio_dmaengine_buffer_ops;
--
2.52.0
* Re: [PATCH v2 4/6] iio: buffer-dma: Fix coding style complaints
2025-12-19 15:28 ` [PATCH v2 4/6] iio: buffer-dma: Fix coding style complaints Nuno Sá via B4 Relay
@ 2025-12-21 11:59 ` Jonathan Cameron
2025-12-22 12:38 ` Nuno Sá
0 siblings, 1 reply; 12+ messages in thread
From: Jonathan Cameron @ 2025-12-21 11:59 UTC (permalink / raw)
To: Nuno Sá via B4 Relay
Cc: nuno.sa, linux-iio, David Lechner, Andy Shevchenko
On Fri, 19 Dec 2025 15:28:15 +0000
Nuno Sá via B4 Relay <devnull+nuno.sa.analog.com@kernel.org> wrote:
> From: Nuno Sá <nuno.sa@analog.com>
>
> Just making sure checkpatch is happy. No functional change intended.
>
> Signed-off-by: Nuno Sá <nuno.sa@analog.com>
I made a couple of small tweaks whilst applying this one. See below.
Jonathan
> ---
> drivers/iio/buffer/industrialio-buffer-dma.c | 23 ++++++++++-------------
> include/linux/iio/buffer-dma.h | 16 ++++++++++------
> 2 files changed, 20 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c
> index 3ab1349f9ea5..c5ee58effc92 100644
> --- a/drivers/iio/buffer/industrialio-buffer-dma.c
> +++ b/drivers/iio/buffer/industrialio-buffer-dma.c
> @@ -169,8 +169,9 @@ static struct iio_dma_buffer_queue *iio_buffer_to_queue(struct iio_buffer *buf)
> return container_of(buf, struct iio_dma_buffer_queue, buffer);
> }
>
> -static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(
> - struct iio_dma_buffer_queue *queue, size_t size, bool fileio)
> +static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(struct iio_dma_buffer_queue *queue,
> + size_t size,
> + bool fileio)
For this one I'd split it as:
static struct iio_dma_buffer_block *
iio_dma_buffer_alloc_block(struct iio_dma_buffer_queue *queue, size_t size,
bool fileio)
> {
> struct iio_dma_buffer_block *block __free(kfree) =
> kzalloc(sizeof(*block), GFP_KERNEL);
> @@ -254,7 +255,7 @@ EXPORT_SYMBOL_NS_GPL(iio_dma_buffer_block_done, "IIO_DMA_BUFFER");
> * hand the blocks back to the queue.
> */
> void iio_dma_buffer_block_list_abort(struct iio_dma_buffer_queue *queue,
> - struct list_head *list)
> + struct list_head *list)
> {
> struct iio_dma_buffer_block *block, *_block;
> bool cookie;
> @@ -434,7 +435,7 @@ static void iio_dma_buffer_fileio_free(struct iio_dma_buffer_queue *queue)
> }
>
> static void iio_dma_buffer_submit_block(struct iio_dma_buffer_queue *queue,
> - struct iio_dma_buffer_block *block)
> + struct iio_dma_buffer_block *block)
> {
> int ret;
>
> @@ -478,8 +479,7 @@ static void iio_dma_buffer_submit_block(struct iio_dma_buffer_queue *queue,
> *
> * This will allocate the DMA buffers and start the DMA transfers.
> */
> -int iio_dma_buffer_enable(struct iio_buffer *buffer,
> - struct iio_dev *indio_dev)
> +int iio_dma_buffer_enable(struct iio_buffer *buffer, struct iio_dev *indio_dev)
> {
> struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
> struct iio_dma_buffer_block *block, *_block;
> @@ -503,8 +503,7 @@ EXPORT_SYMBOL_NS_GPL(iio_dma_buffer_enable, "IIO_DMA_BUFFER");
> * Needs to be called when the device that the buffer is attached to stops
> * sampling. Typically should be the iio_buffer_access_ops disable callback.
> */
> -int iio_dma_buffer_disable(struct iio_buffer *buffer,
> - struct iio_dev *indio_dev)
> +int iio_dma_buffer_disable(struct iio_buffer *buffer, struct iio_dev *indio_dev)
> {
> struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
>
> @@ -519,7 +518,7 @@ int iio_dma_buffer_disable(struct iio_buffer *buffer,
> EXPORT_SYMBOL_NS_GPL(iio_dma_buffer_disable, "IIO_DMA_BUFFER");
>
> static void iio_dma_buffer_enqueue(struct iio_dma_buffer_queue *queue,
> - struct iio_dma_buffer_block *block)
> + struct iio_dma_buffer_block *block)
> {
> if (block->state == IIO_BLOCK_STATE_DEAD) {
> iio_buffer_block_put(block);
> @@ -531,8 +530,7 @@ static void iio_dma_buffer_enqueue(struct iio_dma_buffer_queue *queue,
> }
> }
>
> -static struct iio_dma_buffer_block *iio_dma_buffer_dequeue(
> - struct iio_dma_buffer_queue *queue)
> +static struct iio_dma_buffer_block *iio_dma_buffer_dequeue(struct iio_dma_buffer_queue *queue)
static struct iio_dma_buffer_block *
iio_dma_buffer_dequeue(struct iio_dma_buffer_queue *queue)
is a bit nicer than that long line to my eyes and common enough style.
> {
> struct iio_dma_buffer_block *block;
> unsigned int idx;
> @@ -661,8 +659,7 @@ size_t iio_dma_buffer_usage(struct iio_buffer *buf)
> for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
> block = queue->fileio.blocks[i];
>
> - if (block != queue->fileio.active_block
> - && block->state == IIO_BLOCK_STATE_DONE)
> + if (block != queue->fileio.active_block && block->state == IIO_BLOCK_STATE_DONE)
> data_available += block->size;
> }
>
> diff --git a/include/linux/iio/buffer-dma.h b/include/linux/iio/buffer-dma.h
> index 91f678e5be71..f794af0970bd 100644
> --- a/include/linux/iio/buffer-dma.h
> +++ b/include/linux/iio/buffer-dma.h
> @@ -119,7 +119,12 @@ struct iio_dma_buffer_queue {
> struct device *dev;
> const struct iio_dma_buffer_ops *ops;
>
> + /*
> > +	 * A mutex to protect accessing, configuring (e.g. enqueuing DMA blocks)
> > +	 * and doing file I/O on struct iio_dma_buffer_queue objects.
> + */
> struct mutex lock;
> + /* A spin lock to protect adding/removing blocks to the queue list */
> spinlock_t list_lock;
> struct list_head incoming;
>
> @@ -136,20 +141,19 @@ struct iio_dma_buffer_queue {
> */
> struct iio_dma_buffer_ops {
> int (*submit)(struct iio_dma_buffer_queue *queue,
> - struct iio_dma_buffer_block *block);
> + struct iio_dma_buffer_block *block);
> void (*abort)(struct iio_dma_buffer_queue *queue);
> };
>
> void iio_dma_buffer_block_done(struct iio_dma_buffer_block *block);
> void iio_dma_buffer_block_list_abort(struct iio_dma_buffer_queue *queue,
> - struct list_head *list);
> + struct list_head *list);
>
> -int iio_dma_buffer_enable(struct iio_buffer *buffer,
> - struct iio_dev *indio_dev);
> +int iio_dma_buffer_enable(struct iio_buffer *buffer, struct iio_dev *indio_dev);
> int iio_dma_buffer_disable(struct iio_buffer *buffer,
> - struct iio_dev *indio_dev);
> + struct iio_dev *indio_dev);
> int iio_dma_buffer_read(struct iio_buffer *buffer, size_t n,
> - char __user *user_buffer);
> + char __user *user_buffer);
> int iio_dma_buffer_write(struct iio_buffer *buffer, size_t n,
> const char __user *user_buffer);
> size_t iio_dma_buffer_usage(struct iio_buffer *buffer);
>
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH v2 5/6] iio: buffer-dmaengine: Use the cleanup.h API
2025-12-19 15:28 ` [PATCH v2 5/6] iio: buffer-dmaengine: Use the cleanup.h API Nuno Sá via B4 Relay
@ 2025-12-21 12:01 ` Jonathan Cameron
2025-12-22 12:40 ` Nuno Sá
0 siblings, 1 reply; 12+ messages in thread
From: Jonathan Cameron @ 2025-12-21 12:01 UTC (permalink / raw)
To: Nuno Sá via B4 Relay
Cc: nuno.sa, linux-iio, David Lechner, Andy Shevchenko
On Fri, 19 Dec 2025 15:28:16 +0000
Nuno Sá via B4 Relay <devnull+nuno.sa.analog.com@kernel.org> wrote:
> From: Nuno Sá <nuno.sa@analog.com>
>
> Make use of the cleanup.h API for locks in order to simplify some code
> paths.
>
> Signed-off-by: Nuno Sá <nuno.sa@analog.com>
Needs cleanup.h for scoped_guard() definition.
I'm not seeing this as a significant simplification, but this driver is
your problem so fair enough.
So applied with the header include added.
Jonathan
> ---
> drivers/iio/buffer/industrialio-buffer-dmaengine.c | 11 ++++-------
> 1 file changed, 4 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> index e9d9a7d39fe1..a8a4adb5ed3a 100644
> --- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> +++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> @@ -49,11 +49,9 @@ static void iio_dmaengine_buffer_block_done(void *data,
> const struct dmaengine_result *result)
> {
> struct iio_dma_buffer_block *block = data;
> - unsigned long flags;
>
> - spin_lock_irqsave(&block->queue->list_lock, flags);
> - list_del(&block->head);
> - spin_unlock_irqrestore(&block->queue->list_lock, flags);
> + scoped_guard(spinlock_irqsave, &block->queue->list_lock)
> + list_del(&block->head);
> block->bytes_used -= result->residue;
> iio_dma_buffer_block_done(block);
> }
> @@ -131,9 +129,8 @@ static int iio_dmaengine_buffer_submit_block(struct iio_dma_buffer_queue *queue,
> if (dma_submit_error(cookie))
> return dma_submit_error(cookie);
>
> - spin_lock_irq(&dmaengine_buffer->queue.list_lock);
> - list_add_tail(&block->head, &dmaengine_buffer->active);
> - spin_unlock_irq(&dmaengine_buffer->queue.list_lock);
> + scoped_guard(spinlock_irq, &dmaengine_buffer->queue.list_lock)
> + list_add_tail(&block->head, &dmaengine_buffer->active);
>
> dma_async_issue_pending(dmaengine_buffer->chan);
>
>
* Re: [PATCH v2 0/6] iio: buffer-dma: Minor cleanups and improvements
2025-12-19 15:28 [PATCH v2 0/6] iio: buffer-dma: Minor cleanups and improvements Nuno Sá via B4 Relay
` (5 preceding siblings ...)
2025-12-19 15:28 ` [PATCH v2 6/6] iio: buffer-dmaengine: Fix coding style complains Nuno Sá via B4 Relay
@ 2025-12-21 12:02 ` Jonathan Cameron
6 siblings, 0 replies; 12+ messages in thread
From: Jonathan Cameron @ 2025-12-21 12:02 UTC (permalink / raw)
To: Nuno Sá via B4 Relay
Cc: nuno.sa, linux-iio, David Lechner, Andy Shevchenko
On Fri, 19 Dec 2025 15:28:11 +0000
Nuno Sá via B4 Relay <devnull+nuno.sa.analog.com@kernel.org> wrote:
> Small series with some minor improvements for IIO DMA buffers:
> * Use lockdep instead of WARN() + mutex API;
> * Use cleanup.h;
> * Turn iio_dma_buffer_init() void;
> * And I could not resist in cleaning up coding style.
>
> Also note that in some of the coding style cleanups I deliberately went
> above the 80 col limit as I think it otherwise hurts readability. If not
> the case for everyone, I can change it.
>
Series applied with a few tweaks.
Jonathan
> ---
> Changes in v2:
> - Patch 1
> * Updated the commit subject and message (given that lockdep also WARNs())
> - Patch 2
> * Slight change on the 80 column limit when allocating the block
> (Jonathan expressed preference on that form).
> - Patch 4
> * Updated mutex/spinlock comments according to Andy's feedback.
> - Link to v1: https://lore.kernel.org/r/20251203-iio-dmabuf-improvs-v1-0-0e4907ce7322@analog.com
>
> ---
> Nuno Sá (6):
> iio: buffer-dma: Use lockdep for locking annotations
> iio: buffer-dma: Use the cleanup.h API
> iio: buffer-dma: Turn iio_dma_buffer_init() void
> iio: buffer-dma: Fix coding style complains
> iio: buffer-dmaengine: Use the cleanup.h API
> iio: buffer-dmaengine: Fix coding style complains
>
> drivers/iio/buffer/industrialio-buffer-dma.c | 187 +++++++++------------
> drivers/iio/buffer/industrialio-buffer-dmaengine.c | 22 +--
> include/linux/iio/buffer-dma.h | 20 ++-
> 3 files changed, 97 insertions(+), 132 deletions(-)
> ---
> base-commit: c5411c8b9ed1caf53604bb1a5be3f487988efc98
> change-id: 20251104-iio-dmabuf-improvs-03d942284b86
> --
>
> Thanks!
> - Nuno Sá
>
>
>
* Re: [PATCH v2 4/6] iio: buffer-dma: Fix coding style complains
2025-12-21 11:59 ` Jonathan Cameron
@ 2025-12-22 12:38 ` Nuno Sá
0 siblings, 0 replies; 12+ messages in thread
From: Nuno Sá @ 2025-12-22 12:38 UTC (permalink / raw)
To: Jonathan Cameron, Nuno Sá via B4 Relay
Cc: nuno.sa, linux-iio, David Lechner, Andy Shevchenko
On Sun, 2025-12-21 at 11:59 +0000, Jonathan Cameron wrote:
> On Fri, 19 Dec 2025 15:28:15 +0000
> Nuno Sá via B4 Relay <devnull+nuno.sa.analog.com@kernel.org> wrote:
>
> > From: Nuno Sá <nuno.sa@analog.com>
> >
> > Just making sure checkpatch is happy. No functional change intended.
> >
> > Signed-off-by: Nuno Sá <nuno.sa@analog.com>
> I made a couple of small tweaks whilst applying this one. See below.
>
LGTM... Thanks!
- Nuno Sá
> Jonathan
>
> > ---
> > drivers/iio/buffer/industrialio-buffer-dma.c | 23 ++++++++++-------------
> > include/linux/iio/buffer-dma.h | 16 ++++++++++------
> > 2 files changed, 20 insertions(+), 19 deletions(-)
> >
> > diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c
> > index 3ab1349f9ea5..c5ee58effc92 100644
> > --- a/drivers/iio/buffer/industrialio-buffer-dma.c
> > +++ b/drivers/iio/buffer/industrialio-buffer-dma.c
> > @@ -169,8 +169,9 @@ static struct iio_dma_buffer_queue *iio_buffer_to_queue(struct iio_buffer *buf)
> > return container_of(buf, struct iio_dma_buffer_queue, buffer);
> > }
> >
> > -static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(
> > - struct iio_dma_buffer_queue *queue, size_t size, bool fileio)
> > +static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(struct iio_dma_buffer_queue *queue,
> > + size_t size,
> > + bool fileio)
> For this one I'd split it as:
>
> static struct iio_dma_buffer_block *
> iio_dma_buffer_alloc_block(struct iio_dma_buffer_queue *queue, size_t size,
> bool fileio)
>
> > {
> > struct iio_dma_buffer_block *block __free(kfree) =
> > kzalloc(sizeof(*block), GFP_KERNEL);
> > @@ -254,7 +255,7 @@ EXPORT_SYMBOL_NS_GPL(iio_dma_buffer_block_done, "IIO_DMA_BUFFER");
> > * hand the blocks back to the queue.
> > */
> > void iio_dma_buffer_block_list_abort(struct iio_dma_buffer_queue *queue,
> > - struct list_head *list)
> > + struct list_head *list)
> > {
> > struct iio_dma_buffer_block *block, *_block;
> > bool cookie;
> > @@ -434,7 +435,7 @@ static void iio_dma_buffer_fileio_free(struct iio_dma_buffer_queue *queue)
> > }
> >
> > static void iio_dma_buffer_submit_block(struct iio_dma_buffer_queue *queue,
> > - struct iio_dma_buffer_block *block)
> > + struct iio_dma_buffer_block *block)
> > {
> > int ret;
> >
> > @@ -478,8 +479,7 @@ static void iio_dma_buffer_submit_block(struct iio_dma_buffer_queue *queue,
> > *
> > * This will allocate the DMA buffers and start the DMA transfers.
> > */
> > -int iio_dma_buffer_enable(struct iio_buffer *buffer,
> > - struct iio_dev *indio_dev)
> > +int iio_dma_buffer_enable(struct iio_buffer *buffer, struct iio_dev *indio_dev)
> > {
> > struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
> > struct iio_dma_buffer_block *block, *_block;
> > @@ -503,8 +503,7 @@ EXPORT_SYMBOL_NS_GPL(iio_dma_buffer_enable, "IIO_DMA_BUFFER");
> > * Needs to be called when the device that the buffer is attached to stops
> > * sampling. Typically should be the iio_buffer_access_ops disable callback.
> > */
> > -int iio_dma_buffer_disable(struct iio_buffer *buffer,
> > - struct iio_dev *indio_dev)
> > +int iio_dma_buffer_disable(struct iio_buffer *buffer, struct iio_dev *indio_dev)
> > {
> > struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
> >
> > @@ -519,7 +518,7 @@ int iio_dma_buffer_disable(struct iio_buffer *buffer,
> > EXPORT_SYMBOL_NS_GPL(iio_dma_buffer_disable, "IIO_DMA_BUFFER");
> >
> > static void iio_dma_buffer_enqueue(struct iio_dma_buffer_queue *queue,
> > - struct iio_dma_buffer_block *block)
> > + struct iio_dma_buffer_block *block)
> > {
> > if (block->state == IIO_BLOCK_STATE_DEAD) {
> > iio_buffer_block_put(block);
> > @@ -531,8 +530,7 @@ static void iio_dma_buffer_enqueue(struct iio_dma_buffer_queue *queue,
> > }
> > }
> >
> > -static struct iio_dma_buffer_block *iio_dma_buffer_dequeue(
> > - struct iio_dma_buffer_queue *queue)
> > +static struct iio_dma_buffer_block *iio_dma_buffer_dequeue(struct iio_dma_buffer_queue *queue)
>
> static struct iio_dma_buffer_block *
> iio_dma_buffer_dequeue(struct iio_dma_buffer_queue *queue)
>
> is a bit nicer than that long line to my eyes and common enough style.
>
> > {
> > struct iio_dma_buffer_block *block;
> > unsigned int idx;
> > @@ -661,8 +659,7 @@ size_t iio_dma_buffer_usage(struct iio_buffer *buf)
> > for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
> > block = queue->fileio.blocks[i];
> >
> > - if (block != queue->fileio.active_block
> > - && block->state == IIO_BLOCK_STATE_DONE)
> > +		if (block != queue->fileio.active_block && block->state == IIO_BLOCK_STATE_DONE)
> > data_available += block->size;
> > }
> >
> > diff --git a/include/linux/iio/buffer-dma.h b/include/linux/iio/buffer-dma.h
> > index 91f678e5be71..f794af0970bd 100644
> > --- a/include/linux/iio/buffer-dma.h
> > +++ b/include/linux/iio/buffer-dma.h
> > @@ -119,7 +119,12 @@ struct iio_dma_buffer_queue {
> > struct device *dev;
> > const struct iio_dma_buffer_ops *ops;
> >
> > + /*
> > > +	 * A mutex to protect accessing, configuring (e.g. enqueuing DMA blocks)
> > > +	 * and doing file I/O on struct iio_dma_buffer_queue objects.
> > + */
> > struct mutex lock;
> > + /* A spin lock to protect adding/removing blocks to the queue list */
> > spinlock_t list_lock;
> > struct list_head incoming;
> >
> > @@ -136,20 +141,19 @@ struct iio_dma_buffer_queue {
> > */
> > struct iio_dma_buffer_ops {
> > int (*submit)(struct iio_dma_buffer_queue *queue,
> > - struct iio_dma_buffer_block *block);
> > + struct iio_dma_buffer_block *block);
> > void (*abort)(struct iio_dma_buffer_queue *queue);
> > };
> >
> > void iio_dma_buffer_block_done(struct iio_dma_buffer_block *block);
> > void iio_dma_buffer_block_list_abort(struct iio_dma_buffer_queue *queue,
> > - struct list_head *list);
> > + struct list_head *list);
> >
> > -int iio_dma_buffer_enable(struct iio_buffer *buffer,
> > - struct iio_dev *indio_dev);
> > +int iio_dma_buffer_enable(struct iio_buffer *buffer, struct iio_dev *indio_dev);
> > int iio_dma_buffer_disable(struct iio_buffer *buffer,
> > - struct iio_dev *indio_dev);
> > + struct iio_dev *indio_dev);
> > int iio_dma_buffer_read(struct iio_buffer *buffer, size_t n,
> > - char __user *user_buffer);
> > + char __user *user_buffer);
> > int iio_dma_buffer_write(struct iio_buffer *buffer, size_t n,
> > const char __user *user_buffer);
> > size_t iio_dma_buffer_usage(struct iio_buffer *buffer);
> >
* Re: [PATCH v2 5/6] iio: buffer-dmaengine: Use the cleanup.h API
2025-12-21 12:01 ` Jonathan Cameron
@ 2025-12-22 12:40 ` Nuno Sá
0 siblings, 0 replies; 12+ messages in thread
From: Nuno Sá @ 2025-12-22 12:40 UTC (permalink / raw)
To: Jonathan Cameron, Nuno Sá via B4 Relay
Cc: nuno.sa, linux-iio, David Lechner, Andy Shevchenko
On Sun, 2025-12-21 at 12:01 +0000, Jonathan Cameron wrote:
> On Fri, 19 Dec 2025 15:28:16 +0000
> Nuno Sá via B4 Relay <devnull+nuno.sa.analog.com@kernel.org> wrote:
>
> > From: Nuno Sá <nuno.sa@analog.com>
> >
> > Make use of the cleanup.h API for locks in order to simplify some code
> > paths.
> >
> > Signed-off-by: Nuno Sá <nuno.sa@analog.com>
>
> Needs cleanup.h for scoped_guard() definition.
>
> I'm not seeing this as a significant simplification, but this driver is
> your problem so fair enough.
Mostly for consistency, plus the minor improvement of not needing the local flags variable.
>
> So applied with the header include added.
Thanks!
- Nuno Sá
> Jonathan
>
>
>
> > ---
> > drivers/iio/buffer/industrialio-buffer-dmaengine.c | 11 ++++-------
> > 1 file changed, 4 insertions(+), 7 deletions(-)
> >
> > diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> > index e9d9a7d39fe1..a8a4adb5ed3a 100644
> > --- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> > +++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> > @@ -49,11 +49,9 @@ static void iio_dmaengine_buffer_block_done(void *data,
> > const struct dmaengine_result *result)
> > {
> > struct iio_dma_buffer_block *block = data;
> > - unsigned long flags;
> >
> > - spin_lock_irqsave(&block->queue->list_lock, flags);
> > - list_del(&block->head);
> > - spin_unlock_irqrestore(&block->queue->list_lock, flags);
> > + scoped_guard(spinlock_irqsave, &block->queue->list_lock)
> > + list_del(&block->head);
> > block->bytes_used -= result->residue;
> > iio_dma_buffer_block_done(block);
> > }
> > @@ -131,9 +129,8 @@ static int iio_dmaengine_buffer_submit_block(struct iio_dma_buffer_queue *queue,
> > if (dma_submit_error(cookie))
> > return dma_submit_error(cookie);
> >
> > - spin_lock_irq(&dmaengine_buffer->queue.list_lock);
> > - list_add_tail(&block->head, &dmaengine_buffer->active);
> > - spin_unlock_irq(&dmaengine_buffer->queue.list_lock);
> > + scoped_guard(spinlock_irq, &dmaengine_buffer->queue.list_lock)
> > + list_add_tail(&block->head, &dmaengine_buffer->active);
> >
> > dma_async_issue_pending(dmaengine_buffer->chan);
> >
> >
Thread overview: 12+ messages
2025-12-19 15:28 [PATCH v2 0/6] iio: buffer-dma: Minor cleanups and improvements Nuno Sá via B4 Relay
2025-12-19 15:28 ` [PATCH v2 1/6] iio: buffer-dma: Use lockdep for locking annotations Nuno Sá via B4 Relay
2025-12-19 15:28 ` [PATCH v2 2/6] iio: buffer-dma: Use the cleanup.h API Nuno Sá via B4 Relay
2025-12-19 15:28 ` [PATCH v2 3/6] iio: buffer-dma: Turn iio_dma_buffer_init() void Nuno Sá via B4 Relay
2025-12-19 15:28 ` [PATCH v2 4/6] iio: buffer-dma: Fix coding style complains Nuno Sá via B4 Relay
2025-12-21 11:59 ` Jonathan Cameron
2025-12-22 12:38 ` Nuno Sá
2025-12-19 15:28 ` [PATCH v2 5/6] iio: buffer-dmaengine: Use the cleanup.h API Nuno Sá via B4 Relay
2025-12-21 12:01 ` Jonathan Cameron
2025-12-22 12:40 ` Nuno Sá
2025-12-19 15:28 ` [PATCH v2 6/6] iio: buffer-dmaengine: Fix coding style complains Nuno Sá via B4 Relay
2025-12-21 12:02 ` [PATCH v2 0/6] iio: buffer-dma: Minor cleanups and improvements Jonathan Cameron