* [PATCH 0/6] iio: buffer-dma: Minor cleanups and improvements
@ 2025-12-03 15:11 Nuno Sá via B4 Relay
2025-12-03 15:11 ` [PATCH 1/6] iio: buffer-dma: use lockdep instead of WARN() Nuno Sá via B4 Relay
` (5 more replies)
0 siblings, 6 replies; 23+ messages in thread
From: Nuno Sá via B4 Relay @ 2025-12-03 15:11 UTC (permalink / raw)
To: linux-iio; +Cc: Jonathan Cameron, David Lechner, Andy Shevchenko
Small series with some minor improvements for IIO DMA buffers:
* Use lockdep instead of WARN() + mutex API;
* Use cleanup.h;
* Turn iio_dma_buffer_init() void;
* And I could not resist cleaning up the coding style.
Also note that in some of the coding style cleanups I deliberately went
above the 80 column limit, as I think sticking to it otherwise hurts
readability. If that is not the case for everyone, I can change it.
---
Nuno Sá (6):
iio: buffer-dma: use lockdep instead of WARN()
iio: buffer-dma: Use the cleanup.h API
iio: buffer-dma: Turn iio_dma_buffer_init() void
iio: buffer-dma: Fix coding style complaints
iio: buffer-dmaengine: Use the cleanup.h API
iio: buffer-dmaengine: Fix coding style complaints
drivers/iio/buffer/industrialio-buffer-dma.c | 186 +++++++++------------
drivers/iio/buffer/industrialio-buffer-dmaengine.c | 22 +--
include/linux/iio/buffer-dma.h | 20 ++-
3 files changed, 96 insertions(+), 132 deletions(-)
---
base-commit: c5411c8b9ed1caf53604bb1a5be3f487988efc98
change-id: 20251104-iio-dmabuf-improvs-03d942284b86
--
Thanks!
- Nuno Sá
^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH 1/6] iio: buffer-dma: use lockdep instead of WARN()
2025-12-03 15:11 [PATCH 0/6] iio: buffer-dma: Minor cleanups and improvements Nuno Sá via B4 Relay
@ 2025-12-03 15:11 ` Nuno Sá via B4 Relay
2025-12-03 16:20 ` Andy Shevchenko
2025-12-03 15:11 ` [PATCH 2/6] iio: buffer-dma: Use the cleanup.h API Nuno Sá via B4 Relay
` (4 subsequent siblings)
5 siblings, 1 reply; 23+ messages in thread
From: Nuno Sá via B4 Relay @ 2025-12-03 15:11 UTC (permalink / raw)
To: linux-iio; +Cc: Jonathan Cameron, David Lechner, Andy Shevchenko
From: Nuno Sá <nuno.sa@analog.com>
As documented, WARN() should be used with care, given that it can panic
running kernels (depending on command line options). So, instead of
using it to make sure a lock is held, use the annotations we already
have in the kernel for this very purpose.
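For context, sketched outside of this driver the swap looks like this
(function and parameter names hypothetical, kernel context assumed):

```c
#include <linux/lockdep.h>
#include <linux/mutex.h>

/* Hypothetical helper that must be called with @lock already held */
static void do_locked_work(struct mutex *lock)
{
	/*
	 * Before: WARN_ON(!mutex_is_locked(lock)); -- the check is always
	 * compiled in and may even panic when panic_on_warn is set.
	 *
	 * After: a lockdep annotation, only active (and aware of which
	 * context holds the lock) with CONFIG_LOCKDEP=y:
	 */
	lockdep_assert_held(lock);
}
```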
Signed-off-by: Nuno Sá <nuno.sa@analog.com>
---
drivers/iio/buffer/industrialio-buffer-dma.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c
index ee294a775e8a..617b2d550c2f 100644
--- a/drivers/iio/buffer/industrialio-buffer-dma.c
+++ b/drivers/iio/buffer/industrialio-buffer-dma.c
@@ -6,6 +6,7 @@
#include <linux/atomic.h>
#include <linux/cleanup.h>
+#include <linux/lockdep.h>
#include <linux/slab.h>
#include <linux/kernel.h>
#include <linux/module.h>
@@ -764,7 +765,7 @@ int iio_dma_buffer_enqueue_dmabuf(struct iio_buffer *buffer,
bool cookie;
int ret;
- WARN_ON(!mutex_is_locked(&queue->lock));
+ lockdep_assert_held(&queue->lock);
cookie = dma_fence_begin_signalling();
--
2.52.0
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [PATCH 2/6] iio: buffer-dma: Use the cleanup.h API
2025-12-03 15:11 [PATCH 0/6] iio: buffer-dma: Minor cleanups and improvements Nuno Sá via B4 Relay
2025-12-03 15:11 ` [PATCH 1/6] iio: buffer-dma: use lockdep instead of WARN() Nuno Sá via B4 Relay
@ 2025-12-03 15:11 ` Nuno Sá via B4 Relay
2025-12-03 16:23 ` Andy Shevchenko
2025-12-03 15:11 ` [PATCH 3/6] iio: buffer-dma: Turn iio_dma_buffer_init() void Nuno Sá via B4 Relay
` (3 subsequent siblings)
5 siblings, 1 reply; 23+ messages in thread
From: Nuno Sá via B4 Relay @ 2025-12-03 15:11 UTC (permalink / raw)
To: linux-iio; +Cc: Jonathan Cameron, David Lechner, Andy Shevchenko
From: Nuno Sá <nuno.sa@analog.com>
Make use of the cleanup.h API for locks and memory allocation in order
to simplify some code paths.
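A condensed sketch of the three cleanup.h forms this patch uses (names
and sizes hypothetical, kernel context assumed):

```c
#include <linux/cleanup.h>
#include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

static int cleanup_example(struct mutex *lock, spinlock_t *list_lock)
{
	/* guard(): unlocks automatically on every return path */
	guard(mutex)(lock);

	/* scoped_guard(): lock held only for the enclosed block */
	scoped_guard(spinlock_irq, list_lock) {
		/* ... touch the list ... */
	}

	/*
	 * __free(): freed automatically unless ownership is handed off
	 * (e.g. via return_ptr())
	 */
	void *buf __free(kfree) = kzalloc(16, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	return 0; /* buf is kfree()d here */
}
```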
Signed-off-by: Nuno Sá <nuno.sa@analog.com>
---
drivers/iio/buffer/industrialio-buffer-dma.c | 154 +++++++++++----------------
1 file changed, 61 insertions(+), 93 deletions(-)
diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c
index 617b2d550c2f..026b42552a0a 100644
--- a/drivers/iio/buffer/industrialio-buffer-dma.c
+++ b/drivers/iio/buffer/industrialio-buffer-dma.c
@@ -136,9 +136,8 @@ static void iio_dma_buffer_cleanup_worker(struct work_struct *work)
struct iio_dma_buffer_block *block, *_block;
LIST_HEAD(block_list);
- spin_lock_irq(&iio_dma_buffer_dead_blocks_lock);
- list_splice_tail_init(&iio_dma_buffer_dead_blocks, &block_list);
- spin_unlock_irq(&iio_dma_buffer_dead_blocks_lock);
+ scoped_guard(spinlock_irq, &iio_dma_buffer_dead_blocks_lock)
+ list_splice_tail_init(&iio_dma_buffer_dead_blocks, &block_list);
list_for_each_entry_safe(block, _block, &block_list, head)
iio_buffer_block_release(&block->kref);
@@ -148,13 +147,11 @@ static DECLARE_WORK(iio_dma_buffer_cleanup_work, iio_dma_buffer_cleanup_worker);
static void iio_buffer_block_release_atomic(struct kref *kref)
{
struct iio_dma_buffer_block *block;
- unsigned long flags;
block = container_of(kref, struct iio_dma_buffer_block, kref);
- spin_lock_irqsave(&iio_dma_buffer_dead_blocks_lock, flags);
- list_add_tail(&block->head, &iio_dma_buffer_dead_blocks);
- spin_unlock_irqrestore(&iio_dma_buffer_dead_blocks_lock, flags);
+ scoped_guard(spinlock_irqsave, &iio_dma_buffer_dead_blocks_lock)
+ list_add_tail(&block->head, &iio_dma_buffer_dead_blocks);
schedule_work(&iio_dma_buffer_cleanup_work);
}
@@ -175,19 +172,15 @@ static struct iio_dma_buffer_queue *iio_buffer_to_queue(struct iio_buffer *buf)
static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(
struct iio_dma_buffer_queue *queue, size_t size, bool fileio)
{
- struct iio_dma_buffer_block *block;
-
- block = kzalloc(sizeof(*block), GFP_KERNEL);
+ struct iio_dma_buffer_block *block __free(kfree) = kzalloc(sizeof(*block), GFP_KERNEL);
if (!block)
return NULL;
if (fileio) {
block->vaddr = dma_alloc_coherent(queue->dev, PAGE_ALIGN(size),
&block->phys_addr, GFP_KERNEL);
- if (!block->vaddr) {
- kfree(block);
+ if (!block->vaddr)
return NULL;
- }
}
block->fileio = fileio;
@@ -202,7 +195,7 @@ static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(
if (!fileio)
atomic_inc(&queue->num_dmabufs);
- return block;
+ return_ptr(block);
}
static void _iio_dma_buffer_block_done(struct iio_dma_buffer_block *block)
@@ -233,14 +226,12 @@ static void iio_dma_buffer_queue_wake(struct iio_dma_buffer_queue *queue)
void iio_dma_buffer_block_done(struct iio_dma_buffer_block *block)
{
struct iio_dma_buffer_queue *queue = block->queue;
- unsigned long flags;
bool cookie;
cookie = dma_fence_begin_signalling();
- spin_lock_irqsave(&queue->list_lock, flags);
- _iio_dma_buffer_block_done(block);
- spin_unlock_irqrestore(&queue->list_lock, flags);
+ scoped_guard(spinlock_irqsave, &queue->list_lock)
+ _iio_dma_buffer_block_done(block);
if (!block->fileio)
iio_buffer_signal_dmabuf_done(block->fence, 0);
@@ -265,22 +256,22 @@ void iio_dma_buffer_block_list_abort(struct iio_dma_buffer_queue *queue,
struct list_head *list)
{
struct iio_dma_buffer_block *block, *_block;
- unsigned long flags;
bool cookie;
cookie = dma_fence_begin_signalling();
- spin_lock_irqsave(&queue->list_lock, flags);
- list_for_each_entry_safe(block, _block, list, head) {
- list_del(&block->head);
- block->bytes_used = 0;
- _iio_dma_buffer_block_done(block);
+ scoped_guard(spinlock_irqsave, &queue->list_lock) {
+ list_for_each_entry_safe(block, _block, list, head) {
+ list_del(&block->head);
+ block->bytes_used = 0;
+ _iio_dma_buffer_block_done(block);
- if (!block->fileio)
- iio_buffer_signal_dmabuf_done(block->fence, -EINTR);
- iio_buffer_block_put_atomic(block);
+ if (!block->fileio)
+ iio_buffer_signal_dmabuf_done(block->fence,
+ -EINTR);
+ iio_buffer_block_put_atomic(block);
+ }
}
- spin_unlock_irqrestore(&queue->list_lock, flags);
if (queue->fileio.enabled)
queue->fileio.enabled = false;
@@ -329,7 +320,6 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
struct iio_dma_buffer_block *block;
bool try_reuse = false;
size_t size;
- int ret = 0;
int i;
/*
@@ -340,13 +330,13 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
size = DIV_ROUND_UP(queue->buffer.bytes_per_datum *
queue->buffer.length, 2);
- mutex_lock(&queue->lock);
+ guard(mutex)(&queue->lock);
queue->fileio.enabled = iio_dma_buffer_can_use_fileio(queue);
/* If DMABUFs were created, disable fileio interface */
if (!queue->fileio.enabled)
- goto out_unlock;
+ return 0;
/* Allocations are page aligned */
if (PAGE_ALIGN(queue->fileio.block_size) == PAGE_ALIGN(size))
@@ -355,22 +345,22 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
queue->fileio.block_size = size;
queue->fileio.active_block = NULL;
- spin_lock_irq(&queue->list_lock);
- for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
- block = queue->fileio.blocks[i];
+ scoped_guard(spinlock_irq, &queue->list_lock) {
+ for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
+ block = queue->fileio.blocks[i];
- /* If we can't re-use it free it */
- if (block && (!iio_dma_block_reusable(block) || !try_reuse))
- block->state = IIO_BLOCK_STATE_DEAD;
+ /* If we can't re-use it free it */
+ if (block && (!iio_dma_block_reusable(block) || !try_reuse))
+ block->state = IIO_BLOCK_STATE_DEAD;
+ }
+
+ /*
+ * At this point all blocks are either owned by the core or
+ * marked as dead. This means we can reset the lists without
+ * having to fear corruption.
+ */
}
- /*
- * At this point all blocks are either owned by the core or marked as
- * dead. This means we can reset the lists without having to fear
- * corrution.
- */
- spin_unlock_irq(&queue->list_lock);
-
INIT_LIST_HEAD(&queue->incoming);
for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
@@ -389,10 +379,9 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
if (!block) {
block = iio_dma_buffer_alloc_block(queue, size, true);
- if (!block) {
- ret = -ENOMEM;
- goto out_unlock;
- }
+ if (!block)
+ return -ENOMEM;
+
queue->fileio.blocks[i] = block;
}
@@ -416,10 +405,7 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
}
}
-out_unlock:
- mutex_unlock(&queue->lock);
-
- return ret;
+ return 0;
}
EXPORT_SYMBOL_NS_GPL(iio_dma_buffer_request_update, "IIO_DMA_BUFFER");
@@ -427,13 +413,13 @@ static void iio_dma_buffer_fileio_free(struct iio_dma_buffer_queue *queue)
{
unsigned int i;
- spin_lock_irq(&queue->list_lock);
- for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
- if (!queue->fileio.blocks[i])
- continue;
- queue->fileio.blocks[i]->state = IIO_BLOCK_STATE_DEAD;
+ scoped_guard(spinlock_irq, &queue->list_lock) {
+ for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
+ if (!queue->fileio.blocks[i])
+ continue;
+ queue->fileio.blocks[i]->state = IIO_BLOCK_STATE_DEAD;
+ }
}
- spin_unlock_irq(&queue->list_lock);
INIT_LIST_HEAD(&queue->incoming);
@@ -497,13 +483,12 @@ int iio_dma_buffer_enable(struct iio_buffer *buffer,
struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
struct iio_dma_buffer_block *block, *_block;
- mutex_lock(&queue->lock);
+ guard(mutex)(&queue->lock);
queue->active = true;
list_for_each_entry_safe(block, _block, &queue->incoming, head) {
list_del(&block->head);
iio_dma_buffer_submit_block(queue, block);
}
- mutex_unlock(&queue->lock);
return 0;
}
@@ -522,12 +507,11 @@ int iio_dma_buffer_disable(struct iio_buffer *buffer,
{
struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
- mutex_lock(&queue->lock);
+ guard(mutex)(&queue->lock);
queue->active = false;
if (queue->ops && queue->ops->abort)
queue->ops->abort(queue);
- mutex_unlock(&queue->lock);
return 0;
}
@@ -552,19 +536,16 @@ static struct iio_dma_buffer_block *iio_dma_buffer_dequeue(
struct iio_dma_buffer_block *block;
unsigned int idx;
- spin_lock_irq(&queue->list_lock);
+ guard(spinlock_irq)(&queue->list_lock);
idx = queue->fileio.next_dequeue;
block = queue->fileio.blocks[idx];
- if (block->state == IIO_BLOCK_STATE_DONE) {
- idx = (idx + 1) % ARRAY_SIZE(queue->fileio.blocks);
- queue->fileio.next_dequeue = idx;
- } else {
- block = NULL;
- }
+ if (block->state != IIO_BLOCK_STATE_DONE)
+ return NULL;
- spin_unlock_irq(&queue->list_lock);
+ idx = (idx + 1) % ARRAY_SIZE(queue->fileio.blocks);
+ queue->fileio.next_dequeue = idx;
return block;
}
@@ -580,14 +561,13 @@ static int iio_dma_buffer_io(struct iio_buffer *buffer, size_t n,
if (n < buffer->bytes_per_datum)
return -EINVAL;
- mutex_lock(&queue->lock);
+ guard(mutex)(&queue->lock);
if (!queue->fileio.active_block) {
block = iio_dma_buffer_dequeue(queue);
- if (block == NULL) {
- ret = 0;
- goto out_unlock;
- }
+ if (!block)
+ return 0;
+
queue->fileio.pos = 0;
queue->fileio.active_block = block;
} else {
@@ -603,10 +583,8 @@ static int iio_dma_buffer_io(struct iio_buffer *buffer, size_t n,
ret = copy_from_user(addr, user_buffer, n);
else
ret = copy_to_user(user_buffer, addr, n);
- if (ret) {
- ret = -EFAULT;
- goto out_unlock;
- }
+ if (ret)
+ return -EFAULT;
queue->fileio.pos += n;
@@ -615,12 +593,7 @@ static int iio_dma_buffer_io(struct iio_buffer *buffer, size_t n,
iio_dma_buffer_enqueue(queue, block);
}
- ret = n;
-
-out_unlock:
- mutex_unlock(&queue->lock);
-
- return ret;
+ return n;
}
/**
@@ -678,11 +651,11 @@ size_t iio_dma_buffer_usage(struct iio_buffer *buf)
* but won't increase since all blocks are in use.
*/
- mutex_lock(&queue->lock);
+ guard(mutex)(&queue->lock);
if (queue->fileio.active_block)
data_available += queue->fileio.active_block->size;
- spin_lock_irq(&queue->list_lock);
+ guard(spinlock_irq)(&queue->list_lock);
for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
block = queue->fileio.blocks[i];
@@ -692,9 +665,6 @@ size_t iio_dma_buffer_usage(struct iio_buffer *buf)
data_available += block->size;
}
- spin_unlock_irq(&queue->list_lock);
- mutex_unlock(&queue->lock);
-
return data_available;
}
EXPORT_SYMBOL_NS_GPL(iio_dma_buffer_usage, "IIO_DMA_BUFFER");
@@ -876,12 +846,10 @@ EXPORT_SYMBOL_NS_GPL(iio_dma_buffer_init, "IIO_DMA_BUFFER");
*/
void iio_dma_buffer_exit(struct iio_dma_buffer_queue *queue)
{
- mutex_lock(&queue->lock);
+ guard(mutex)(&queue->lock);
iio_dma_buffer_fileio_free(queue);
queue->ops = NULL;
-
- mutex_unlock(&queue->lock);
}
EXPORT_SYMBOL_NS_GPL(iio_dma_buffer_exit, "IIO_DMA_BUFFER");
--
2.52.0
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [PATCH 3/6] iio: buffer-dma: Turn iio_dma_buffer_init() void
2025-12-03 15:11 [PATCH 0/6] iio: buffer-dma: Minor cleanups and improvements Nuno Sá via B4 Relay
2025-12-03 15:11 ` [PATCH 1/6] iio: buffer-dma: use lockdep instead of WARN() Nuno Sá via B4 Relay
2025-12-03 15:11 ` [PATCH 2/6] iio: buffer-dma: Use the cleanup.h API Nuno Sá via B4 Relay
@ 2025-12-03 15:11 ` Nuno Sá via B4 Relay
2025-12-03 16:34 ` Andy Shevchenko
2025-12-03 15:11 ` [PATCH 4/6] iio: buffer-dma: Fix coding style complaints Nuno Sá via B4 Relay
` (2 subsequent siblings)
5 siblings, 1 reply; 23+ messages in thread
From: Nuno Sá via B4 Relay @ 2025-12-03 15:11 UTC (permalink / raw)
To: linux-iio; +Cc: Jonathan Cameron, David Lechner, Andy Shevchenko
From: Nuno Sá <nuno.sa@analog.com>
iio_dma_buffer_init() always returns 0, so there is no point in
returning an int.
While at it, fix a mismatch between the function declaration and the
definition regarding the name of the struct device parameter (dma_dev
vs. dev).
Signed-off-by: Nuno Sá <nuno.sa@analog.com>
---
drivers/iio/buffer/industrialio-buffer-dma.c | 6 ++----
include/linux/iio/buffer-dma.h | 4 ++--
2 files changed, 4 insertions(+), 6 deletions(-)
diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c
index 026b42552a0a..0a6891541ed3 100644
--- a/drivers/iio/buffer/industrialio-buffer-dma.c
+++ b/drivers/iio/buffer/industrialio-buffer-dma.c
@@ -819,8 +819,8 @@ EXPORT_SYMBOL_NS_GPL(iio_dma_buffer_set_length, "IIO_DMA_BUFFER");
* should refer to the device that will perform the DMA to ensure that
* allocations are done from a memory region that can be accessed by the device.
*/
-int iio_dma_buffer_init(struct iio_dma_buffer_queue *queue,
- struct device *dev, const struct iio_dma_buffer_ops *ops)
+void iio_dma_buffer_init(struct iio_dma_buffer_queue *queue, struct device *dev,
+ const struct iio_dma_buffer_ops *ops)
{
iio_buffer_init(&queue->buffer);
queue->buffer.length = PAGE_SIZE;
@@ -832,8 +832,6 @@ int iio_dma_buffer_init(struct iio_dma_buffer_queue *queue,
mutex_init(&queue->lock);
spin_lock_init(&queue->list_lock);
-
- return 0;
}
EXPORT_SYMBOL_NS_GPL(iio_dma_buffer_init, "IIO_DMA_BUFFER");
diff --git a/include/linux/iio/buffer-dma.h b/include/linux/iio/buffer-dma.h
index 5eb66a399002..91f678e5be71 100644
--- a/include/linux/iio/buffer-dma.h
+++ b/include/linux/iio/buffer-dma.h
@@ -157,8 +157,8 @@ int iio_dma_buffer_set_bytes_per_datum(struct iio_buffer *buffer, size_t bpd);
int iio_dma_buffer_set_length(struct iio_buffer *buffer, unsigned int length);
int iio_dma_buffer_request_update(struct iio_buffer *buffer);
-int iio_dma_buffer_init(struct iio_dma_buffer_queue *queue,
- struct device *dma_dev, const struct iio_dma_buffer_ops *ops);
+void iio_dma_buffer_init(struct iio_dma_buffer_queue *queue, struct device *dev,
+ const struct iio_dma_buffer_ops *ops);
void iio_dma_buffer_exit(struct iio_dma_buffer_queue *queue);
void iio_dma_buffer_release(struct iio_dma_buffer_queue *queue);
--
2.52.0
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [PATCH 4/6] iio: buffer-dma: Fix coding style complaints
2025-12-03 15:11 [PATCH 0/6] iio: buffer-dma: Minor cleanups and improvements Nuno Sá via B4 Relay
` (2 preceding siblings ...)
2025-12-03 15:11 ` [PATCH 3/6] iio: buffer-dma: Turn iio_dma_buffer_init() void Nuno Sá via B4 Relay
@ 2025-12-03 15:11 ` Nuno Sá via B4 Relay
2025-12-03 16:29 ` Andy Shevchenko
2025-12-03 15:11 ` [PATCH 5/6] iio: buffer-dmaengine: Use the cleanup.h API Nuno Sá via B4 Relay
2025-12-03 15:11 ` [PATCH 6/6] iio: buffer-dmaengine: Fix coding style complaints Nuno Sá via B4 Relay
5 siblings, 1 reply; 23+ messages in thread
From: Nuno Sá via B4 Relay @ 2025-12-03 15:11 UTC (permalink / raw)
To: linux-iio; +Cc: Jonathan Cameron, David Lechner, Andy Shevchenko
From: Nuno Sá <nuno.sa@analog.com>
Just making sure checkpatch is happy. No functional change intended.
Signed-off-by: Nuno Sá <nuno.sa@analog.com>
---
drivers/iio/buffer/industrialio-buffer-dma.c | 23 ++++++++++-------------
include/linux/iio/buffer-dma.h | 16 ++++++++++------
2 files changed, 20 insertions(+), 19 deletions(-)
diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c
index 0a6891541ed3..5c34cab28d34 100644
--- a/drivers/iio/buffer/industrialio-buffer-dma.c
+++ b/drivers/iio/buffer/industrialio-buffer-dma.c
@@ -169,8 +169,9 @@ static struct iio_dma_buffer_queue *iio_buffer_to_queue(struct iio_buffer *buf)
return container_of(buf, struct iio_dma_buffer_queue, buffer);
}
-static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(
- struct iio_dma_buffer_queue *queue, size_t size, bool fileio)
+static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(struct iio_dma_buffer_queue *queue,
+ size_t size,
+ bool fileio)
{
struct iio_dma_buffer_block *block __free(kfree) = kzalloc(sizeof(*block), GFP_KERNEL);
if (!block)
@@ -253,7 +254,7 @@ EXPORT_SYMBOL_NS_GPL(iio_dma_buffer_block_done, "IIO_DMA_BUFFER");
* hand the blocks back to the queue.
*/
void iio_dma_buffer_block_list_abort(struct iio_dma_buffer_queue *queue,
- struct list_head *list)
+ struct list_head *list)
{
struct iio_dma_buffer_block *block, *_block;
bool cookie;
@@ -433,7 +434,7 @@ static void iio_dma_buffer_fileio_free(struct iio_dma_buffer_queue *queue)
}
static void iio_dma_buffer_submit_block(struct iio_dma_buffer_queue *queue,
- struct iio_dma_buffer_block *block)
+ struct iio_dma_buffer_block *block)
{
int ret;
@@ -477,8 +478,7 @@ static void iio_dma_buffer_submit_block(struct iio_dma_buffer_queue *queue,
*
* This will allocate the DMA buffers and start the DMA transfers.
*/
-int iio_dma_buffer_enable(struct iio_buffer *buffer,
- struct iio_dev *indio_dev)
+int iio_dma_buffer_enable(struct iio_buffer *buffer, struct iio_dev *indio_dev)
{
struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
struct iio_dma_buffer_block *block, *_block;
@@ -502,8 +502,7 @@ EXPORT_SYMBOL_NS_GPL(iio_dma_buffer_enable, "IIO_DMA_BUFFER");
* Needs to be called when the device that the buffer is attached to stops
* sampling. Typically should be the iio_buffer_access_ops disable callback.
*/
-int iio_dma_buffer_disable(struct iio_buffer *buffer,
- struct iio_dev *indio_dev)
+int iio_dma_buffer_disable(struct iio_buffer *buffer, struct iio_dev *indio_dev)
{
struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
@@ -518,7 +517,7 @@ int iio_dma_buffer_disable(struct iio_buffer *buffer,
EXPORT_SYMBOL_NS_GPL(iio_dma_buffer_disable, "IIO_DMA_BUFFER");
static void iio_dma_buffer_enqueue(struct iio_dma_buffer_queue *queue,
- struct iio_dma_buffer_block *block)
+ struct iio_dma_buffer_block *block)
{
if (block->state == IIO_BLOCK_STATE_DEAD) {
iio_buffer_block_put(block);
@@ -530,8 +529,7 @@ static void iio_dma_buffer_enqueue(struct iio_dma_buffer_queue *queue,
}
}
-static struct iio_dma_buffer_block *iio_dma_buffer_dequeue(
- struct iio_dma_buffer_queue *queue)
+static struct iio_dma_buffer_block *iio_dma_buffer_dequeue(struct iio_dma_buffer_queue *queue)
{
struct iio_dma_buffer_block *block;
unsigned int idx;
@@ -660,8 +658,7 @@ size_t iio_dma_buffer_usage(struct iio_buffer *buf)
for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
block = queue->fileio.blocks[i];
- if (block != queue->fileio.active_block
- && block->state == IIO_BLOCK_STATE_DONE)
+ if (block != queue->fileio.active_block && block->state == IIO_BLOCK_STATE_DONE)
data_available += block->size;
}
diff --git a/include/linux/iio/buffer-dma.h b/include/linux/iio/buffer-dma.h
index 91f678e5be71..8573a21a33ba 100644
--- a/include/linux/iio/buffer-dma.h
+++ b/include/linux/iio/buffer-dma.h
@@ -119,7 +119,12 @@ struct iio_dma_buffer_queue {
struct device *dev;
const struct iio_dma_buffer_ops *ops;
+ /*
+ * mutex to protect accessing, configuring (eg: enqueuing DMA blocks)
+ * and do file IO on struct iio_dma_buffer_queue objects.
+ */
struct mutex lock;
+ /* spinlock to protect adding/removing blocks to the queue list */
spinlock_t list_lock;
struct list_head incoming;
@@ -136,20 +141,19 @@ struct iio_dma_buffer_queue {
*/
struct iio_dma_buffer_ops {
int (*submit)(struct iio_dma_buffer_queue *queue,
- struct iio_dma_buffer_block *block);
+ struct iio_dma_buffer_block *block);
void (*abort)(struct iio_dma_buffer_queue *queue);
};
void iio_dma_buffer_block_done(struct iio_dma_buffer_block *block);
void iio_dma_buffer_block_list_abort(struct iio_dma_buffer_queue *queue,
- struct list_head *list);
+ struct list_head *list);
-int iio_dma_buffer_enable(struct iio_buffer *buffer,
- struct iio_dev *indio_dev);
+int iio_dma_buffer_enable(struct iio_buffer *buffer, struct iio_dev *indio_dev);
int iio_dma_buffer_disable(struct iio_buffer *buffer,
- struct iio_dev *indio_dev);
+ struct iio_dev *indio_dev);
int iio_dma_buffer_read(struct iio_buffer *buffer, size_t n,
- char __user *user_buffer);
+ char __user *user_buffer);
int iio_dma_buffer_write(struct iio_buffer *buffer, size_t n,
const char __user *user_buffer);
size_t iio_dma_buffer_usage(struct iio_buffer *buffer);
--
2.52.0
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [PATCH 5/6] iio: buffer-dmaengine: Use the cleanup.h API
2025-12-03 15:11 [PATCH 0/6] iio: buffer-dma: Minor cleanups and improvements Nuno Sá via B4 Relay
` (3 preceding siblings ...)
2025-12-03 15:11 ` [PATCH 4/6] iio: buffer-dma: Fix coding style complaints Nuno Sá via B4 Relay
@ 2025-12-03 15:11 ` Nuno Sá via B4 Relay
2025-12-03 15:11 ` [PATCH 6/6] iio: buffer-dmaengine: Fix coding style complaints Nuno Sá via B4 Relay
5 siblings, 0 replies; 23+ messages in thread
From: Nuno Sá via B4 Relay @ 2025-12-03 15:11 UTC (permalink / raw)
To: linux-iio; +Cc: Jonathan Cameron, David Lechner, Andy Shevchenko
From: Nuno Sá <nuno.sa@analog.com>
Make use of the cleanup.h API for locks in order to simplify some code
paths.
Signed-off-by: Nuno Sá <nuno.sa@analog.com>
---
drivers/iio/buffer/industrialio-buffer-dmaengine.c | 11 ++++-------
1 file changed, 4 insertions(+), 7 deletions(-)
diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
index e9d9a7d39fe1..a8a4adb5ed3a 100644
--- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
@@ -49,11 +49,9 @@ static void iio_dmaengine_buffer_block_done(void *data,
const struct dmaengine_result *result)
{
struct iio_dma_buffer_block *block = data;
- unsigned long flags;
- spin_lock_irqsave(&block->queue->list_lock, flags);
- list_del(&block->head);
- spin_unlock_irqrestore(&block->queue->list_lock, flags);
+ scoped_guard(spinlock_irqsave, &block->queue->list_lock)
+ list_del(&block->head);
block->bytes_used -= result->residue;
iio_dma_buffer_block_done(block);
}
@@ -131,9 +129,8 @@ static int iio_dmaengine_buffer_submit_block(struct iio_dma_buffer_queue *queue,
if (dma_submit_error(cookie))
return dma_submit_error(cookie);
- spin_lock_irq(&dmaengine_buffer->queue.list_lock);
- list_add_tail(&block->head, &dmaengine_buffer->active);
- spin_unlock_irq(&dmaengine_buffer->queue.list_lock);
+ scoped_guard(spinlock_irq, &dmaengine_buffer->queue.list_lock)
+ list_add_tail(&block->head, &dmaengine_buffer->active);
dma_async_issue_pending(dmaengine_buffer->chan);
--
2.52.0
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [PATCH 6/6] iio: buffer-dmaengine: Fix coding style complaints
2025-12-03 15:11 [PATCH 0/6] iio: buffer-dma: Minor cleanups and improvements Nuno Sá via B4 Relay
` (4 preceding siblings ...)
2025-12-03 15:11 ` [PATCH 5/6] iio: buffer-dmaengine: Use the cleanup.h API Nuno Sá via B4 Relay
@ 2025-12-03 15:11 ` Nuno Sá via B4 Relay
2025-12-03 16:31 ` Andy Shevchenko
5 siblings, 1 reply; 23+ messages in thread
From: Nuno Sá via B4 Relay @ 2025-12-03 15:11 UTC (permalink / raw)
To: linux-iio; +Cc: Jonathan Cameron, David Lechner, Andy Shevchenko
From: Nuno Sá <nuno.sa@analog.com>
Just making sure checkpatch is happy. No functional change intended.
Signed-off-by: Nuno Sá <nuno.sa@analog.com>
---
drivers/iio/buffer/industrialio-buffer-dmaengine.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
index a8a4adb5ed3a..b906ceaff9e1 100644
--- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
@@ -39,14 +39,13 @@ struct dmaengine_buffer {
size_t max_size;
};
-static struct dmaengine_buffer *iio_buffer_to_dmaengine_buffer(
- struct iio_buffer *buffer)
+static struct dmaengine_buffer *iio_buffer_to_dmaengine_buffer(struct iio_buffer *buffer)
{
return container_of(buffer, struct dmaengine_buffer, queue.buffer);
}
static void iio_dmaengine_buffer_block_done(void *data,
- const struct dmaengine_result *result)
+ const struct dmaengine_result *result)
{
struct iio_dma_buffer_block *block = data;
@@ -57,7 +56,7 @@ static void iio_dmaengine_buffer_block_done(void *data,
}
static int iio_dmaengine_buffer_submit_block(struct iio_dma_buffer_queue *queue,
- struct iio_dma_buffer_block *block)
+ struct iio_dma_buffer_block *block)
{
struct dmaengine_buffer *dmaengine_buffer =
iio_buffer_to_dmaengine_buffer(&queue->buffer);
@@ -184,7 +183,7 @@ static const struct iio_dma_buffer_ops iio_dmaengine_default_ops = {
};
static ssize_t iio_dmaengine_buffer_get_length_align(struct device *dev,
- struct device_attribute *attr, char *buf)
+ struct device_attribute *attr, char *buf)
{
struct iio_buffer *buffer = to_iio_dev_attr(attr)->buffer;
struct dmaengine_buffer *dmaengine_buffer =
@@ -243,7 +242,7 @@ static struct iio_buffer *iio_dmaengine_buffer_alloc(struct dma_chan *chan)
dmaengine_buffer->max_size = dma_get_max_seg_size(chan->device->dev);
iio_dma_buffer_init(&dmaengine_buffer->queue, chan->device->dev,
- &iio_dmaengine_default_ops);
+ &iio_dmaengine_default_ops);
dmaengine_buffer->queue.buffer.attrs = iio_dmaengine_buffer_attrs;
dmaengine_buffer->queue.buffer.access = &iio_dmaengine_buffer_ops;
--
2.52.0
^ permalink raw reply related [flat|nested] 23+ messages in thread
* Re: [PATCH 1/6] iio: buffer-dma: use lockdep instead of WARN()
2025-12-03 15:11 ` [PATCH 1/6] iio: buffer-dma: use lockdep instead of WARN() Nuno Sá via B4 Relay
@ 2025-12-03 16:20 ` Andy Shevchenko
2025-12-04 11:20 ` Nuno Sá
0 siblings, 1 reply; 23+ messages in thread
From: Andy Shevchenko @ 2025-12-03 16:20 UTC (permalink / raw)
To: nuno.sa; +Cc: linux-iio, Jonathan Cameron, David Lechner, Andy Shevchenko
On Wed, Dec 03, 2025 at 03:11:36PM +0000, Nuno Sá via B4 Relay wrote:
>
> As documented, WARN() should be used with care given that it can panic
> running kernels (depending on command line options). So, instead of
> using it to make sure a lock is held, use the annotations we already
> have in the kernel for the very same reason.
Which will also be a WARN :-)
I believe the main value here is a bit different, i.e. to have the code
annotated with the existing infrastructure.
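Roughly (simplified from the lockdep.h definitions, not verbatim):

```c
/* CONFIG_LOCKDEP=y: the assert is still a WARN_ON() underneath */
#define lockdep_assert(cond) \
	do { WARN_ON(debug_locks && !(cond)); } while (0)
#define lockdep_assert_held(l) \
	lockdep_assert(lockdep_is_held(l))

/* With CONFIG_LOCKDEP unset, both compile away to no-ops */
```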
In any case, I support this change.
Reviewed-by: Andy Shevchenko <andriy.shevchenko@intel.com>
--
With Best Regards,
Andy Shevchenko
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH 2/6] iio: buffer-dma: Use the cleanup.h API
2025-12-03 15:11 ` [PATCH 2/6] iio: buffer-dma: Use the cleanup.h API Nuno Sá via B4 Relay
@ 2025-12-03 16:23 ` Andy Shevchenko
2025-12-06 21:12 ` Jonathan Cameron
0 siblings, 1 reply; 23+ messages in thread
From: Andy Shevchenko @ 2025-12-03 16:23 UTC (permalink / raw)
To: nuno.sa; +Cc: linux-iio, Jonathan Cameron, David Lechner, Andy Shevchenko
On Wed, Dec 03, 2025 at 03:11:37PM +0000, Nuno Sá via B4 Relay wrote:
> Make use of the cleanup.h API for locks and memory allocation in order
> to simplify some code paths.
...
> - struct iio_dma_buffer_block *block;
> -
> - block = kzalloc(sizeof(*block), GFP_KERNEL);
> + struct iio_dma_buffer_block *block __free(kfree) = kzalloc(sizeof(*block), GFP_KERNEL);
> if (!block)
> return NULL;
In another thread I believe you referred to the 80 rule.
Follow it then :-)
struct iio_dma_buffer_block *block __free(kfree) =
kzalloc(sizeof(*block), GFP_KERNEL);
--
With Best Regards,
Andy Shevchenko
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH 4/6] iio: buffer-dma: Fix coding style complaints
2025-12-03 15:11 ` [PATCH 4/6] iio: buffer-dma: Fix coding style complaints Nuno Sá via B4 Relay
@ 2025-12-03 16:29 ` Andy Shevchenko
2025-12-04 11:25 ` Nuno Sá
0 siblings, 1 reply; 23+ messages in thread
From: Andy Shevchenko @ 2025-12-03 16:29 UTC (permalink / raw)
To: nuno.sa; +Cc: linux-iio, Jonathan Cameron, David Lechner, Andy Shevchenko
On Wed, Dec 03, 2025 at 03:11:39PM +0000, Nuno Sá via B4 Relay wrote:
> Just making sure checkpatch is happy. No functional change intended.
...
> -static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(
> - struct iio_dma_buffer_queue *queue, size_t size, bool fileio)
> +static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(struct iio_dma_buffer_queue *queue,
> + size_t size,
> + bool fileio)
What about the 80 rule?
static struct iio_dma_buffer_block *
iio_dma_buffer_alloc_block(struct iio_dma_buffer_queue *queue, size_t size,
bool fileio)
(And personally I think that in 2025 we should grow up and forget about this
and move on to 100, but... not a maintainer here :-)
...
> -static struct iio_dma_buffer_block *iio_dma_buffer_dequeue(
> - struct iio_dma_buffer_queue *queue)
> +static struct iio_dma_buffer_block *iio_dma_buffer_dequeue(struct iio_dma_buffer_queue *queue)
Ditto.
static struct iio_dma_buffer_block *
iio_dma_buffer_dequeue(struct iio_dma_buffer_queue *queue)
...
> - if (block != queue->fileio.active_block
> - && block->state == IIO_BLOCK_STATE_DONE)
> + if (block != queue->fileio.active_block && block->state == IIO_BLOCK_STATE_DONE)
Ditto.
if (block != queue->fileio.active_block &&
block->state == IIO_BLOCK_STATE_DONE)
> data_available += block->size;
> }
...
> + /*
> + * mutex to protect accessing, configuring (eg: enqueuing DMA blocks)
A mutex
e.g.:
(this is Latin exempli gratia)
> + * and do file IO on struct iio_dma_buffer_queue objects.
> + */
...
> + /* spinlock to protect adding/removing blocks to the queue list */
A spin lock
--
With Best Regards,
Andy Shevchenko
* Re: [PATCH 6/6] iio: buffer-dmaengine: Fix coding style complains
2025-12-03 15:11 ` [PATCH 6/6] iio: buffer-dmaengine: Fix coding style complains Nuno Sá via B4 Relay
@ 2025-12-03 16:31 ` Andy Shevchenko
2025-12-03 16:33 ` Andy Shevchenko
0 siblings, 1 reply; 23+ messages in thread
From: Andy Shevchenko @ 2025-12-03 16:31 UTC (permalink / raw)
To: nuno.sa; +Cc: linux-iio, Jonathan Cameron, David Lechner, Andy Shevchenko
On Wed, Dec 03, 2025 at 03:11:41PM +0000, Nuno Sá via B4 Relay wrote:
> Just making sure checkpatch is happy. No functional change intended.
...but trigger the fighters for 80 rule!
--
With Best Regards,
Andy Shevchenko
* Re: [PATCH 6/6] iio: buffer-dmaengine: Fix coding style complains
2025-12-03 16:31 ` Andy Shevchenko
@ 2025-12-03 16:33 ` Andy Shevchenko
2025-12-04 11:21 ` Nuno Sá
0 siblings, 1 reply; 23+ messages in thread
From: Andy Shevchenko @ 2025-12-03 16:33 UTC (permalink / raw)
To: nuno.sa; +Cc: linux-iio, Jonathan Cameron, David Lechner, Andy Shevchenko
On Wed, Dec 03, 2025 at 06:31:53PM +0200, Andy Shevchenko wrote:
> On Wed, Dec 03, 2025 at 03:11:41PM +0000, Nuno Sá via B4 Relay wrote:
>
> > Just making sure checkpatch is happy. No functional change intended.
>
> ...but trigger the fighters for 80 rule!
I believe
scripts/checkpatch.pl --strict ...
should catch this up.
--
With Best Regards,
Andy Shevchenko
* Re: [PATCH 3/6] iio: buffer-dma: Turn iio_dma_buffer_init() void
2025-12-03 15:11 ` [PATCH 3/6] iio: buffer-dma: Turn iio_dma_buffer_init() void Nuno Sá via B4 Relay
@ 2025-12-03 16:34 ` Andy Shevchenko
2025-12-19 15:03 ` Nuno Sá
0 siblings, 1 reply; 23+ messages in thread
From: Andy Shevchenko @ 2025-12-03 16:34 UTC (permalink / raw)
To: nuno.sa; +Cc: linux-iio, Jonathan Cameron, David Lechner, Andy Shevchenko
On Wed, Dec 03, 2025 at 03:11:38PM +0000, Nuno Sá via B4 Relay wrote:
> iio_dma_buffer_init() always returns 0. Therefore there's no point in
> returning int.
> While at it, fix a mismatch between the function declaration and definition
> regarding the struct device (dma_dev != dev).
So, all others use simple dev?
--
With Best Regards,
Andy Shevchenko
* Re: [PATCH 1/6] iio: buffer-dma: use lockdep instead of WARN()
2025-12-03 16:20 ` Andy Shevchenko
@ 2025-12-04 11:20 ` Nuno Sá
0 siblings, 0 replies; 23+ messages in thread
From: Nuno Sá @ 2025-12-04 11:20 UTC (permalink / raw)
To: Andy Shevchenko, nuno.sa
Cc: linux-iio, Jonathan Cameron, David Lechner, Andy Shevchenko
On Wed, 2025-12-03 at 18:20 +0200, Andy Shevchenko wrote:
> On Wed, Dec 03, 2025 at 03:11:36PM +0000, Nuno Sá via B4 Relay wrote:
> >
> > As documented, WARN() should be used with care given that it can panic
> > running kernels (depending on command line options). So, instead of
> > using it to make sure a lock is held, use the annotations we already
> > have in the kernel for the very same reason.
>
> Which also will be a WARN :-)
>
Obviously :facepalm:
> I believe the main value is a bit different here, i.e. to have code being
> annotated with the existing infrastructure.
>
> In any case I support this change
> Reviewed-by: Andy Shevchenko <andriy.shevchenko@intel.com>
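The change under review swaps a runtime WARN for a lockdep annotation. The pattern can be sketched in plain userspace C with the kernel primitives stubbed out (struct mutex, mutex_lock(), and the lockdep_assert_held() macro below are illustrative stand-ins, not kernel code):

```c
#include <assert.h>

/* Userspace stand-ins: real lockdep tracks which locks the current task
 * holds, and lockdep_assert_held() is a one-shot WARN when CONFIG_LOCKDEP
 * is enabled and compiles away entirely otherwise. */
struct mutex { int held; };
static void mutex_lock(struct mutex *m)   { m->held = 1; }
static void mutex_unlock(struct mutex *m) { m->held = 0; }
#define lockdep_assert_held(m) assert((m)->held)

static struct mutex queue_lock;
static int queue_len;

/* Before: something like WARN_ON_ONCE(!mutex_is_locked(&queue->lock)) on
 * every call. After: an annotation that documents the locking contract and
 * is checked only on debug (lockdep-enabled) kernels. */
static void iio_queue_add_locked(void)
{
	lockdep_assert_held(&queue_lock);
	queue_len++;
}

static void iio_queue_add(void)
{
	mutex_lock(&queue_lock);
	iio_queue_add_locked();
	mutex_unlock(&queue_lock);
}
```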
* Re: [PATCH 6/6] iio: buffer-dmaengine: Fix coding style complains
2025-12-03 16:33 ` Andy Shevchenko
@ 2025-12-04 11:21 ` Nuno Sá
2025-12-04 11:55 ` Andy Shevchenko
0 siblings, 1 reply; 23+ messages in thread
From: Nuno Sá @ 2025-12-04 11:21 UTC (permalink / raw)
To: Andy Shevchenko, nuno.sa
Cc: linux-iio, Jonathan Cameron, David Lechner, Andy Shevchenko
On Wed, 2025-12-03 at 18:33 +0200, Andy Shevchenko wrote:
> On Wed, Dec 03, 2025 at 06:31:53PM +0200, Andy Shevchenko wrote:
> > On Wed, Dec 03, 2025 at 03:11:41PM +0000, Nuno Sá via B4 Relay wrote:
> >
> > > Just making sure checkpatch is happy. No functional change intended.
> >
> > ...but trigger the fighters for 80 rule!
>
> I believe
>
> scripts/checkpatch.pl --strict ...
>
> should catch this up.
Don't think so. I do have b4 configured so that --check runs checkpatch with --strict.
Checking patches using:
scripts/checkpatch.pl -q --terse --strict --no-summary --mailback --showfile
- Nuno Sá
* Re: [PATCH 4/6] iio: buffer-dma: Fix coding style complains
2025-12-03 16:29 ` Andy Shevchenko
@ 2025-12-04 11:25 ` Nuno Sá
2025-12-06 21:09 ` Jonathan Cameron
0 siblings, 1 reply; 23+ messages in thread
From: Nuno Sá @ 2025-12-04 11:25 UTC (permalink / raw)
To: Andy Shevchenko, nuno.sa
Cc: linux-iio, Jonathan Cameron, David Lechner, Andy Shevchenko
On Wed, 2025-12-03 at 18:29 +0200, Andy Shevchenko wrote:
> On Wed, Dec 03, 2025 at 03:11:39PM +0000, Nuno Sá via B4 Relay wrote:
>
> > Just making sure checkpatch is happy. No functional change intended.
>
> ...
>
> > -static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(
> > - struct iio_dma_buffer_queue *queue, size_t size, bool fileio)
> > +static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(struct iio_dma_buffer_queue
> > *queue,
> > + size_t size,
> > + bool fileio)
>
> What about 80 rule?
>
This falls into the bucket where readability is hurt, at least IMHO, so that's why I
did it this way. If Jonathan disagrees, I'll of course change it to the below style.
> static struct iio_dma_buffer_block *
> iio_dma_buffer_alloc_block(struct iio_dma_buffer_queue *queue, size_t size,
> bool fileio)
>
> (And personally I think that in 2025 we should grow up and forget about this
> and move on to 100, but... not a maintainer here :-)
FWIW, Agreed! (And that is what I do for all the out of tree stuff :))
- Nuno Sá
* Re: [PATCH 6/6] iio: buffer-dmaengine: Fix coding style complains
2025-12-04 11:21 ` Nuno Sá
@ 2025-12-04 11:55 ` Andy Shevchenko
0 siblings, 0 replies; 23+ messages in thread
From: Andy Shevchenko @ 2025-12-04 11:55 UTC (permalink / raw)
To: Nuno Sá
Cc: Andy Shevchenko, nuno.sa, linux-iio, Jonathan Cameron,
David Lechner, Andy Shevchenko
On Thu, Dec 4, 2025 at 1:20 PM Nuno Sá <noname.nuno@gmail.com> wrote:
> On Wed, 2025-12-03 at 18:33 +0200, Andy Shevchenko wrote:
> > On Wed, Dec 03, 2025 at 06:31:53PM +0200, Andy Shevchenko wrote:
> > > On Wed, Dec 03, 2025 at 03:11:41PM +0000, Nuno Sá via B4 Relay wrote:
...
> > > > Just making sure checkpatch is happy. No functional change intended.
> > >
> > > ...but trigger the fighters for 80 rule!
> >
> > I believe
> >
> > scripts/checkpatch.pl --strict ...
> >
> > should catch this up.
>
> Don't think so. I do have b4 configured so that --check runs checkpatch with --strict.
>
> Checking patches using:
> scripts/checkpatch.pl -q --terse --strict --no-summary --mailback --showfile
Hmm... okay, then I have (had) a wrong impression.
--
With Best Regards,
Andy Shevchenko
* Re: [PATCH 4/6] iio: buffer-dma: Fix coding style complains
2025-12-04 11:25 ` Nuno Sá
@ 2025-12-06 21:09 ` Jonathan Cameron
0 siblings, 0 replies; 23+ messages in thread
From: Jonathan Cameron @ 2025-12-06 21:09 UTC (permalink / raw)
To: Nuno Sá
Cc: Andy Shevchenko, nuno.sa, linux-iio, David Lechner,
Andy Shevchenko
On Thu, 04 Dec 2025 11:25:12 +0000
Nuno Sá <noname.nuno@gmail.com> wrote:
> On Wed, 2025-12-03 at 18:29 +0200, Andy Shevchenko wrote:
> > On Wed, Dec 03, 2025 at 03:11:39PM +0000, Nuno Sá via B4 Relay wrote:
> >
> > > Just making sure checkpatch is happy. No functional change intended.
> >
> > ...
> >
> > > -static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(
> > > - struct iio_dma_buffer_queue *queue, size_t size, bool fileio)
> > > +static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(struct iio_dma_buffer_queue
> > > *queue,
> > > + size_t size,
> > > + bool fileio)
> >
> > What about 80 rule?
> >
>
> This falls into the bucket where readability is hurt, at least IMHO, so that's why I
> did it this way. If Jonathan disagrees, I'll of course change it to the below style.
>
Marginal readability benefit. The complex return type on the line above is a pretty
common pattern, so my eyes are used to it even if no one else's are ;)
>
> > static struct iio_dma_buffer_block *
> > iio_dma_buffer_alloc_block(struct iio_dma_buffer_queue *queue, size_t size,
> > bool fileio)
> >
> > (And personally I think that in 2025 we should grow up and forget about this
> > and move on to 100, but... not a maintainer here :-)
>
> FWIW, Agreed! (And that is what I do for all the out of tree stuff :))
Give it another few years and maybe I'll relax more. :)
I don't really care that much as long as people are consistent and don't
end up with something really hard to read!
Jonathan
>
> - Nuno Sá
>
>
>
* Re: [PATCH 2/6] iio: buffer-dma: Use the cleanup.h API
2025-12-03 16:23 ` Andy Shevchenko
@ 2025-12-06 21:12 ` Jonathan Cameron
0 siblings, 0 replies; 23+ messages in thread
From: Jonathan Cameron @ 2025-12-06 21:12 UTC (permalink / raw)
To: Andy Shevchenko; +Cc: nuno.sa, linux-iio, David Lechner, Andy Shevchenko
On Wed, 3 Dec 2025 18:23:56 +0200
Andy Shevchenko <andriy.shevchenko@intel.com> wrote:
> On Wed, Dec 03, 2025 at 03:11:37PM +0000, Nuno Sá via B4 Relay wrote:
>
> > Make use of the cleanup.h API for locks and memory allocation in order
> > to simplify some code paths.
>
> ...
>
> > - struct iio_dma_buffer_block *block;
> > -
> > - block = kzalloc(sizeof(*block), GFP_KERNEL);
> > + struct iio_dma_buffer_block *block __free(kfree) = kzalloc(sizeof(*block), GFP_KERNEL);
> > if (!block)
> > return NULL;
>
> In another thread I believe you referred to the 80 rule.
> Follow it then :-)
>
> struct iio_dma_buffer_block *block __free(kfree) =
> kzalloc(sizeof(*block), GFP_KERNEL);
This one is so common I will argue the shorter line form is better.
The other cases you have in some of these patches, like the final one,
are fine.
Rest of series looks fine to me but open question from Andy
on patch 3 and maybe a trivial patch description change for
that "it WARN() anyway" discussion.
Jonathan
>
>
* Re: [PATCH 3/6] iio: buffer-dma: Turn iio_dma_buffer_init() void
2025-12-03 16:34 ` Andy Shevchenko
@ 2025-12-19 15:03 ` Nuno Sá
2025-12-27 14:02 ` Andy Shevchenko
0 siblings, 1 reply; 23+ messages in thread
From: Nuno Sá @ 2025-12-19 15:03 UTC (permalink / raw)
To: Andy Shevchenko, nuno.sa
Cc: linux-iio, Jonathan Cameron, David Lechner, Andy Shevchenko
On Wed, 2025-12-03 at 18:34 +0200, Andy Shevchenko wrote:
> On Wed, Dec 03, 2025 at 03:11:38PM +0000, Nuno Sá via B4 Relay wrote:
>
> > iio_dma_buffer_init() always returns 0. Therefore there's no point in
> > returning int.
>
> > While at it, fix a mismatch between the function declaration and definition
> > regarding the struct device (dma_dev != dev).
>
> So, all others use simple dev?
Totally forgot about this. What do you mean by the above? If other functions in the
header use just dev? If so, the one I changed is the only one that uses struct device
(in that header). It is also consistent with what we have for the devm_iio_dmaengine_*
APIs.
- Nuno Sá
* Re: [PATCH 3/6] iio: buffer-dma: Turn iio_dma_buffer_init() void
2025-12-19 15:03 ` Nuno Sá
@ 2025-12-27 14:02 ` Andy Shevchenko
2026-01-02 10:25 ` Nuno Sá
0 siblings, 1 reply; 23+ messages in thread
From: Andy Shevchenko @ 2025-12-27 14:02 UTC (permalink / raw)
To: Nuno Sá
Cc: Andy Shevchenko, nuno.sa, linux-iio, Jonathan Cameron,
David Lechner, Andy Shevchenko
On Fri, Dec 19, 2025 at 5:03 PM Nuno Sá <noname.nuno@gmail.com> wrote:
> On Wed, 2025-12-03 at 18:34 +0200, Andy Shevchenko wrote:
> > On Wed, Dec 03, 2025 at 03:11:38PM +0000, Nuno Sá via B4 Relay wrote:
...
> > > While at it, fix a mismatch between the function declaration and definition
> > > regarding the struct device (dma_dev != dev).
> >
> > So, all others use simple dev?
>
> Totally forgot about this. What do you mean by the above? If other functions in the
> header use just dev? If so, the one I changed is the only one that uses struct device
> (in that header). It is also consistent with what we have for the devm_iio_dmaengine_*
> APIs.
Is it the physical device that does the DMA? Or is it a separate device for
that purpose? I mean that the naming may suggest they are different
devices. The original Q was about APIs. Do all of them, after your
patch, use the same device semantically?
--
With Best Regards,
Andy Shevchenko
* Re: [PATCH 3/6] iio: buffer-dma: Turn iio_dma_buffer_init() void
2025-12-27 14:02 ` Andy Shevchenko
@ 2026-01-02 10:25 ` Nuno Sá
2026-01-02 12:42 ` Andy Shevchenko
0 siblings, 1 reply; 23+ messages in thread
From: Nuno Sá @ 2026-01-02 10:25 UTC (permalink / raw)
To: Andy Shevchenko
Cc: Andy Shevchenko, nuno.sa, linux-iio, Jonathan Cameron,
David Lechner, Andy Shevchenko
On Sat, 2025-12-27 at 16:02 +0200, Andy Shevchenko wrote:
> On Fri, Dec 19, 2025 at 5:03 PM Nuno Sá <noname.nuno@gmail.com> wrote:
> > On Wed, 2025-12-03 at 18:34 +0200, Andy Shevchenko wrote:
> > > On Wed, Dec 03, 2025 at 03:11:38PM +0000, Nuno Sá via B4 Relay wrote:
>
> ...
>
> > > > While at it, fix a mismatch between the function declaration and definition
> > > > regarding the struct device (dma_dev != dev).
> > >
> > > So, all others use simple dev?
> >
> > Totally forgot about this. What do you mean by the above? If other functions in the
> > header use just dev? If so, the one I changed is the only one that uses struct device
> > (in that header). It is also consistent with what we have for the devm_iio_dmaengine_*
> > APIs.
>
> Is it the physical device that does the DMA? Or is it a separate device
> for that purpose? I mean that the naming may suggest they are different
> devices. The original Q was about APIs. Do all of them, after your
> patch, use the same device semantically?
>
This device is the DMA capable device which provides the DMA chan, which indeed is not the same
as the struct device in the devm APIs (that one is the consumer). So dma_dev might be a better name,
even though the docs already make it clear.
- Nuno Sá
* Re: [PATCH 3/6] iio: buffer-dma: Turn iio_dma_buffer_init() void
2026-01-02 10:25 ` Nuno Sá
@ 2026-01-02 12:42 ` Andy Shevchenko
0 siblings, 0 replies; 23+ messages in thread
From: Andy Shevchenko @ 2026-01-02 12:42 UTC (permalink / raw)
To: Nuno Sá
Cc: Andy Shevchenko, nuno.sa, linux-iio, Jonathan Cameron,
David Lechner, Andy Shevchenko
On Fri, Jan 02, 2026 at 10:25:20AM +0000, Nuno Sá wrote:
> On Sat, 2025-12-27 at 16:02 +0200, Andy Shevchenko wrote:
> > On Fri, Dec 19, 2025 at 5:03 PM Nuno Sá <noname.nuno@gmail.com> wrote:
> > > On Wed, 2025-12-03 at 18:34 +0200, Andy Shevchenko wrote:
> > > > On Wed, Dec 03, 2025 at 03:11:38PM +0000, Nuno Sá via B4 Relay wrote:
...
> > > > > While at it, fix a mismatch between the function declaration and definition
> > > > > regarding the struct device (dma_dev != dev).
> > > >
> > > > So, all others use simple dev?
> > >
> > > Totally forgot about this. What do you mean by the above? If other functions in the
> > > header use just dev? If so, the one I changed is the only one that uses struct device
> > > (in that header). It is also consistent with what we have for the devm_iio_dmaengine_*
> > > APIs.
> >
> > Does the device, that is physical, DMA? Or is it a separate device for
> > that purpose? I mean that naming may suggest that they are different
> > devices. The original Q was about APIs. Are all of them, after your
> > patch, use the same device semantically?
>
> This device is the DMA capable device which provides the DMA chan
A side note: a DMA-capable device and the device that provides the DMA chan can, depending on
the topology, still be different devices. So the sentence above makes little sense to me due to
the "DMA capable device which provides DMA chan" passage.
> which indeed is not the same as the struct device in the devm APIs (that one
> is the consumer). So dma_dev might be a better name even though the docs
> already make it clear.
Thanks for confirming this!
--
With Best Regards,
Andy Shevchenko
Thread overview: 23+ messages
2025-12-03 15:11 [PATCH 0/6] iio: buffer-dma: Minor cleanups and improvements Nuno Sá via B4 Relay
2025-12-03 15:11 ` [PATCH 1/6] iio: buffer-dma: use lockdep instead of WARN() Nuno Sá via B4 Relay
2025-12-03 16:20 ` Andy Shevchenko
2025-12-04 11:20 ` Nuno Sá
2025-12-03 15:11 ` [PATCH 2/6] iio: buffer-dma: Use the cleanup.h API Nuno Sá via B4 Relay
2025-12-03 16:23 ` Andy Shevchenko
2025-12-06 21:12 ` Jonathan Cameron
2025-12-03 15:11 ` [PATCH 3/6] iio: buffer-dma: Turn iio_dma_buffer_init() void Nuno Sá via B4 Relay
2025-12-03 16:34 ` Andy Shevchenko
2025-12-19 15:03 ` Nuno Sá
2025-12-27 14:02 ` Andy Shevchenko
2026-01-02 10:25 ` Nuno Sá
2026-01-02 12:42 ` Andy Shevchenko
2025-12-03 15:11 ` [PATCH 4/6] iio: buffer-dma: Fix coding style complains Nuno Sá via B4 Relay
2025-12-03 16:29 ` Andy Shevchenko
2025-12-04 11:25 ` Nuno Sá
2025-12-06 21:09 ` Jonathan Cameron
2025-12-03 15:11 ` [PATCH 5/6] iio: buffer-dmaengine: Use the cleanup.h API Nuno Sá via B4 Relay
2025-12-03 15:11 ` [PATCH 6/6] iio: buffer-dmaengine: Fix coding style complains Nuno Sá via B4 Relay
2025-12-03 16:31 ` Andy Shevchenko
2025-12-03 16:33 ` Andy Shevchenko
2025-12-04 11:21 ` Nuno Sá
2025-12-04 11:55 ` Andy Shevchenko