* [PATCH v11 0/7] iio: new DMABUF based API v11
@ 2024-06-18 10:02 Paul Cercueil
2024-06-18 10:02 ` [PATCH v11 1/7] dmaengine: Add API function dmaengine_prep_peripheral_dma_vec() Paul Cercueil
` (6 more replies)
0 siblings, 7 replies; 33+ messages in thread
From: Paul Cercueil @ 2024-06-18 10:02 UTC (permalink / raw)
To: Jonathan Cameron, Lars-Peter Clausen, Vinod Koul, Sumit Semwal,
Christian König
Cc: Jonathan Corbet, Nuno Sa, linux-iio, linux-doc, linux-kernel,
dmaengine, linux-media, dri-devel, linaro-mm-sig, Paul Cercueil
Hi Jonathan,
Here's v11 of my patchset, which introduces DMABUF support to IIO.
It addresses the few points raised in the review of v10.
It also adds Nuno as a co-developer.
Changelog:
- [3/7]:
- Document .lock_queue / .unlock_queue buffer callbacks
- Add small comment to explain what the spinlock protects
- Use guard(mutex)
- [4/7]:
- Remove useless field "attach" in struct iio_dma_buffer_block
- Document "sg_table" and "fence" fields in struct iio_block_state
- [6/7]:
- "a IIO buffer" -> "an IIO buffer"
- Add variable name in IOCTL calls
- [7/7]: New patch, to document the DMA changes
Cheers,
-Paul
Paul Cercueil (7):
dmaengine: Add API function dmaengine_prep_peripheral_dma_vec()
dmaengine: dma-axi-dmac: Implement device_prep_peripheral_dma_vec
iio: core: Add new DMABUF interface infrastructure
iio: buffer-dma: Enable support for DMABUFs
iio: buffer-dmaengine: Support new DMABUF based userspace API
Documentation: iio: Document high-speed DMABUF based API
Documentation: dmaengine: Document new dma_vec API
Documentation/driver-api/dmaengine/client.rst | 9 +
.../driver-api/dmaengine/provider.rst | 10 +
Documentation/iio/iio_dmabuf_api.rst | 54 +++
Documentation/iio/index.rst | 1 +
drivers/dma/dma-axi-dmac.c | 40 ++
drivers/iio/Kconfig | 1 +
drivers/iio/buffer/industrialio-buffer-dma.c | 178 ++++++-
.../buffer/industrialio-buffer-dmaengine.c | 62 ++-
drivers/iio/industrialio-buffer.c | 457 ++++++++++++++++++
include/linux/dmaengine.h | 33 ++
include/linux/iio/buffer-dma.h | 31 ++
include/linux/iio/buffer_impl.h | 33 ++
include/uapi/linux/iio/buffer.h | 22 +
13 files changed, 911 insertions(+), 20 deletions(-)
create mode 100644 Documentation/iio/iio_dmabuf_api.rst
--
2.43.0
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v11 1/7] dmaengine: Add API function dmaengine_prep_peripheral_dma_vec()
2024-06-18 10:02 [PATCH v11 0/7] iio: new DMABUF based API v11 Paul Cercueil
@ 2024-06-18 10:02 ` Paul Cercueil
2024-06-18 10:02 ` [PATCH v11 2/7] dmaengine: dma-axi-dmac: Implement device_prep_peripheral_dma_vec Paul Cercueil
` (5 subsequent siblings)
6 siblings, 0 replies; 33+ messages in thread
From: Paul Cercueil @ 2024-06-18 10:02 UTC (permalink / raw)
To: Jonathan Cameron, Lars-Peter Clausen, Vinod Koul, Sumit Semwal,
Christian König
Cc: Jonathan Corbet, Nuno Sa, linux-iio, linux-doc, linux-kernel,
dmaengine, linux-media, dri-devel, linaro-mm-sig, Paul Cercueil
This function can be used to initiate a scatter-gather DMA transfer,
where the address and size of each segment are given by one entry of
the dma_vec array.
The major difference from dmaengine_prep_slave_sg() is that it supports
specifying the length of each DMA transfer explicitly; trying to
override the transfer length with dmaengine_prep_slave_sg() is a very
tedious process. The introduction of a new API function is also
justified by the fact that scatterlists are on their way out.
Note that dmaengine_prep_interleaved_dma() is not helpful either in this
case, as it assumes that the address of each segment will be higher than
that of the previous segment, which we simply cannot guarantee for a
scatter-gather transfer.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Co-developed-by: Nuno Sa <nuno.sa@analog.com>
Signed-off-by: Nuno Sa <nuno.sa@analog.com>
---
v3: New patch
v5: Replace with function dmaengine_prep_slave_dma_vec(), and struct
'dma_vec'.
Note that at some point we will need to support cyclic transfers
using dmaengine_prep_slave_dma_vec(). Maybe with a new "flags"
parameter to the function?
v7:
- Renamed *device_prep_slave_dma_vec() -> device_prep_peripheral_dma_vec();
- Added a new flags parameter to the function, as agreed between Paul
and Vinod. I renamed the first parameter to prep_flags as it's supposed to
be used (I think) with enum dma_ctrl_flags. I'm not really sure how that API
can grow, but I was thinking of just having a bool cyclic parameter (as the
first intention of the flags is to support cyclic transfers); in the end I
"respected" the previously agreed approach.
v10:
- Add kernel doc to dmaengine_prep_peripheral_dma_vec()
- Remove extra flags parameter
---
include/linux/dmaengine.h | 33 +++++++++++++++++++++++++++++++++
1 file changed, 33 insertions(+)
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index 73537fddbb52..b137fdb56093 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -160,6 +160,16 @@ struct dma_interleaved_template {
struct data_chunk sgl[];
};
+/**
+ * struct dma_vec - DMA vector
+ * @addr: Bus address of the start of the vector
+ * @len: Length in bytes of the DMA vector
+ */
+struct dma_vec {
+ dma_addr_t addr;
+ size_t len;
+};
+
/**
* enum dma_ctrl_flags - DMA flags to augment operation preparation,
* control completion, and communicate status.
@@ -910,6 +920,10 @@ struct dma_device {
struct dma_async_tx_descriptor *(*device_prep_dma_interrupt)(
struct dma_chan *chan, unsigned long flags);
+ struct dma_async_tx_descriptor *(*device_prep_peripheral_dma_vec)(
+ struct dma_chan *chan, const struct dma_vec *vecs,
+ size_t nents, enum dma_transfer_direction direction,
+ unsigned long flags);
struct dma_async_tx_descriptor *(*device_prep_slave_sg)(
struct dma_chan *chan, struct scatterlist *sgl,
unsigned int sg_len, enum dma_transfer_direction direction,
@@ -973,6 +987,25 @@ static inline struct dma_async_tx_descriptor *dmaengine_prep_slave_single(
dir, flags, NULL);
}
+/**
+ * dmaengine_prep_peripheral_dma_vec() - Prepare a DMA scatter-gather descriptor
+ * @chan: The channel to be used for this descriptor
+ * @vecs: The array of DMA vectors that should be transferred
+ * @nents: The number of DMA vectors in the array
+ * @dir: Specifies the direction of the data transfer
+ * @flags: DMA engine flags
+ */
+static inline struct dma_async_tx_descriptor *dmaengine_prep_peripheral_dma_vec(
+ struct dma_chan *chan, const struct dma_vec *vecs, size_t nents,
+ enum dma_transfer_direction dir, unsigned long flags)
+{
+ if (!chan || !chan->device || !chan->device->device_prep_peripheral_dma_vec)
+ return NULL;
+
+ return chan->device->device_prep_peripheral_dma_vec(chan, vecs, nents,
+ dir, flags);
+}
+
static inline struct dma_async_tx_descriptor *dmaengine_prep_slave_sg(
struct dma_chan *chan, struct scatterlist *sgl, unsigned int sg_len,
enum dma_transfer_direction dir, unsigned long flags)
--
2.43.0
* [PATCH v11 2/7] dmaengine: dma-axi-dmac: Implement device_prep_peripheral_dma_vec
2024-06-18 10:02 [PATCH v11 0/7] iio: new DMABUF based API v11 Paul Cercueil
2024-06-18 10:02 ` [PATCH v11 1/7] dmaengine: Add API function dmaengine_prep_peripheral_dma_vec() Paul Cercueil
@ 2024-06-18 10:02 ` Paul Cercueil
2024-06-18 10:02 ` [PATCH v11 3/7] iio: core: Add new DMABUF interface infrastructure Paul Cercueil
` (4 subsequent siblings)
6 siblings, 0 replies; 33+ messages in thread
From: Paul Cercueil @ 2024-06-18 10:02 UTC (permalink / raw)
To: Jonathan Cameron, Lars-Peter Clausen, Vinod Koul, Sumit Semwal,
Christian König
Cc: Jonathan Corbet, Nuno Sa, linux-iio, linux-doc, linux-kernel,
dmaengine, linux-media, dri-devel, linaro-mm-sig, Paul Cercueil
Add implementation of the .device_prep_peripheral_dma_vec() callback.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Co-developed-by: Nuno Sa <nuno.sa@analog.com>
Signed-off-by: Nuno Sa <nuno.sa@analog.com>
---
v3: New patch
v5: Implement .device_prep_slave_dma_vec() instead of v3's
.device_prep_slave_dma_array().
v6: Use new prototype for axi_dmac_alloc_desc() as it changed upstream.
v7: Adapted patch for the changes made in patch 1.
v10: Use the new function prototype (without the extra prep_flags).
---
drivers/dma/dma-axi-dmac.c | 40 ++++++++++++++++++++++++++++++++++++++
1 file changed, 40 insertions(+)
diff --git a/drivers/dma/dma-axi-dmac.c b/drivers/dma/dma-axi-dmac.c
index bdb752f11869..36943b0c6d60 100644
--- a/drivers/dma/dma-axi-dmac.c
+++ b/drivers/dma/dma-axi-dmac.c
@@ -620,6 +620,45 @@ static struct axi_dmac_sg *axi_dmac_fill_linear_sg(struct axi_dmac_chan *chan,
return sg;
}
+static struct dma_async_tx_descriptor *
+axi_dmac_prep_peripheral_dma_vec(struct dma_chan *c, const struct dma_vec *vecs,
+ size_t nb, enum dma_transfer_direction direction,
+ unsigned long flags)
+{
+ struct axi_dmac_chan *chan = to_axi_dmac_chan(c);
+ struct axi_dmac_desc *desc;
+ unsigned int num_sgs = 0;
+ struct axi_dmac_sg *dsg;
+ size_t i;
+
+ if (direction != chan->direction)
+ return NULL;
+
+ for (i = 0; i < nb; i++)
+ num_sgs += DIV_ROUND_UP(vecs[i].len, chan->max_length);
+
+ desc = axi_dmac_alloc_desc(chan, num_sgs);
+ if (!desc)
+ return NULL;
+
+ dsg = desc->sg;
+
+ for (i = 0; i < nb; i++) {
+ if (!axi_dmac_check_addr(chan, vecs[i].addr) ||
+ !axi_dmac_check_len(chan, vecs[i].len)) {
+ kfree(desc);
+ return NULL;
+ }
+
+ dsg = axi_dmac_fill_linear_sg(chan, direction, vecs[i].addr, 1,
+ vecs[i].len, dsg);
+ }
+
+ desc->cyclic = false;
+
+ return vchan_tx_prep(&chan->vchan, &desc->vdesc, flags);
+}
+
static struct dma_async_tx_descriptor *axi_dmac_prep_slave_sg(
struct dma_chan *c, struct scatterlist *sgl,
unsigned int sg_len, enum dma_transfer_direction direction,
@@ -1061,6 +1100,7 @@ static int axi_dmac_probe(struct platform_device *pdev)
dma_dev->device_tx_status = dma_cookie_status;
dma_dev->device_issue_pending = axi_dmac_issue_pending;
dma_dev->device_prep_slave_sg = axi_dmac_prep_slave_sg;
+ dma_dev->device_prep_peripheral_dma_vec = axi_dmac_prep_peripheral_dma_vec;
dma_dev->device_prep_dma_cyclic = axi_dmac_prep_dma_cyclic;
dma_dev->device_prep_interleaved_dma = axi_dmac_prep_interleaved;
dma_dev->device_terminate_all = axi_dmac_terminate_all;
--
2.43.0
* [PATCH v11 3/7] iio: core: Add new DMABUF interface infrastructure
2024-06-18 10:02 [PATCH v11 0/7] iio: new DMABUF based API v11 Paul Cercueil
2024-06-18 10:02 ` [PATCH v11 1/7] dmaengine: Add API function dmaengine_prep_peripheral_dma_vec() Paul Cercueil
2024-06-18 10:02 ` [PATCH v11 2/7] dmaengine: dma-axi-dmac: Implement device_prep_peripheral_dma_vec Paul Cercueil
@ 2024-06-18 10:02 ` Paul Cercueil
2024-06-19 3:15 ` kernel test robot
2024-06-19 11:43 ` [PATCH v11 3/7] iio: core: Add new DMABUF interface infrastructure Markus Elfring
2024-06-18 10:02 ` [PATCH v11 4/7] iio: buffer-dma: Enable support for DMABUFs Paul Cercueil
` (3 subsequent siblings)
6 siblings, 2 replies; 33+ messages in thread
From: Paul Cercueil @ 2024-06-18 10:02 UTC (permalink / raw)
To: Jonathan Cameron, Lars-Peter Clausen, Vinod Koul, Sumit Semwal,
Christian König
Cc: Jonathan Corbet, Nuno Sa, linux-iio, linux-doc, linux-kernel,
dmaengine, linux-media, dri-devel, linaro-mm-sig, Paul Cercueil
Add the necessary infrastructure to the IIO core to support a new
optional DMABUF based interface.
With this new interface, DMABUF objects (externally created) can be
attached to an IIO buffer, and subsequently used for data transfer.
A userspace application can then use this interface to share DMABUF
objects between several interfaces, allowing it to transfer data in a
zero-copy fashion, for instance between IIO and the USB stack.
The userspace application can also memory-map the DMABUF objects, and
access the sample data directly. The advantage of doing this over the
read() interface is that it avoids an extra copy of the data between the
kernel and userspace. This is particularly useful for high-speed
devices which produce several megabytes or even gigabytes of data per
second.
As part of the interface, three new IOCTLs have been added:
IIO_BUFFER_DMABUF_ATTACH_IOCTL(int fd):
Attach the DMABUF object identified by the given file descriptor to the
buffer.
IIO_BUFFER_DMABUF_DETACH_IOCTL(int fd):
Detach the DMABUF object identified by the given file descriptor from
the buffer. Note that closing the IIO buffer's file descriptor will
automatically detach all previously attached DMABUF objects.
IIO_BUFFER_DMABUF_ENQUEUE_IOCTL(struct iio_dmabuf *):
Request a data transfer to/from the given DMABUF object. Its file
descriptor, as well as the transfer size and flags, are provided in the
"iio_dmabuf" structure.
These three IOCTLs have to be performed on the IIO buffer's file
descriptor, obtained using the IIO_BUFFER_GET_FD_IOCTL() ioctl.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Co-developed-by: Nuno Sa <nuno.sa@analog.com>
Signed-off-by: Nuno Sa <nuno.sa@analog.com>
---
v2: Only allow the new IOCTLs on the buffer FD created with
IIO_BUFFER_GET_FD_IOCTL().
v3: - Get rid of the old IOCTLs. The IIO subsystem does not create or
manage DMABUFs anymore, and only attaches/detaches externally
created DMABUFs.
- Add IIO_BUFFER_DMABUF_CYCLIC to the supported flags.
v5: - Use dev_err() instead of pr_err()
- Inline to_iio_dma_fence()
- Add comment to explain why we unref twice when detaching dmabuf
- Remove TODO comment. It is actually safe to free the file's
private data even when transfers are still pending because it
won't be accessed.
- Fix documentation of new fields in struct iio_buffer_access_funcs
- iio_dma_resv_lock() does not need to be exported, make it static
v6: - Remove dead code in iio_dma_resv_lock()
- Fix non-block actually blocking
- Cache dma_buf_attachment instead of mapping/unmapping it for every
transfer
- Return -EINVAL instead of IIO_IOCTL_UNHANDLED for unknown ioctl
- Make .block_enqueue() callback take a dma_fence pointer, which
will be passed to iio_buffer_signal_dmabuf_done() instead of the
dma_buf_attachment; and remove the backpointer from the priv
structure to the dma_fence.
- Use dma_fence_begin/end_signalling in the dma_fence critical
sections
- Unref dma_fence and dma_buf_attachment in worker, because they
might try to lock the dma_resv, which would deadlock.
- Add buffer ops to lock/unlock the queue. This is motivated by the
fact that once the dma_fence has been installed, we cannot lock
anything anymore - so the queue must be locked before the
dma_fence is installed.
- Use 'long retl' variable to handle the return value of
dma_resv_wait_timeout()
- Protect dmabufs list access with a mutex
- Rework iio_buffer_find_attachment() to use the internal dmabufs
list, instead of messing with dmabufs private data.
- Add an atomically-increasing sequence number for fences
v8 - Fix swapped fence direction
- Simplify fence wait mechanism
- Remove "Buffer closed with active transfers" print, as it was dead
code
- Un-export iio_buffer_dmabuf_{get,put}. They are not used anywhere
else so they can even be static.
- Prevent attaching already-attached DMABUFs
v9: - Select DMA_SHARED_BUFFER in Kconfig
- Remove useless forward declaration of 'iio_dma_fence'
- Import DMA-BUF namespace
- Add missing __user tag to iio_buffer_detach_dmabuf() argument
v11:
- Document .lock_queue / .unlock_queue buffer callbacks
- Add small comment to explain what the spinlock protects
- Use guard(mutex)
---
drivers/iio/Kconfig | 1 +
drivers/iio/industrialio-buffer.c | 457 ++++++++++++++++++++++++++++++
include/linux/iio/buffer_impl.h | 33 +++
include/uapi/linux/iio/buffer.h | 22 ++
4 files changed, 513 insertions(+)
diff --git a/drivers/iio/Kconfig b/drivers/iio/Kconfig
index 9c351ffc7bed..661127aed2f9 100644
--- a/drivers/iio/Kconfig
+++ b/drivers/iio/Kconfig
@@ -14,6 +14,7 @@ if IIO
config IIO_BUFFER
bool "Enable buffer support within IIO"
+ select DMA_SHARED_BUFFER
help
Provide core support for various buffer based data
acquisition methods.
diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
index 0138b21b244f..2c36354adc9e 100644
--- a/drivers/iio/industrialio-buffer.c
+++ b/drivers/iio/industrialio-buffer.c
@@ -9,15 +9,20 @@
* - Better memory allocation techniques?
* - Alternative access techniques?
*/
+#include <linux/atomic.h>
#include <linux/anon_inodes.h>
#include <linux/cleanup.h>
#include <linux/kernel.h>
#include <linux/export.h>
#include <linux/device.h>
+#include <linux/dma-buf.h>
+#include <linux/dma-fence.h>
+#include <linux/dma-resv.h>
#include <linux/file.h>
#include <linux/fs.h>
#include <linux/cdev.h>
#include <linux/slab.h>
+#include <linux/mm.h>
#include <linux/poll.h>
#include <linux/sched/signal.h>
@@ -29,6 +34,34 @@
#include <linux/iio/buffer.h>
#include <linux/iio/buffer_impl.h>
+#define DMABUF_ENQUEUE_TIMEOUT_MS 5000
+
+MODULE_IMPORT_NS(DMA_BUF);
+
+struct iio_dmabuf_priv {
+ struct list_head entry;
+ struct kref ref;
+
+ struct iio_buffer *buffer;
+ struct iio_dma_buffer_block *block;
+
+ u64 context;
+
+ /* Spinlock used for locking the dma_fence */
+ spinlock_t lock;
+
+ struct dma_buf_attachment *attach;
+ struct sg_table *sgt;
+ enum dma_data_direction dir;
+ atomic_t seqno;
+};
+
+struct iio_dma_fence {
+ struct dma_fence base;
+ struct iio_dmabuf_priv *priv;
+ struct work_struct work;
+};
+
static const char * const iio_endian_prefix[] = {
[IIO_BE] = "be",
[IIO_LE] = "le",
@@ -333,6 +366,8 @@ void iio_buffer_init(struct iio_buffer *buffer)
{
INIT_LIST_HEAD(&buffer->demux_list);
INIT_LIST_HEAD(&buffer->buffer_list);
+ INIT_LIST_HEAD(&buffer->dmabufs);
+ mutex_init(&buffer->dmabufs_mutex);
init_waitqueue_head(&buffer->pollq);
kref_init(&buffer->ref);
if (!buffer->watermark)
@@ -1526,14 +1561,55 @@ static void iio_buffer_unregister_legacy_sysfs_groups(struct iio_dev *indio_dev)
kfree(iio_dev_opaque->legacy_scan_el_group.attrs);
}
+static void iio_buffer_dmabuf_release(struct kref *ref)
+{
+ struct iio_dmabuf_priv *priv = container_of(ref, struct iio_dmabuf_priv, ref);
+ struct dma_buf_attachment *attach = priv->attach;
+ struct iio_buffer *buffer = priv->buffer;
+ struct dma_buf *dmabuf = attach->dmabuf;
+
+ dma_resv_lock(dmabuf->resv, NULL);
+ dma_buf_unmap_attachment(attach, priv->sgt, priv->dir);
+ dma_resv_unlock(dmabuf->resv);
+
+ buffer->access->detach_dmabuf(buffer, priv->block);
+
+ dma_buf_detach(attach->dmabuf, attach);
+ dma_buf_put(dmabuf);
+ kfree(priv);
+}
+
+static void iio_buffer_dmabuf_get(struct dma_buf_attachment *attach)
+{
+ struct iio_dmabuf_priv *priv = attach->importer_priv;
+
+ kref_get(&priv->ref);
+}
+
+static void iio_buffer_dmabuf_put(struct dma_buf_attachment *attach)
+{
+ struct iio_dmabuf_priv *priv = attach->importer_priv;
+
+ kref_put(&priv->ref, iio_buffer_dmabuf_release);
+}
+
static int iio_buffer_chrdev_release(struct inode *inode, struct file *filep)
{
struct iio_dev_buffer_pair *ib = filep->private_data;
struct iio_dev *indio_dev = ib->indio_dev;
struct iio_buffer *buffer = ib->buffer;
+ struct iio_dmabuf_priv *priv, *tmp;
wake_up(&buffer->pollq);
+ guard(mutex)(&buffer->dmabufs_mutex);
+
+ /* Close all attached DMABUFs */
+ list_for_each_entry_safe(priv, tmp, &buffer->dmabufs, entry) {
+ list_del_init(&priv->entry);
+ iio_buffer_dmabuf_put(priv->attach);
+ }
+
kfree(ib);
clear_bit(IIO_BUSY_BIT_POS, &buffer->flags);
iio_device_put(indio_dev);
@@ -1541,11 +1617,391 @@ static int iio_buffer_chrdev_release(struct inode *inode, struct file *filep)
return 0;
}
+static int iio_dma_resv_lock(struct dma_buf *dmabuf, bool nonblock)
+{
+ if (!nonblock)
+ return dma_resv_lock_interruptible(dmabuf->resv, NULL);
+
+ if (!dma_resv_trylock(dmabuf->resv))
+ return -EBUSY;
+
+ return 0;
+}
+
+static struct dma_buf_attachment *
+iio_buffer_find_attachment(struct iio_dev_buffer_pair *ib,
+ struct dma_buf *dmabuf, bool nonblock)
+{
+ struct device *dev = ib->indio_dev->dev.parent;
+ struct iio_buffer *buffer = ib->buffer;
+ struct dma_buf_attachment *attach = NULL;
+ struct iio_dmabuf_priv *priv;
+
+ guard(mutex)(&buffer->dmabufs_mutex);
+
+ list_for_each_entry(priv, &buffer->dmabufs, entry) {
+ if (priv->attach->dev == dev
+ && priv->attach->dmabuf == dmabuf) {
+ attach = priv->attach;
+ break;
+ }
+ }
+
+ if (attach)
+ iio_buffer_dmabuf_get(attach);
+
+ return attach ?: ERR_PTR(-EPERM);
+}
+
+static int iio_buffer_attach_dmabuf(struct iio_dev_buffer_pair *ib,
+ int __user *user_fd, bool nonblock)
+{
+ struct iio_dev *indio_dev = ib->indio_dev;
+ struct iio_buffer *buffer = ib->buffer;
+ struct dma_buf_attachment *attach;
+ struct iio_dmabuf_priv *priv, *each;
+ struct dma_buf *dmabuf;
+ int err, fd;
+
+ if (!buffer->access->attach_dmabuf
+ || !buffer->access->detach_dmabuf
+ || !buffer->access->enqueue_dmabuf)
+ return -EPERM;
+
+ if (copy_from_user(&fd, user_fd, sizeof(fd)))
+ return -EFAULT;
+
+ priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
+
+ spin_lock_init(&priv->lock);
+ priv->context = dma_fence_context_alloc(1);
+
+ dmabuf = dma_buf_get(fd);
+ if (IS_ERR(dmabuf)) {
+ err = PTR_ERR(dmabuf);
+ goto err_free_priv;
+ }
+
+ attach = dma_buf_attach(dmabuf, indio_dev->dev.parent);
+ if (IS_ERR(attach)) {
+ err = PTR_ERR(attach);
+ goto err_dmabuf_put;
+ }
+
+ err = iio_dma_resv_lock(dmabuf, nonblock);
+ if (err)
+ goto err_dmabuf_detach;
+
+ priv->dir = buffer->direction == IIO_BUFFER_DIRECTION_IN
+ ? DMA_FROM_DEVICE : DMA_TO_DEVICE;
+
+ priv->sgt = dma_buf_map_attachment(attach, priv->dir);
+ if (IS_ERR(priv->sgt)) {
+ err = PTR_ERR(priv->sgt);
+ dev_err(&indio_dev->dev, "Unable to map attachment: %d\n", err);
+ goto err_resv_unlock;
+ }
+
+ kref_init(&priv->ref);
+ priv->buffer = buffer;
+ priv->attach = attach;
+ attach->importer_priv = priv;
+
+ priv->block = buffer->access->attach_dmabuf(buffer, attach);
+ if (IS_ERR(priv->block)) {
+ err = PTR_ERR(priv->block);
+ goto err_dmabuf_unmap_attachment;
+ }
+
+ dma_resv_unlock(dmabuf->resv);
+
+ guard(mutex)(&buffer->dmabufs_mutex);
+
+ /*
+ * Check whether we already have an attachment for this driver/DMABUF
+ * combo. If we do, refuse to attach.
+ */
+ list_for_each_entry(each, &buffer->dmabufs, entry) {
+ if (each->attach->dev == indio_dev->dev.parent
+ && each->attach->dmabuf == dmabuf) {
+ /*
+ * We unlocked the reservation object, so going through
+ * the cleanup code would mean re-locking it first.
+ * At this stage it is simpler to free the attachment
+ * using iio_buffer_dmabuf_put().
+ */
+ iio_buffer_dmabuf_put(attach);
+ return -EBUSY;
+ }
+ }
+
+ /* Otherwise, add the new attachment to our dmabufs list. */
+ list_add(&priv->entry, &buffer->dmabufs);
+
+ return 0;
+
+err_dmabuf_unmap_attachment:
+ dma_buf_unmap_attachment(attach, priv->sgt, priv->dir);
+err_resv_unlock:
+ dma_resv_unlock(dmabuf->resv);
+err_dmabuf_detach:
+ dma_buf_detach(dmabuf, attach);
+err_dmabuf_put:
+ dma_buf_put(dmabuf);
+err_free_priv:
+ kfree(priv);
+
+ return err;
+}
+
+static int iio_buffer_detach_dmabuf(struct iio_dev_buffer_pair *ib,
+ int __user *user_req, bool nonblock)
+{
+ struct iio_buffer *buffer = ib->buffer;
+ struct iio_dev *indio_dev = ib->indio_dev;
+ struct iio_dmabuf_priv *priv;
+ struct dma_buf *dmabuf;
+ int dmabuf_fd, ret = -EPERM;
+
+ if (copy_from_user(&dmabuf_fd, user_req, sizeof(dmabuf_fd)))
+ return -EFAULT;
+
+ dmabuf = dma_buf_get(dmabuf_fd);
+ if (IS_ERR(dmabuf))
+ return PTR_ERR(dmabuf);
+
+ guard(mutex)(&buffer->dmabufs_mutex);
+
+ list_for_each_entry(priv, &buffer->dmabufs, entry) {
+ if (priv->attach->dev == indio_dev->dev.parent
+ && priv->attach->dmabuf == dmabuf) {
+ list_del(&priv->entry);
+
+ /* Unref the reference from iio_buffer_attach_dmabuf() */
+ iio_buffer_dmabuf_put(priv->attach);
+ ret = 0;
+ break;
+ }
+ }
+
+ dma_buf_put(dmabuf);
+
+ return ret;
+}
+
+static const char *
+iio_buffer_dma_fence_get_driver_name(struct dma_fence *fence)
+{
+ return "iio";
+}
+
+static void iio_buffer_dma_fence_release(struct dma_fence *fence)
+{
+ struct iio_dma_fence *iio_fence =
+ container_of(fence, struct iio_dma_fence, base);
+
+ kfree(iio_fence);
+}
+
+static const struct dma_fence_ops iio_buffer_dma_fence_ops = {
+ .get_driver_name = iio_buffer_dma_fence_get_driver_name,
+ .get_timeline_name = iio_buffer_dma_fence_get_driver_name,
+ .release = iio_buffer_dma_fence_release,
+};
+
+static int iio_buffer_enqueue_dmabuf(struct iio_dev_buffer_pair *ib,
+ struct iio_dmabuf __user *iio_dmabuf_req,
+ bool nonblock)
+{
+ struct iio_buffer *buffer = ib->buffer;
+ struct iio_dmabuf iio_dmabuf;
+ struct dma_buf_attachment *attach;
+ struct iio_dmabuf_priv *priv;
+ struct iio_dma_fence *fence;
+ struct dma_buf *dmabuf;
+ unsigned long timeout;
+ bool cookie, cyclic, dma_to_ram;
+ long retl;
+ u32 seqno;
+ int ret;
+
+ if (copy_from_user(&iio_dmabuf, iio_dmabuf_req, sizeof(iio_dmabuf)))
+ return -EFAULT;
+
+ if (iio_dmabuf.flags & ~IIO_BUFFER_DMABUF_SUPPORTED_FLAGS)
+ return -EINVAL;
+
+ cyclic = iio_dmabuf.flags & IIO_BUFFER_DMABUF_CYCLIC;
+
+ /* Cyclic flag is only supported on output buffers */
+ if (cyclic && buffer->direction != IIO_BUFFER_DIRECTION_OUT)
+ return -EINVAL;
+
+ dmabuf = dma_buf_get(iio_dmabuf.fd);
+ if (IS_ERR(dmabuf))
+ return PTR_ERR(dmabuf);
+
+ if (!iio_dmabuf.bytes_used || iio_dmabuf.bytes_used > dmabuf->size) {
+ ret = -EINVAL;
+ goto err_dmabuf_put;
+ }
+
+ attach = iio_buffer_find_attachment(ib, dmabuf, nonblock);
+ if (IS_ERR(attach)) {
+ ret = PTR_ERR(attach);
+ goto err_dmabuf_put;
+ }
+
+ priv = attach->importer_priv;
+
+ fence = kmalloc(sizeof(*fence), GFP_KERNEL);
+ if (!fence) {
+ ret = -ENOMEM;
+ goto err_attachment_put;
+ }
+
+ fence->priv = priv;
+
+ seqno = atomic_add_return(1, &priv->seqno);
+
+ /*
+ * The transfers are guaranteed to be processed in the order they are
+ * enqueued, so we can use a simple incrementing sequence number for
+ * the dma_fence.
+ */
+ dma_fence_init(&fence->base, &iio_buffer_dma_fence_ops,
+ &priv->lock, priv->context, seqno);
+
+ ret = iio_dma_resv_lock(dmabuf, nonblock);
+ if (ret)
+ goto err_fence_put;
+
+ timeout = nonblock ? 0 : msecs_to_jiffies(DMABUF_ENQUEUE_TIMEOUT_MS);
+ dma_to_ram = buffer->direction == IIO_BUFFER_DIRECTION_IN;
+
+ /* Make sure we don't have writers */
+ retl = dma_resv_wait_timeout(dmabuf->resv,
+ dma_resv_usage_rw(dma_to_ram),
+ true, timeout);
+ if (retl == 0)
+ retl = -EBUSY;
+ if (retl < 0) {
+ ret = (int)retl;
+ goto err_resv_unlock;
+ }
+
+ if (buffer->access->lock_queue)
+ buffer->access->lock_queue(buffer);
+
+ ret = dma_resv_reserve_fences(dmabuf->resv, 1);
+ if (ret)
+ goto err_queue_unlock;
+
+ dma_resv_add_fence(dmabuf->resv, &fence->base,
+ dma_to_ram ? DMA_RESV_USAGE_WRITE : DMA_RESV_USAGE_READ);
+ dma_resv_unlock(dmabuf->resv);
+
+ cookie = dma_fence_begin_signalling();
+
+ ret = buffer->access->enqueue_dmabuf(buffer, priv->block, &fence->base,
+ priv->sgt, iio_dmabuf.bytes_used,
+ cyclic);
+ if (ret) {
+ /*
+ * DMABUF enqueue failed, but we already added the fence.
+ * Signal the error through the fence completion mechanism.
+ */
+ iio_buffer_signal_dmabuf_done(&fence->base, ret);
+ }
+
+ if (buffer->access->unlock_queue)
+ buffer->access->unlock_queue(buffer);
+
+ dma_fence_end_signalling(cookie);
+ dma_buf_put(dmabuf);
+
+ return ret;
+
+err_queue_unlock:
+ if (buffer->access->unlock_queue)
+ buffer->access->unlock_queue(buffer);
+err_resv_unlock:
+ dma_resv_unlock(dmabuf->resv);
+err_fence_put:
+ dma_fence_put(&fence->base);
+err_attachment_put:
+ iio_buffer_dmabuf_put(attach);
+err_dmabuf_put:
+ dma_buf_put(dmabuf);
+
+ return ret;
+}
+
+static void iio_buffer_cleanup(struct work_struct *work)
+{
+ struct iio_dma_fence *fence =
+ container_of(work, struct iio_dma_fence, work);
+ struct iio_dmabuf_priv *priv = fence->priv;
+ struct dma_buf_attachment *attach = priv->attach;
+
+ dma_fence_put(&fence->base);
+ iio_buffer_dmabuf_put(attach);
+}
+
+void iio_buffer_signal_dmabuf_done(struct dma_fence *fence, int ret)
+{
+ struct iio_dma_fence *iio_fence =
+ container_of(fence, struct iio_dma_fence, base);
+ bool cookie = dma_fence_begin_signalling();
+
+ /*
+ * Get a reference to the fence, so that it's not freed as soon as
+ * it's signaled.
+ */
+ dma_fence_get(fence);
+
+ fence->error = ret;
+ dma_fence_signal(fence);
+ dma_fence_end_signalling(cookie);
+
+ /*
+ * The fence will be unref'd in iio_buffer_cleanup.
+ * It can't be done here, as the unref functions might try to lock the
+ * resv object, which can deadlock.
+ */
+ INIT_WORK(&iio_fence->work, iio_buffer_cleanup);
+ schedule_work(&iio_fence->work);
+}
+EXPORT_SYMBOL_GPL(iio_buffer_signal_dmabuf_done);
+
+static long iio_buffer_chrdev_ioctl(struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ struct iio_dev_buffer_pair *ib = filp->private_data;
+ void __user *_arg = (void __user *)arg;
+ bool nonblock = filp->f_flags & O_NONBLOCK;
+
+ switch (cmd) {
+ case IIO_BUFFER_DMABUF_ATTACH_IOCTL:
+ return iio_buffer_attach_dmabuf(ib, _arg, nonblock);
+ case IIO_BUFFER_DMABUF_DETACH_IOCTL:
+ return iio_buffer_detach_dmabuf(ib, _arg, nonblock);
+ case IIO_BUFFER_DMABUF_ENQUEUE_IOCTL:
+ return iio_buffer_enqueue_dmabuf(ib, _arg, nonblock);
+ default:
+ return -EINVAL;
+ }
+}
+
static const struct file_operations iio_buffer_chrdev_fileops = {
.owner = THIS_MODULE,
.llseek = noop_llseek,
.read = iio_buffer_read,
.write = iio_buffer_write,
+ .unlocked_ioctl = iio_buffer_chrdev_ioctl,
+ .compat_ioctl = compat_ptr_ioctl,
.poll = iio_buffer_poll,
.release = iio_buffer_chrdev_release,
};
@@ -1994,6 +2450,7 @@ static void iio_buffer_release(struct kref *ref)
{
struct iio_buffer *buffer = container_of(ref, struct iio_buffer, ref);
+ mutex_destroy(&buffer->dmabufs_mutex);
buffer->access->release(buffer);
}
diff --git a/include/linux/iio/buffer_impl.h b/include/linux/iio/buffer_impl.h
index 89c3fd7c29ca..e72552e026f3 100644
--- a/include/linux/iio/buffer_impl.h
+++ b/include/linux/iio/buffer_impl.h
@@ -9,8 +9,12 @@
#include <uapi/linux/iio/buffer.h>
#include <linux/iio/buffer.h>
+struct dma_buf_attachment;
+struct dma_fence;
struct iio_dev;
+struct iio_dma_buffer_block;
struct iio_buffer;
+struct sg_table;
/**
* INDIO_BUFFER_FLAG_FIXED_WATERMARK - Watermark level of the buffer can not be
@@ -39,6 +43,16 @@ struct iio_buffer;
* device stops sampling. Calles are balanced with @enable.
* @release: called when the last reference to the buffer is dropped,
* should free all resources allocated by the buffer.
+ * @attach_dmabuf: called from userspace via ioctl to attach one external
+ * DMABUF.
+ * @detach_dmabuf: called from userspace via ioctl to detach one previously
+ * attached DMABUF.
+ * @enqueue_dmabuf: called from userspace via ioctl to queue this DMABUF
object to this buffer. Requires a valid DMABUF fd that
was previously attached to this buffer.
+ * @lock_queue: called when the core needs to lock the buffer queue;
+ * it is used when enqueueing DMABUF objects.
+ * @unlock_queue: used to unlock a previously locked buffer queue
* @modes: Supported operating modes by this buffer type
* @flags: A bitmask combination of INDIO_BUFFER_FLAG_*
*
@@ -68,6 +82,17 @@ struct iio_buffer_access_funcs {
void (*release)(struct iio_buffer *buffer);
+ struct iio_dma_buffer_block * (*attach_dmabuf)(struct iio_buffer *buffer,
+ struct dma_buf_attachment *attach);
+ void (*detach_dmabuf)(struct iio_buffer *buffer,
+ struct iio_dma_buffer_block *block);
+ int (*enqueue_dmabuf)(struct iio_buffer *buffer,
+ struct iio_dma_buffer_block *block,
+ struct dma_fence *fence, struct sg_table *sgt,
+ size_t size, bool cyclic);
+ void (*lock_queue)(struct iio_buffer *buffer);
+ void (*unlock_queue)(struct iio_buffer *buffer);
+
unsigned int modes;
unsigned int flags;
};
@@ -136,6 +161,12 @@ struct iio_buffer {
/* @ref: Reference count of the buffer. */
struct kref ref;
+
+ /* @dmabufs: List of DMABUF attachments */
+ struct list_head dmabufs; /* P: dmabufs_mutex */
+
+ /* @dmabufs_mutex: Protects dmabufs */
+ struct mutex dmabufs_mutex;
};
/**
@@ -159,6 +190,8 @@ void iio_buffer_init(struct iio_buffer *buffer);
struct iio_buffer *iio_buffer_get(struct iio_buffer *buffer);
void iio_buffer_put(struct iio_buffer *buffer);
+void iio_buffer_signal_dmabuf_done(struct dma_fence *fence, int ret);
+
#else /* CONFIG_IIO_BUFFER */
static inline void iio_buffer_get(struct iio_buffer *buffer) {}
diff --git a/include/uapi/linux/iio/buffer.h b/include/uapi/linux/iio/buffer.h
index 13939032b3f6..c666aa95e532 100644
--- a/include/uapi/linux/iio/buffer.h
+++ b/include/uapi/linux/iio/buffer.h
@@ -5,6 +5,28 @@
#ifndef _UAPI_IIO_BUFFER_H_
#define _UAPI_IIO_BUFFER_H_
+#include <linux/types.h>
+
+/* Flags for iio_dmabuf.flags */
+#define IIO_BUFFER_DMABUF_CYCLIC (1 << 0)
+#define IIO_BUFFER_DMABUF_SUPPORTED_FLAGS 0x00000001
+
+/**
+ * struct iio_dmabuf - Descriptor for a single IIO DMABUF object
+ * @fd: file descriptor of the DMABUF object
+ * @flags: one or more IIO_BUFFER_DMABUF_* flags
+ * @bytes_used: number of bytes used in this DMABUF for the data transfer.
+ * Should generally be set to the DMABUF's size.
+ */
+struct iio_dmabuf {
+ __u32 fd;
+ __u32 flags;
+ __u64 bytes_used;
+};
+
#define IIO_BUFFER_GET_FD_IOCTL _IOWR('i', 0x91, int)
+#define IIO_BUFFER_DMABUF_ATTACH_IOCTL _IOW('i', 0x92, int)
+#define IIO_BUFFER_DMABUF_DETACH_IOCTL _IOW('i', 0x93, int)
+#define IIO_BUFFER_DMABUF_ENQUEUE_IOCTL _IOW('i', 0x94, struct iio_dmabuf)
#endif /* _UAPI_IIO_BUFFER_H_ */
--
2.43.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v11 4/7] iio: buffer-dma: Enable support for DMABUFs
2024-06-18 10:02 [PATCH v11 0/7] iio: new DMABUF based API v11 Paul Cercueil
` (2 preceding siblings ...)
2024-06-18 10:02 ` [PATCH v11 3/7] iio: core: Add new DMABUF interface infrastructure Paul Cercueil
@ 2024-06-18 10:02 ` Paul Cercueil
2024-06-18 10:03 ` [PATCH v11 5/7] iio: buffer-dmaengine: Support new DMABUF based userspace API Paul Cercueil
` (2 subsequent siblings)
6 siblings, 0 replies; 33+ messages in thread
From: Paul Cercueil @ 2024-06-18 10:02 UTC (permalink / raw)
To: Jonathan Cameron, Lars-Peter Clausen, Vinod Koul, Sumit Semwal,
Christian König
Cc: Jonathan Corbet, Nuno Sa, linux-iio, linux-doc, linux-kernel,
dmaengine, linux-media, dri-devel, linaro-mm-sig, Paul Cercueil
Implement iio_dma_buffer_attach_dmabuf(), iio_dma_buffer_detach_dmabuf()
and iio_dma_buffer_transfer_dmabuf(), which can then be used by the IIO
DMA buffer implementations.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Co-developed-by: Nuno Sa <nuno.sa@analog.com>
Signed-off-by: Nuno Sa <nuno.sa@analog.com>
---
v3: Update code to provide the functions that will be used as callbacks
for the new IOCTLs.
v6: - Update iio_dma_buffer_enqueue_dmabuf() to take a dma_fence pointer
- Pass that dma_fence pointer along to
iio_buffer_signal_dmabuf_done()
- Add iio_dma_buffer_lock_queue() / iio_dma_buffer_unlock_queue()
- Do not lock the queue in iio_dma_buffer_enqueue_dmabuf().
The caller will ensure that it has been locked already.
- Replace "int += bool;" by "if (bool) int++;"
- Use dma_fence_begin/end_signalling in the dma_fence critical
sections
- Use one "num_dmabufs" field instead of one "num_blocks" and one
"num_fileio_blocks". Make it an atomic_t, which makes it possible
to decrement it atomically in iio_buffer_block_release() without
having to lock the queue mutex; and in turn, it means that we
don't need to use iio_buffer_block_put_atomic() everywhere to
avoid locking the queue mutex twice.
- Use cleanup.h guard(mutex) when possible
- Explicitly list all states in the switch in
iio_dma_can_enqueue_block()
- Rename iio_dma_buffer_fileio_mode() to
iio_dma_buffer_can_use_fileio(), and add a comment explaining why
it cannot race vs. DMABUF.
v11:
- Remove useless field "attach" in struct iio_dma_buffer_block
- Document "sg_table" and "fence" fields in struct iio_block_state
---
drivers/iio/buffer/industrialio-buffer-dma.c | 178 +++++++++++++++++--
include/linux/iio/buffer-dma.h | 31 ++++
2 files changed, 198 insertions(+), 11 deletions(-)
diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c
index 13b1a858969e..647f417a045e 100644
--- a/drivers/iio/buffer/industrialio-buffer-dma.c
+++ b/drivers/iio/buffer/industrialio-buffer-dma.c
@@ -4,6 +4,8 @@
* Author: Lars-Peter Clausen <lars@metafoo.de>
*/
+#include <linux/atomic.h>
+#include <linux/cleanup.h>
#include <linux/slab.h>
#include <linux/kernel.h>
#include <linux/module.h>
@@ -14,6 +16,8 @@
#include <linux/poll.h>
#include <linux/iio/buffer_impl.h>
#include <linux/iio/buffer-dma.h>
+#include <linux/dma-buf.h>
+#include <linux/dma-fence.h>
#include <linux/dma-mapping.h>
#include <linux/sizes.h>
@@ -94,13 +98,18 @@ static void iio_buffer_block_release(struct kref *kref)
{
struct iio_dma_buffer_block *block = container_of(kref,
struct iio_dma_buffer_block, kref);
+ struct iio_dma_buffer_queue *queue = block->queue;
- WARN_ON(block->state != IIO_BLOCK_STATE_DEAD);
+ WARN_ON(block->fileio && block->state != IIO_BLOCK_STATE_DEAD);
- dma_free_coherent(block->queue->dev, PAGE_ALIGN(block->size),
- block->vaddr, block->phys_addr);
+ if (block->fileio) {
+ dma_free_coherent(queue->dev, PAGE_ALIGN(block->size),
+ block->vaddr, block->phys_addr);
+ } else {
+ atomic_dec(&queue->num_dmabufs);
+ }
- iio_buffer_put(&block->queue->buffer);
+ iio_buffer_put(&queue->buffer);
kfree(block);
}
@@ -163,7 +172,7 @@ static struct iio_dma_buffer_queue *iio_buffer_to_queue(struct iio_buffer *buf)
}
static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(
- struct iio_dma_buffer_queue *queue, size_t size)
+ struct iio_dma_buffer_queue *queue, size_t size, bool fileio)
{
struct iio_dma_buffer_block *block;
@@ -171,13 +180,16 @@ static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(
if (!block)
return NULL;
- block->vaddr = dma_alloc_coherent(queue->dev, PAGE_ALIGN(size),
- &block->phys_addr, GFP_KERNEL);
- if (!block->vaddr) {
- kfree(block);
- return NULL;
+ if (fileio) {
+ block->vaddr = dma_alloc_coherent(queue->dev, PAGE_ALIGN(size),
+ &block->phys_addr, GFP_KERNEL);
+ if (!block->vaddr) {
+ kfree(block);
+ return NULL;
+ }
}
+ block->fileio = fileio;
block->size = size;
block->state = IIO_BLOCK_STATE_DONE;
block->queue = queue;
@@ -186,6 +198,9 @@ static struct iio_dma_buffer_block *iio_dma_buffer_alloc_block(
iio_buffer_get(&queue->buffer);
+ if (!fileio)
+ atomic_inc(&queue->num_dmabufs);
+
return block;
}
@@ -218,13 +233,20 @@ void iio_dma_buffer_block_done(struct iio_dma_buffer_block *block)
{
struct iio_dma_buffer_queue *queue = block->queue;
unsigned long flags;
+ bool cookie;
+
+ cookie = dma_fence_begin_signalling();
spin_lock_irqsave(&queue->list_lock, flags);
_iio_dma_buffer_block_done(block);
spin_unlock_irqrestore(&queue->list_lock, flags);
+ if (!block->fileio)
+ iio_buffer_signal_dmabuf_done(block->fence, 0);
+
iio_buffer_block_put_atomic(block);
iio_dma_buffer_queue_wake(queue);
+ dma_fence_end_signalling(cookie);
}
EXPORT_SYMBOL_GPL(iio_dma_buffer_block_done);
@@ -243,17 +265,27 @@ void iio_dma_buffer_block_list_abort(struct iio_dma_buffer_queue *queue,
{
struct iio_dma_buffer_block *block, *_block;
unsigned long flags;
+ bool cookie;
+
+ cookie = dma_fence_begin_signalling();
spin_lock_irqsave(&queue->list_lock, flags);
list_for_each_entry_safe(block, _block, list, head) {
list_del(&block->head);
block->bytes_used = 0;
_iio_dma_buffer_block_done(block);
+
+ if (!block->fileio)
+ iio_buffer_signal_dmabuf_done(block->fence, -EINTR);
iio_buffer_block_put_atomic(block);
}
spin_unlock_irqrestore(&queue->list_lock, flags);
+ if (queue->fileio.enabled)
+ queue->fileio.enabled = false;
+
iio_dma_buffer_queue_wake(queue);
+ dma_fence_end_signalling(cookie);
}
EXPORT_SYMBOL_GPL(iio_dma_buffer_block_list_abort);
@@ -273,6 +305,16 @@ static bool iio_dma_block_reusable(struct iio_dma_buffer_block *block)
}
}
+static bool iio_dma_buffer_can_use_fileio(struct iio_dma_buffer_queue *queue)
+{
+ /*
+ * Note that queue->num_dmabufs cannot increase while the queue is
+ * locked, it can only decrease, so it does not race against
+ * iio_dma_buffer_alloc_block().
+ */
+ return queue->fileio.enabled || !atomic_read(&queue->num_dmabufs);
+}
+
/**
* iio_dma_buffer_request_update() - DMA buffer request_update callback
* @buffer: The buffer which to request an update
@@ -299,6 +341,12 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
mutex_lock(&queue->lock);
+ queue->fileio.enabled = iio_dma_buffer_can_use_fileio(queue);
+
+ /* If DMABUFs were created, disable fileio interface */
+ if (!queue->fileio.enabled)
+ goto out_unlock;
+
/* Allocations are page aligned */
if (PAGE_ALIGN(queue->fileio.block_size) == PAGE_ALIGN(size))
try_reuse = true;
@@ -339,7 +387,7 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer)
}
if (!block) {
- block = iio_dma_buffer_alloc_block(queue, size);
+ block = iio_dma_buffer_alloc_block(queue, size, true);
if (!block) {
ret = -ENOMEM;
goto out_unlock;
@@ -412,8 +460,12 @@ static void iio_dma_buffer_submit_block(struct iio_dma_buffer_queue *queue,
block->state = IIO_BLOCK_STATE_ACTIVE;
iio_buffer_block_get(block);
+
ret = queue->ops->submit(queue, block);
if (ret) {
+ if (!block->fileio)
+ iio_buffer_signal_dmabuf_done(block->fence, ret);
+
/*
* This is a bit of a problem and there is not much we can do
* other than wait for the buffer to be disabled and re-enabled
@@ -646,6 +698,110 @@ size_t iio_dma_buffer_usage(struct iio_buffer *buf)
}
EXPORT_SYMBOL_GPL(iio_dma_buffer_usage);
+struct iio_dma_buffer_block *
+iio_dma_buffer_attach_dmabuf(struct iio_buffer *buffer,
+ struct dma_buf_attachment *attach)
+{
+ struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
+ struct iio_dma_buffer_block *block;
+
+ guard(mutex)(&queue->lock);
+
+ /*
+ * If the buffer is enabled and in fileio mode new blocks can't be
+ * allocated.
+ */
+ if (queue->fileio.enabled)
+ return ERR_PTR(-EBUSY);
+
+ block = iio_dma_buffer_alloc_block(queue, attach->dmabuf->size, false);
+ if (!block)
+ return ERR_PTR(-ENOMEM);
+
+ /* Free memory that might be in use for fileio mode */
+ iio_dma_buffer_fileio_free(queue);
+
+ return block;
+}
+EXPORT_SYMBOL_GPL(iio_dma_buffer_attach_dmabuf);
+
+void iio_dma_buffer_detach_dmabuf(struct iio_buffer *buffer,
+ struct iio_dma_buffer_block *block)
+{
+ block->state = IIO_BLOCK_STATE_DEAD;
+ iio_buffer_block_put_atomic(block);
+}
+EXPORT_SYMBOL_GPL(iio_dma_buffer_detach_dmabuf);
+
+static int iio_dma_can_enqueue_block(struct iio_dma_buffer_block *block)
+{
+ struct iio_dma_buffer_queue *queue = block->queue;
+
+ /* If in fileio mode buffers can't be enqueued. */
+ if (queue->fileio.enabled)
+ return -EBUSY;
+
+ switch (block->state) {
+ case IIO_BLOCK_STATE_QUEUED:
+ return -EPERM;
+ case IIO_BLOCK_STATE_ACTIVE:
+ case IIO_BLOCK_STATE_DEAD:
+ return -EBUSY;
+ case IIO_BLOCK_STATE_DONE:
+ break;
+ }
+
+ return 0;
+}
+
+int iio_dma_buffer_enqueue_dmabuf(struct iio_buffer *buffer,
+ struct iio_dma_buffer_block *block,
+ struct dma_fence *fence,
+ struct sg_table *sgt,
+ size_t size, bool cyclic)
+{
+ struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
+ bool cookie;
+ int ret;
+
+ WARN_ON(!mutex_is_locked(&queue->lock));
+
+ cookie = dma_fence_begin_signalling();
+
+ ret = iio_dma_can_enqueue_block(block);
+ if (ret < 0)
+ goto out_end_signalling;
+
+ block->bytes_used = size;
+ block->cyclic = cyclic;
+ block->sg_table = sgt;
+ block->fence = fence;
+
+ iio_dma_buffer_enqueue(queue, block);
+
+out_end_signalling:
+ dma_fence_end_signalling(cookie);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(iio_dma_buffer_enqueue_dmabuf);
+
+void iio_dma_buffer_lock_queue(struct iio_buffer *buffer)
+{
+ struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
+
+ mutex_lock(&queue->lock);
+}
+EXPORT_SYMBOL_GPL(iio_dma_buffer_lock_queue);
+
+void iio_dma_buffer_unlock_queue(struct iio_buffer *buffer)
+{
+ struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
+
+ mutex_unlock(&queue->lock);
+}
+EXPORT_SYMBOL_GPL(iio_dma_buffer_unlock_queue);
+
/**
* iio_dma_buffer_set_bytes_per_datum() - DMA buffer set_bytes_per_datum callback
* @buffer: Buffer to set the bytes-per-datum for
diff --git a/include/linux/iio/buffer-dma.h b/include/linux/iio/buffer-dma.h
index 6e27e47077d5..5eb66a399002 100644
--- a/include/linux/iio/buffer-dma.h
+++ b/include/linux/iio/buffer-dma.h
@@ -7,6 +7,7 @@
#ifndef __INDUSTRIALIO_DMA_BUFFER_H__
#define __INDUSTRIALIO_DMA_BUFFER_H__
+#include <linux/atomic.h>
#include <linux/list.h>
#include <linux/kref.h>
#include <linux/spinlock.h>
@@ -16,6 +17,9 @@
struct iio_dma_buffer_queue;
struct iio_dma_buffer_ops;
struct device;
+struct dma_buf_attachment;
+struct dma_fence;
+struct sg_table;
/**
* enum iio_block_state - State of a struct iio_dma_buffer_block
@@ -41,6 +45,10 @@ enum iio_block_state {
* @queue: Parent DMA buffer queue
* @kref: kref used to manage the lifetime of block
* @state: Current state of the block
+ * @cyclic: True if this is a cyclic buffer
+ * @fileio: True if this buffer is used for fileio mode
+ * @sg_table: DMA table for the transfer when transferring a DMABUF
+ * @fence: DMA fence to be signaled when a DMABUF transfer is complete
*/
struct iio_dma_buffer_block {
/* May only be accessed by the owner of the block */
@@ -63,6 +71,12 @@ struct iio_dma_buffer_block {
* queue->list_lock if the block is not owned by the core.
*/
enum iio_block_state state;
+
+ bool cyclic;
+ bool fileio;
+
+ struct sg_table *sg_table;
+ struct dma_fence *fence;
};
/**
@@ -72,6 +86,7 @@ struct iio_dma_buffer_block {
* @pos: Read offset in the active block
* @block_size: Size of each block
* @next_dequeue: index of next block that will be dequeued
+ * @enabled: Whether the buffer is operating in fileio mode
*/
struct iio_dma_buffer_queue_fileio {
struct iio_dma_buffer_block *blocks[2];
@@ -80,6 +95,7 @@ struct iio_dma_buffer_queue_fileio {
size_t block_size;
unsigned int next_dequeue;
+ bool enabled;
};
/**
@@ -95,6 +111,7 @@ struct iio_dma_buffer_queue_fileio {
* the DMA controller
* @incoming: List of buffers on the incoming queue
* @active: Whether the buffer is currently active
+ * @num_dmabufs: Total number of DMABUFs attached to this queue
* @fileio: FileIO state
*/
struct iio_dma_buffer_queue {
@@ -107,6 +124,7 @@ struct iio_dma_buffer_queue {
struct list_head incoming;
bool active;
+ atomic_t num_dmabufs;
struct iio_dma_buffer_queue_fileio fileio;
};
@@ -144,4 +162,17 @@ int iio_dma_buffer_init(struct iio_dma_buffer_queue *queue,
void iio_dma_buffer_exit(struct iio_dma_buffer_queue *queue);
void iio_dma_buffer_release(struct iio_dma_buffer_queue *queue);
+struct iio_dma_buffer_block *
+iio_dma_buffer_attach_dmabuf(struct iio_buffer *buffer,
+ struct dma_buf_attachment *attach);
+void iio_dma_buffer_detach_dmabuf(struct iio_buffer *buffer,
+ struct iio_dma_buffer_block *block);
+int iio_dma_buffer_enqueue_dmabuf(struct iio_buffer *buffer,
+ struct iio_dma_buffer_block *block,
+ struct dma_fence *fence,
+ struct sg_table *sgt,
+ size_t size, bool cyclic);
+void iio_dma_buffer_lock_queue(struct iio_buffer *buffer);
+void iio_dma_buffer_unlock_queue(struct iio_buffer *buffer);
+
#endif
--
2.43.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v11 5/7] iio: buffer-dmaengine: Support new DMABUF based userspace API
2024-06-18 10:02 [PATCH v11 0/7] iio: new DMABUF based API v11 Paul Cercueil
` (3 preceding siblings ...)
2024-06-18 10:02 ` [PATCH v11 4/7] iio: buffer-dma: Enable support for DMABUFs Paul Cercueil
@ 2024-06-18 10:03 ` Paul Cercueil
2024-06-18 10:03 ` [PATCH v11 6/7] Documentation: iio: Document high-speed DMABUF based API Paul Cercueil
2024-06-18 10:03 ` [PATCH v11 7/7] Documentation: dmaengine: Document new dma_vec API Paul Cercueil
6 siblings, 0 replies; 33+ messages in thread
From: Paul Cercueil @ 2024-06-18 10:03 UTC (permalink / raw)
To: Jonathan Cameron, Lars-Peter Clausen, Vinod Koul, Sumit Semwal,
Christian König
Cc: Jonathan Corbet, Nuno Sa, linux-iio, linux-doc, linux-kernel,
dmaengine, linux-media, dri-devel, linaro-mm-sig, Paul Cercueil
Use the functions provided by the buffer-dma core to implement the
DMABUF userspace API in the buffer-dmaengine IIO buffer implementation.
Since we want to be able to transfer an arbitrary number of bytes and
not necessarily the full DMABUF, the associated scatterlist is converted
to an array of DMA addresses + lengths, which is then passed to
dmaengine_prep_peripheral_dma_vec().
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Co-developed-by: Nuno Sa <nuno.sa@analog.com>
Signed-off-by: Nuno Sa <nuno.sa@analog.com>
---
v3: Use the new dmaengine_prep_slave_dma_array(), and adapt the code to
work with the new functions introduced in industrialio-buffer-dma.c.
v5: - Use the new dmaengine_prep_slave_dma_vec().
- Restrict to input buffers, since output buffers are not yet
supported by IIO buffers.
v6: - Populate .lock_queue / .unlock_queue callbacks
- Switch to atomic memory allocations in .submit_queue, because of
the dma_fence critical section
- Make sure that the size of the scatterlist is enough
v7: Adapted patch for the changes made in patch 1.
v10:
- Remove extra flags parameter to dmaengine_prep_peripheral_dma_vec()
- Add support for TX transfers
---
.../buffer/industrialio-buffer-dmaengine.c | 62 ++++++++++++++++---
1 file changed, 53 insertions(+), 9 deletions(-)
diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
index 918f6f8d65b6..12aa1412dfa0 100644
--- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
@@ -65,25 +65,62 @@ static int iio_dmaengine_buffer_submit_block(struct iio_dma_buffer_queue *queue,
iio_buffer_to_dmaengine_buffer(&queue->buffer);
struct dma_async_tx_descriptor *desc;
enum dma_transfer_direction dma_dir;
+ struct scatterlist *sgl;
+ struct dma_vec *vecs;
size_t max_size;
dma_cookie_t cookie;
+ size_t len_total;
+ unsigned int i;
+ int nents;
max_size = min(block->size, dmaengine_buffer->max_size);
max_size = round_down(max_size, dmaengine_buffer->align);
- if (queue->buffer.direction == IIO_BUFFER_DIRECTION_IN) {
- block->bytes_used = max_size;
+ if (queue->buffer.direction == IIO_BUFFER_DIRECTION_IN)
dma_dir = DMA_DEV_TO_MEM;
- } else {
+ else
dma_dir = DMA_MEM_TO_DEV;
- }
- if (!block->bytes_used || block->bytes_used > max_size)
- return -EINVAL;
+ if (block->sg_table) {
+ sgl = block->sg_table->sgl;
+ nents = sg_nents_for_len(sgl, block->bytes_used);
+ if (nents < 0)
+ return nents;
+
+ vecs = kmalloc_array(nents, sizeof(*vecs), GFP_ATOMIC);
+ if (!vecs)
+ return -ENOMEM;
+
+ len_total = block->bytes_used;
+
+ for (i = 0; i < nents; i++) {
+ vecs[i].addr = sg_dma_address(sgl);
+ vecs[i].len = min(sg_dma_len(sgl), len_total);
+ len_total -= vecs[i].len;
+
+ sgl = sg_next(sgl);
+ }
- desc = dmaengine_prep_slave_single(dmaengine_buffer->chan,
- block->phys_addr, block->bytes_used, dma_dir,
- DMA_PREP_INTERRUPT);
+ desc = dmaengine_prep_peripheral_dma_vec(dmaengine_buffer->chan,
+ vecs, nents, dma_dir,
+ DMA_PREP_INTERRUPT);
+ kfree(vecs);
+ } else {
+ max_size = min(block->size, dmaengine_buffer->max_size);
+ max_size = round_down(max_size, dmaengine_buffer->align);
+
+ if (queue->buffer.direction == IIO_BUFFER_DIRECTION_IN)
+ block->bytes_used = max_size;
+
+ if (!block->bytes_used || block->bytes_used > max_size)
+ return -EINVAL;
+
+ desc = dmaengine_prep_slave_single(dmaengine_buffer->chan,
+ block->phys_addr,
+ block->bytes_used,
+ dma_dir,
+ DMA_PREP_INTERRUPT);
+ }
if (!desc)
return -ENOMEM;
@@ -133,6 +170,13 @@ static const struct iio_buffer_access_funcs iio_dmaengine_buffer_ops = {
.space_available = iio_dma_buffer_usage,
.release = iio_dmaengine_buffer_release,
+ .enqueue_dmabuf = iio_dma_buffer_enqueue_dmabuf,
+ .attach_dmabuf = iio_dma_buffer_attach_dmabuf,
+ .detach_dmabuf = iio_dma_buffer_detach_dmabuf,
+
+ .lock_queue = iio_dma_buffer_lock_queue,
+ .unlock_queue = iio_dma_buffer_unlock_queue,
+
.modes = INDIO_BUFFER_HARDWARE,
.flags = INDIO_BUFFER_FLAG_FIXED_WATERMARK,
};
--
2.43.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v11 6/7] Documentation: iio: Document high-speed DMABUF based API
2024-06-18 10:02 [PATCH v11 0/7] iio: new DMABUF based API v11 Paul Cercueil
` (4 preceding siblings ...)
2024-06-18 10:03 ` [PATCH v11 5/7] iio: buffer-dmaengine: Support new DMABUF based userspace API Paul Cercueil
@ 2024-06-18 10:03 ` Paul Cercueil
2024-06-19 12:22 ` Bagas Sanjaya
2024-06-18 10:03 ` [PATCH v11 7/7] Documentation: dmaengine: Document new dma_vec API Paul Cercueil
6 siblings, 1 reply; 33+ messages in thread
From: Paul Cercueil @ 2024-06-18 10:03 UTC (permalink / raw)
To: Jonathan Cameron, Lars-Peter Clausen, Vinod Koul, Sumit Semwal,
Christian König
Cc: Jonathan Corbet, Nuno Sa, linux-iio, linux-doc, linux-kernel,
dmaengine, linux-media, dri-devel, linaro-mm-sig, Paul Cercueil
Document the new DMABUF based API.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Co-developed-by: Nuno Sa <nuno.sa@analog.com>
Signed-off-by: Nuno Sa <nuno.sa@analog.com>
---
v2: - Explicitly state that the new interface is optional and is
not implemented by all drivers.
- The IOCTLs can now only be called on the buffer FD returned by
IIO_BUFFER_GET_FD_IOCTL.
- Move the page up a bit in the index since it is core stuff and not
driver-specific.
v3: Update the documentation to reflect the new API.
v5: Use description lists for the documentation of the three new IOCTLs
instead of abusing subsections.
v8: Renamed dmabuf_api.rst -> iio_dmabuf_api.rst, and updated index.rst
whose format changed in iio/togreg.
v11:
- "a IIO buffer" -> "an IIO buffer"
- Add variable name in IOCTL calls
---
Documentation/iio/iio_dmabuf_api.rst | 54 ++++++++++++++++++++++++++++
Documentation/iio/index.rst | 1 +
2 files changed, 55 insertions(+)
create mode 100644 Documentation/iio/iio_dmabuf_api.rst
diff --git a/Documentation/iio/iio_dmabuf_api.rst b/Documentation/iio/iio_dmabuf_api.rst
new file mode 100644
index 000000000000..ad0177c3fef2
--- /dev/null
+++ b/Documentation/iio/iio_dmabuf_api.rst
@@ -0,0 +1,54 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===================================
+High-speed DMABUF interface for IIO
+===================================
+
+1. Overview
+===========
+
+The Industrial I/O subsystem supports access to buffers through a
+file-based interface, with read() and write() access calls through the
+IIO device's dev node.
+
+It additionally supports a DMABUF based interface, where the userspace
+can attach DMABUF objects (externally created) to an IIO buffer, and
+subsequently use them for data transfers.
+
+A userspace application can then use this interface to share DMABUF
+objects between several interfaces, allowing it to transfer data in a
+zero-copy fashion, for instance between IIO and the USB stack.
+
+The userspace application can also memory-map the DMABUF objects, and
+access the sample data directly. The advantage of doing this vs. the
+read() interface is that it avoids an extra copy of the data between the
+kernel and userspace. This is particularly useful for high-speed devices
+which produce several megabytes or even gigabytes of data per second.
+It does however increase the userspace-kernelspace synchronization
+overhead, as the DMA_BUF_SYNC_START and DMA_BUF_SYNC_END IOCTLs have to
+be used for data integrity.
+
+2. User API
+===========
+
+As part of this interface, three new IOCTLs have been added. These three
+IOCTLs have to be performed on the IIO buffer's file descriptor,
+obtained using the IIO_BUFFER_GET_FD_IOCTL() ioctl.
+
+ ``IIO_BUFFER_DMABUF_ATTACH_IOCTL(int fd)``
+ Attach the DMABUF object, identified by its file descriptor, to the
+ IIO buffer. Returns zero on success, and a negative errno value on
+ error.
+
+ ``IIO_BUFFER_DMABUF_DETACH_IOCTL(int fd)``
+ Detach the given DMABUF object, identified by its file descriptor,
+ from the IIO buffer. Returns zero on success, and a negative errno
+ value on error.
+
+ Note that closing the IIO buffer's file descriptor will
+ automatically detach all previously attached DMABUF objects.
+
+ ``IIO_BUFFER_DMABUF_ENQUEUE_IOCTL(struct iio_dmabuf *iio_dmabuf)``
+ Enqueue a previously attached DMABUF object to the buffer queue.
+ Enqueued DMABUFs will be read from (if output buffer) or written to
+ (if input buffer) as long as the buffer is enabled.
diff --git a/Documentation/iio/index.rst b/Documentation/iio/index.rst
index 4c13bfa2865c..9cb4c50cb20d 100644
--- a/Documentation/iio/index.rst
+++ b/Documentation/iio/index.rst
@@ -9,6 +9,7 @@ Industrial I/O
iio_configfs
iio_devbuf
+ iio_dmabuf_api
iio_tools
Industrial I/O Kernel Drivers
--
2.43.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v11 7/7] Documentation: dmaengine: Document new dma_vec API
2024-06-18 10:02 [PATCH v11 0/7] iio: new DMABUF based API v11 Paul Cercueil
` (5 preceding siblings ...)
2024-06-18 10:03 ` [PATCH v11 6/7] Documentation: iio: Document high-speed DMABUF based API Paul Cercueil
@ 2024-06-18 10:03 ` Paul Cercueil
2024-06-19 12:34 ` Bagas Sanjaya
6 siblings, 1 reply; 33+ messages in thread
From: Paul Cercueil @ 2024-06-18 10:03 UTC (permalink / raw)
To: Jonathan Cameron, Lars-Peter Clausen, Vinod Koul, Sumit Semwal,
Christian König
Cc: Jonathan Corbet, Nuno Sa, linux-iio, linux-doc, linux-kernel,
dmaengine, linux-media, dri-devel, linaro-mm-sig, Paul Cercueil
Document the dmaengine_prep_peripheral_dma_vec() API function, the
device_prep_peripheral_dma_vec() backend function, and the dma_vec
struct.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
---
v11: New patch
---
Documentation/driver-api/dmaengine/client.rst | 9 +++++++++
Documentation/driver-api/dmaengine/provider.rst | 10 ++++++++++
2 files changed, 19 insertions(+)
diff --git a/Documentation/driver-api/dmaengine/client.rst b/Documentation/driver-api/dmaengine/client.rst
index ecf139f73da4..d491e385d61a 100644
--- a/Documentation/driver-api/dmaengine/client.rst
+++ b/Documentation/driver-api/dmaengine/client.rst
@@ -80,6 +80,10 @@ The details of these operations are:
- slave_sg: DMA a list of scatter gather buffers from/to a peripheral
+ - peripheral_dma_vec: DMA an array of scatter gather buffers from/to a
+ peripheral. Similar to slave_sg, but uses an array of dma_vec
+ structures instead of a scatterlist.
+
- dma_cyclic: Perform a cyclic DMA operation from/to a peripheral till the
operation is explicitly stopped.
@@ -102,6 +106,11 @@ The details of these operations are:
unsigned int sg_len, enum dma_data_direction direction,
unsigned long flags);
+ struct dma_async_tx_descriptor *dmaengine_prep_peripheral_dma_vec(
+ struct dma_chan *chan, const struct dma_vec *vecs,
+ size_t nents, enum dma_data_direction direction,
+ unsigned long flags);
+
struct dma_async_tx_descriptor *dmaengine_prep_dma_cyclic(
struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
size_t period_len, enum dma_data_direction direction);
diff --git a/Documentation/driver-api/dmaengine/provider.rst b/Documentation/driver-api/dmaengine/provider.rst
index ceac2a300e32..3085f8b460fa 100644
--- a/Documentation/driver-api/dmaengine/provider.rst
+++ b/Documentation/driver-api/dmaengine/provider.rst
@@ -433,6 +433,12 @@ supported.
- residue: Provides the residue bytes of the transfer for those that
support residue.
+- ``device_prep_peripheral_dma_vec``
+
+ - Similar to ``device_prep_slave_sg``, but it takes a pointer to an
+ array of ``dma_vec`` structures, which (in the long run) will replace
+ scatterlists.
+
- ``device_issue_pending``
- Takes the first transaction descriptor in the pending queue,
@@ -544,6 +550,10 @@ dma_cookie_t
- Not really relevant any more since the introduction of ``virt-dma``
that abstracts it away.
+dma_vec
+
+- A small structure that contains a DMA address and length.
+
DMA_CTRL_ACK
- If clear, the descriptor cannot be reused by provider until the
--
2.43.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* Re: [PATCH v11 3/7] iio: core: Add new DMABUF interface infrastructure
2024-06-18 10:02 ` [PATCH v11 3/7] iio: core: Add new DMABUF interface infrastructure Paul Cercueil
@ 2024-06-19 3:15 ` kernel test robot
2024-06-19 5:57 ` Nuno Sá
` (2 more replies)
2024-06-19 11:43 ` [PATCH v11 3/7] iio: core: Add new DMABUF interface infrastructure Markus Elfring
1 sibling, 3 replies; 33+ messages in thread
From: kernel test robot @ 2024-06-19 3:15 UTC (permalink / raw)
To: Paul Cercueil, Jonathan Cameron, Lars-Peter Clausen, Vinod Koul,
Sumit Semwal, Christian König
Cc: oe-kbuild-all, Jonathan Corbet, Nuno Sa, linux-iio, linux-doc,
linux-kernel, dmaengine, linux-media, dri-devel, linaro-mm-sig,
Paul Cercueil
Hi Paul,
kernel test robot noticed the following build errors:
[auto build test ERROR on jic23-iio/togreg]
[also build test ERROR on vkoul-dmaengine/next linus/master v6.10-rc4 next-20240618]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Paul-Cercueil/dmaengine-Add-API-function-dmaengine_prep_peripheral_dma_vec/20240618-180602
base: https://git.kernel.org/pub/scm/linux/kernel/git/jic23/iio.git togreg
patch link: https://lore.kernel.org/r/20240618100302.72886-4-paul%40crapouillou.net
patch subject: [PATCH v11 3/7] iio: core: Add new DMABUF interface infrastructure
config: x86_64-randconfig-161-20240619 (https://download.01.org/0day-ci/archive/20240619/202406191014.9JAzwRV6-lkp@intel.com/config)
compiler: clang version 18.1.5 (https://github.com/llvm/llvm-project 617a15a9eac96088ae5e9134248d8236e34b91b1)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240619/202406191014.9JAzwRV6-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202406191014.9JAzwRV6-lkp@intel.com/
All errors (new ones prefixed by >>):
>> drivers/iio/industrialio-buffer.c:1715:3: error: cannot jump from this goto statement to its label
1715 | goto err_dmabuf_unmap_attachment;
| ^
drivers/iio/industrialio-buffer.c:1720:2: note: jump bypasses initialization of variable with __attribute__((cleanup))
1720 | guard(mutex)(&buffer->dmabufs_mutex);
| ^
include/linux/cleanup.h:164:15: note: expanded from macro 'guard'
164 | CLASS(_name, __UNIQUE_ID(guard))
| ^
include/linux/compiler.h:189:29: note: expanded from macro '__UNIQUE_ID'
189 | #define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__)
| ^
include/linux/compiler_types.h:84:22: note: expanded from macro '__PASTE'
84 | #define __PASTE(a,b) ___PASTE(a,b)
| ^
include/linux/compiler_types.h:83:23: note: expanded from macro '___PASTE'
83 | #define ___PASTE(a,b) a##b
| ^
<scratch space>:126:1: note: expanded from here
126 | __UNIQUE_ID_guard696
| ^
drivers/iio/industrialio-buffer.c:1704:3: error: cannot jump from this goto statement to its label
1704 | goto err_resv_unlock;
| ^
drivers/iio/industrialio-buffer.c:1720:2: note: jump bypasses initialization of variable with __attribute__((cleanup))
1720 | guard(mutex)(&buffer->dmabufs_mutex);
| ^
include/linux/cleanup.h:164:15: note: expanded from macro 'guard'
164 | CLASS(_name, __UNIQUE_ID(guard))
| ^
include/linux/compiler.h:189:29: note: expanded from macro '__UNIQUE_ID'
189 | #define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__)
| ^
include/linux/compiler_types.h:84:22: note: expanded from macro '__PASTE'
84 | #define __PASTE(a,b) ___PASTE(a,b)
| ^
include/linux/compiler_types.h:83:23: note: expanded from macro '___PASTE'
83 | #define ___PASTE(a,b) a##b
| ^
<scratch space>:126:1: note: expanded from here
126 | __UNIQUE_ID_guard696
| ^
drivers/iio/industrialio-buffer.c:1695:3: error: cannot jump from this goto statement to its label
1695 | goto err_dmabuf_detach;
| ^
drivers/iio/industrialio-buffer.c:1720:2: note: jump bypasses initialization of variable with __attribute__((cleanup))
1720 | guard(mutex)(&buffer->dmabufs_mutex);
| ^
include/linux/cleanup.h:164:15: note: expanded from macro 'guard'
164 | CLASS(_name, __UNIQUE_ID(guard))
| ^
include/linux/compiler.h:189:29: note: expanded from macro '__UNIQUE_ID'
189 | #define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__)
| ^
include/linux/compiler_types.h:84:22: note: expanded from macro '__PASTE'
84 | #define __PASTE(a,b) ___PASTE(a,b)
| ^
include/linux/compiler_types.h:83:23: note: expanded from macro '___PASTE'
83 | #define ___PASTE(a,b) a##b
| ^
<scratch space>:126:1: note: expanded from here
126 | __UNIQUE_ID_guard696
| ^
drivers/iio/industrialio-buffer.c:1690:3: error: cannot jump from this goto statement to its label
1690 | goto err_dmabuf_put;
| ^
drivers/iio/industrialio-buffer.c:1720:2: note: jump bypasses initialization of variable with __attribute__((cleanup))
1720 | guard(mutex)(&buffer->dmabufs_mutex);
| ^
include/linux/cleanup.h:164:15: note: expanded from macro 'guard'
164 | CLASS(_name, __UNIQUE_ID(guard))
| ^
include/linux/compiler.h:189:29: note: expanded from macro '__UNIQUE_ID'
189 | #define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__)
| ^
include/linux/compiler_types.h:84:22: note: expanded from macro '__PASTE'
84 | #define __PASTE(a,b) ___PASTE(a,b)
| ^
include/linux/compiler_types.h:83:23: note: expanded from macro '___PASTE'
83 | #define ___PASTE(a,b) a##b
| ^
<scratch space>:126:1: note: expanded from here
126 | __UNIQUE_ID_guard696
| ^
drivers/iio/industrialio-buffer.c:1684:3: error: cannot jump from this goto statement to its label
1684 | goto err_free_priv;
| ^
drivers/iio/industrialio-buffer.c:1720:2: note: jump bypasses initialization of variable with __attribute__((cleanup))
1720 | guard(mutex)(&buffer->dmabufs_mutex);
| ^
include/linux/cleanup.h:164:15: note: expanded from macro 'guard'
164 | CLASS(_name, __UNIQUE_ID(guard))
| ^
include/linux/compiler.h:189:29: note: expanded from macro '__UNIQUE_ID'
189 | #define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__)
| ^
include/linux/compiler_types.h:84:22: note: expanded from macro '__PASTE'
84 | #define __PASTE(a,b) ___PASTE(a,b)
| ^
include/linux/compiler_types.h:83:23: note: expanded from macro '___PASTE'
83 | #define ___PASTE(a,b) a##b
| ^
<scratch space>:126:1: note: expanded from here
126 | __UNIQUE_ID_guard696
| ^
vim +1715 drivers/iio/industrialio-buffer.c
1655
1656 static int iio_buffer_attach_dmabuf(struct iio_dev_buffer_pair *ib,
1657 int __user *user_fd, bool nonblock)
1658 {
1659 struct iio_dev *indio_dev = ib->indio_dev;
1660 struct iio_buffer *buffer = ib->buffer;
1661 struct dma_buf_attachment *attach;
1662 struct iio_dmabuf_priv *priv, *each;
1663 struct dma_buf *dmabuf;
1664 int err, fd;
1665
1666 if (!buffer->access->attach_dmabuf
1667 || !buffer->access->detach_dmabuf
1668 || !buffer->access->enqueue_dmabuf)
1669 return -EPERM;
1670
1671 if (copy_from_user(&fd, user_fd, sizeof(fd)))
1672 return -EFAULT;
1673
1674 priv = kzalloc(sizeof(*priv), GFP_KERNEL);
1675 if (!priv)
1676 return -ENOMEM;
1677
1678 spin_lock_init(&priv->lock);
1679 priv->context = dma_fence_context_alloc(1);
1680
1681 dmabuf = dma_buf_get(fd);
1682 if (IS_ERR(dmabuf)) {
1683 err = PTR_ERR(dmabuf);
1684 goto err_free_priv;
1685 }
1686
1687 attach = dma_buf_attach(dmabuf, indio_dev->dev.parent);
1688 if (IS_ERR(attach)) {
1689 err = PTR_ERR(attach);
1690 goto err_dmabuf_put;
1691 }
1692
1693 err = iio_dma_resv_lock(dmabuf, nonblock);
1694 if (err)
1695 goto err_dmabuf_detach;
1696
1697 priv->dir = buffer->direction == IIO_BUFFER_DIRECTION_IN
1698 ? DMA_FROM_DEVICE : DMA_TO_DEVICE;
1699
1700 priv->sgt = dma_buf_map_attachment(attach, priv->dir);
1701 if (IS_ERR(priv->sgt)) {
1702 err = PTR_ERR(priv->sgt);
1703 dev_err(&indio_dev->dev, "Unable to map attachment: %d\n", err);
1704 goto err_resv_unlock;
1705 }
1706
1707 kref_init(&priv->ref);
1708 priv->buffer = buffer;
1709 priv->attach = attach;
1710 attach->importer_priv = priv;
1711
1712 priv->block = buffer->access->attach_dmabuf(buffer, attach);
1713 if (IS_ERR(priv->block)) {
1714 err = PTR_ERR(priv->block);
> 1715 goto err_dmabuf_unmap_attachment;
1716 }
1717
1718 dma_resv_unlock(dmabuf->resv);
1719
1720 guard(mutex)(&buffer->dmabufs_mutex);
1721
1722 /*
1723 * Check whether we already have an attachment for this driver/DMABUF
1724 * combo. If we do, refuse to attach.
1725 */
1726 list_for_each_entry(each, &buffer->dmabufs, entry) {
1727 if (each->attach->dev == indio_dev->dev.parent
1728 && each->attach->dmabuf == dmabuf) {
1729 /*
1730 * We unlocked the reservation object, so going through
1731 * the cleanup code would mean re-locking it first.
1732 * At this stage it is simpler to free the attachment
1733 * using iio_buffer_dmabuf_put().
1734 */
1735 iio_buffer_dmabuf_put(attach);
1736 return -EBUSY;
1737 }
1738 }
1739
1740 /* Otherwise, add the new attachment to our dmabufs list. */
1741 list_add(&priv->entry, &buffer->dmabufs);
1742
1743 return 0;
1744
1745 err_dmabuf_unmap_attachment:
1746 dma_buf_unmap_attachment(attach, priv->sgt, priv->dir);
1747 err_resv_unlock:
1748 dma_resv_unlock(dmabuf->resv);
1749 err_dmabuf_detach:
1750 dma_buf_detach(dmabuf, attach);
1751 err_dmabuf_put:
1752 dma_buf_put(dmabuf);
1753 err_free_priv:
1754 kfree(priv);
1755
1756 return err;
1757 }
1758
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v11 3/7] iio: core: Add new DMABUF interface infrastructure
2024-06-19 3:15 ` kernel test robot
@ 2024-06-19 5:57 ` Nuno Sá
2024-06-19 10:03 ` [v11 " Markus Elfring
2024-06-20 10:45 ` Markus Elfring
2 siblings, 0 replies; 33+ messages in thread
From: Nuno Sá @ 2024-06-19 5:57 UTC (permalink / raw)
To: kernel test robot, Paul Cercueil, Jonathan Cameron,
Lars-Peter Clausen, Vinod Koul, Sumit Semwal,
Christian König
Cc: oe-kbuild-all, Jonathan Corbet, Nuno Sa, linux-iio, linux-doc,
linux-kernel, dmaengine, linux-media, dri-devel, linaro-mm-sig
On Wed, 2024-06-19 at 11:15 +0800, kernel test robot wrote:
> Hi Paul,
>
> kernel test robot noticed the following build errors:
>
> [auto build test ERROR on jic23-iio/togreg]
> [also build test ERROR on vkoul-dmaengine/next linus/master v6.10-rc4 next-20240618]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch#_base_tree_information]
>
> url:
> https://github.com/intel-lab-lkp/linux/commits/Paul-Cercueil/dmaengine-Add-API-function-dmaengine_prep_peripheral_dma_vec/20240618-180602
> base: https://git.kernel.org/pub/scm/linux/kernel/git/jic23/iio.git togreg
> patch link:
> https://lore.kernel.org/r/20240618100302.72886-4-paul%40crapouillou.net
> patch subject: [PATCH v11 3/7] iio: core: Add new DMABUF interface infrastructure
> config: x86_64-randconfig-161-20240619
> (https://download.01.org/0day-ci/archive/20240619/202406191014.9JAzwRV6-lkp@intel.com/config)
> compiler: clang version 18.1.5
> (https://github.com/llvm/llvm-project 617a15a9eac96088ae5e9134248d8236e34b91b1)
> reproduce (this is a W=1 build):
> (https://download.01.org/0day-ci/archive/20240619/202406191014.9JAzwRV6-lkp@intel.com/reproduce)
>
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> > Reported-by: kernel test robot <lkp@intel.com>
> > Closes:
> > https://lore.kernel.org/oe-kbuild-all/202406191014.9JAzwRV6-lkp@intel.com/
>
> All errors (new ones prefixed by >>):
>
> > > drivers/iio/industrialio-buffer.c:1715:3: error: cannot jump from this goto
> > > statement to its label
> 1715 | goto err_dmabuf_unmap_attachment;
> | ^
> drivers/iio/industrialio-buffer.c:1720:2: note: jump bypasses initialization of
I guess the compiler would otherwise produce code that runs the cleanup function
on an uninitialized variable. I would then go back to plain
mutex_lock()/mutex_unlock() instead of moving guard() to a place where it does
not make sense just to silence the warnings.
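For what it's worth, the restriction is easy to reproduce outside the kernel,
since guard() ultimately expands to a local variable carrying
__attribute__((cleanup)). A minimal userspace sketch (the GUARD() macro and
the names below are made up for illustration, not kernel API):

```c
static int lock_depth;

/* Cleanup handler: runs automatically when the guarded variable goes
 * out of scope, mimicking what guard(mutex)(...) arranges. */
static void drop_guard(int *token)
{
	(void)token;
	lock_depth--;
}

/* Hypothetical stand-in for the kernel's guard() macro. */
#define GUARD(name) \
	int name __attribute__((cleanup(drop_guard))) = ++lock_depth

static int do_attach(int fail_early)
{
	if (fail_early)
		goto err;	/* OK: no cleanup variable is live yet */

	{
		GUARD(g);	/* "lock" taken; auto-dropped at scope exit */
		/*
		 * A goto from *before* this declaration to a label placed
		 * *after* it in the same scope is what clang rejects:
		 * "jump bypasses initialization of variable with
		 * __attribute__((cleanup))", since the compiler could not
		 * know whether to run drop_guard() on the skipped variable.
		 */
		return lock_depth;
	}

err:
	return -1;
}
```

This builds with GCC or Clang because the guarded variable lives in its own
scope; hoisting GUARD(g) to function scope, between the goto and the err
label, reproduces the "cannot jump from this goto statement to its label"
error quoted above.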
- Nuno Sá
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [v11 3/7] iio: core: Add new DMABUF interface infrastructure
2024-06-19 3:15 ` kernel test robot
2024-06-19 5:57 ` Nuno Sá
@ 2024-06-19 10:03 ` Markus Elfring
2024-06-19 10:09 ` Paul Cercueil
2024-06-20 10:45 ` Markus Elfring
2 siblings, 1 reply; 33+ messages in thread
From: Markus Elfring @ 2024-06-19 10:03 UTC (permalink / raw)
To: lkp, Paul Cercueil, Nuno Sá, linux-iio, dmaengine,
linux-media, dri-devel, linaro-mm-sig, Christian König,
Jonathan Cameron, Lars-Peter Clausen, Sumit Semwal, Vinod Koul
Cc: oe-kbuild-all, LKML, linux-doc, Jonathan Corbet, Randy Dunlap
…
> All errors (new ones prefixed by >>):
>
> >> drivers/iio/industrialio-buffer.c:1715:3: error: cannot jump from this goto statement to its label
> 1715 | goto err_dmabuf_unmap_attachment;
…
Would you dare to transform the remaining goto chain into further applications
of scope-based resource management?
Regards,
Markus
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [v11 3/7] iio: core: Add new DMABUF interface infrastructure
2024-06-19 10:03 ` [v11 " Markus Elfring
@ 2024-06-19 10:09 ` Paul Cercueil
2024-06-19 11:13 ` Markus Elfring
0 siblings, 1 reply; 33+ messages in thread
From: Paul Cercueil @ 2024-06-19 10:09 UTC (permalink / raw)
To: Markus Elfring, lkp, Nuno Sá, linux-iio, dmaengine,
linux-media, dri-devel, linaro-mm-sig, Christian König,
Jonathan Cameron, Lars-Peter Clausen, Sumit Semwal, Vinod Koul
Cc: oe-kbuild-all, LKML, linux-doc, Jonathan Corbet, Randy Dunlap
Hi Markus,
Le mercredi 19 juin 2024 à 12:03 +0200, Markus Elfring a écrit :
> …
> > All errors (new ones prefixed by >>):
> >
> > > > drivers/iio/industrialio-buffer.c:1715:3: error: cannot jump
> > > > from this goto statement to its label
> > 1715 | goto err_dmabuf_unmap_attachment;
> …
>
> Would you dare to transform the remaining goto chain into further
> applications
> of scope-based resource management?
We discussed this after v6 or v7, DRM/DMABUF maintainers were not keen
on doing that *just yet*.
Cheers,
-Paul
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [v11 3/7] iio: core: Add new DMABUF interface infrastructure
2024-06-19 10:09 ` Paul Cercueil
@ 2024-06-19 11:13 ` Markus Elfring
2024-06-19 11:30 ` Paul Cercueil
0 siblings, 1 reply; 33+ messages in thread
From: Markus Elfring @ 2024-06-19 11:13 UTC (permalink / raw)
To: Paul Cercueil, lkp, Nuno Sá, linux-iio, dmaengine,
linux-media, dri-devel, linaro-mm-sig, Christian König,
Jonathan Cameron, Lars-Peter Clausen, Sumit Semwal, Vinod Koul
Cc: oe-kbuild-all, LKML, linux-doc, Jonathan Corbet, Randy Dunlap
>> Would you dare to transform the remaining goto chain into further applications
>> of scope-based resource management?
>
> We discussed this after v6 or v7, DRM/DMABUF maintainers were not keen
> on doing that *just yet*.
* Would you like to add any links for corresponding development discussions?
* Will the desire grow for further collateral evolution according to
affected software components?
Regards,
Markus
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [v11 3/7] iio: core: Add new DMABUF interface infrastructure
2024-06-19 11:13 ` Markus Elfring
@ 2024-06-19 11:30 ` Paul Cercueil
2024-06-19 11:56 ` Markus Elfring
0 siblings, 1 reply; 33+ messages in thread
From: Paul Cercueil @ 2024-06-19 11:30 UTC (permalink / raw)
To: Markus Elfring, lkp, Nuno Sá, linux-iio, dmaengine,
linux-media, dri-devel, linaro-mm-sig, Christian König,
Jonathan Cameron, Lars-Peter Clausen, Sumit Semwal, Vinod Koul
Cc: oe-kbuild-all, LKML, linux-doc, Jonathan Corbet, Randy Dunlap
Le mercredi 19 juin 2024 à 13:13 +0200, Markus Elfring a écrit :
> > > Would you dare to transform the remaining goto chain into further
> > > applications
> > > of scope-based resource management?
> >
> > We discussed this after v6 or v7, DRM/DMABUF maintainers were not
> > keen
> > on doing that *just yet*.
>
> * Would you like to add any links for corresponding development
> discussions?
https://lore.kernel.org/linux-iio/219abc43b4fdd4a13b307ed2efaa0e6869e68e3f.camel@gmail.com/T/#eefd360069c4261aec9621fafde30924706571c94
(and responses below)
It's more nuanced than I remembered. Christian was OK with adding cleanup.h
support to the DMABUF code as long as the examples were updated as
well, but those aren't good candidates as they don't free up the
resources in all code paths.
>
> * Will the desire grow for further collateral evolution according to
> affected software components?
Not sure what you mean by that.
Cheers,
-Paul
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v11 3/7] iio: core: Add new DMABUF interface infrastructure
2024-06-18 10:02 ` [PATCH v11 3/7] iio: core: Add new DMABUF interface infrastructure Paul Cercueil
2024-06-19 3:15 ` kernel test robot
@ 2024-06-19 11:43 ` Markus Elfring
2024-06-19 12:16 ` Paul Cercueil
1 sibling, 1 reply; 33+ messages in thread
From: Markus Elfring @ 2024-06-19 11:43 UTC (permalink / raw)
To: Paul Cercueil, Nuno Sá, linux-iio, dmaengine, linux-media,
dri-devel, linaro-mm-sig, Christian König, Jonathan Cameron,
Lars-Peter Clausen, Sumit Semwal, Vinod Koul
Cc: LKML, linux-doc, Jonathan Corbet, Randy Dunlap
…
> +++ b/drivers/iio/industrialio-buffer.c
…
> +static void iio_buffer_dmabuf_release(struct kref *ref)
> +{
…
> + dma_resv_lock(dmabuf->resv, NULL);
> + dma_buf_unmap_attachment(attach, priv->sgt, priv->dir);
> + dma_resv_unlock(dmabuf->resv);
…
Under which circumstances will another lock guard become applicable?
https://elixir.bootlin.com/linux/v6.10-rc4/source/include/linux/cleanup.h#L179
Regards,
Markus
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [v11 3/7] iio: core: Add new DMABUF interface infrastructure
2024-06-19 11:30 ` Paul Cercueil
@ 2024-06-19 11:56 ` Markus Elfring
2024-06-19 12:21 ` Paul Cercueil
0 siblings, 1 reply; 33+ messages in thread
From: Markus Elfring @ 2024-06-19 11:56 UTC (permalink / raw)
To: Paul Cercueil, lkp, Nuno Sá, linux-iio, dmaengine,
linux-media, dri-devel, linaro-mm-sig, Christian König,
Jonathan Cameron, Lars-Peter Clausen, Sumit Semwal, Vinod Koul
Cc: oe-kbuild-all, LKML, linux-doc, Jonathan Corbet, Randy Dunlap
…
> https://lore.kernel.org/linux-iio/219abc43b4fdd4a13b307ed2efaa0e6869e68e3f.camel@gmail.com/T/#eefd360069c4261aec9621fafde30924706571c94
>
> (and responses below)
>
> It's more nuanced than I remembered.
…
>> * Will the desire grow for further collateral evolution according to
>> affected software components?
>
> Not sure what you mean by that.
Advanced programming interfaces were added a while ago.
Example:
https://elixir.bootlin.com/linux/v6.10-rc4/source/include/linux/cleanup.h#L8
Corresponding attempts for increasing API usage need to adapt to remaining change reluctance,
don't they?
Regards,
Markus
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v11 3/7] iio: core: Add new DMABUF interface infrastructure
2024-06-19 11:43 ` [PATCH v11 3/7] iio: core: Add new DMABUF interface infrastructure Markus Elfring
@ 2024-06-19 12:16 ` Paul Cercueil
2024-06-19 13:28 ` Markus Elfring
0 siblings, 1 reply; 33+ messages in thread
From: Paul Cercueil @ 2024-06-19 12:16 UTC (permalink / raw)
To: Markus Elfring, Nuno Sá, linux-iio, dmaengine, linux-media,
dri-devel, linaro-mm-sig, Christian König, Jonathan Cameron,
Lars-Peter Clausen, Sumit Semwal, Vinod Koul
Cc: LKML, linux-doc, Jonathan Corbet, Randy Dunlap
Le mercredi 19 juin 2024 à 13:43 +0200, Markus Elfring a écrit :
> …
> > +++ b/drivers/iio/industrialio-buffer.c
> …
> > +static void iio_buffer_dmabuf_release(struct kref *ref)
> > +{
> …
> > + dma_resv_lock(dmabuf->resv, NULL);
> > + dma_buf_unmap_attachment(attach, priv->sgt, priv->dir);
> > + dma_resv_unlock(dmabuf->resv);
> …
>
> Under which circumstances will another lock guard become applicable?
> https://elixir.bootlin.com/linux/v6.10-rc4/source/include/linux/cleanup.h#L179
As soon as "struct dma_resv" gets a DEFINE_GUARD().
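To make that dependency concrete, here is a miniature userspace model of the
DEFINE_GUARD()/guard() pattern from include/linux/cleanup.h, with a toy
structure standing in for "struct dma_resv". Everything below (toy_resv, the
macro spellings) is hypothetical illustration rather than existing kernel API;
the real kernel guard() is spelled guard(name)(args) and generates unique
variable names, both simplified away here:

```c
/* Miniature model of DEFINE_GUARD()/guard(): DEFINE_GUARD() generates
 * an acquire helper plus a cleanup handler, and guard() declares a
 * variable whose destructor releases the lock at scope exit. */
#define DEFINE_GUARD(name, type, lock, unlock)			\
	typedef type guard_##name##_type;			\
	static inline type guard_##name##_acquire(type _T)	\
	{ lock; return _T; }					\
	static inline void guard_##name##_release(type *ptr)	\
	{ type _T = *ptr; unlock; }

#define guard(name, init)					\
	guard_##name##_type					\
	__attribute__((cleanup(guard_##name##_release)))	\
	guard_##name##_var = guard_##name##_acquire(init)

/* Toy stand-in for struct dma_resv and dma_resv_lock()/unlock(). */
struct toy_resv { int lock_count; };

static void toy_resv_lock(struct toy_resv *r)   { r->lock_count++; }
static void toy_resv_unlock(struct toy_resv *r) { r->lock_count--; }

/* The one-line definition the thread is waiting on, modelled here: */
DEFINE_GUARD(toy_resv, struct toy_resv *,
	     toy_resv_lock(_T), toy_resv_unlock(_T))

static struct toy_resv resv;

static int locked_section(void)
{
	guard(toy_resv, &resv);		/* lock taken here */
	return resv.lock_count;		/* auto-unlocked on any return */
}
```

The sketch only shows why such a DEFINE_GUARD() for "struct dma_resv" is a
prerequisite before code like iio_buffer_dmabuf_release() could drop its
explicit dma_resv_lock()/dma_resv_unlock() pair.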
-Paul
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [v11 3/7] iio: core: Add new DMABUF interface infrastructure
2024-06-19 11:56 ` Markus Elfring
@ 2024-06-19 12:21 ` Paul Cercueil
2024-06-19 12:42 ` Nuno Sá
0 siblings, 1 reply; 33+ messages in thread
From: Paul Cercueil @ 2024-06-19 12:21 UTC (permalink / raw)
To: Markus Elfring, lkp, Nuno Sá, linux-iio, dmaengine,
linux-media, dri-devel, linaro-mm-sig, Christian König,
Jonathan Cameron, Lars-Peter Clausen, Sumit Semwal, Vinod Koul
Cc: oe-kbuild-all, LKML, linux-doc, Jonathan Corbet, Randy Dunlap
Le mercredi 19 juin 2024 à 13:56 +0200, Markus Elfring a écrit :
> …
> > https://lore.kernel.org/linux-iio/219abc43b4fdd4a13b307ed2efaa0e6869e68e3f.camel@gmail.com/T/#eefd360069c4261aec9621fafde30924706571c94
> >
> > (and responses below)
> >
> > It's more nuanced than I remembered.
> …
>
>
> > > * Will the desire grow for further collateral evolution according
> > > to
> > > affected software components?
> >
> > Not sure what you mean by that.
>
> Advanced programming interfaces were added a while ago.
>
> Example:
> https://elixir.bootlin.com/linux/v6.10-rc4/source/include/linux/cleanup.h#L8
>
> Corresponding attempts for increasing API usage need to adapt to
> remaining change reluctance,
> don't they?
Sure, I guess.
But that does not change the fact that I cannot use cleanup.h magic in
this patchset, yet, as the required changes would have to be done in a
separate one.
Cheers,
-Paul
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v11 6/7] Documentation: iio: Document high-speed DMABUF based API
2024-06-18 10:03 ` [PATCH v11 6/7] Documentation: iio: Document high-speed DMABUF based API Paul Cercueil
@ 2024-06-19 12:22 ` Bagas Sanjaya
0 siblings, 0 replies; 33+ messages in thread
From: Bagas Sanjaya @ 2024-06-19 12:22 UTC (permalink / raw)
To: Paul Cercueil, Jonathan Cameron, Lars-Peter Clausen, Vinod Koul,
Sumit Semwal, Christian König
Cc: Jonathan Corbet, Nuno Sa, linux-iio, linux-doc, linux-kernel,
dmaengine, linux-media, dri-devel, linaro-mm-sig
On Tue, Jun 18, 2024 at 12:03:01PM +0200, Paul Cercueil wrote:
> +As part of this interface, three new IOCTLs have been added. These three
> +IOCTLs have to be performed on the IIO buffer's file descriptor,
> +obtained using the IIO_BUFFER_GET_FD_IOCTL() ioctl.
"... which can be obtained using ..."
--
An old man doll... just what I always wanted! - Clara
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v11 7/7] Documentation: dmaengine: Document new dma_vec API
2024-06-18 10:03 ` [PATCH v11 7/7] Documentation: dmaengine: Document new dma_vec API Paul Cercueil
@ 2024-06-19 12:34 ` Bagas Sanjaya
0 siblings, 0 replies; 33+ messages in thread
From: Bagas Sanjaya @ 2024-06-19 12:34 UTC (permalink / raw)
To: Paul Cercueil, Jonathan Cameron, Lars-Peter Clausen, Vinod Koul,
Sumit Semwal, Christian König
Cc: Jonathan Corbet, Nuno Sa, linux-iio, linux-doc, linux-kernel,
dmaengine, linux-media, dri-devel, linaro-mm-sig
On Tue, Jun 18, 2024 at 12:03:02PM +0200, Paul Cercueil wrote:
> Document the dmaengine_prep_peripheral_dma_vec() API function, the
> device_prep_peripheral_dma_vec() backend function, and the dma_vec
> struct.
>
LGTM, thanks!
Reviewed-by: Bagas Sanjaya <bagasdotme@gmail.com>
--
An old man doll... just what I always wanted! - Clara
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [v11 3/7] iio: core: Add new DMABUF interface infrastructure
2024-06-19 12:21 ` Paul Cercueil
@ 2024-06-19 12:42 ` Nuno Sá
0 siblings, 0 replies; 33+ messages in thread
From: Nuno Sá @ 2024-06-19 12:42 UTC (permalink / raw)
To: Paul Cercueil, Markus Elfring, lkp, Nuno Sá, linux-iio,
dmaengine, linux-media, dri-devel, linaro-mm-sig,
Christian König, Jonathan Cameron, Lars-Peter Clausen,
Sumit Semwal, Vinod Koul
Cc: oe-kbuild-all, LKML, linux-doc, Jonathan Corbet, Randy Dunlap
On Wed, 2024-06-19 at 14:21 +0200, Paul Cercueil wrote:
> Le mercredi 19 juin 2024 à 13:56 +0200, Markus Elfring a écrit :
> > …
> > > https://lore.kernel.org/linux-iio/219abc43b4fdd4a13b307ed2efaa0e6869e68e3f.camel@gmail.com/T/#eefd360069c4261aec9621fafde30924706571c94
> > >
> > > (and responses below)
> > >
> > > It's more nuanced than I remembered.
> > …
> >
> >
> > > > * Will the desire grow for further collateral evolution according
> > > > to
> > > > affected software components?
> > >
> > > Not sure what you mean by that.
> >
> > Advanced programming interfaces were added a while ago.
> >
> > Example:
> > https://elixir.bootlin.com/linux/v6.10-rc4/source/include/linux/cleanup.h#L8
> >
> > Corresponding attempts for increasing API usage need to adapt to
> > remaining change reluctance,
> > don't they?
>
> Sure, I guess.
>
> But that does not change the fact that I cannot use cleanup.h magic in
> this patchset, yet, as the required changes would have to be done in a
> separate one.
>
>
Not to speak of the added churn of doing that now. This is already v11 and
complicated enough without adding another dependency.
Moreover, yes, the cleanup stuff is very nice, but if some interface/API does not
support it, it's not up to the developer using that interface/API in some other
patch series to add support for it.
- Nuno Sá
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v11 3/7] iio: core: Add new DMABUF interface infrastructure
2024-06-19 12:16 ` Paul Cercueil
@ 2024-06-19 13:28 ` Markus Elfring
0 siblings, 0 replies; 33+ messages in thread
From: Markus Elfring @ 2024-06-19 13:28 UTC (permalink / raw)
To: Paul Cercueil, Nuno Sá, linux-iio, dmaengine, linux-media,
dri-devel, linaro-mm-sig, Christian König, Jonathan Cameron,
Lars-Peter Clausen, Sumit Semwal, Vinod Koul
Cc: LKML, linux-doc, Jonathan Corbet, Randy Dunlap
>> …
>>> +++ b/drivers/iio/industrialio-buffer.c
>> …
>>> +static void iio_buffer_dmabuf_release(struct kref *ref)
>>> +{
>> …
>>> + dma_resv_lock(dmabuf->resv, NULL);
>>> + dma_buf_unmap_attachment(attach, priv->sgt, priv->dir);
>>> + dma_resv_unlock(dmabuf->resv);
>> …
>>
>> Under which circumstances will another lock guard become applicable?
>> https://elixir.bootlin.com/linux/v6.10-rc4/source/include/linux/cleanup.h#L179
>
> As soon as "struct dma_resv" gets a DEFINE_GUARD().
Would any contributor like to add a macro call accordingly?
Regards,
Markus
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [v11 3/7] iio: core: Add new DMABUF interface infrastructure
2024-06-19 3:15 ` kernel test robot
2024-06-19 5:57 ` Nuno Sá
2024-06-19 10:03 ` [v11 " Markus Elfring
@ 2024-06-20 10:45 ` Markus Elfring
2024-06-20 16:09 ` Vinod Koul
2 siblings, 1 reply; 33+ messages in thread
From: Markus Elfring @ 2024-06-20 10:45 UTC (permalink / raw)
To: lkp, Paul Cercueil, Nuno Sá, linux-iio, dmaengine,
linux-media, dri-devel, linaro-mm-sig, Christian König,
Jonathan Cameron, Lars-Peter Clausen, Sumit Semwal, Vinod Koul
Cc: oe-kbuild-all, LKML, linux-doc, Jonathan Corbet, Julia Lawall,
Lee Jones, Randy Dunlap
…
> All errors (new ones prefixed by >>):
>
>>> drivers/iio/industrialio-buffer.c:1715:3: error: cannot jump from this goto statement to its label
> 1715 | goto err_dmabuf_unmap_attachment;
…
Which software design options would you like to try out next
so that such a questionable compilation error message will be avoided finally?
Regards,
Markus
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [v11 3/7] iio: core: Add new DMABUF interface infrastructure
2024-06-20 10:45 ` Markus Elfring
@ 2024-06-20 16:09 ` Vinod Koul
2024-06-20 17:05 ` Lee Jones
0 siblings, 1 reply; 33+ messages in thread
From: Vinod Koul @ 2024-06-20 16:09 UTC (permalink / raw)
To: Markus Elfring
Cc: lkp, Paul Cercueil, Nuno Sá, linux-iio, dmaengine,
linux-media, dri-devel, linaro-mm-sig, Christian König,
Jonathan Cameron, Lars-Peter Clausen, Sumit Semwal, oe-kbuild-all,
LKML, linux-doc, Jonathan Corbet, Julia Lawall, Lee Jones,
Randy Dunlap
On 20-06-24, 12:45, Markus Elfring wrote:
> …
> > All errors (new ones prefixed by >>):
> >
> >>> drivers/iio/industrialio-buffer.c:1715:3: error: cannot jump from this goto statement to its label
> > 1715 | goto err_dmabuf_unmap_attachment;
> …
>
> Which software design options would you like to try out next
> so that such a questionable compilation error message will be avoided finally?
The one where all emails from Markus go to /dev/null
--
~Vinod
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [v11 3/7] iio: core: Add new DMABUF interface infrastructure
2024-06-20 16:09 ` Vinod Koul
@ 2024-06-20 17:05 ` Lee Jones
2024-06-21 7:09 ` Vinod Koul
0 siblings, 1 reply; 33+ messages in thread
From: Lee Jones @ 2024-06-20 17:05 UTC (permalink / raw)
To: Vinod Koul
Cc: Markus Elfring, lkp, Paul Cercueil, Nuno Sá, linux-iio,
dmaengine, linux-media, dri-devel, linaro-mm-sig,
Christian König, Jonathan Cameron, Lars-Peter Clausen,
Sumit Semwal, oe-kbuild-all, LKML, linux-doc, Jonathan Corbet,
Julia Lawall, Randy Dunlap
On Thu, 20 Jun 2024, Vinod Koul wrote:
> On 20-06-24, 12:45, Markus Elfring wrote:
> > …
> > > All errors (new ones prefixed by >>):
> > >
> > >>> drivers/iio/industrialio-buffer.c:1715:3: error: cannot jump from this goto statement to its label
> > > 1715 | goto err_dmabuf_unmap_attachment;
> > …
> >
> > Which software design options would you like to try out next
> > so that such a questionable compilation error message will be avoided finally?
>
> The one where all emails from Markus go to /dev/null
Play nice please.
Documentation/process/code-of-conduct.rst
--
Lee Jones [李琼斯]
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [v11 3/7] iio: core: Add new DMABUF interface infrastructure
2024-06-20 17:05 ` Lee Jones
@ 2024-06-21 7:09 ` Vinod Koul
2024-06-21 7:18 ` Nuno Sá
2024-06-21 7:27 ` Markus Elfring
0 siblings, 2 replies; 33+ messages in thread
From: Vinod Koul @ 2024-06-21 7:09 UTC (permalink / raw)
To: Lee Jones
Cc: Markus Elfring, lkp, Paul Cercueil, Nuno Sá, linux-iio,
dmaengine, linux-media, dri-devel, linaro-mm-sig,
Christian König, Jonathan Cameron, Lars-Peter Clausen,
Sumit Semwal, oe-kbuild-all, LKML, linux-doc, Jonathan Corbet,
Julia Lawall, Randy Dunlap
On 20-06-24, 18:05, Lee Jones wrote:
> On Thu, 20 Jun 2024, Vinod Koul wrote:
>
> > On 20-06-24, 12:45, Markus Elfring wrote:
> > > …
> > > > All errors (new ones prefixed by >>):
> > > >
> > > >>> drivers/iio/industrialio-buffer.c:1715:3: error: cannot jump from this goto statement to its label
> > > > 1715 | goto err_dmabuf_unmap_attachment;
> > > …
> > >
> > > Which software design options would you like to try out next
> > > so that such a questionable compilation error message will be avoided finally?
> >
> > The one where all emails from Markus go to /dev/null
>
> Play nice please.
Would love to... but Markus has been a repeat offender
Sadly, I am yet to see a constructive approach or even better a helpful
patch which improves something, rather than vague suggestions on the list
--
~Vinod
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [v11 3/7] iio: core: Add new DMABUF interface infrastructure
2024-06-21 7:09 ` Vinod Koul
@ 2024-06-21 7:18 ` Nuno Sá
2024-06-21 7:36 ` Markus Elfring
2024-06-21 7:44 ` Lee Jones
2024-06-21 7:27 ` Markus Elfring
1 sibling, 2 replies; 33+ messages in thread
From: Nuno Sá @ 2024-06-21 7:18 UTC (permalink / raw)
To: Vinod Koul, Lee Jones
Cc: Markus Elfring, lkp, Paul Cercueil, Nuno Sá, linux-iio,
dmaengine, linux-media, dri-devel, linaro-mm-sig,
Christian König, Jonathan Cameron, Lars-Peter Clausen,
Sumit Semwal, oe-kbuild-all, LKML, linux-doc, Jonathan Corbet,
Julia Lawall, Randy Dunlap
On Fri, 2024-06-21 at 12:39 +0530, Vinod Koul wrote:
> On 20-06-24, 18:05, Lee Jones wrote:
> > On Thu, 20 Jun 2024, Vinod Koul wrote:
> >
> > > On 20-06-24, 12:45, Markus Elfring wrote:
> > > > …
> > > > > All errors (new ones prefixed by >>):
> > > > >
> > > > > > > drivers/iio/industrialio-buffer.c:1715:3: error: cannot jump from
> > > > > > > this goto statement to its label
> > > > > 1715 | goto err_dmabuf_unmap_attachment;
> > > > …
> > > >
> > > > Which software design options would you like to try out next
> > > > so that such a questionable compilation error message will be avoided
> > > > finally?
> > >
> > > The one where all emails from Markus go to /dev/null
> >
> > Play nice please.
>
> Would love to... but Markus has been a repeat offender
>
> Sadly, I am yet to see a constructive approach or even better a helpful
> patch which improves something, rather than vague suggestions on the list
>
Yeah, just look at how many automatic replies he gets from Greg pretty much
saying to ignore his comments.
- Nuno Sá
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [v11 3/7] iio: core: Add new DMABUF interface infrastructure
2024-06-21 7:09 ` Vinod Koul
2024-06-21 7:18 ` Nuno Sá
@ 2024-06-21 7:27 ` Markus Elfring
2024-06-21 7:51 ` Lee Jones
1 sibling, 1 reply; 33+ messages in thread
From: Markus Elfring @ 2024-06-21 7:27 UTC (permalink / raw)
To: Vinod Koul, Lee Jones, lkp, linux-iio, dmaengine, linux-media,
dri-devel, linaro-mm-sig
Cc: Paul Cercueil, Nuno Sá, Christian König,
Jonathan Cameron, Lars-Peter Clausen, Sumit Semwal, oe-kbuild-all,
LKML, linux-doc, Jonathan Corbet, Julia Lawall, Randy Dunlap
> Sadly, I am yet to see a constructive approach or even better a helpful
> patch which improves something, rather than vague suggestions on the list
Can you get any more constructive impressions from another data representation?
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/log/?qt=author&q=Elfring
Are you aware how many change suggestions (also from my selection) are still
in various waiting queues?
Regards,
Markus
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [v11 3/7] iio: core: Add new DMABUF interface infrastructure
2024-06-21 7:18 ` Nuno Sá
@ 2024-06-21 7:36 ` Markus Elfring
2024-06-21 7:44 ` Lee Jones
1 sibling, 0 replies; 33+ messages in thread
From: Markus Elfring @ 2024-06-21 7:36 UTC (permalink / raw)
To: Nuno Sá, Vinod Koul, Lee Jones, lkp, linux-iio, dmaengine,
linux-media, dri-devel, linaro-mm-sig
Cc: Paul Cercueil, Nuno Sá, Christian König,
Jonathan Cameron, Lars-Peter Clausen, Sumit Semwal, oe-kbuild-all,
LKML, linux-doc, Jonathan Corbet, Julia Lawall, Randy Dunlap
> Yeah, just look at how many automatic replies he get's from Greg pretty much
> saying to ignore his comments.
Does your feedback just indicate recurring communication difficulties?
Regards,
Markus
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [v11 3/7] iio: core: Add new DMABUF interface infrastructure
2024-06-21 7:18 ` Nuno Sá
2024-06-21 7:36 ` Markus Elfring
@ 2024-06-21 7:44 ` Lee Jones
1 sibling, 0 replies; 33+ messages in thread
From: Lee Jones @ 2024-06-21 7:44 UTC (permalink / raw)
To: Nuno Sá
Cc: Vinod Koul, Markus Elfring, lkp, Paul Cercueil, Nuno Sá,
linux-iio, dmaengine, linux-media, dri-devel, linaro-mm-sig,
Christian König, Jonathan Cameron, Lars-Peter Clausen,
Sumit Semwal, oe-kbuild-all, LKML, linux-doc, Jonathan Corbet,
Julia Lawall, Randy Dunlap
On Fri, 21 Jun 2024, Nuno Sá wrote:
> On Fri, 2024-06-21 at 12:39 +0530, Vinod Koul wrote:
> > On 20-06-24, 18:05, Lee Jones wrote:
> > > On Thu, 20 Jun 2024, Vinod Koul wrote:
> > >
> > > > On 20-06-24, 12:45, Markus Elfring wrote:
> > > > > …
> > > > > > All errors (new ones prefixed by >>):
> > > > > >
> > > > > > > > drivers/iio/industrialio-buffer.c:1715:3: error: cannot jump from
> > > > > > > > this goto statement to its label
> > > > > > 1715 | goto err_dmabuf_unmap_attachment;
> > > > > …
> > > > >
> > > > > Which software design options would you like to try next
> > > > > so that such a questionable compilation error can finally
> > > > > be avoided?
> > > >
> > > > The one where all emails from Markus go to dev/null
> > >
> > > Play nice please.
> >
> > Would love to... but Markus has been a repeat offender
> >
> > Sadly, I have yet to see a constructive approach or, even better, a helpful
> > patch which improves something, rather than vague suggestions on the list
Right, there are communication issues.
Doesn't mean we have to lower our own standards.
> Yeah, just look at how many automatic replies he gets from Greg pretty much
> saying to ignore his comments.
Yes, Greg is also grumpy about it, but at least he remains polite.
--
Lee Jones [李琼斯]
* Re: [v11 3/7] iio: core: Add new DMABUF interface infrastructure
2024-06-21 7:27 ` Markus Elfring
@ 2024-06-21 7:51 ` Lee Jones
2024-06-21 8:10 ` [RFC] Patch review challenges Markus Elfring
0 siblings, 1 reply; 33+ messages in thread
From: Lee Jones @ 2024-06-21 7:51 UTC (permalink / raw)
To: Markus Elfring
Cc: Vinod Koul, lkp, linux-iio, dmaengine, linux-media, dri-devel,
linaro-mm-sig, Paul Cercueil, Nuno Sá, Christian König,
Jonathan Cameron, Lars-Peter Clausen, Sumit Semwal, oe-kbuild-all,
LKML, linux-doc, Jonathan Corbet, Julia Lawall, Randy Dunlap
On Fri, 21 Jun 2024, Markus Elfring wrote:
> > Sadly, I have yet to see a constructive approach or, even better, a helpful
> > patch which improves something, rather than vague suggestions on the list
>
> Can you get any more constructive impressions from another data representation?
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/log/?qt=author&q=Elfring
>
> Are you aware of how many change suggestions (including some of mine) are still
> sitting in various waiting queues?
No one is doubting your overall contributions, Markus.
The issue is one of communication and the way reviews are conducted.
Reviewing other people's work is challenging and requires a certain
skill-set, of which _excellent_ communication skills are non-negotiable.
Why not concentrate on more complex submissions for a while and grow
your repertoire of common review points, rather than repeating the same
few over and over? Reading other, more experienced maintainers' reviews
would also be a good use of your time.
--
Lee Jones [李琼斯]
* Re: [RFC] Patch review challenges
2024-06-21 7:51 ` Lee Jones
@ 2024-06-21 8:10 ` Markus Elfring
2024-06-21 8:21 ` Lee Jones
0 siblings, 1 reply; 33+ messages in thread
From: Markus Elfring @ 2024-06-21 8:10 UTC (permalink / raw)
To: Lee Jones, lkp, linux-iio, dmaengine, linux-media, dri-devel,
linaro-mm-sig
Cc: Vinod Koul, Paul Cercueil, Nuno Sá, Christian König,
Jonathan Cameron, Lars-Peter Clausen, Sumit Semwal, oe-kbuild-all,
LKML, linux-doc, Jonathan Corbet, Julia Lawall, Randy Dunlap
> The issue is one of communication and the way reviews are conducted.
>
> Reviewing other people's work is challenging and requires a certain
> skill-set, of which _excellent_ communication skills are non-negotiable.
Patch feedback and tolerance for change can also vary between communities.
> Why not concentrate on more complex submissions for a while and grow
> your repertoire of common review points,
Further collateral evolution can be considered there depending on
corresponding development resources.
> rather than repeating the same few over and over?
Some factors are probably also known from corresponding statistics.
Several contributors stumble over recurring improvement possibilities
in published information.
> Reading other, more experienced maintainers' reviews would also be a good use
> of your time.
I have been trying to influence adjustments in desirable directions for a while.
Regards,
Markus
* Re: [RFC] Patch review challenges
2024-06-21 8:10 ` [RFC] Patch review challenges Markus Elfring
@ 2024-06-21 8:21 ` Lee Jones
0 siblings, 0 replies; 33+ messages in thread
From: Lee Jones @ 2024-06-21 8:21 UTC (permalink / raw)
To: Markus Elfring
Cc: lkp, linux-iio, dmaengine, linux-media, dri-devel, linaro-mm-sig,
Vinod Koul, Paul Cercueil, Nuno Sá, Christian König,
Jonathan Cameron, Lars-Peter Clausen, Sumit Semwal, oe-kbuild-all,
LKML, linux-doc, Jonathan Corbet, Julia Lawall, Randy Dunlap
On Fri, 21 Jun 2024, Markus Elfring wrote:
> > The issue is one of communication and the way reviews are conducted.
> >
> > Reviewing other people's work is challenging and requires a certain
> > skill-set, of which _excellent_ communication skills are non-negotiable.
>
> Patch feedback and tolerance for change can also vary between communities.
Agreed.
For this community, I suggest you build your skills for a while longer.
> > Why not concentrate on more complex submissions for a while and grow
> > your repertoire of common review points,
>
> Further collateral evolution can be considered there depending on
> corresponding development resources.
>
> > rather than repeating the same few over and over?
>
> Some factors are probably known also according to corresponding statistics.
> Several contributors are stumbling on recurring improvement possibilities
> in published information.
Right, this will always be true; however, the few you've picked up on
are not important enough to keep reiterating. By doing so, you're
receiving undesirable attention.
> > Reading other, more experienced maintainers' reviews would also be a good use
> > of your time.
>
> I have been trying to influence adjustments in desirable directions for a while.
Never stop trying to improve.
These are only my opinions of course. Take the advice or leave it.
There's no need to reply to this.
--
Lee Jones [李琼斯]
end of thread, other threads:[~2024-06-21 8:21 UTC | newest]
Thread overview: 33+ messages
2024-06-18 10:02 [PATCH v11 0/7] iio: new DMABUF based API v11 Paul Cercueil
2024-06-18 10:02 ` [PATCH v11 1/7] dmaengine: Add API function dmaengine_prep_peripheral_dma_vec() Paul Cercueil
2024-06-18 10:02 ` [PATCH v11 2/7] dmaengine: dma-axi-dmac: Implement device_prep_peripheral_dma_vec Paul Cercueil
2024-06-18 10:02 ` [PATCH v11 3/7] iio: core: Add new DMABUF interface infrastructure Paul Cercueil
2024-06-19 3:15 ` kernel test robot
2024-06-19 5:57 ` Nuno Sá
2024-06-19 10:03 ` [v11 " Markus Elfring
2024-06-19 10:09 ` Paul Cercueil
2024-06-19 11:13 ` Markus Elfring
2024-06-19 11:30 ` Paul Cercueil
2024-06-19 11:56 ` Markus Elfring
2024-06-19 12:21 ` Paul Cercueil
2024-06-19 12:42 ` Nuno Sá
2024-06-20 10:45 ` Markus Elfring
2024-06-20 16:09 ` Vinod Koul
2024-06-20 17:05 ` Lee Jones
2024-06-21 7:09 ` Vinod Koul
2024-06-21 7:18 ` Nuno Sá
2024-06-21 7:36 ` Markus Elfring
2024-06-21 7:44 ` Lee Jones
2024-06-21 7:27 ` Markus Elfring
2024-06-21 7:51 ` Lee Jones
2024-06-21 8:10 ` [RFC] Patch review challenges Markus Elfring
2024-06-21 8:21 ` Lee Jones
2024-06-19 11:43 ` [PATCH v11 3/7] iio: core: Add new DMABUF interface infrastructure Markus Elfring
2024-06-19 12:16 ` Paul Cercueil
2024-06-19 13:28 ` Markus Elfring
2024-06-18 10:02 ` [PATCH v11 4/7] iio: buffer-dma: Enable support for DMABUFs Paul Cercueil
2024-06-18 10:03 ` [PATCH v11 5/7] iio: buffer-dmaengine: Support new DMABUF based userspace API Paul Cercueil
2024-06-18 10:03 ` [PATCH v11 6/7] Documentation: iio: Document high-speed DMABUF based API Paul Cercueil
2024-06-19 12:22 ` Bagas Sanjaya
2024-06-18 10:03 ` [PATCH v11 7/7] Documentation: dmaengine: Document new dma_vec API Paul Cercueil
2024-06-19 12:34 ` Bagas Sanjaya