From: David Lechner <dlechner@baylibre.com>
To: "Mark Brown" <broonie@kernel.org>,
"Jonathan Cameron" <jic23@kernel.org>,
"Rob Herring" <robh@kernel.org>,
"Krzysztof Kozlowski" <krzk+dt@kernel.org>,
"Conor Dooley" <conor+dt@kernel.org>,
"Nuno Sá" <nuno.sa@analog.com>
Cc: "Uwe Kleine-König" <ukleinek@kernel.org>,
"Michael Hennerich" <Michael.Hennerich@analog.com>,
"Lars-Peter Clausen" <lars@metafoo.de>,
"David Jander" <david@protonic.nl>,
"Martin Sperl" <kernel@martin.sperl.org>,
linux-spi@vger.kernel.org, devicetree@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-iio@vger.kernel.org,
"Jonathan Cameron" <Jonathan.Cameron@huawei.com>,
"David Lechner" <dlechner@baylibre.com>
Subject: [PATCH v8 08/17] iio: buffer-dmaengine: split requesting DMA channel from allocating buffer
Date: Fri, 07 Feb 2025 14:09:05 -0600
Message-ID: <20250207-dlech-mainline-spi-engine-offload-2-v8-8-e48a489be48c@baylibre.com>
In-Reply-To: <20250207-dlech-mainline-spi-engine-offload-2-v8-0-e48a489be48c@baylibre.com>

Refactor the IIO dmaengine buffer code to split requesting the DMA
channel from allocating the buffer. We want to be able to add a new
function where the IIO device driver manages the DMA channel, so these
two actions need to be separate.

To do this, calling dma_request_chan() is moved from
iio_dmaengine_buffer_alloc() to iio_dmaengine_buffer_setup_ext(). A new
__iio_dmaengine_buffer_setup_ext() helper function is added to simplify
error unwinding and will also be used by a new function in a later
patch.

iio_dmaengine_buffer_free() now only frees the buffer and does not
release the DMA channel. A new iio_dmaengine_buffer_teardown() function
is added to unwind everything done in iio_dmaengine_buffer_setup_ext().
This keeps things more symmetrical, with the obvious pairs alloc/free
and setup/teardown.

The dma_get_slave_caps() call in iio_dmaengine_buffer_alloc() is moved
before the allocation so that error unwinding no longer needs any gotos.
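
For illustration only, a minimal sketch of how a consumer driver pairs
the calls after this change (the foo_* names are hypothetical; the
in-tree users converted here are adi-axi-adc and adi-axi-dac):

	static struct iio_buffer *foo_request_buffer(struct device *dev,
						     struct iio_dev *indio_dev)
	{
		/* Requests the "rx" DMA channel and attaches the buffer. */
		return iio_dmaengine_buffer_setup_ext(dev, indio_dev, "rx",
						      IIO_BUFFER_DIRECTION_IN);
	}

	static void foo_free_buffer(struct iio_buffer *buffer)
	{
		/* Releases the DMA channel and frees the buffer. */
		iio_dmaengine_buffer_teardown(buffer);
	}
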
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Nuno Sa <nuno.sa@analog.com>
Signed-off-by: David Lechner <dlechner@baylibre.com>
---
v7 changes: none
v6 changes:
* Split out from patch that adds the new function
* Dropped owns_chan flag
* Introduced iio_dmaengine_buffer_teardown() so that
iio_dmaengine_buffer_free() doesn't have to manage the DMA channel
---
drivers/iio/adc/adi-axi-adc.c | 2 +-
drivers/iio/buffer/industrialio-buffer-dmaengine.c | 106 ++++++++++++---------
drivers/iio/dac/adi-axi-dac.c | 2 +-
include/linux/iio/buffer-dmaengine.h | 2 +-
4 files changed, 65 insertions(+), 47 deletions(-)
diff --git a/drivers/iio/adc/adi-axi-adc.c b/drivers/iio/adc/adi-axi-adc.c
index c7357601f0f869e57636f00bb1e26c059c3ab15c..a55db308baabf7b26ea98431cab1e6af7fe2a5f3 100644
--- a/drivers/iio/adc/adi-axi-adc.c
+++ b/drivers/iio/adc/adi-axi-adc.c
@@ -305,7 +305,7 @@ static struct iio_buffer *axi_adc_request_buffer(struct iio_backend *back,
static void axi_adc_free_buffer(struct iio_backend *back,
struct iio_buffer *buffer)
{
- iio_dmaengine_buffer_free(buffer);
+ iio_dmaengine_buffer_teardown(buffer);
}
static int axi_adc_reg_access(struct iio_backend *back, unsigned int reg,
diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
index 614e1c4189a9cdd5a8d9d8c5ef91566983032951..02847d3962fcbb43ec76167db6482ab951f20942 100644
--- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
@@ -206,39 +206,29 @@ static const struct iio_dev_attr *iio_dmaengine_buffer_attrs[] = {
/**
* iio_dmaengine_buffer_alloc() - Allocate new buffer which uses DMAengine
- * @dev: DMA channel consumer device
- * @channel: DMA channel name, typically "rx".
+ * @chan: DMA channel.
*
* This allocates a new IIO buffer which internally uses the DMAengine framework
- * to perform its transfers. The parent device will be used to request the DMA
- * channel.
+ * to perform its transfers.
*
* Once done using the buffer iio_dmaengine_buffer_free() should be used to
* release it.
*/
-static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
- const char *channel)
+static struct iio_buffer *iio_dmaengine_buffer_alloc(struct dma_chan *chan)
{
struct dmaengine_buffer *dmaengine_buffer;
unsigned int width, src_width, dest_width;
struct dma_slave_caps caps;
- struct dma_chan *chan;
int ret;
+ ret = dma_get_slave_caps(chan, &caps);
+ if (ret < 0)
+ return ERR_PTR(ret);
+
dmaengine_buffer = kzalloc(sizeof(*dmaengine_buffer), GFP_KERNEL);
if (!dmaengine_buffer)
return ERR_PTR(-ENOMEM);
- chan = dma_request_chan(dev, channel);
- if (IS_ERR(chan)) {
- ret = PTR_ERR(chan);
- goto err_free;
- }
-
- ret = dma_get_slave_caps(chan, &caps);
- if (ret < 0)
- goto err_release;
-
/* Needs to be aligned to the maximum of the minimums */
if (caps.src_addr_widths)
src_width = __ffs(caps.src_addr_widths);
@@ -262,12 +252,6 @@ static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
dmaengine_buffer->queue.buffer.access = &iio_dmaengine_buffer_ops;
return &dmaengine_buffer->queue.buffer;
-
-err_release:
- dma_release_channel(chan);
-err_free:
- kfree(dmaengine_buffer);
- return ERR_PTR(ret);
}
/**
@@ -276,17 +260,57 @@ static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
*
* Frees a buffer previously allocated with iio_dmaengine_buffer_alloc().
*/
-void iio_dmaengine_buffer_free(struct iio_buffer *buffer)
+static void iio_dmaengine_buffer_free(struct iio_buffer *buffer)
{
struct dmaengine_buffer *dmaengine_buffer =
iio_buffer_to_dmaengine_buffer(buffer);
iio_dma_buffer_exit(&dmaengine_buffer->queue);
- dma_release_channel(dmaengine_buffer->chan);
-
iio_buffer_put(buffer);
}
-EXPORT_SYMBOL_NS_GPL(iio_dmaengine_buffer_free, "IIO_DMAENGINE_BUFFER");
+
+/**
+ * iio_dmaengine_buffer_teardown() - Releases DMA channel and frees buffer
+ * @buffer: Buffer to free
+ *
+ * Releases the DMA channel and frees the buffer previously setup with
+ * iio_dmaengine_buffer_setup_ext().
+ */
+void iio_dmaengine_buffer_teardown(struct iio_buffer *buffer)
+{
+ struct dmaengine_buffer *dmaengine_buffer =
+ iio_buffer_to_dmaengine_buffer(buffer);
+ struct dma_chan *chan = dmaengine_buffer->chan;
+
+ iio_dmaengine_buffer_free(buffer);
+ dma_release_channel(chan);
+}
+EXPORT_SYMBOL_NS_GPL(iio_dmaengine_buffer_teardown, "IIO_DMAENGINE_BUFFER");
+
+static struct iio_buffer
+*__iio_dmaengine_buffer_setup_ext(struct iio_dev *indio_dev,
+ struct dma_chan *chan,
+ enum iio_buffer_direction dir)
+{
+ struct iio_buffer *buffer;
+ int ret;
+
+ buffer = iio_dmaengine_buffer_alloc(chan);
+ if (IS_ERR(buffer))
+ return ERR_CAST(buffer);
+
+ indio_dev->modes |= INDIO_BUFFER_HARDWARE;
+
+ buffer->direction = dir;
+
+ ret = iio_device_attach_buffer(indio_dev, buffer);
+ if (ret) {
+ iio_dmaengine_buffer_free(buffer);
+ return ERR_PTR(ret);
+ }
+
+ return buffer;
+}
/**
* iio_dmaengine_buffer_setup_ext() - Setup a DMA buffer for an IIO device
@@ -300,7 +324,7 @@ EXPORT_SYMBOL_NS_GPL(iio_dmaengine_buffer_free, "IIO_DMAENGINE_BUFFER");
* It also appends the INDIO_BUFFER_HARDWARE mode to the supported modes of the
* IIO device.
*
- * Once done using the buffer iio_dmaengine_buffer_free() should be used to
+ * Once done using the buffer iio_dmaengine_buffer_teardown() should be used to
* release it.
*/
struct iio_buffer *iio_dmaengine_buffer_setup_ext(struct device *dev,
@@ -308,30 +332,24 @@ struct iio_buffer *iio_dmaengine_buffer_setup_ext(struct device *dev,
const char *channel,
enum iio_buffer_direction dir)
{
+ struct dma_chan *chan;
struct iio_buffer *buffer;
- int ret;
-
- buffer = iio_dmaengine_buffer_alloc(dev, channel);
- if (IS_ERR(buffer))
- return ERR_CAST(buffer);
-
- indio_dev->modes |= INDIO_BUFFER_HARDWARE;
- buffer->direction = dir;
+ chan = dma_request_chan(dev, channel);
+ if (IS_ERR(chan))
+ return ERR_CAST(chan);
- ret = iio_device_attach_buffer(indio_dev, buffer);
- if (ret) {
- iio_dmaengine_buffer_free(buffer);
- return ERR_PTR(ret);
- }
+ buffer = __iio_dmaengine_buffer_setup_ext(indio_dev, chan, dir);
+ if (IS_ERR(buffer))
+ dma_release_channel(chan);
return buffer;
}
EXPORT_SYMBOL_NS_GPL(iio_dmaengine_buffer_setup_ext, "IIO_DMAENGINE_BUFFER");
-static void __devm_iio_dmaengine_buffer_free(void *buffer)
+static void devm_iio_dmaengine_buffer_teardown(void *buffer)
{
- iio_dmaengine_buffer_free(buffer);
+ iio_dmaengine_buffer_teardown(buffer);
}
/**
@@ -357,7 +375,7 @@ int devm_iio_dmaengine_buffer_setup_ext(struct device *dev,
if (IS_ERR(buffer))
return PTR_ERR(buffer);
- return devm_add_action_or_reset(dev, __devm_iio_dmaengine_buffer_free,
+ return devm_add_action_or_reset(dev, devm_iio_dmaengine_buffer_teardown,
buffer);
}
EXPORT_SYMBOL_NS_GPL(devm_iio_dmaengine_buffer_setup_ext, "IIO_DMAENGINE_BUFFER");
diff --git a/drivers/iio/dac/adi-axi-dac.c b/drivers/iio/dac/adi-axi-dac.c
index ac4c96c4ccf37d752372b57acfcb8ad098930b58..7ed934dfc5fc9f08d59b2b0c33537669558ec884 100644
--- a/drivers/iio/dac/adi-axi-dac.c
+++ b/drivers/iio/dac/adi-axi-dac.c
@@ -168,7 +168,7 @@ static struct iio_buffer *axi_dac_request_buffer(struct iio_backend *back,
static void axi_dac_free_buffer(struct iio_backend *back,
struct iio_buffer *buffer)
{
- iio_dmaengine_buffer_free(buffer);
+ iio_dmaengine_buffer_teardown(buffer);
}
enum {
diff --git a/include/linux/iio/buffer-dmaengine.h b/include/linux/iio/buffer-dmaengine.h
index 81d9a19aeb9199dd58bb9d35a91f0ec4b00846df..72a2e3fd8a5bf5e8f27ee226ddd92979d233754b 100644
--- a/include/linux/iio/buffer-dmaengine.h
+++ b/include/linux/iio/buffer-dmaengine.h
@@ -12,7 +12,7 @@
struct iio_dev;
struct device;
-void iio_dmaengine_buffer_free(struct iio_buffer *buffer);
+void iio_dmaengine_buffer_teardown(struct iio_buffer *buffer);
struct iio_buffer *iio_dmaengine_buffer_setup_ext(struct device *dev,
struct iio_dev *indio_dev,
const char *channel,
--
2.43.0