* Re: [PATCH RFC v4 11/15] iio: buffer-dmaengine: add devm_iio_dmaengine_buffer_setup_ext2()
@ 2024-10-25 18:40 Nuno Sá
2024-10-26 15:48 ` Jonathan Cameron
0 siblings, 1 reply; 7+ messages in thread
From: Nuno Sá @ 2024-10-25 18:40 UTC (permalink / raw)
To: David Lechner
Cc: Mark Brown, Jonathan Cameron, Rob Herring, Krzysztof Kozlowski,
Conor Dooley, Nuno Sá, Uwe Kleine-König,
Michael Hennerich, Lars-Peter Clausen, David Jander, Martin Sperl,
linux-spi, devicetree, linux-kernel, linux-iio, linux-pwm
Oct 25, 2024 18:42:02 David Lechner <dlechner@baylibre.com>:
> On 10/25/24 8:24 AM, Nuno Sá wrote:
>> I still need to take a better look at this, but I do have one thought
>> already :)
>>
>> On Wed, 2024-10-23 at 15:59 -0500, David Lechner wrote:
>>> Add a new devm_iio_dmaengine_buffer_setup_ext2() function to handle
>>> cases where the DMA channel is managed by the caller rather than
>>> being
>>> requested and released by the iio_dmaengine module.
>>>
>>> Signed-off-by: David Lechner <dlechner@baylibre.com>
>>> ---
>>>
>>> v4 changes:
>>> * This replaces "iio: buffer-dmaengine: generalize requesting DMA
>>> channel"
>>> ---
>
> ...
>
>>> @@ -282,12 +281,38 @@ void iio_dmaengine_buffer_free(struct
>>> iio_buffer *buffer)
>>> iio_buffer_to_dmaengine_buffer(buffer);
>>>
>>> iio_dma_buffer_exit(&dmaengine_buffer->queue);
>>> - dma_release_channel(dmaengine_buffer->chan);
>>> -
>>> iio_buffer_put(buffer);
>>> +
>>> + if (dmaengine_buffer->owns_chan)
>>> + dma_release_channel(dmaengine_buffer->chan);
>>
>> Not sure I agree much with this owns_chan flag. The way I see it, we
>> should always hand over the lifetime of the DMA channel to the IIO DMA
>> framework. Note that the device you pass in for requesting the
>> spi_offload channel and for setting up the DMA buffer is the same (and
>> I suspect it always will be), so I would not go to the trouble. And
>> with this assumption we could simplify the SPI implementation a bit
>> more.
>
> I tried something like this in v3 but Jonathan didn't seem to like it.
>
> https://lore.kernel.org/all/20240727144303.4a8604cb@jic23-huawei/
>
>>
>> And, not really related, but I also suspect the current implementation
>> could be problematic. Basically, I suspect that the lifetime of the
>> DMA channel should be tied to the lifetime of the iio_buffer. IOW, we
>> should only release the channel in iio_dmaengine_buffer_release() - in
>> which case the current implementation with the spi_offload would also
>> be buggy.
>
> The buffer can outlive the iio device driver that created the buffer?
Yes, it can, as can the IIO device itself. In case a userspace app has an
open FD for the buffer chardev, we get a reference that is only released
when the FD is closed (which can outlive the device being bound to its
driver). That is why we nullify indio_dev->info and check for it in the
read() and write() fops.
FWIW, I raised concerns about this in the past (as we don't have any lock
in those paths) but Jonathan rightfully wanted to see a real race. I was
too lazy to try to reproduce one, but I'm still fairly sure we have at
least theoretical races in those paths. One of them could be (I think)
concurrently hitting a DMA submit block while the device is being unbound.
In that case the DMA chan would already be released and we could still try
to initiate a transfer. I did not check whether that would crash, but it
should not happen regardless.
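The pattern described above - the core NULLing indio_dev->info on unbind
and the chardev fops bailing out when they see it - can be sketched
roughly like this. The struct layouts and function name are simplified
stand-ins for the real code in industrialio-buffer.c, not the actual IIO
definitions:

```c
#include <errno.h>
#include <stddef.h>

/* Simplified stand-ins for the real IIO structures */
struct iio_info { int placeholder; };
struct iio_dev { const struct iio_info *info; };

/* Sketch of the check in the buffer read()/write() fops: a NULL ->info
 * means the device has already been unbound from its driver. */
static long iio_buffer_read_sketch(struct iio_dev *indio_dev)
{
	if (!indio_dev->info)
		return -ENODEV;
	/* The normal read path would run here.  Note there is no lock
	 * around the check, so ->info can in principle become NULL (and
	 * driver resources be torn down) right after it - which is
	 * exactly the theoretical race being discussed. */
	return 0;
}
```

The check-then-use gap in the comment is the race window: nothing prevents
the unbind path from clearing ->info between the test and the subsequent
use of driver resources.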
- Nuno Sá
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH RFC v4 11/15] iio: buffer-dmaengine: add devm_iio_dmaengine_buffer_setup_ext2()
2024-10-25 18:40 [PATCH RFC v4 11/15] iio: buffer-dmaengine: add devm_iio_dmaengine_buffer_setup_ext2() Nuno Sá
@ 2024-10-26 15:48 ` Jonathan Cameron
2024-10-28 11:08 ` Nuno Sá
0 siblings, 1 reply; 7+ messages in thread
From: Jonathan Cameron @ 2024-10-26 15:48 UTC (permalink / raw)
To: Nuno Sá
Cc: David Lechner, Mark Brown, Rob Herring, Krzysztof Kozlowski,
Conor Dooley, Nuno Sá, Uwe Kleine-König,
Michael Hennerich, Lars-Peter Clausen, David Jander, Martin Sperl,
linux-spi, devicetree, linux-kernel, linux-iio, linux-pwm
On Fri, 25 Oct 2024 20:40:42 +0200 (GMT+02:00)
Nuno Sá <noname.nuno@gmail.com> wrote:
> Oct 25, 2024 18:42:02 David Lechner <dlechner@baylibre.com>:
>
> > On 10/25/24 8:24 AM, Nuno Sá wrote:
> >> I still need to take a better look at this, but I do have one
> >> thought already :)
> >>
> >> On Wed, 2024-10-23 at 15:59 -0500, David Lechner wrote:
> >>> Add a new devm_iio_dmaengine_buffer_setup_ext2() function to handle
> >>> cases where the DMA channel is managed by the caller rather than
> >>> being
> >>> requested and released by the iio_dmaengine module.
> >>>
> >>> Signed-off-by: David Lechner <dlechner@baylibre.com>
> >>> ---
> >>>
> >>> v4 changes:
> >>> * This replaces "iio: buffer-dmaengine: generalize requesting DMA
> >>> channel"
> >>> ---
> >
> > ...
> >
> >>> @@ -282,12 +281,38 @@ void iio_dmaengine_buffer_free(struct
> >>> iio_buffer *buffer)
> >>> iio_buffer_to_dmaengine_buffer(buffer);
> >>>
> >>> iio_dma_buffer_exit(&dmaengine_buffer->queue);
> >>> - dma_release_channel(dmaengine_buffer->chan);
> >>> -
> >>> iio_buffer_put(buffer);
> >>> +
> >>> + if (dmaengine_buffer->owns_chan)
> >>> + dma_release_channel(dmaengine_buffer->chan);
> >>
> >> Not sure I agree much with this owns_chan flag. The way I see it,
> >> we should always hand over the lifetime of the DMA channel to the
> >> IIO DMA framework. Note that the device you pass in for requesting
> >> the spi_offload channel and for setting up the DMA buffer is the
> >> same (and I suspect it always will be), so I would not go to the
> >> trouble. And with this assumption we could simplify the SPI
> >> implementation a bit more.
> >
> > I tried something like this in v3 but Jonathan didn't seem to like it.
> >
> > https://lore.kernel.org/all/20240727144303.4a8604cb@jic23-huawei/
> >
> >>
> >> And, not really related, but I also suspect the current
> >> implementation could be problematic. Basically, I suspect that the
> >> lifetime of the DMA channel should be tied to the lifetime of the
> >> iio_buffer. IOW, we should only release the channel in
> >> iio_dmaengine_buffer_release() - in which case the current
> >> implementation with the spi_offload would also be buggy.
> >
> > The buffer can outlive the iio device driver that created the buffer?
>
> Yes, it can, as can the IIO device itself. In case a userspace app has
> an open FD for the buffer chardev, we get a reference that is only
> released when the FD is closed (which can outlive the device being
> bound to its driver). That is why we nullify indio_dev->info and check
> for it in the read() and write() fops.
>
> FWIW, I raised concerns about this in the past (as we don't have any
> lock in those paths) but Jonathan rightfully wanted to see a real race.
> I was too lazy to try to reproduce one, but I'm still fairly sure we
> have at least theoretical races in those paths. One of them could be (I
> think) concurrently hitting a DMA submit block while the device is
> being unbound. In that case the DMA chan would already be released and
> we could still try to initiate a transfer. I did not check whether that
> would crash, but it should not happen regardless.
>
There are a few places where I've been meaning to have another look at our
protections during unregister. There may well be problems hiding here, and
in general the thinking on how to do this in the kernel has slowly been
changing, so we might be able to clean things up. I 'think' the person who
looked at this in most depth was Lars-Peter, but that was long, long ago!
Jonathan
> - Nuno Sá
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH RFC v4 11/15] iio: buffer-dmaengine: add devm_iio_dmaengine_buffer_setup_ext2()
2024-10-26 15:48 ` Jonathan Cameron
@ 2024-10-28 11:08 ` Nuno Sá
2024-10-28 16:35 ` Jonathan Cameron
0 siblings, 1 reply; 7+ messages in thread
From: Nuno Sá @ 2024-10-28 11:08 UTC (permalink / raw)
To: Jonathan Cameron
Cc: David Lechner, Mark Brown, Rob Herring, Krzysztof Kozlowski,
Conor Dooley, Nuno Sá, Uwe Kleine-König,
Michael Hennerich, Lars-Peter Clausen, David Jander, Martin Sperl,
linux-spi, devicetree, linux-kernel, linux-iio, linux-pwm
On Sat, 2024-10-26 at 16:48 +0100, Jonathan Cameron wrote:
> On Fri, 25 Oct 2024 20:40:42 +0200 (GMT+02:00)
> Nuno Sá <noname.nuno@gmail.com> wrote:
>
> > Oct 25, 2024 18:42:02 David Lechner <dlechner@baylibre.com>:
> >
> > > On 10/25/24 8:24 AM, Nuno Sá wrote:
> > > > I still need to take a better look at this, but I do have one
> > > > thought already :)
> > > >
> > > > On Wed, 2024-10-23 at 15:59 -0500, David Lechner wrote:
> > > > > Add a new devm_iio_dmaengine_buffer_setup_ext2() function to handle
> > > > > cases where the DMA channel is managed by the caller rather than
> > > > > being
> > > > > requested and released by the iio_dmaengine module.
> > > > >
> > > > > Signed-off-by: David Lechner <dlechner@baylibre.com>
> > > > > ---
> > > > >
> > > > > v4 changes:
> > > > > * This replaces "iio: buffer-dmaengine: generalize requesting DMA
> > > > > channel"
> > > > > ---
> > >
> > > ...
> > >
> > > > > @@ -282,12 +281,38 @@ void iio_dmaengine_buffer_free(struct
> > > > > iio_buffer *buffer)
> > > > > iio_buffer_to_dmaengine_buffer(buffer);
> > > > >
> > > > > iio_dma_buffer_exit(&dmaengine_buffer->queue);
> > > > > - dma_release_channel(dmaengine_buffer->chan);
> > > > > -
> > > > > iio_buffer_put(buffer);
> > > > > +
> > > > > + if (dmaengine_buffer->owns_chan)
> > > > > + dma_release_channel(dmaengine_buffer->chan);
> > > >
> > > > Not sure I agree much with this owns_chan flag. The way I see it,
> > > > we should always hand over the lifetime of the DMA channel to the
> > > > IIO DMA framework. Note that the device you pass in for requesting
> > > > the spi_offload channel and for setting up the DMA buffer is the
> > > > same (and I suspect it always will be), so I would not go to the
> > > > trouble. And with this assumption we could simplify the SPI
> > > > implementation a bit more.
> > >
> > > I tried something like this in v3 but Jonathan didn't seem to like it.
> > >
> > > https://lore.kernel.org/all/20240727144303.4a8604cb@jic23-huawei/
> > >
> > > >
> > > > And, not really related, but I also suspect the current
> > > > implementation could be problematic. Basically, I suspect that the
> > > > lifetime of the DMA channel should be tied to the lifetime of the
> > > > iio_buffer. IOW, we should only release the channel in
> > > > iio_dmaengine_buffer_release() - in which case the current
> > > > implementation with the spi_offload would also be buggy.
> > >
> > > The buffer can outlive the iio device driver that created the buffer?
> >
> > Yes, it can, as can the IIO device itself. In case a userspace app
> > has an open FD for the buffer chardev, we get a reference that is
> > only released when the FD is closed (which can outlive the device
> > being bound to its driver). That is why we nullify indio_dev->info
> > and check for it in the read() and write() fops.
> >
> > FWIW, I raised concerns about this in the past (as we don't have any
> > lock in those paths) but Jonathan rightfully wanted to see a real
> > race. I was too lazy to try to reproduce one, but I'm still fairly
> > sure we have at least theoretical races in those paths. One of them
> > could be (I think) concurrently hitting a DMA submit block while the
> > device is being unbound. In that case the DMA chan would already be
> > released and we could still try to initiate a transfer. I did not
> > check whether that would crash, but it should not happen regardless.
> >
> There are a few places where I've been meaning to have another look at
> our protections during unregister. There may well be problems hiding
> here, and in general the thinking on how to do this in the kernel has
> slowly been changing, so we might be able to clean things up.
>
Yeah, I'm fairly sure things like [1] are not enough to prevent potentially
nasty races (though they should be hard to trigger). OTOH, in [2], we do
have proper locking.

A simple solution would be to use the info lock in the buffer read() and
write() paths. I realize that's a fast path, but I don't think it would be
a heavily contended lock. We can surely do better, though, and RCU could be
a good candidate (we could do something similar to what gpiolib is doing);
I wouldn't expect it to be that complicated to implement. The biggest issue
with making info a __rcu pointer would be having to change all IIO drivers
to set the pointer with rcu_assign_pointer(). Though during probe there's
no potential race, so what we have today should be fine (just not sure
whether things like sparse would complain about the "raw" assignment).
[1]: https://elixir.bootlin.com/linux/v6.12-rc4/source/drivers/iio/industrialio-buffer.c#L176
[2]: https://elixir.bootlin.com/linux/v6.12-rc4/source/drivers/iio/industrialio-core.c#L1825
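For illustration, the RCU idea above would look roughly like this. This is
a userspace sketch with stand-in macros (the real primitives come from
<linux/rcupdate.h> and add ordering guarantees plus sparse/lockdep
checking of __rcu pointers); the struct names and error path are
simplified assumptions, not the real IIO code:

```c
#include <errno.h>
#include <stddef.h>

/* Userspace stand-ins for the kernel's rcu_assign_pointer()/
 * rcu_dereference(); enough to show the publish/consume shape. */
#define rcu_assign_pointer(p, v) __atomic_store_n(&(p), (v), __ATOMIC_RELEASE)
#define rcu_dereference(p)       __atomic_load_n(&(p), __ATOMIC_CONSUME)

struct iio_info { int placeholder; };
struct iio_dev { const struct iio_info *info; /* would be __rcu */ };

/* Unbind side: publish NULL; a real implementation would follow this
 * with synchronize_rcu() before tearing down driver resources. */
static void iio_device_unbind_sketch(struct iio_dev *indio_dev)
{
	rcu_assign_pointer(indio_dev->info, NULL);
}

/* Read side: the fop takes one snapshot of ->info inside what would be
 * an RCU read-side critical section, instead of taking a lock. */
static long iio_buffer_read_rcu_sketch(struct iio_dev *indio_dev)
{
	const struct iio_info *info = rcu_dereference(indio_dev->info);

	if (!info)
		return -ENODEV;
	/* driver resources would be used here, still inside the
	 * read-side section, so unbind cannot free them underneath us */
	return 0;
}
```

The point of the snapshot is that, unlike the current bare check, the
unbind path can wait for all in-flight readers before releasing anything.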
- Nuno Sá
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH RFC v4 11/15] iio: buffer-dmaengine: add devm_iio_dmaengine_buffer_setup_ext2()
2024-10-28 11:08 ` Nuno Sá
@ 2024-10-28 16:35 ` Jonathan Cameron
0 siblings, 0 replies; 7+ messages in thread
From: Jonathan Cameron @ 2024-10-28 16:35 UTC (permalink / raw)
To: Nuno Sá
Cc: Jonathan Cameron, David Lechner, Mark Brown, Rob Herring,
Krzysztof Kozlowski, Conor Dooley, Nuno Sá,
Uwe Kleine-König, Michael Hennerich, Lars-Peter Clausen,
David Jander, Martin Sperl, linux-spi, devicetree, linux-kernel,
linux-iio, linux-pwm
On Mon, 28 Oct 2024 12:08:49 +0100
Nuno Sá <noname.nuno@gmail.com> wrote:
> On Sat, 2024-10-26 at 16:48 +0100, Jonathan Cameron wrote:
> > On Fri, 25 Oct 2024 20:40:42 +0200 (GMT+02:00)
> > Nuno Sá <noname.nuno@gmail.com> wrote:
> >
> > > Oct 25, 2024 18:42:02 David Lechner <dlechner@baylibre.com>:
> > >
> > > > On 10/25/24 8:24 AM, Nuno Sá wrote:
> > > > > I still need to take a better look at this, but I do have one
> > > > > thought already :)
> > > > >
> > > > > On Wed, 2024-10-23 at 15:59 -0500, David Lechner wrote:
> > > > > > Add a new devm_iio_dmaengine_buffer_setup_ext2() function to handle
> > > > > > cases where the DMA channel is managed by the caller rather than
> > > > > > being
> > > > > > requested and released by the iio_dmaengine module.
> > > > > >
> > > > > > Signed-off-by: David Lechner <dlechner@baylibre.com>
> > > > > > ---
> > > > > >
> > > > > > v4 changes:
> > > > > > * This replaces "iio: buffer-dmaengine: generalize requesting DMA
> > > > > > channel"
> > > > > > ---
> > > >
> > > > ...
> > > >
> > > > > > @@ -282,12 +281,38 @@ void iio_dmaengine_buffer_free(struct
> > > > > > iio_buffer *buffer)
> > > > > > iio_buffer_to_dmaengine_buffer(buffer);
> > > > > >
> > > > > > iio_dma_buffer_exit(&dmaengine_buffer->queue);
> > > > > > - dma_release_channel(dmaengine_buffer->chan);
> > > > > > -
> > > > > > iio_buffer_put(buffer);
> > > > > > +
> > > > > > + if (dmaengine_buffer->owns_chan)
> > > > > > + dma_release_channel(dmaengine_buffer->chan);
> > > > >
> > > > > Not sure I agree much with this owns_chan flag. The way I see
> > > > > it, we should always hand over the lifetime of the DMA channel
> > > > > to the IIO DMA framework. Note that the device you pass in for
> > > > > requesting the spi_offload channel and for setting up the DMA
> > > > > buffer is the same (and I suspect it always will be), so I
> > > > > would not go to the trouble. And with this assumption we could
> > > > > simplify the SPI implementation a bit more.
> > > >
> > > > I tried something like this in v3 but Jonathan didn't seem to like it.
> > > >
> > > > https://lore.kernel.org/all/20240727144303.4a8604cb@jic23-huawei/
> > > >
> > > > >
> > > > > And, not really related, but I also suspect the current
> > > > > implementation could be problematic. Basically, I suspect that
> > > > > the lifetime of the DMA channel should be tied to the lifetime
> > > > > of the iio_buffer. IOW, we should only release the channel in
> > > > > iio_dmaengine_buffer_release() - in which case the current
> > > > > implementation with the spi_offload would also be buggy.
> > > >
> > > > The buffer can outlive the iio device driver that created the buffer?
> > >
> > > Yes, it can, as can the IIO device itself. In case a userspace app
> > > has an open FD for the buffer chardev, we get a reference that is
> > > only released when the FD is closed (which can outlive the device
> > > being bound to its driver). That is why we nullify indio_dev->info
> > > and check for it in the read() and write() fops.
> > >
> > > FWIW, I raised concerns about this in the past (as we don't have
> > > any lock in those paths) but Jonathan rightfully wanted to see a
> > > real race. I was too lazy to try to reproduce one, but I'm still
> > > fairly sure we have at least theoretical races in those paths. One
> > > of them could be (I think) concurrently hitting a DMA submit block
> > > while the device is being unbound. In that case the DMA chan would
> > > already be released and we could still try to initiate a transfer.
> > > I did not check whether that would crash, but it should not happen
> > > regardless.
> > >
> > There are a few places where I've been meaning to have another look
> > at our protections during unregister. There may well be problems
> > hiding here, and in general the thinking on how to do this in the
> > kernel has slowly been changing, so we might be able to clean things
> > up.
> >
>
> Yeah, I'm fairly sure things like [1] are not enough to prevent
> potentially nasty races (though they should be hard to trigger). OTOH,
> in [2], we do have proper locking.
>
> A simple solution would be to use the info lock in the buffer read()
> and write() paths. I realize that's a fast path, but I don't think it
> would be a heavily contended lock. We can surely do better, though, and
> RCU could be a good candidate (we could do something similar to what
> gpiolib is doing); I wouldn't expect it to be that complicated to
> implement. The biggest issue with making info a __rcu pointer would be
> having to change all IIO drivers to set the pointer with
> rcu_assign_pointer(). Though during probe there's no potential race, so
> what we have today should be fine (just not sure whether things like
> sparse would complain about the "raw" assignment).
Not tried it, but we could probably do something cheeky in
iio_device_register() using a second pointer (maybe in a union with the
first ;)
So I agree, it smells bad, but I've not chased far enough to know how real
the problems are. In many cases the read-only accesses are to data, not to
hardware, so they 'might' be fine. I'm just not sure. I'll try to find
time for a close look, but it won't be until towards the end of November.
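A rough sketch of that "second pointer in a union" idea (the layout and
names here are hypothetical, not the real struct iio_dev): drivers keep
writing the plain ->info member at probe time, while the core aliases the
same storage through a second member that it would annotate __rcu and only
touch via the RCU accessors:

```c
#include <stddef.h>

struct iio_info { int placeholder; };

/* Hypothetical layout: both union members occupy the same storage, so a
 * driver's plain assignment to ->info is also visible through the
 * core's alias - no need to convert every driver to
 * rcu_assign_pointer(), and sparse only sees the annotated member
 * being used with the RCU accessors. */
struct iio_dev_sketch {
	union {
		const struct iio_info *info;     /* set directly by drivers */
		const struct iio_info *info_rcu; /* would carry __rcu in the kernel */
	};
};
```

iio_device_register() could then snapshot/validate ->info once (no race at
probe time) and the core would use only the annotated alias afterwards.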
Jonathan
>
> [1]: https://elixir.bootlin.com/linux/v6.12-rc4/source/drivers/iio/industrialio-buffer.c#L176
> [2]: https://elixir.bootlin.com/linux/v6.12-rc4/source/drivers/iio/industrialio-core.c#L1825
>
>
> - Nuno Sá
>
>
>
>
^ permalink raw reply [flat|nested] 7+ messages in thread
* [PATCH RFC v4 00/15] spi: axi-spi-engine: add offload support
@ 2024-10-23 20:59 David Lechner
2024-10-23 20:59 ` [PATCH RFC v4 11/15] iio: buffer-dmaengine: add devm_iio_dmaengine_buffer_setup_ext2() David Lechner
0 siblings, 1 reply; 7+ messages in thread
From: David Lechner @ 2024-10-23 20:59 UTC (permalink / raw)
To: Mark Brown, Jonathan Cameron, Rob Herring, Krzysztof Kozlowski,
Conor Dooley, Nuno Sá, Uwe Kleine-König
Cc: Michael Hennerich, Lars-Peter Clausen, David Jander, Martin Sperl,
linux-spi, devicetree, linux-kernel, linux-iio, linux-pwm,
David Lechner
In this revision, I ended up changing quite a bit more than I was
expecting.
In the DT bindings, I ended up dropping the #spi-offload-cells and
spi-offload properties. A couple of reasons for this:
1. Several people commented that it is odd to have a separate provider/
consumer binding for something that already has a parent/child
relationship (both on this series and in another unrelated series
with io-backends). For now, the only SPI offload provider is the AXI
SPI Engine, which is a SPI controller.
2. In a discussion unrelated to this series, but related to the idea
of SPI offloads [1], it became apparent that the proposed use for
the cells to select triggers and tx/rx streams doesn't actually
work for that case.
3. In offline review, it was suggested that assigning specific offloads
to specific peripherals may be too restrictive, e.g. if there are
controllers that have pools of identical offloads. This idea of
pools of generic offloads has also come up in previous discussions
on the mailing list.
[1]: https://lore.kernel.org/linux-iio/e5310b63-9dc4-43af-9fbe-0cc3b604ab8b@baylibre.com/
So the idea is that if we do end up needing to use DT to assign certain
resources (triggers, DMA channels, etc.) to specific peripherals, we
would make a mapping attribute in the controller node rather than using
phandle cells. But we don't need this yet, so it isn't present in The
current patches.
And if we ever end up with a SPI offload provider that is not a SPI
controller, we can bring back the #spi-offload-cells and
spi-offload properties.
Regarding the SPI core changes, there are more details on each
individual patch, but a lot has changed there due to adding a second
ADC consumer that is wired up differently. The AD7944 is wired as
pictured below, but the newly added AD4695 has the ADC chip itself as
the SPI offload trigger source, which I found to be incompatible with
many of the assumptions we made in previous revisions. So there isn't
much that remains the same as before.
---
Changes in v4:
- Dropped #spi-offload-cells and spi-offload properties from DT bindings.
- Made an attempt at a more generic trigger interface instead of using
clk framework. This also includes a new driver for a generic PWM
trigger.
- Addressed IIO review comments.
- Added new patches for iio/adc/ad4695 as 2nd user of SPI offload.
- Link to v3: https://lore.kernel.org/r/20240722-dlech-mainline-spi-engine-offload-2-v3-0-7420e45df69b@baylibre.com
Changes in v3:
- Reworked DT bindings to have things physically connected to the SPI
controller be properties of the SPI controller and use more
conventional provider/consumer properties.
- Added more SPI APIs for peripheral drivers to use to get auxiliary
offload resources, like triggers.
- Link to v2: https://lore.kernel.org/r/20240510-dlech-mainline-spi-engine-offload-2-v2-0-8707a870c435@baylibre.com
Individual patches have more details on these changes and earlier revisions too.
---
As a recap, here is the background and end goal of this series:
The AXI SPI Engine is a SPI controller that has the ability to record a
series of SPI transactions and then play them back using a hardware
trigger. This allows operations to be performed, repeating many times,
without any CPU intervention. This is needed for achieving high data
rates (millions of samples per second) from ADCs and DACs that are
connected via a SPI bus.
The offload hardware interface consists of a trigger input and a data
output for the RX data. These are connected to other hardware external
to the SPI controller.
To record one or more transactions, commands and TX data are written
to memories in the controller (RX buffer is not used since RX data gets
streamed to an external sink). This sequence of transactions can then be
played back when the trigger input is asserted.
This series includes core SPI support along with the first SPI
controller (AXI SPI Engine) and SPI peripheral (AD7944 ADC) that use
them. This enables capturing analog data at 2 million samples per
second.
The hardware setup looks like this:
+-------------------------------+ +------------------+
| | | |
| SOC/FPGA | | AD7944 ADC |
| +---------------------+ | | |
| | AXI SPI Engine | | | |
| | SPI Bus ============ SPI Bus |
| | | | | |
| | +---------------+ | | | |
| | | Offload 0 | | | +------------------+
| | | RX DATA OUT > > > > |
| | | TRIGGER IN < < < v |
| | +---------------+ | ^ v |
| +---------------------+ ^ v |
| | AXI PWM | ^ v |
| | CH0 > ^ v |
| +---------------------+ v |
| | AXI DMA | v |
| | CH0 < < < |
| +---------------------+ |
| |
+-------------------------------+
---
David Lechner (15):
pwm: core: export pwm_get_state_hw()
spi: add basic support for SPI offloading
spi: offload: add support for hardware triggers
spi: dt-bindings: add trigger-source.yaml
spi: dt-bindings: add PWM SPI offload trigger
spi: offload-trigger: add PWM trigger driver
spi: add offload TX/RX streaming APIs
spi: dt-bindings: axi-spi-engine: add SPI offload properties
spi: axi-spi-engine: implement offload support
iio: buffer-dmaengine: document iio_dmaengine_buffer_setup_ext
iio: buffer-dmaengine: add devm_iio_dmaengine_buffer_setup_ext2()
iio: adc: ad7944: don't use storagebits for sizing
iio: adc: ad7944: add support for SPI offload
dt-bindings: iio: adc: adi,ad4695: add SPI offload properties
iio: adc: ad4695: Add support for SPI offload
.../devicetree/bindings/iio/adc/adi,ad4695.yaml | 13 +-
.../bindings/spi/adi,axi-spi-engine.yaml | 22 +
.../devicetree/bindings/spi/trigger-pwm.yaml | 39 ++
.../devicetree/bindings/spi/trigger-source.yaml | 28 ++
drivers/iio/adc/Kconfig | 2 +
drivers/iio/adc/ad4695.c | 470 +++++++++++++++++++--
drivers/iio/adc/ad7944.c | 249 ++++++++++-
drivers/iio/buffer/industrialio-buffer-dmaengine.c | 104 ++++-
drivers/pwm/core.c | 55 ++-
drivers/spi/Kconfig | 16 +
drivers/spi/Makefile | 4 +
drivers/spi/spi-axi-spi-engine.c | 273 +++++++++++-
drivers/spi/spi-offload-trigger-pwm.c | 169 ++++++++
drivers/spi/spi-offload.c | 446 +++++++++++++++++++
drivers/spi/spi.c | 10 +
include/linux/iio/buffer-dmaengine.h | 5 +
include/linux/pwm.h | 1 +
include/linux/spi/spi-offload.h | 166 ++++++++
include/linux/spi/spi.h | 19 +
19 files changed, 1995 insertions(+), 96 deletions(-)
---
base-commit: 6c4b0dd7d0df3a803766d4954dc064dc57aeda17
change-id: 20240510-dlech-mainline-spi-engine-offload-2-afce3790b5ab
Best regards,
--
David Lechner <dlechner@baylibre.com>
^ permalink raw reply [flat|nested] 7+ messages in thread
* [PATCH RFC v4 11/15] iio: buffer-dmaengine: add devm_iio_dmaengine_buffer_setup_ext2()
2024-10-23 20:59 [PATCH RFC v4 00/15] spi: axi-spi-engine: add offload support David Lechner
@ 2024-10-23 20:59 ` David Lechner
2024-10-25 13:24 ` Nuno Sá
0 siblings, 1 reply; 7+ messages in thread
From: David Lechner @ 2024-10-23 20:59 UTC (permalink / raw)
To: Mark Brown, Jonathan Cameron, Rob Herring, Krzysztof Kozlowski,
Conor Dooley, Nuno Sá, Uwe Kleine-König
Cc: Michael Hennerich, Lars-Peter Clausen, David Jander, Martin Sperl,
linux-spi, devicetree, linux-kernel, linux-iio, linux-pwm,
David Lechner
Add a new devm_iio_dmaengine_buffer_setup_ext2() function to handle
cases where the DMA channel is managed by the caller rather than being
requested and released by the iio_dmaengine module.
Signed-off-by: David Lechner <dlechner@baylibre.com>
---
v4 changes:
* This replaces "iio: buffer-dmaengine: generalize requesting DMA channel"
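For illustration, a caller of the new function would look roughly like
this. Only the devm_iio_dmaengine_buffer_setup_ext2() signature and the
IIO_BUFFER_DIRECTION_IN direction come from this patch; the caller
function, struct contents, and the stub body below are made-up stand-ins
so the sketch is self-contained:

```c
#include <stddef.h>

/* Userspace stand-ins; in the kernel these come from
 * <linux/iio/buffer-dmaengine.h> and the dmaengine headers. */
enum iio_buffer_direction { IIO_BUFFER_DIRECTION_IN, IIO_BUFFER_DIRECTION_OUT };
struct device { int dummy; };
struct iio_dev { int dummy; };
struct dma_chan { int dummy; };

/* Stub with the signature introduced by this patch; the real function
 * attaches a DMA buffer without taking ownership of `chan`. */
static int devm_iio_dmaengine_buffer_setup_ext2(struct device *dev,
						struct iio_dev *indio_dev,
						struct dma_chan *chan,
						enum iio_buffer_direction dir)
{
	(void)dev; (void)indio_dev; (void)chan; (void)dir;
	return 0; /* stub: pretend setup succeeded */
}

/* Hypothetical caller: the driver obtained `chan` itself (e.g. from the
 * SPI offload core) and keeps ownership of it, so the iio_dmaengine
 * module must not release it on teardown. */
static int adc_probe_sketch(struct device *dev, struct iio_dev *indio_dev,
			    struct dma_chan *chan)
{
	return devm_iio_dmaengine_buffer_setup_ext2(dev, indio_dev, chan,
						    IIO_BUFFER_DIRECTION_IN);
}
```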
---
drivers/iio/buffer/industrialio-buffer-dmaengine.c | 107 +++++++++++++++------
include/linux/iio/buffer-dmaengine.h | 5 +
2 files changed, 81 insertions(+), 31 deletions(-)
diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
index 054af21dfa65..602cb2e147a6 100644
--- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
@@ -33,6 +33,7 @@ struct dmaengine_buffer {
struct iio_dma_buffer_queue queue;
struct dma_chan *chan;
+ bool owns_chan;
struct list_head active;
size_t align;
@@ -216,28 +217,23 @@ static const struct iio_dev_attr *iio_dmaengine_buffer_attrs[] = {
* Once done using the buffer iio_dmaengine_buffer_free() should be used to
* release it.
*/
-static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
- const char *channel)
+static struct iio_buffer *iio_dmaengine_buffer_alloc(struct dma_chan *chan,
+ bool owns_chan)
{
struct dmaengine_buffer *dmaengine_buffer;
unsigned int width, src_width, dest_width;
struct dma_slave_caps caps;
- struct dma_chan *chan;
int ret;
dmaengine_buffer = kzalloc(sizeof(*dmaengine_buffer), GFP_KERNEL);
- if (!dmaengine_buffer)
- return ERR_PTR(-ENOMEM);
-
- chan = dma_request_chan(dev, channel);
- if (IS_ERR(chan)) {
- ret = PTR_ERR(chan);
- goto err_free;
+ if (!dmaengine_buffer) {
+ ret = -ENOMEM;
+ goto err_release;
}
ret = dma_get_slave_caps(chan, &caps);
if (ret < 0)
- goto err_release;
+ goto err_free;
/* Needs to be aligned to the maximum of the minimums */
if (caps.src_addr_widths)
@@ -252,6 +248,7 @@ static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
INIT_LIST_HEAD(&dmaengine_buffer->active);
dmaengine_buffer->chan = chan;
+ dmaengine_buffer->owns_chan = owns_chan;
dmaengine_buffer->align = width;
dmaengine_buffer->max_size = dma_get_max_seg_size(chan->device->dev);
@@ -263,10 +260,12 @@ static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
return &dmaengine_buffer->queue.buffer;
-err_release:
- dma_release_channel(chan);
err_free:
kfree(dmaengine_buffer);
+err_release:
+ if (owns_chan)
+ dma_release_channel(chan);
+
return ERR_PTR(ret);
}
@@ -282,12 +281,38 @@ void iio_dmaengine_buffer_free(struct iio_buffer *buffer)
iio_buffer_to_dmaengine_buffer(buffer);
iio_dma_buffer_exit(&dmaengine_buffer->queue);
- dma_release_channel(dmaengine_buffer->chan);
-
iio_buffer_put(buffer);
+
+ if (dmaengine_buffer->owns_chan)
+ dma_release_channel(dmaengine_buffer->chan);
}
EXPORT_SYMBOL_NS_GPL(iio_dmaengine_buffer_free, IIO_DMAENGINE_BUFFER);
+static struct iio_buffer
+*__iio_dmaengine_buffer_setup_ext(struct iio_dev *indio_dev,
+ struct dma_chan *chan, bool owns_chan,
+ enum iio_buffer_direction dir)
+{
+ struct iio_buffer *buffer;
+ int ret;
+
+ buffer = iio_dmaengine_buffer_alloc(chan, owns_chan);
+ if (IS_ERR(buffer))
+ return ERR_CAST(buffer);
+
+ indio_dev->modes |= INDIO_BUFFER_HARDWARE;
+
+ buffer->direction = dir;
+
+ ret = iio_device_attach_buffer(indio_dev, buffer);
+ if (ret) {
+ iio_dmaengine_buffer_free(buffer);
+ return ERR_PTR(ret);
+ }
+
+ return buffer;
+}
+
/**
* iio_dmaengine_buffer_setup_ext() - Setup a DMA buffer for an IIO device
* @dev: DMA channel consumer device
@@ -308,24 +333,13 @@ struct iio_buffer *iio_dmaengine_buffer_setup_ext(struct device *dev,
const char *channel,
enum iio_buffer_direction dir)
{
- struct iio_buffer *buffer;
- int ret;
-
- buffer = iio_dmaengine_buffer_alloc(dev, channel);
- if (IS_ERR(buffer))
- return ERR_CAST(buffer);
-
- indio_dev->modes |= INDIO_BUFFER_HARDWARE;
-
- buffer->direction = dir;
+ struct dma_chan *chan;
- ret = iio_device_attach_buffer(indio_dev, buffer);
- if (ret) {
- iio_dmaengine_buffer_free(buffer);
- return ERR_PTR(ret);
- }
+ chan = dma_request_chan(dev, channel);
+ if (IS_ERR(chan))
+ return ERR_CAST(chan);
- return buffer;
+ return __iio_dmaengine_buffer_setup_ext(indio_dev, chan, true, dir);
}
EXPORT_SYMBOL_NS_GPL(iio_dmaengine_buffer_setup_ext, IIO_DMAENGINE_BUFFER);
@@ -362,6 +376,37 @@ int devm_iio_dmaengine_buffer_setup_ext(struct device *dev,
}
EXPORT_SYMBOL_NS_GPL(devm_iio_dmaengine_buffer_setup_ext, IIO_DMAENGINE_BUFFER);
+/**
+ * devm_iio_dmaengine_buffer_setup_ext2() - Setup a DMA buffer for an IIO device
+ * @dev: Device for devm ownership
+ * @indio_dev: IIO device to which to attach this buffer.
+ * @chan: DMA channel
+ * @dir: Direction of buffer (in or out)
+ *
+ * This allocates a new IIO buffer with devm_iio_dmaengine_buffer_alloc()
+ * and attaches it to an IIO device with iio_device_attach_buffer().
+ * It also appends the INDIO_BUFFER_HARDWARE mode to the supported modes of the
+ * IIO device.
+ *
+ * This is the same as devm_iio_dmaengine_buffer_setup_ext() except that the
+ * caller manages requesting and releasing the DMA channel.
+ */
+int devm_iio_dmaengine_buffer_setup_ext2(struct device *dev,
+ struct iio_dev *indio_dev,
+ struct dma_chan *chan,
+ enum iio_buffer_direction dir)
+{
+ struct iio_buffer *buffer;
+
+ buffer = __iio_dmaengine_buffer_setup_ext(indio_dev, chan, false, dir);
+ if (IS_ERR(buffer))
+ return PTR_ERR(buffer);
+
+ return devm_add_action_or_reset(dev, __devm_iio_dmaengine_buffer_free,
+ buffer);
+}
+EXPORT_SYMBOL_NS_GPL(devm_iio_dmaengine_buffer_setup_ext2, IIO_DMAENGINE_BUFFER);
+
MODULE_AUTHOR("Lars-Peter Clausen <lars@metafoo.de>");
MODULE_DESCRIPTION("DMA buffer for the IIO framework");
MODULE_LICENSE("GPL");
diff --git a/include/linux/iio/buffer-dmaengine.h b/include/linux/iio/buffer-dmaengine.h
index 81d9a19aeb91..7bdb979b59f2 100644
--- a/include/linux/iio/buffer-dmaengine.h
+++ b/include/linux/iio/buffer-dmaengine.h
@@ -11,6 +11,7 @@
struct iio_dev;
struct device;
+struct dma_chan;
void iio_dmaengine_buffer_free(struct iio_buffer *buffer);
struct iio_buffer *iio_dmaengine_buffer_setup_ext(struct device *dev,
@@ -26,6 +27,10 @@ int devm_iio_dmaengine_buffer_setup_ext(struct device *dev,
struct iio_dev *indio_dev,
const char *channel,
enum iio_buffer_direction dir);
+int devm_iio_dmaengine_buffer_setup_ext2(struct device *dev,
+ struct iio_dev *indio_dev,
+ struct dma_chan *chan,
+ enum iio_buffer_direction dir);
#define devm_iio_dmaengine_buffer_setup(dev, indio_dev, channel) \
devm_iio_dmaengine_buffer_setup_ext(dev, indio_dev, channel, \
--
2.43.0
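To make the ownership semantics the patch adds concrete, here is a standalone model of the owns_chan pattern (plain userspace C with hypothetical names — this is a sketch of the idea, not the kernel code itself):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/*
 * Standalone model of the owns_chan pattern. All names are hypothetical;
 * release_chan() stands in for dma_release_channel() and counts calls so
 * the behavior is observable.
 */

struct chan {
	int released;
};

struct buf {
	struct chan *chan;
	bool owns_chan;
};

static void release_chan(struct chan *c)
{
	c->released++;
}

/*
 * Mirrors the alloc path: if allocation fails, a channel the buffer was
 * meant to own must still be released on the error path.
 */
static struct buf *buf_alloc(struct chan *c, bool owns_chan)
{
	struct buf *b = malloc(sizeof(*b));

	if (!b) {
		if (owns_chan)
			release_chan(c);
		return NULL;
	}
	b->chan = c;
	b->owns_chan = owns_chan;
	return b;
}

/* Mirrors the free path: release the channel only if the buffer owns it. */
static void buf_free(struct buf *b)
{
	if (b->owns_chan)
		release_chan(b->chan);
	free(b);
}
```

With owns_chan = false the caller (here, the SPI offload code) stays responsible for releasing the channel itself, which is exactly the split devm_iio_dmaengine_buffer_setup_ext2() introduces.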
^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH RFC v4 11/15] iio: buffer-dmaengine: add devm_iio_dmaengine_buffer_setup_ext2()
2024-10-23 20:59 ` [PATCH RFC v4 11/15] iio: buffer-dmaengine: add devm_iio_dmaengine_buffer_setup_ext2() David Lechner
@ 2024-10-25 13:24 ` Nuno Sá
2024-10-25 16:42 ` David Lechner
0 siblings, 1 reply; 7+ messages in thread
From: Nuno Sá @ 2024-10-25 13:24 UTC (permalink / raw)
To: David Lechner, Mark Brown, Jonathan Cameron, Rob Herring,
Krzysztof Kozlowski, Conor Dooley, Nuno Sá,
Uwe Kleine-König
Cc: Michael Hennerich, Lars-Peter Clausen, David Jander, Martin Sperl,
linux-spi, devicetree, linux-kernel, linux-iio, linux-pwm
I still need to take a better look at this, but I do have one thought already :)
On Wed, 2024-10-23 at 15:59 -0500, David Lechner wrote:
> Add a new devm_iio_dmaengine_buffer_setup_ext2() function to handle
> cases where the DMA channel is managed by the caller rather than being
> requested and released by the iio_dmaengine module.
>
> Signed-off-by: David Lechner <dlechner@baylibre.com>
> ---
>
> v4 changes:
> * This replaces "iio: buffer-dmaengine: generalize requesting DMA channel"
> ---
> drivers/iio/buffer/industrialio-buffer-dmaengine.c | 107 +++++++++++++++------
> include/linux/iio/buffer-dmaengine.h | 5 +
> 2 files changed, 81 insertions(+), 31 deletions(-)
>
> diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> index 054af21dfa65..602cb2e147a6 100644
> --- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> +++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> @@ -33,6 +33,7 @@ struct dmaengine_buffer {
> struct iio_dma_buffer_queue queue;
>
> struct dma_chan *chan;
> + bool owns_chan;
> struct list_head active;
>
> size_t align;
> @@ -216,28 +217,23 @@ static const struct iio_dev_attr *iio_dmaengine_buffer_attrs[] = {
> * Once done using the buffer iio_dmaengine_buffer_free() should be used to
> * release it.
> */
> -static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
> - const char *channel)
> +static struct iio_buffer *iio_dmaengine_buffer_alloc(struct dma_chan *chan,
> + bool owns_chan)
> {
> struct dmaengine_buffer *dmaengine_buffer;
> unsigned int width, src_width, dest_width;
> struct dma_slave_caps caps;
> - struct dma_chan *chan;
> int ret;
>
> dmaengine_buffer = kzalloc(sizeof(*dmaengine_buffer), GFP_KERNEL);
> - if (!dmaengine_buffer)
> - return ERR_PTR(-ENOMEM);
> -
> - chan = dma_request_chan(dev, channel);
> - if (IS_ERR(chan)) {
> - ret = PTR_ERR(chan);
> - goto err_free;
> + if (!dmaengine_buffer) {
> + ret = -ENOMEM;
> + goto err_release;
> }
>
> ret = dma_get_slave_caps(chan, &caps);
> if (ret < 0)
> - goto err_release;
> + goto err_free;
>
> /* Needs to be aligned to the maximum of the minimums */
> if (caps.src_addr_widths)
> @@ -252,6 +248,7 @@ static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
>
> INIT_LIST_HEAD(&dmaengine_buffer->active);
> dmaengine_buffer->chan = chan;
> + dmaengine_buffer->owns_chan = owns_chan;
> dmaengine_buffer->align = width;
> dmaengine_buffer->max_size = dma_get_max_seg_size(chan->device->dev);
>
> @@ -263,10 +260,12 @@ static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
>
> return &dmaengine_buffer->queue.buffer;
>
> -err_release:
> - dma_release_channel(chan);
> err_free:
> kfree(dmaengine_buffer);
> +err_release:
> + if (owns_chan)
> + dma_release_channel(chan);
> +
> return ERR_PTR(ret);
> }
>
> @@ -282,12 +281,38 @@ void iio_dmaengine_buffer_free(struct iio_buffer *buffer)
> iio_buffer_to_dmaengine_buffer(buffer);
>
> iio_dma_buffer_exit(&dmaengine_buffer->queue);
> - dma_release_channel(dmaengine_buffer->chan);
> -
> iio_buffer_put(buffer);
> +
> + if (dmaengine_buffer->owns_chan)
> + dma_release_channel(dmaengine_buffer->chan);
Not sure if I agree much with this owns_chan flag. The way I see it, we should always
hand over the lifetime of the DMA channel to the IIO DMA framework. Note that even the
device you pass in for requesting the spi_offload channel and for setting up the DMA
buffer is the same (and I suspect it always will be), so I would not go to the trouble.
And with this assumption we could simplify the spi implementation a bit more.
And, not really related, but I suspect the current implementation could be
problematic. Basically, I suspect that the lifetime of the DMA channel should be
attached to the lifetime of the iio_buffer. IOW, we should only release the channel
in iio_dmaengine_buffer_release() - in which case the current implementation with the
spi_offload would also be buggy.

But bah, the second point is completely theoretical and likely very hard to reproduce
in real life (if reproducible at all - for now it's only something I suspect).
- Nuno Sá
^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH RFC v4 11/15] iio: buffer-dmaengine: add devm_iio_dmaengine_buffer_setup_ext2()
2024-10-25 13:24 ` Nuno Sá
@ 2024-10-25 16:42 ` David Lechner
0 siblings, 0 replies; 7+ messages in thread
From: David Lechner @ 2024-10-25 16:42 UTC (permalink / raw)
To: Nuno Sá, Mark Brown, Jonathan Cameron, Rob Herring,
Krzysztof Kozlowski, Conor Dooley, Nuno Sá,
Uwe Kleine-König
Cc: Michael Hennerich, Lars-Peter Clausen, David Jander, Martin Sperl,
linux-spi, devicetree, linux-kernel, linux-iio, linux-pwm
On 10/25/24 8:24 AM, Nuno Sá wrote:
> I still need to take a better look at this, but I do have one thought already :)
>
> On Wed, 2024-10-23 at 15:59 -0500, David Lechner wrote:
>> Add a new devm_iio_dmaengine_buffer_setup_ext2() function to handle
>> cases where the DMA channel is managed by the caller rather than being
>> requested and released by the iio_dmaengine module.
>>
>> Signed-off-by: David Lechner <dlechner@baylibre.com>
>> ---
>>
>> v4 changes:
>> * This replaces "iio: buffer-dmaengine: generalize requesting DMA channel"
>> ---
...
>> @@ -282,12 +281,38 @@ void iio_dmaengine_buffer_free(struct iio_buffer *buffer)
>> iio_buffer_to_dmaengine_buffer(buffer);
>>
>> iio_dma_buffer_exit(&dmaengine_buffer->queue);
>> - dma_release_channel(dmaengine_buffer->chan);
>> -
>> iio_buffer_put(buffer);
>> +
>> + if (dmaengine_buffer->owns_chan)
>> + dma_release_channel(dmaengine_buffer->chan);
>
> Not sure if I agree much with this owns_chan flag. The way I see it, we should always
> hand over the lifetime of the DMA channel to the IIO DMA framework. Note that even the
> device you pass in for requesting the spi_offload channel and for setting up the DMA
> buffer is the same (and I suspect it always will be), so I would not go to the trouble.
> And with this assumption we could simplify the spi implementation a bit more.
I tried something like this in v3 but Jonathan didn't seem to like it.
https://lore.kernel.org/all/20240727144303.4a8604cb@jic23-huawei/
>
> And, not really related, but I suspect the current implementation could be
> problematic. Basically, I suspect that the lifetime of the DMA channel should be
> attached to the lifetime of the iio_buffer. IOW, we should only release the channel
> in iio_dmaengine_buffer_release() - in which case the current implementation with the
> spi_offload would also be buggy.
The buffer can outlive the iio device driver that created the buffer?
>
> But bah, the second point is completely theoretical and likely very hard to reproduce
> in real life (if reproducible at all - for now it's only something I suspect)
>
> - Nuno Sá
>
>
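The lifetime question raised above — whether the channel should be released when the driver frees the buffer, or only when the last reference to the buffer is dropped — reduces to a small refcount model (plain userspace C, all names hypothetical):

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Hypothetical model of tying the channel lifetime to the buffer's
 * reference count: the channel is released only in the final put,
 * analogous to releasing it in iio_dmaengine_buffer_release().
 */

struct chan {
	int released;
};

struct rbuf {
	int refs;
	struct chan *chan;
};

static void release_chan(struct chan *c)
{
	c->released++;
}

static struct rbuf *rbuf_create(struct chan *c)
{
	struct rbuf *b = malloc(sizeof(*b));

	b->refs = 1;	/* creator's reference */
	b->chan = c;
	return b;
}

static void rbuf_get(struct rbuf *b)
{
	b->refs++;
}

/* Release the channel only when the LAST reference is dropped. */
static void rbuf_put(struct rbuf *b)
{
	if (--b->refs == 0) {
		release_chan(b->chan);
		free(b);
	}
}
```

If another user still holds a reference when the creating driver drops its own, the channel stays alive until that last reference is gone — which is the scenario the question above is probing.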
^ permalink raw reply [flat|nested] 7+ messages in thread
Thread overview: 7+ messages
2024-10-25 18:40 [PATCH RFC v4 11/15] iio: buffer-dmaengine: add devm_iio_dmaengine_buffer_setup_ext2() Nuno Sá
2024-10-26 15:48 ` Jonathan Cameron
2024-10-28 11:08 ` Nuno Sá
2024-10-28 16:35 ` Jonathan Cameron
-- strict thread matches above, loose matches on Subject: below --
2024-10-23 20:59 [PATCH RFC v4 00/15] spi: axi-spi-engine: add offload support David Lechner
2024-10-23 20:59 ` [PATCH RFC v4 11/15] iio: buffer-dmaengine: add devm_iio_dmaengine_buffer_setup_ext2() David Lechner
2024-10-25 13:24 ` Nuno Sá
2024-10-25 16:42 ` David Lechner