public inbox for linux-gpio@vger.kernel.org
* [libgpiod] - fast writing while waiting for edge events
@ 2023-12-12  9:55 Mathias Dobler
  2023-12-12 11:12 ` Bartosz Golaszewski
  0 siblings, 1 reply; 10+ messages in thread
From: Mathias Dobler @ 2023-12-12  9:55 UTC (permalink / raw)
  To: linux-gpio

Hello,
From reading other conversations I've learned that it's not a good
idea to have more than 1 thread accessing libgpiod objects. But this
raises the question of how to react to events and let reads/writes
through as quickly as possible at the same time. I have already played
around with the file descriptor of the request object to interrupt the
wait for edge events, but this solution is not good because it comes
at the expense of responsiveness to events, and requires complicated
synchronization.
How bad would it be to have 1 thread waiting for events and 1 other
thread reading/writing?

Regards,
Mathias

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [libgpiod] - fast writing while waiting for edge events
  2023-12-12  9:55 [libgpiod] - fast writing while waiting for edge events Mathias Dobler
@ 2023-12-12 11:12 ` Bartosz Golaszewski
  2023-12-12 13:01   ` Mathias Dobler
  0 siblings, 1 reply; 10+ messages in thread
From: Bartosz Golaszewski @ 2023-12-12 11:12 UTC (permalink / raw)
  To: Mathias Dobler; +Cc: linux-gpio

On Tue, Dec 12, 2023 at 10:55 AM Mathias Dobler <mathias.dob@gmail.com> wrote:
>
> Hello,
> From reading other conversations I've learned that it's not a good
> idea to have more than 1 thread accessing libgpiod objects. But this
> raises the question of how to react to events and let reads/writes
> through as quickly as possible at the same time. I have already played
> around with the file descriptor of the request object to interrupt the
> wait for edge events, but this solution is not good because it comes
> at the expense of responsiveness to events, and requires complicated
> synchronization.
> How bad would it be to have 1 thread waiting for events and 1 other
> thread reading/writing?
>
> Regards,
> Mathias
>

Are you bitbanging? It totally sounds like bitbanging. Have you
considered writing a kernel driver for whatever you're doing?

Bart


* Re: [libgpiod] - fast writing while waiting for edge events
  2023-12-12 11:12 ` Bartosz Golaszewski
@ 2023-12-12 13:01   ` Mathias Dobler
  2023-12-12 16:02     ` Kent Gibson
  0 siblings, 1 reply; 10+ messages in thread
From: Mathias Dobler @ 2023-12-12 13:01 UTC (permalink / raw)
  To: Bartosz Golaszewski; +Cc: linux-gpio

Hello Bartosz,
I am writing a libgpiod 2 C# binding/abstraction for the dotnet
community that I want to use personally. My use case is to monitor
edge events from a hardware PWM signal, to react by reading some other
GPIOs in a relatively short amount of time. This is likely a very
specific use case but my goal is to make the binding cover as much
use cases as possible. And before searching for an alternative (like
writing a kernel driver or so) I wanted to make sure I am not doing
things wrong.

Regards Mathias

Am Di., 12. Dez. 2023 um 12:12 Uhr schrieb Bartosz Golaszewski <brgl@bgdev.pl>:
>
> On Tue, Dec 12, 2023 at 10:55 AM Mathias Dobler <mathias.dob@gmail.com> wrote:
> >
> > Hello,
> > From reading other conversations I've learned that it's not a good
> > idea to have more than 1 thread accessing libgpiod objects. But this
> > raises the question of how to react to events and let reads/writes
> > through as quickly as possible at the same time. I have already played
> > around with the file descriptor of the request object to interrupt the
> > wait for edge events, but this solution is not good because it comes
> > at the expense of responsiveness to events, and requires complicated
> > synchronization.
> > How bad would it be to have 1 thread waiting for events and 1 other
> > thread reading/writing?
> >
> > Regards,
> > Mathias
> >
>
> Are you bitbanging? It totally sounds like bitbanging. Have you
> considered writing a kernel driver for whatever you're doing?
>
> Bart


* Re: [libgpiod] - fast writing while waiting for edge events
  2023-12-12 13:01   ` Mathias Dobler
@ 2023-12-12 16:02     ` Kent Gibson
  2023-12-12 17:06       ` Mathias Dobler
  0 siblings, 1 reply; 10+ messages in thread
From: Kent Gibson @ 2023-12-12 16:02 UTC (permalink / raw)
  To: Mathias Dobler; +Cc: Bartosz Golaszewski, linux-gpio

On Tue, Dec 12, 2023 at 02:01:51PM +0100, Mathias Dobler wrote:
> Hello Bartosz,
> I am writing a libgpiod 2 C# binding/abstraction for the dotnet
> community that I want to use personally. My use case is to monitor
> edge events from a hardware PWM signal, to react by reading some other
> GPIOs in a relatively short amount of time. This is likely a very
> specific use case but my goal is to make the binding cover as much
> use cases as possible. And before searching for an alternative (like
> writing a kernel driver or so) I wanted to make sure I am not doing
> things wrong.
>

Don't top reply - reply inline instead.

I'm going to jump in here, as I think the thread safety documentation is
overly restrictive given the current implementation:

 * libgpiod is thread-aware but does not provide any further thread-safety
 * guarantees. This requires the user to ensure that at most one thread may
 * work with an object at any time. Sharing objects across threads is allowed
 * if a suitable synchronization mechanism serializes the access. Different,
 * standalone objects can safely be used concurrently. Most libgpiod objects
 * are standalone. Exceptions - such as events allocated in buffers - exist and
 * are noted in the documentation.

Firstly, as noted, if you are talking separate requests then they are
separate objects and you can do what you like.  So have one request for
your PWM edge generator and another for the lines to read/write.

But in practice even a single request would work.
While the documentation states you need to serialize access, that only
really applies if at least one of the accesses is a mutation.
In the current implementation, the gpiod_line_request object is immutable
so it is safe to access it from multiple threads.

Of course if two threads write to the same line at the same time that
would constitute a race, in the sense that you can't be sure
of the order in which they would write, but it is otherwise safe to do.

Finally, most libgpiod objects ARE mutable, so the documentation may be
restrictive to cover those.  Or it may be a defensive measure - in case
a future change makes a currently immutable object mutable.
It would be nice to have the gpiod_line_request documented as immutable,
as otherwise the safe option is to follow the contract in the
documentation and only access a particular libgpiod object from one thread
at a time, and for your use case to use separate requests.

Cheers,
Kent.

> Regards Mathias
>
> Am Di., 12. Dez. 2023 um 12:12 Uhr schrieb Bartosz Golaszewski <brgl@bgdev.pl>:
> >
> > On Tue, Dec 12, 2023 at 10:55 AM Mathias Dobler <mathias.dob@gmail.com> wrote:
> > >
> > > Hello,
> > > From reading other conversations I've learned that it's not a good
> > > idea to have more than 1 thread accessing libgpiod objects. But this
> > > raises the question of how to react to events and let reads/writes
> > > through as quickly as possible at the same time. I have already played
> > > around with the file descriptor of the request object to interrupt the
> > > wait for edge events, but this solution is not good because it comes
> > > at the expense of responsiveness to events, and requires complicated
> > > synchronization.
> > > How bad would it be to have 1 thread waiting for events and 1 other
> > > thread reading/writing?
> > >
> > > Regards,
> > > Mathias
> > >
> >
> > Are you bitbanging? It totally sounds like bitbanging. Have you
> > considered writing a kernel driver for whatever you're doing?
> >
> > Bart


* Re: [libgpiod] - fast writing while waiting for edge events
  2023-12-12 16:02     ` Kent Gibson
@ 2023-12-12 17:06       ` Mathias Dobler
  2023-12-13  1:07         ` Kent Gibson
  0 siblings, 1 reply; 10+ messages in thread
From: Mathias Dobler @ 2023-12-12 17:06 UTC (permalink / raw)
  To: Kent Gibson; +Cc: Bartosz Golaszewski, linux-gpio

Hello Kent,
> Don't top reply - reply inline instead.
Sorry, still new here.

> Firstly, as noted, if you are talking separate requests then they are
> separate objects and you can do what you like.  So have one request for
> your PWM edge generator and another for the lines to read/write.
It doesn't add much, but just to clarify, the PWM signal is not
generated through libgpiod, I only use line requests to read lines and
edge events.

> In the current implementation, the gpiod_line_request object is immutable
> so it is safe to access it from multiple threads.
Thanks for giving insights into the current implementation. Knowing
this opens up a lot of easier options for synchronization, for example
synchronizing only writes (maybe not even necessary).

> Or it may be a defensive measure - in case
> a future change makes a currently immutable object mutable.
However, I also understand that the restrictive nature of the
documentation could presumably be designed for changes in the future.

Due to this I am a bit undecided whether I should base the C# binding
on the current implementation, but there is probably not much else
left for me. Even creating a separate request object for every
individual line would not fully solve the problem to be quick in
reading edge events and reading/writing lines of the same request
object, while adhering to the restrictions of the current
documentation.

Regards,
Mathias


* Re: [libgpiod] - fast writing while waiting for edge events
  2023-12-12 17:06       ` Mathias Dobler
@ 2023-12-13  1:07         ` Kent Gibson
  2023-12-13 10:49           ` Mathias Dobler
  0 siblings, 1 reply; 10+ messages in thread
From: Kent Gibson @ 2023-12-13  1:07 UTC (permalink / raw)
  To: Mathias Dobler; +Cc: Bartosz Golaszewski, linux-gpio

On Tue, Dec 12, 2023 at 06:06:54PM +0100, Mathias Dobler wrote:
> Hello Kent,
> > Don't top reply - reply inline instead.
> Sorry, still new here.
>
> > Firstly, as noted, if you are talking separate requests then they are
> > separate objects and you can do what you like.  So have one request for
> > your PWM edge generator and another for the lines to read/write.
> It doesn't add much, but just to clarify, the PWM signal is not
> generated through libgpiod, I only use line requests to read lines and
> edge events.
>

I wasn't suggesting that you were using libgpiod to generate the PWM - but
the PWM signal is being used to generate edge events, right?  And you
use a libgpiod request to receive those events?

Or are you using separate threads for lines within one request?
That is not what multi-line requests are intended for, and you would be
better off with separate requests for that case.
Multi-line requests are for cases where you need to read or write a set
of lines as atomically as possible - but you would do that from one
thread.  They can also be used where you only have a single thread
controlling several lines, so you only have to deal with the one request.
But if you want to independently control separate lines from separate
threads then separate requests is the way to go.

> > In the current implementation, the gpiod_line_request object is immutable
> > so it is safe to access it from multiple threads.
> Thanks for giving insights into the current implementation. Knowing
> this opens up a lot of easier options for synchronization, for example
> synchronizing only writes (maybe not even necessary).
>
> > Or it may be a defensive measure - in case
> > a future change makes a currently immutable object mutable.
> However, I also understand that the restrictive nature of the
> documentation could presumably be designed for changes in the future.
>
> Due to this I am a bit undecided whether I should base the C# binding
> on the current implementation, but there is probably not much else
> left for me. Even creating a separate request object for every
> individual line would not fully solve the problem to be quick in
> reading edge events and reading/writing lines of the same request
> object, while adhering to the restrictions of the current
> documentation.
>

Correct - going by the documentation the edge event handler thread
should be the only thread accessing that request.

My preference for higher level languages is to bypass libgpiod and go
direct to the kernel GPIO uAPI (the ioctl calls), which is thread safe.
Then you don't have to worry about libgpiod objects or the libgpiod API
and wrapping or mapping them to your language.
That is the approach I took for both my Go[1] and Rust[2] libraries.

OTOH I'm very familiar with the kernel uAPI (I wrote v2), which is a bit
convoluted due to the restrictions imposed by ioctl calls and kernel ABI
compatibility.  And I initially wrote the Go library to test the uAPI v2
as I wrote it, so libgpiod was not even an option then (libgpiod v1 only
supports the uAPI v1, and libgpiod v2 only supports uAPI v2).
Going direct to the uAPI also means you can support both uAPI v1 and v2,
if you want, though that is becoming less of an issue now uAPI v2 has
been out for a while.
Anyway, libgpiod is not the only option.

Cheers,
Kent.

[1] https://github.com/warthog618/gpiod (one of these days I'll get
around to renaming that to gpiocdev, but that day is not today)
[2] https://crates.io/crates/gpiocdev



* Re: [libgpiod] - fast writing while waiting for edge events
  2023-12-13  1:07         ` Kent Gibson
@ 2023-12-13 10:49           ` Mathias Dobler
  2023-12-13 11:51             ` Kent Gibson
  0 siblings, 1 reply; 10+ messages in thread
From: Mathias Dobler @ 2023-12-13 10:49 UTC (permalink / raw)
  To: Kent Gibson; +Cc: Bartosz Golaszewski, linux-gpio

Am Mi., 13. Dez. 2023 um 02:08 Uhr schrieb Kent Gibson <warthog618@gmail.com>:
> I wasn't suggesting that you were using libgpiod to generate the PWM - but
> the PWM signal is being used to generate edge events, right?  And you
> use a libgpiod request to receive those events?
Yes, exactly.

> Or are you using separate threads for lines within one request?
> That is not what multi-line requests are intended for, and you would be
> better off with separate requests for that case.
> Multi-line requests are for cases where you need to read or write a set
> of lines as atomically as possible - but you would do that from one
> thread.  They can also be used where you only have a single thread
> controlling several lines, so you only have to deal with the one request.
> But if you want to independently control separate lines from separate
> threads then separate requests is the way to go.
I have one request object for one line, 'req1', that's monitored (= wait
and read edge events) by one thread, 't1'. At the same time,
there lives another request object, 'req2', for multiple lines, that's
monitored by another thread, 't2'. My use case is that
when t1 reads an edge event, it executes a callback that needs to
read multiple values from 'req2'. Now the dilemma occurs.
Does t1 interrupt t2's wait for edge events to gain the mutex,
risking t2 missing edge events while t1 holds the mutex, or does
t1 bypass the mutex and violate the libgpiod threading contract?
I think this scenario can also be boiled down to having only one
request for one line. Monitoring edge events and, at the same time,
trying to read/write or perform other operations on the
request object would result in the same dilemma.
Again, this might just be a very specific use case with strong time
constraints (that I wish libgpiod supported out of the box).

> Correct - going by the documentation the edge event handler thread
> should be the only thread accessing that request.

> My preference for higher level languages is to bypass libgpiod and go
> direct to the kernel GPIO uAPI (the ioctl calls), which is thread safe.
> Then you don't have to worry about libgpiod objects or the libgpiod API
> and wrapping or mapping them to your language.
> That is the approach I took for both my Go[1] and Rust[2] libraries.

> OTOH I'm very familiar with the kernel uAPI (I wrote v2), which is a bit
> convoluted due to the restrictions imposed by ioctl calls and kernel ABI
> compatibility.  And I initially wrote the Go library to test the uAPI v2
> as I wrote it, so libgpiod was not even an option then (libgpiod v1 only
> supports the uAPI v1, and libgpiod v2 only supports uAPI v2).
> Going direct to the uAPI also means you can support both uAPI v1 and v2,
> if you want, though that is becoming less of an issue now uAPI v2 has
> been out for a while.
> Anyway, libgpiod is not the only option.
Hmm... sounds like an interesting challenge... and there would already
be something to look at.
Maybe a project for the future ;)

Regards,
Mathias


* Re: [libgpiod] - fast writing while waiting for edge events
  2023-12-13 10:49           ` Mathias Dobler
@ 2023-12-13 11:51             ` Kent Gibson
  2023-12-13 12:52               ` Mathias Dobler
  0 siblings, 1 reply; 10+ messages in thread
From: Kent Gibson @ 2023-12-13 11:51 UTC (permalink / raw)
  To: Mathias Dobler; +Cc: Bartosz Golaszewski, linux-gpio

On Wed, Dec 13, 2023 at 11:49:43AM +0100, Mathias Dobler wrote:
> Am Mi., 13. Dez. 2023 um 02:08 Uhr schrieb Kent Gibson <warthog618@gmail.com>:
>
> > Or are you using separate threads for lines within one request?
> > That is not what multi-line requests are intended for, and you would be
> > better off with separate requests for that case.
> > Multi-line requests are for cases where you need to read or write a set
> > of lines as atomically as possible - but you would do that from one
> > thread.  They can also be used where you only have a single thread
> > controlling several lines, so you only have to deal with the one request.
> > But if you want to independently control separate lines from separate
> > threads then separate requests is the way to go.

> I have one request object for one line, 'req1', that's monitored (= wait
> and read edge events) by one thread, 't1'. At the same time,
> there lives another request object, 'req2', for multiple lines, that's
> monitored by another thread, 't2'. My use case is that
> when t1 reads an edge event, it executes a callback that needs to
> read multiple values from 'req2'. Now the dilemma occurs.
> Does t1 interrupt t2's wait for edge events to gain the mutex,
> risking t2 missing edge events while t1 holds the mutex, or does
> t1 bypass the mutex and violate the libgpiod threading contract?
> I think this scenario can also be boiled down to having only one
> request for one line. Monitoring edge events and, at the same time,
> trying to read/write or perform other operations on the
> request object would result in the same dilemma.
> Again, this might just be a very specific use case with strong time
> constraints (that I wish libgpiod supported out of the box).
>

Firstly note that you cannot lose edge events. They are queued in the
kernel in case userspace is a bit busy.  It is still possible to overflow
the queue, but it takes serious effort.  You can check the seqnos in the
events to detect an overflow.

It is also a bit odd to be monitoring a line for edges AND polling it
at the same time.  You get edge events when it changes value, so polling
between edges is redundant.

I suggest not holding the mutex while waiting, only reading.
Holding a mutex while waiting is generally poor form.  Use a structural
mechanism to prevent the requests being closed while threads are waiting
on them. e.g. cancel the wait before performing the release.

Though if you are using a libgpiod function to perform the wait you are
still stuck, as going by the documentation you have to prevent other
access while you are waiting....

So you have to not use a libgpiod function and wait by poll()ing the
request fd.
At that point you may as well wait on both requests in the one thread.
And then you don't need the mutex as you only have one thread accessing the
requests.

Cheers,
Kent.


* Re: [libgpiod] - fast writing while waiting for edge events
  2023-12-13 11:51             ` Kent Gibson
@ 2023-12-13 12:52               ` Mathias Dobler
  2023-12-13 13:12                 ` Kent Gibson
  0 siblings, 1 reply; 10+ messages in thread
From: Mathias Dobler @ 2023-12-13 12:52 UTC (permalink / raw)
  To: Kent Gibson; +Cc: Bartosz Golaszewski, linux-gpio

> Firstly note that you cannot lose edge events. They are queued in the
> kernel in case userspace is a bit busy.  It is still possible to overflow
> the queue, but it takes serious effort.  You can check the seqnos in the
> events to detect an overflow.

I think the only thing that is lost is my memory sometimes.

> It is also a bit odd to be monitoring a line for edges AND polling it
> at the same time.  You get edge events when it changes value, so polling
> between edges is redundant.

Yeah, I might have to rethink my usage there...

> Though if you are using a libgpiod function to perform the wait you are
> still stuck, as going by the documentation you have to prevent other
> access while you are waiting....

> So you have to not use a libgpiod function and wait by poll()ing the
> request fd.
> At that point you may as well wait on both requests in the one thread.
> And then you don't need the mutex as you only have one thread accessing the
> requests.

I see. So that means waiting on the request fd is not affected by the
threading contract?
Thanks for your help.

Mathias


* Re: [libgpiod] - fast writing while waiting for edge events
  2023-12-13 12:52               ` Mathias Dobler
@ 2023-12-13 13:12                 ` Kent Gibson
  0 siblings, 0 replies; 10+ messages in thread
From: Kent Gibson @ 2023-12-13 13:12 UTC (permalink / raw)
  To: Mathias Dobler; +Cc: Bartosz Golaszewski, linux-gpio

On Wed, Dec 13, 2023 at 01:52:47PM +0100, Mathias Dobler wrote:
> > Firstly note that you cannot lose edge events. They are queued in the
> > kernel in case userspace is a bit busy.  It is still possible to overflow
> > the queue, but it takes serious effort.  You can check the seqnos in the
> > events to detect an overflow.
>
> I think the only thing that is lost is my memory sometimes.
>
> > It is also a bit odd to be monitoring a line for edges AND polling it
> > at the same time.  You get edge events when it changes value, so polling
> > between edges is redundant.
>
> Yeah, I might have to rethink my usage there...
>
> > Though if you are using a libgpiod function to perform the wait you are
> > still stuck, as going by the documentation you have to prevent other
> > access while you are waiting....
>
> > So you have to not use a libgpiod function and wait by poll()ing the
> > request fd.
> > At that point you may as well wait on both requests in the one thread.
> > And then you don't need the mutex as you only have one thread accessing the
> > requests.
>
> I see. So that means waiting on the request fd is not affected by the
> threading contract?

That might technically be a breach of contract, but the fd is
immutable for the lifetime of the request, and I don't see how
that could possibly change, so it is pretty safe.

Cheers,
Kent.



end of thread, other threads:[~2023-12-13 13:12 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-12-12  9:55 [libgpiod] - fast writing while waiting for edge events Mathias Dobler
2023-12-12 11:12 ` Bartosz Golaszewski
2023-12-12 13:01   ` Mathias Dobler
2023-12-12 16:02     ` Kent Gibson
2023-12-12 17:06       ` Mathias Dobler
2023-12-13  1:07         ` Kent Gibson
2023-12-13 10:49           ` Mathias Dobler
2023-12-13 11:51             ` Kent Gibson
2023-12-13 12:52               ` Mathias Dobler
2023-12-13 13:12                 ` Kent Gibson
