* [libgpiod] cxx bindings: time_point vs duration
@ 2020-10-15 8:38 Helmut Grohne
2020-10-15 9:26 ` Bartosz Golaszewski
0 siblings, 1 reply; 19+ messages in thread
From: Helmut Grohne @ 2020-10-15 8:38 UTC (permalink / raw)
To: linux-gpio, Bartosz Golaszewski
Hi,
I was trying to use the C++ bindings of libgpiod and wondered about
this.
| struct line_event {
| ...
| ::std::chrono::nanoseconds timestamp;
| ...
| };
std::chrono::nanoseconds is a duration, an interval of time. The member
is called timestamp. It is documented as an estimate of the time point
when the event actually happened. It seems to me that conceptually
std::chrono::time_point would have been the type of choice here. What
was the reason for not using it?
Helmut
^ permalink raw reply [flat|nested] 19+ messages in thread

* Re: [libgpiod] cxx bindings: time_point vs duration
2020-10-15 8:38 [libgpiod] cxx bindings: time_point vs duration Helmut Grohne
@ 2020-10-15 9:26 ` Bartosz Golaszewski
2020-10-15 9:35 ` Helmut Grohne
0 siblings, 1 reply; 19+ messages in thread
From: Bartosz Golaszewski @ 2020-10-15 9:26 UTC (permalink / raw)
To: Helmut Grohne; +Cc: open list:GPIO SUBSYSTEM, Bartosz Golaszewski

On Thu, Oct 15, 2020 at 10:44 AM Helmut Grohne <helmut.grohne@intenta.de> wrote:
>
> Hi,
>
> I was trying to use the C++ bindings of libgpiod and wondered about
> this.
>
> | struct line_event {
> | ...
> | ::std::chrono::nanoseconds timestamp;
> | ...
> | };
>
> std::chrono::nanoseconds is a duration, an interval of time. The member
> is called timestamp. It is documented as an estimate of the time point
> when the event actually happened. It seems to me that conceptually
> std::chrono::time_point would have been the type of choice here. What
> was the reason for not using it?
>
> Helmut

Hi Helmut!

I probably just didn't know any better. :) I'm a kernel developer and
writing these bindings was basically me learning C++.

Thanks for the suggestion - it's a good moment to make it, because
we're in the process of changing the API to accommodate the new uAPI
that will be released in v5.10 so I'll definitely make sure to change
it too.

Are you by any chance well versed in C++ and would like to help out by
giving me some advice? I want to fix the way GPIO line objects
reference their owning chips but I'm not sure how to.

Best Regards,
Bartosz Golaszewski

^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [libgpiod] cxx bindings: time_point vs duration
2020-10-15 9:26 ` Bartosz Golaszewski
@ 2020-10-15 9:35 ` Helmut Grohne
2020-10-15 10:05 ` Bartosz Golaszewski
0 siblings, 1 reply; 19+ messages in thread
From: Helmut Grohne @ 2020-10-15 9:35 UTC (permalink / raw)
To: Bartosz Golaszewski; +Cc: open list:GPIO SUBSYSTEM, Bartosz Golaszewski

Hi,

On Thu, Oct 15, 2020 at 11:26:47AM +0200, Bartosz Golaszewski wrote:
> I probably just didn't know any better. :) I'm a kernel developer and
> writing these bindings was basically me learning C++.

Oh, ok. Then let me elaborate on the difference between duration and
time_point a little and explain the implications of changing the type.

A duration denotes an interval in time. A time_point denotes a specific
point in time with respect to a particular clock. Every clock has an
epoch and you can convert a time_point to a duration by computing the
time_since_epoch. Comparing time_points with different clocks is thus
meaningless. Internally a time_point is represented exactly like a
duration. It just has the connection to the clock.

So what clock would you use here? There are a few standard clocks such
as std::chrono::steady_clock. This clock has an unspecified epoch. The
epoch will not change during use, but e.g. after a reboot of the system,
the epoch may be different. A clock also knows whether time increases
monotonically and a steady_clock does. On the other hand, the
system_clock has a well-specified epoch, but it is not required to be
steady. If you change the time on your machine, your system_clock may go
backwards. While system_clock may be what we want here, it has an
unspecified representation. libgpiod wants a specific representation
though to preserve the high precision of the included timestamps. We
therefore cannot use any of the standard clocks.

libgpiod should add its own clock class. The clock is never
instantiated. It is more a collection of functionality and a tag to be
included in templates to differentiate types.
I think your clock would look like this:

struct gpiod_clock {
	using duration = std::chrono::nanoseconds;
	using rep = duration::rep;
	using period = duration::period;
	using time_point = std::chrono::time_point<gpiod_clock>;
	static constexpr bool is_steady = std::chrono::system_clock::is_steady;
	static time_point now() {
		return time_point(std::chrono::system_clock::now().time_since_epoch());
	}
};

(The code might not work as is, but you get the idea.)

In essence, it's a system_clock with a different underlying duration
type for representing high resolution timestamps.

Even though changing the type likely does not change the binary
representation of timestamps, it still constitutes an API break and an
ABI break (due to changed symbol names).

> Thanks for the suggestion - it's a good moment to make it, because
> we're in the process of changing the API to accommodate the new uAPI
> that will be released in v5.10 so I'll definitely make sure to change
> it too.

Ok. I'm not particularly enthusiastic about updating client code all the
time for covering upstream API breaks, but sometimes it is unavoidable.
Seems to be the case here.

> Are you by any chance well versed in C++ and would like to help out by
> giving me some advice? I want to fix the way GPIO line objects
> reference their owning chips but I'm not sure how to.

I would consider myself experienced in C++.

As far as I can see, the chip class uses the pimpl pattern, so a chip
practically is a reference counted pointer to the actual gpiod_chip. A
line has a plain chip member and this seems totally fine. As long as the
line exists, so does the underlying chip. Is this not what you intended?

What is less clear to me is the connection between a line and its
underlying gpiod_line. It is unclear to me who "owns" a gpiod_line and
who is responsible for disposing of it. Since the line class is copy
constructible, the line class clearly cannot own a gpiod_line. I would
question whether this is a good decision.
I'm not sure that a line must be copyable. It should be movable. So if
you delete the copy constructor and the copy assignment for the line
class, then you could argue that a line owns its referenced gpiod_line
and then it could automatically handle release of resources upon
destruction.

Does this help?

Helmut

^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [libgpiod] cxx bindings: time_point vs duration
2020-10-15 9:35 ` Helmut Grohne
@ 2020-10-15 10:05 ` Bartosz Golaszewski
2020-10-15 10:57 ` Helmut Grohne
0 siblings, 1 reply; 19+ messages in thread
From: Bartosz Golaszewski @ 2020-10-15 10:05 UTC (permalink / raw)
To: Helmut Grohne; +Cc: open list:GPIO SUBSYSTEM, Bartosz Golaszewski

On Thu, Oct 15, 2020 at 11:35 AM Helmut Grohne <helmut.grohne@intenta.de> wrote:
>
> Hi,
>
> On Thu, Oct 15, 2020 at 11:26:47AM +0200, Bartosz Golaszewski wrote:
> > I probably just didn't know any better. :) I'm a kernel developer and
> > writing these bindings was basically me learning C++.
>
> Oh, ok. Then let me elaborate on the difference between duration and
> time_point a little and explain the implications of changing the type.
>
> A duration denotes an interval in time. A time_point denotes a specific
> point in time with respect to a particular clock. Every clock has an
> epoch and you can convert a time_point to a duration by computing the
> time_since_epoch. Comparing time_points with different clocks is thus
> meaningless. Internally a time_point is represented exactly like a
> duration. It just has the connection to the clock.
>
> So what clock would you use here? There are a few standard clocks such
> as std::chrono::steady_clock. This clock has an unspecified epoch. The
> epoch will not change during use, but e.g. after a reboot of the system,
> the epoch may be different. A clock also knows whether time increases
> monotonically and a steady_clock does. On the other hand, the
> system_clock has a well-specified epoch, but it is not required to be
> steady. If you change the time on your machine, your system_clock may go
> backwards. While system_clock may be what we want here, it has an
> unspecified representation. libgpiod wants a specific representation
> though to preserve the high precision of the included timestamps. We
> therefore cannot use any of the standard clocks.
>

In case of the event timestamps - we get them from the kernel as 64-bit
unsigned integers. They're then converted to struct timespec as defined
by libc and eventually to ::std::chrono::duration. The timestamps use
the MONOTONIC kernel clock - the same that you would use by calling
clock_gettime(CLOCK_MONOTONIC, ...). Is there any way to couple the C++
time_point to this clock?

> libgpiod should add its own clock class. The clock is never
> instantiated. It is more a collection of functionality and a tag to be
> included in templates to differentiate types. I think your clock would
> look like this:
>
> struct gpiod_clock {
>     using duration = std::chrono::nanoseconds;
>     using rep = duration::rep;
>     using period = duration::period;
>     using time_point = std::chrono::time_point<gpiod_clock>;
>     static constexpr bool is_steady = std::chrono::system_clock::is_steady;
>     static time_point now() {
>         return time_point(std::chrono::system_clock::now().time_since_epoch());
>     }
> };
>
> (The code might not work as is, but you get the idea.)
>
> In essence, it's a system_clock with a different underlying duration
> type for representing high resolution timestamps.
>
> Even though changing the type likely does not change the binary
> representation of timestamps, it still constitutes an API break and an
> ABI break (due to changed symbol names).
>
> > Thanks for the suggestion - it's a good moment to make it, because
> > we're in the process of changing the API to accommodate the new uAPI
> > that will be released in v5.10 so I'll definitely make sure to change
> > it too.
>
> Ok. I'm not particularly enthusiastic about updating client code all the
> time for covering upstream API breaks, but sometimes it is unavoidable.
> Seems to be the case here.
>

Me neither, but the new user API exposed by the kernel addresses a lot
of issues and adds new features and it's really impossible to keep the
current library API while also leveraging them.
I'll keep supporting the v1.6 stable branch for a long time though.

> > Are you by any chance well versed in C++ and would like to help out by
> > giving me some advice? I want to fix the way GPIO line objects
> > reference their owning chips but I'm not sure how to.
>
> I would consider myself experienced in C++.
>
> As far as I can see, the chip class uses the pimpl pattern, so a chip
> practically is a reference counted pointer to the actual gpiod_chip. A
> line has a plain chip member and this seems totally fine. As long as the
> line exists, so does the underlying chip. Is this not what you intended?
>

Yes, it's what I intended indeed. I'm however worried that this isn't
the best approach. Having learned more, I now think that lines should
somehow weakly reference the chip - to emphasize that they don't
control its lifetime in any way (which is the case now with each line
storing a hard reference to the chip). Also, what happens currently is
that a new line object is created each time the get_line() method is
called, because we can't store lines in some array within the chip -
that would lead to circular references. Maybe a line should store a
weak_ptr to the underlying ::gpiod_chip? But then it would create a new
chip object every time get_chip() is called. Is there any way around it?
Making the chip and line non-copyable and delegating any shared_ptr
management to the user?

> What is less clear to me is the connection between a line and its
> underlying gpiod_line. It is unclear to me who "owns" a gpiod_line and
> who is responsible for disposing it. Since the line class is copy
> constructible, the line class clearly cannot own a gpiod_line. I would
> question whether this is a good decision. I'm not sure that a line must
> be copyable. It should be movable.
> So if you delete the copy constructor
> and the copy assignment for the line class, then you could argue that a
> line owns its referenced gpiod_line and then it could automatically
> handle release of resources upon destruction.
>

The thing with the gpiod_line struct (the one from C libgpiod, not the
C++ class) is that the owner is the chip (struct gpiod_chip) - there's
no need to free any resources, they'll be freed when the chip goes out
of scope. You can copy the line pointer all you want, there's always a
single line object stored behind the opaque struct gpiod_chip. So in
C++ - I suppose - the chip should really own the C++ line (stored in a
vector maybe) and the line should at most weakly reference the chip
object. I'm just not sure how to correctly approach this so any advice
is welcome.

Best Regards
Bartosz

^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [libgpiod] cxx bindings: time_point vs duration
2020-10-15 10:05 ` Bartosz Golaszewski
@ 2020-10-15 10:57 ` Helmut Grohne
2020-10-15 11:43 ` Bartosz Golaszewski
0 siblings, 1 reply; 19+ messages in thread
From: Helmut Grohne @ 2020-10-15 10:57 UTC (permalink / raw)
To: Bartosz Golaszewski; +Cc: open list:GPIO SUBSYSTEM, Bartosz Golaszewski

Hi,

On Thu, Oct 15, 2020 at 12:05:23PM +0200, Bartosz Golaszewski wrote:
> In case of the event timestamps - we get them from the kernel as
> 64-bit unsigned integers. They're then converted to struct timespec as
> defined by libc and eventually to ::std::chrono::duration. The
> timestamps use the MONOTONIC kernel clock - the same that you would
> use by calling clock_gettime(CLOCK_MONOTONIC, ...). Is there any way
> to couple the C++ time_point to this clock?

I got that wrong then as I thought they were wall clock time.

CLOCK_MONOTONIC is the thing that backs std::chrono::steady_clock on
Linux. At least gcc and clang's libcxx implement steady_clock using
nanosecond resolution. I don't think nanosecond resolution is
guaranteed, but maybe this is good enough and you can just use
steady_clock? That would certainly be most welcome by consuming client
code.

> Me neither, but the new user API exposed by the kernel addresses a lot
> of issues and adds new features and it's really impossible to keep the
> current library API while also leveraging them. I'll keep supporting
> the v1.6 stable branch for a long time though.

Great news. Thank you.

> Yes, it's what I intended indeed. I'm however worried that this isn't
> the best approach. Having learned more, I now think that lines should
> somehow weakly reference the chip - to emphasize that they don't
> control its lifetime in any way (which is the case now with each line
> storing a hard reference to the chip).
> Also what happens currently is
> the fact that a new line object is created each time get_line() method
> is called because we can't store lines in some array within the chip
> because that would lead to circular references. Maybe a line should
> store a weak_ptr to the underlying ::gpiod_chip? But then it would
> create a new chip object every time get_chip() is called. Is there any
> way around it? Making the chip and line non-copyable and delegating
> any shared_ptr management to the user?

First of all, I don't think any of these types need to be copyable.
Keeping them moveable should be simple and makes handling them easy
enough.

It seems like you'd want a chip to include all lines as is the case for
the C API already. In that case, a line does not exist by itself, it
only is an observable part of a chip. I'm not convinced that a line
should weakly reference a chip. If you have a line and the underlying
chip disappears, all the gpiod_lines get cleared and your line suddenly
has a dangling reference. Actually, your line would be gone if it was
part of the chip. That does not seem useful to me. So the current design
of having lines control the lifetime of the chip seems useful to me.

Why do you take issue with the fact that each get_line creates a new
line object? This line object practically is a counted reference on the
chip together with a line identifier. It kinda is a complex pointer. Why
not have two distinct pointers point to the same thing?

Likewise, creating new chip objects is not a problem in my book, because
a chip object is a (counted) reference to a gpiod_chip. Having two
pointers should be ok. Creating them is cheap.

> The thing with gpiod_line struct (the one from C libgpiod, not C++
> class) is that the owner is the chip (struct gpiod_chip) - there's no
> need to free any resources, they'll be freed when the chip goes out of
> scope.
> You can copy the line pointer all you want, there's always a
> single line object stored behind the opaque struct gpiod_chip. So in
> C++ - I suppose - the chip should really own the C++ line (stored in a
> vector maybe) and the line should at most weakly reference the chip
> object. I'm just not sure how to correctly approach this so any advice
> is welcome.

That helps a lot with my understanding. I mostly concur up to the point
where you say that a chip should own the C++ line. The C++ line
currently is a counted reference on a gpiod_chip together with a line
identifier. I don't see what's so precious about this to ensure it is
not copied. That seems entirely fine with me.

Now if you go the route and have chips own lines, then your back
reference from line to chip can be a plain pointer. The forward
reference ensures that the pointer is always valid. It's still circular,
but you don't have to mess with weak_ptrs as long as destruction is
careful.

So this seems mostly fine as is. Having lines manage the lifetime of
chips is very convenient for using the library.

Helmut

^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [libgpiod] cxx bindings: time_point vs duration
2020-10-15 10:57 ` Helmut Grohne
@ 2020-10-15 11:43 ` Bartosz Golaszewski
2020-10-15 12:13 ` Helmut Grohne
0 siblings, 1 reply; 19+ messages in thread
From: Bartosz Golaszewski @ 2020-10-15 11:43 UTC (permalink / raw)
To: Helmut Grohne; +Cc: open list:GPIO SUBSYSTEM, Bartosz Golaszewski

On Thu, Oct 15, 2020 at 12:57 PM Helmut Grohne <helmut.grohne@intenta.de> wrote:
>
> Hi,
>
> On Thu, Oct 15, 2020 at 12:05:23PM +0200, Bartosz Golaszewski wrote:
> > In case of the event timestamps - we get them from the kernel as
> > 64-bit unsigned integers. They're then converted to struct timespec as
> > defined by libc and eventually to ::std::chrono::duration. The
> > timestamps use the MONOTONIC kernel clock - the same that you would
> > use by calling clock_gettime(CLOCK_MONOTONIC, ...). Is there any way
> > to couple the C++ time_point to this clock?
>
> I got that wrong then as I thought they were wall clock time.

Well, it's a long story. It used to be what the kernel calls REALTIME
clock, then it was changed to MONOTONIC and now there's a suggestion
to make it configurable in v2. More on that here[1].

> CLOCK_MONOTONIC is the thing that backs std::chrono::steady_clock on
> Linux. At least gcc and clang's libcxx implement steady_clock using
> nanosecond resolution. I don't think nanosecond resolution is
> guaranteed, but maybe this is good enough and you can just use
> steady_clock? That would certainly be most welcome by consuming client
> code.

One question is: even if on Linux the steady_clock is backed by
CLOCK_MONOTONIC, is this a guarantee or just an implementation detail?
And can we rely on this if it's not defined?

> > Me neither, but the new user API exposed by the kernel addresses a lot
> > of issues and adds new features and it's really impossible to keep the
> > current library API while also leveraging them. I'll keep supporting
> > the v1.6 stable branch for a long time though.
>
> Great news. Thank you.
> > > Yes, it's what I intended indeed. I'm however worried that this isn't > > the best approach. Having learned more, I now think that lines should > > somehow weakly reference the chip - to emphasize that they don't > > control its lifetime in any way (which is the case now with each line > > storing a hard reference to the chip). Also what happens currently is > > the fact that a new line object is created each time get_line() method > > is called because we can't store lines in some array within the chip > > because that would lead to circular references. Maybe a line should > > store a weak_ptr to the underlying ::gpiod_chip? But then it would > > create a new chip object every time get_chip() is called. Is there any > > way around it? Making the chip and line non-copyable and expediting > > any shared_ptr management to the user? > > First of all, I don't think any of these types need to be copyable. > Keeping them moveable should be simple and makes handling them easy > enough. > Right now we rely on them being copyable to store them in ::gpiod::line_bulk objects so that needs addressing too. > It seems like you'd want a chip to include all lines as is the case for > the C-api already. In that case, a line does not exist by itself, it > only is an observable part of a chip. I'm not convinced that a line > should weakly reference a chip. If you have a line and the underlying > chip disappears, all the gpiod_lines get cleared and your line suddenly > has a dangling reference. Actually, you line would be gone if it was > part of the chip. That does not seem useful to me. So the current design > of having lines control the lifetime of the chip seems useful to me. > Yes, except that with a weak_ptr we'd at least get a bad_weak_ptr exception instead of a segfault if the programmer goofs and lets the chip disappear before using a line. > Why do you take issue with the fact that each get_line creates a new > line object? 
This line object practically is a counted reference on the > chip together with a line identifier. It kinda is a complex pointer. Why > not have two distinct pointers point to the same thing? > It started with Python bindings actually. The pattern over there is similar except that Python objects are much more complex to create and initialize. Then I realized that C++ does the same. It also seems to me like reversed logic where a line has the power to keep the chip alive. Just doesn't feel good and I haven't seen this pattern anywhere. > Likewise creating new chip objects is not a problem in my book, because > a chip object is a (counted) reference to a gpiod_chip. Having two > pointers should be ok. Creating them is cheap. > Fair enough. > > The thing with gpiod_line struct (the one from C libgpiod, not C++ > > class) is that the owner is the chip (struct gpiod_chip) - there's no > > need to free any resources, they'll be freed when the chip goes out of > > scope. You can copy the line pointer all you want, there's always a > > single line behind stored in the opaque struct gpiod_chip. So in C++ - > > I suppose - the chip should really own the C++ line (stored in a > > vector maybe) and the line should at most weakly reference the chip > > object. I'm just not sure how to correctly approach this so any advice > > is welcome. > > That helps a lot with my understanding. I mostly concur up to the point > where you say that a chip should own the C++ line. The C++ line > currently is a counted reference on a gpiod_chip together with a line > identifier. I don't see what's so precious about this to ensure it is > not copied. That seems entirely fine with me. > > Now if you go the route and have chips own lines, then your back > reference from line to chip can be a plain pointer. The forward > reference ensures that the pointer is always valid. It's still circular, > but you don't have to mess with weak_ptrs as long as destruction is > careful. 
> > So this seems mostly fine as is. Having lines manage the lifetime of > chips is very convenient for using the library. > > Helmut That's interesting to read. I was under the impression that this is bad C++ (or OOP in general). I have to think about it some more though, because my gut is telling me this is just wrong on the logical level - even if it's convenient. Bartosz [1] https://www.spinics.net/lists/linux-gpio/msg53796.html ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [libgpiod] cxx bindings: time_point vs duration
2020-10-15 11:43 ` Bartosz Golaszewski
@ 2020-10-15 12:13 ` Helmut Grohne
2020-10-15 12:16 ` Bartosz Golaszewski
0 siblings, 1 reply; 19+ messages in thread
From: Helmut Grohne @ 2020-10-15 12:13 UTC (permalink / raw)
To: Bartosz Golaszewski; +Cc: open list:GPIO SUBSYSTEM, Bartosz Golaszewski

On Thu, Oct 15, 2020 at 01:43:32PM +0200, Bartosz Golaszewski wrote:
> Well, it's a long story. It used to be what the kernel calls REALTIME
> clock, then it was changed to MONOTONIC and now there's a suggestion
> to make it configurable in v2. More on that here[1].

Ouch. I was wondering already as the timestamps looked like REALTIME
here, but I'm simply using an older kernel. In that case the type of
your timestamps should depend on the Linux kernel version, which is
impossible to do. All you can do now is lie for older kernels.

> One question is: even if on linux the steady_clock is backed by
> CLOCK_MONOTONIC, is this a guarantee or just implementation? And can
> we rely on this if it's not defined?

Like the nanosecond resolution of steady_clock this is certainly not
guaranteed. However, it is rooted so deeply that it is very unlikely to
change. In theory, there could be a chance of changing it to
CLOCK_BOOTTIME. I don't think anyone is going to try.

At this point I recommend going with steady_clock.

Helmut

^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [libgpiod] cxx bindings: time_point vs duration
2020-10-15 12:13 ` Helmut Grohne
@ 2020-10-15 12:16 ` Bartosz Golaszewski
2020-10-21 13:57 ` Jack Winch
0 siblings, 1 reply; 19+ messages in thread
From: Bartosz Golaszewski @ 2020-10-15 12:16 UTC (permalink / raw)
To: Helmut Grohne; +Cc: open list:GPIO SUBSYSTEM, Bartosz Golaszewski

On Thu, Oct 15, 2020 at 2:13 PM Helmut Grohne <helmut.grohne@intenta.de> wrote:
>
> On Thu, Oct 15, 2020 at 01:43:32PM +0200, Bartosz Golaszewski wrote:
> > Well, it's a long story. It used to be what the kernel calls REALTIME
> > clock, then it was changed to MONOTONIC and now there's a suggestion
> > to make it configurable in v2. More on that here[1].
>
> Ouch. I was wondering already as the timestamps looked like REALTIME
> here, but I'm simply using an older kernel. In that case the type of
> your timestamps should depend on the Linux kernel version, which is
> impossible to do. All you can do now is lie for older kernels.
>

The library doesn't really define what the timestamp is, and so doesn't
guarantee anything either. This will change in v2 as we'll make this
clock configurable, so we'll have to define that by default the clock
is MONOTONIC and can be explicitly changed to REALTIME.

> > One question is: even if on linux the steady_clock is backed by
> > CLOCK_MONOTONIC, is this a guarantee or just implementation? And can
> > we rely on this if it's not defined?
>
> Like the nanosecond resolution of steady_clock this is certainly not
> guaranteed. However, it is rooted so deeply that it is very unlikely to
> change. In theory, there could be a chance of changing it to
> CLOCK_BOOTTIME. I don't think anyone is going to try.
>
> At this point I recommend going with steady_clock.
>
> Helmut

Ok, will do.

Bartosz

^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [libgpiod] cxx bindings: time_point vs duration
2020-10-15 12:16 ` Bartosz Golaszewski
@ 2020-10-21 13:57 ` Jack Winch
2020-10-21 14:35 ` Jack Winch
` (2 more replies)
0 siblings, 3 replies; 19+ messages in thread
From: Jack Winch @ 2020-10-21 13:57 UTC (permalink / raw)
To: Bartosz Golaszewski
Cc: Helmut Grohne, open list:GPIO SUBSYSTEM, Bartosz Golaszewski

Hello Folks,

Me again. I'm a bit late on this discussion, but wanted to raise a
couple of points on what's been discussed here.

> As far as I can see, the chip class uses the pimpl pattern, so a chip
> practically is a reference counted pointer to the actual gpiod_chip.

This has absolutely nothing to do with the PImpl pattern. The
Pointer-to-Implementation pattern is about separating class interface
from implementation, in order to partially alleviate the absolute
nightmare of managing ABI compatibility for shared libraries which
expose a C++ interface.

One of the questions I was going to raise on a separate thread, Bartosz
and Kent, was whether you care about ABI compatibility for major
versions of the libgpiod API? This is because, currently, almost all
changes to the C++ binding will result in an ABI breakage.

Exposing public C++ interfaces from shared libraries is never really a
good idea, even if ABI compatibility is properly managed and considered
at the forefront of each development cycle. Granted, this is more
troublesome on that other major mainstream OS, but you still face
plenty of issues with this on GNU / Linux too.

If so desired, I can start another thread on this topic.

> I don't think nanosecond resolution is
> guaranteed, but maybe this is good enough and you can just use
> steady_clock? That would certainly be most welcome by consuming client
> code.

You are correct - nanosecond resolution is not guaranteed. It is
completely up to the standard library implementation. Which is why I,
personally, would steer away from making the proposed change to struct
line_event.
The timestamp resolution is currently well defined in the existing implementation and changing this may not be desirable for users. If you really want a std::time_point, then you can construct one from a std::duration object. See https://en.cppreference.com/w/cpp/chrono/time_point/time_point. Jack ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [libgpiod] cxx bindings: time_point vs duration 2020-10-21 13:57 ` Jack Winch @ 2020-10-21 14:35 ` Jack Winch 2020-10-21 15:14 ` Bartosz Golaszewski 2020-10-22 6:39 ` Helmut Grohne 2 siblings, 0 replies; 19+ messages in thread From: Jack Winch @ 2020-10-21 14:35 UTC (permalink / raw) To: Bartosz Golaszewski Cc: Helmut Grohne, open list:GPIO SUBSYSTEM, Bartosz Golaszewski > If you really want a std::time_point, then you can construct > one from a std::duration object. Sorry that should be: - std::chrono::time_point - std::chrono::duration Jack ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [libgpiod] cxx bindings: time_point vs duration
  2020-10-21 13:57       ` Jack Winch
  2020-10-21 14:35         ` Jack Winch
@ 2020-10-21 15:14         ` Bartosz Golaszewski
  2020-10-21 15:44           ` Jack Winch
  2020-10-22  6:39         ` Helmut Grohne
  2 siblings, 1 reply; 19+ messages in thread

From: Bartosz Golaszewski @ 2020-10-21 15:14 UTC (permalink / raw)
To: Jack Winch; +Cc: Helmut Grohne, open list:GPIO SUBSYSTEM, Bartosz Golaszewski

On Wed, Oct 21, 2020 at 3:57 PM Jack Winch <sunt.un.morcov@gmail.com> wrote:
>
> Hello Folks,
>
> Me again. I'm a bit late to this discussion, but wanted to raise a
> couple of points on what's been discussed here.
>
> > As far as I can see, the chip class uses the pimpl pattern, so a chip
> > practically is a reference-counted pointer to the actual gpiod_chip.
>
> This has absolutely nothing to do with the Pimpl pattern. The
> Pointer-to-Implementation pattern is about separating a class's
> interface from its implementation, in order to partially alleviate the
> absolute nightmare of managing ABI compatibility for shared libraries
> that expose a C++ interface.
>
> One of the questions I was going to raise on a separate thread,
> Bartosz and Kent, was whether you care about ABI compatibility for
> major versions of the libgpiod API. This is because, currently, almost
> all changes to the C++ binding will result in an ABI breakage.
>

Yes, I am aware of this. We broke the ABI in the core library once
already - this is why libgpiod API version v1.6.0 has ABI version
2.2.1. I personally care a lot more about API compatibility than ABI -
I mostly use libgpiod on bespoke embedded distros built from scratch.
I know ABI stability is important for desktop distros though, so I try
to avoid breaking it whenever possible. This time, however, we'll be
breaking both the API and the ABI, so that's not an issue - it really
is a major release.

> Exposing public C++ interfaces from shared libraries is never really a
> good idea, even if ABI compatibility is properly managed and
> considered at the forefront of each development cycle. Granted, this
> is more troublesome on that other major mainstream OS, but you still
> face plenty of issues with this on GNU/Linux too.
>
> If so desired, I can start another thread on this topic.
>

Yes, please. I'd love to learn what the alternative for C++ is then.

> > I don't think nanosecond resolution is
> > guaranteed, but maybe this is good enough and you can just use
> > steady_clock? That would certainly be most welcome by consuming
> > client code.
>
> You are correct - nanosecond resolution is not guaranteed. It is
> completely up to the standard library implementation. Which is why I,
> personally, would steer away from making the proposed change to struct
> line_event. The timestamp resolution is currently well defined in the
> existing implementation, and changing this may not be desirable for
> users. If you really want a std::time_point, then you can construct
> one from a std::duration object. See
> https://en.cppreference.com/w/cpp/chrono/time_point/time_point.
>
> Jack

Maybe a time_point-returning helper in this case?

Bartosz
* Re: [libgpiod] cxx bindings: time_point vs duration
  2020-10-21 15:14         ` Bartosz Golaszewski
@ 2020-10-21 15:44           ` Jack Winch
  0 siblings, 0 replies; 19+ messages in thread

From: Jack Winch @ 2020-10-21 15:44 UTC (permalink / raw)
To: Bartosz Golaszewski
Cc: Helmut Grohne, open list:GPIO SUBSYSTEM, Bartosz Golaszewski

> Yes, please. I'd love to learn what the alternative for C++ is then.

I'll prep and send something out over the coming days (possibly this
weekend, as I've got some things to address by Friday). I'm currently
putting together a talk on the subject, so I'll also share that with
you once it is done. That will likely be in a few weeks, given my
other current priorities.

> Maybe a time_point returning helper in this case?

That would definitely satisfy both needs.

Jack
* Re: [libgpiod] cxx bindings: time_point vs duration
  2020-10-21 13:57       ` Jack Winch
  2020-10-21 14:35         ` Jack Winch
  2020-10-21 15:14         ` Bartosz Golaszewski
@ 2020-10-22  6:39         ` Helmut Grohne
  2020-10-22  9:09           ` Jack Winch
  2 siblings, 1 reply; 19+ messages in thread

From: Helmut Grohne @ 2020-10-22 6:39 UTC (permalink / raw)
To: Jack Winch
Cc: Bartosz Golaszewski, open list:GPIO SUBSYSTEM, Bartosz Golaszewski

On Wed, Oct 21, 2020 at 03:57:34PM +0200, Jack Winch wrote:
> > I don't think nanosecond resolution is
> > guaranteed, but maybe this is good enough and you can just use
> > steady_clock? That would certainly be most welcome by consuming
> > client code.
>
> You are correct - nanosecond resolution is not guaranteed. It is
> completely up to the standard library implementation. Which is why I,
> personally, would steer away from making the proposed change to struct
> line_event. The timestamp resolution is currently well defined in the
> existing implementation, and changing this may not be desirable for
> users. If you really want a std::time_point, then you can construct
> one from a std::duration object. See
> https://en.cppreference.com/w/cpp/chrono/time_point/time_point.

You're arguing that a std::chrono::steady_clock::time_point is not a
good match due to its undefined ratio. That can be fixed by using a
clock with a well-defined ratio.

The key here is that while you can easily convert your duration to a
time_point, a duration is conceptually the wrong thing to use. The
field does not contain a duration, but a time_point. Using a clock
would give the user the ability to compare returned timestamps to the
current time, as the underlying clock provides that functionality.

So regardless of whether steady_clock is the right clock to use here,
a duration clearly is not. If you are not satisfied with the
resolution guarantee of steady_clock, just make your own clock. Doing
so buys a lot of type safety. For instance, if you accidentally
compute a difference between a system_clock::time_point and a gpiod
timestamp, using a duration would just work, whereas a time_point
would result in a compilation failure.

Helmut
* Re: [libgpiod] cxx bindings: time_point vs duration
  2020-10-22  6:39         ` Helmut Grohne
@ 2020-10-22  9:09           ` Jack Winch
  2020-10-22  9:35             ` Bartosz Golaszewski
  0 siblings, 1 reply; 19+ messages in thread

From: Jack Winch @ 2020-10-22 9:09 UTC (permalink / raw)
To: Helmut Grohne
Cc: Bartosz Golaszewski, open list:GPIO SUBSYSTEM, Bartosz Golaszewski

> You're arguing that a std::chrono::steady_clock::time_point is not a
> good match due to its undefined ratio.

std::chrono::steady_clock::time_point is not a good match for the
reasons you have quoted.

> That can be fixed by using a clock with a well-defined ratio.

I agree that defining a clock with a suitable ratio would remove the
problems faced when using a clock with an undefined ratio.

> The key here is that while you can easily convert your duration to a
> time_point, a duration is conceptually the wrong thing to use.

I agree that the timestamp value is conceptually a time_point.

> If you are not satisfied with the resolution
> guarantee of steady_clock, just make your own clock. Doing so buys a
> lot of type safety. For instance, if you accidentally compute a
> difference between a system_clock::time_point and a gpiod timestamp,
> using a duration would just work, whereas a time_point would result
> in a compilation failure.

I also agree with the excellent point you raise regarding type safety
and preferring compile-time errors to runtime errors. Where possible,
care should be taken to mitigate or eradicate the potential for these
kinds of runtime bug.

The problem is that the solution you suggested was not suitable in its
current form. There are numerous issues with the proposal above which
require further discussion. As has been pointed out, changes were made
to the character device in Linux 5.7 such that line events are
timestamped using the system 'monotonic' clock as opposed to the
'realtime' clock. So the clock that the timestamp is relative to is
now kernel-version dependent. The ability to configure this will
become available in the next version of the kernel, but the problem of
which system clock is being used on older kernels will remain.

I rebuffed the suggestion in the manner that I did because the change,
as proposed, would have caused issues. I completely understand and
agree with the key rationale behind it. With further discussion, a
suitable solution might be found, but it requires further thought.
What I did not want to see happen is the change as currently proposed
being applied and causing issues in the future.

That being said, let's continue this discussion to see what can be
done (and preferably without any p*ssing contest). The remainder of
this week is no good for me, but would you be available to discuss
potential solutions next week, Helmut?

Jack
* Re: [libgpiod] cxx bindings: time_point vs duration
  2020-10-22  9:09           ` Jack Winch
@ 2020-10-22  9:35             ` Bartosz Golaszewski
  2020-10-22  9:47               ` Jack Winch
  0 siblings, 1 reply; 19+ messages in thread

From: Bartosz Golaszewski @ 2020-10-22 9:35 UTC (permalink / raw)
To: Jack Winch; +Cc: Helmut Grohne, open list:GPIO SUBSYSTEM, Bartosz Golaszewski

On Thu, Oct 22, 2020 at 11:09 AM Jack Winch <sunt.un.morcov@gmail.com> wrote:
>
> [snip]
>
> What I did not want to see happen is the change as currently proposed
> be applied and cause issues in the future.
>

Jack: FYI the clock configuration didn't make it this merge window, so
it'll be released in v5.11. I also don't envision making a libgpiod
v2.0 release any earlier than that, so we have plenty of time to
discuss it and come to the right conclusion.

Bartosz

[snip]
* Re: [libgpiod] cxx bindings: time_point vs duration
  2020-10-22  9:35             ` Bartosz Golaszewski
@ 2020-10-22  9:47               ` Jack Winch
  2020-10-22 11:55                 ` Bartosz Golaszewski
  0 siblings, 1 reply; 19+ messages in thread

From: Jack Winch @ 2020-10-22 9:47 UTC (permalink / raw)
To: Bartosz Golaszewski
Cc: Helmut Grohne, open list:GPIO SUBSYSTEM, Bartosz Golaszewski

> Jack: FYI the clock configuration didn't make it this merge window so
> it'll be released in v5.11.

Ah, that is a shame. Such is life, though.

> I also don't envision making a libgpiod
> v2.0 release any earlier than that so we have plenty of time to
> discuss it and come to the right conclusion.

That is also a shame, but at least that gives some time to undertake
further review of libgpiod v2.0 and potentially make some further
improvements. What's the time window for this?

Jack

On Thu, Oct 22, 2020 at 12:36 PM Bartosz Golaszewski <brgl@bgdev.pl> wrote:
>
> On Thu, Oct 22, 2020 at 11:09 AM Jack Winch <sunt.un.morcov@gmail.com> wrote:
> >
> > [snip]
> >
> > What I did not want to see happen is the change as currently proposed
> > be applied and cause issues in the future.
> >
>
> Jack: FYI the clock configuration didn't make it this merge window so
> it'll be released in v5.11. I also don't envision making a libgpiod
> v2.0 release any earlier than that so we have plenty of time to
> discuss it and come to the right conclusion.
>
> Bartosz
>
> [snip]
* Re: [libgpiod] cxx bindings: time_point vs duration
  2020-10-22  9:47               ` Jack Winch
@ 2020-10-22 11:55                 ` Bartosz Golaszewski
  2020-10-22 12:22                   ` Jack Winch
  0 siblings, 1 reply; 19+ messages in thread

From: Bartosz Golaszewski @ 2020-10-22 11:55 UTC (permalink / raw)
To: Jack Winch; +Cc: Helmut Grohne, open list:GPIO SUBSYSTEM, Bartosz Golaszewski

On Thu, Oct 22, 2020 at 11:47 AM Jack Winch <sunt.un.morcov@gmail.com> wrote:
>
> [snip]
>
> > I also don't envision making a libgpiod
> > v2.0 release any earlier than that so we have plenty of time to
> > discuss it and come to the right conclusion.
>
> That is also a shame, but at least that gives some time to undertake
> further review of libgpiod v2.0 and potentially make some further
> improvements. What's the time window for this?
>

Why would that be a shame? We have the chance to rework the API of the
entire package, and I want to give ourselves some time to get it right
before we carve anything in stone for an indefinite period of time.
There's no rush, really.

Bartosz
* Re: [libgpiod] cxx bindings: time_point vs duration
  2020-10-22 11:55                 ` Bartosz Golaszewski
@ 2020-10-22 12:22                   ` Jack Winch
  2020-10-23 16:22                     ` Bartosz Golaszewski
  0 siblings, 1 reply; 19+ messages in thread

From: Jack Winch @ 2020-10-22 12:22 UTC (permalink / raw)
To: Bartosz Golaszewski
Cc: Helmut Grohne, open list:GPIO SUBSYSTEM, Bartosz Golaszewski

> Why would that be a shame? We have the chance to rework the API of the
> entire package, I want to give ourselves some time to get it right
> before we carve anything in stone for an indefinite period of time.
> There's no rush, really.

You're right. :)

I will start the discussion with you next week regarding ABI
compatibility. I'll give you a rundown of the issues and the potential
solutions, and we will discuss the usage of libgpiod and the C++
bindings.

Jack

On Thu, Oct 22, 2020 at 2:56 PM Bartosz Golaszewski <brgl@bgdev.pl> wrote:
>
> On Thu, Oct 22, 2020 at 11:47 AM Jack Winch <sunt.un.morcov@gmail.com> wrote:
> >
> > [snip]
> >
> > > I also don't envision making a libgpiod
> > > v2.0 release any earlier than that so we have plenty of time to
> > > discuss it and come to the right conclusion.
> >
> > That is also a shame, but at least that gives some time to undertake
> > further review of libgpiod v2.0 and potentially make some further
> > improvements. What's the time window for this?
> >
>
> Why would that be a shame? We have the chance to rework the API of the
> entire package, I want to give ourselves some time to get it right
> before we carve anything in stone for an indefinite period of time.
> There's no rush, really.
>
> Bartosz
* Re: [libgpiod] cxx bindings: time_point vs duration
  2020-10-22 12:22                   ` Jack Winch
@ 2020-10-23 16:22                     ` Bartosz Golaszewski
  0 siblings, 0 replies; 19+ messages in thread

From: Bartosz Golaszewski @ 2020-10-23 16:22 UTC (permalink / raw)
To: Jack Winch; +Cc: Helmut Grohne, open list:GPIO SUBSYSTEM, Bartosz Golaszewski

On Thu, Oct 22, 2020 at 2:22 PM Jack Winch <sunt.un.morcov@gmail.com> wrote:
>
> > Why would that be a shame? We have the chance to rework the API of the
> > entire package, I want to give ourselves some time to get it right
> > before we carve anything in stone for an indefinite period of time.
> > There's no rush, really.
>
> You're right. :)
>
> I will start the talk with you next week regarding ABI compatibility.
> I'll give you a rundown of the issues, the potential solutions and we
> will discuss the usage of libgpiod and the C++ bindings.
>

Hi Jack!

I started reading on my own and I think I now have a slightly better
idea about C++ and its ABI. I also see what a mess the original
libgpiod bindings are in terms of ABI compatibility, but fear not!
Right now (v2.0) is the time to make it better! :)

On a personal level I'm not too concerned about the ABI compatibility
of the C++ bindings - I care much more about the API. This is because
libgpiod is aimed mostly at bespoke embedded distros (Yocto,
Buildroot, OpenWrt, etc.). I understand, however, that it's an
important issue for distros.

I didn't know any better at the time of writing libgpiodcxx, so I just
put all private members in the main header, exposing them to the users
of the library. I'm not sure why I didn't realize that C++ classes are
basically C structs (and exposing them amounts to exposing a struct in
a C header), but I just didn't know any better.

I assume that you'll either propose using the Pimpl pattern or a
header-only library. I noticed that Pimpl is what Qt5 uses, while
header-only is more of a Boost thing. If so, the timing is great, as
I'm open to either solution for libgpiod v2.0.

Best regards,
Bartosz Golaszewski
end of thread, other threads: [~2020-10-23 16:22 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed
-- links below jump to the message on this page --)
2020-10-15  8:38 [libgpiod] cxx bindings: time_point vs duration Helmut Grohne
2020-10-15  9:26 ` Bartosz Golaszewski
2020-10-15  9:35   ` Helmut Grohne
2020-10-15 10:05     ` Bartosz Golaszewski
2020-10-15 10:57       ` Helmut Grohne
2020-10-15 11:43         ` Bartosz Golaszewski
2020-10-15 12:13           ` Helmut Grohne
2020-10-15 12:16             ` Bartosz Golaszewski
2020-10-21 13:57               ` Jack Winch
2020-10-21 14:35                 ` Jack Winch
2020-10-21 15:14                 ` Bartosz Golaszewski
2020-10-21 15:44                   ` Jack Winch
2020-10-22  6:39                 ` Helmut Grohne
2020-10-22  9:09                   ` Jack Winch
2020-10-22  9:35                     ` Bartosz Golaszewski
2020-10-22  9:47                       ` Jack Winch
2020-10-22 11:55                         ` Bartosz Golaszewski
2020-10-22 12:22                           ` Jack Winch
2020-10-23 16:22                             ` Bartosz Golaszewski