* [libgpiod] Rethinking struct gpiod_line_bulk
@ 2020-10-12 15:15 Bartosz Golaszewski
2020-10-12 15:21 ` Andy Shevchenko
2020-10-13 0:52 ` Kent Gibson
0 siblings, 2 replies; 17+ messages in thread
From: Bartosz Golaszewski @ 2020-10-12 15:15 UTC (permalink / raw)
To: linux-gpio, Kent Gibson, Andy Shevchenko
Hi!
One of the things I'd like to address in libgpiod v2.0 is excessive
stack usage with struct gpiod_line_bulk. This structure is pretty big
right now: it's an array of 64 line pointers plus a 4-byte size field.
That amounts to
260 bytes on 32-bit and 516 bytes on 64-bit architectures
respectively. It's also used everywhere as all functions dealing with
single lines eventually end up calling bulk counterparts.
I have some ideas for making this structure smaller and I thought I'd
run them by you.
The most obvious approach would be to make struct gpiod_line_bulk
opaque and dynamically allocated. I don't like this idea due to the
amount of error checking this would involve and also calling malloc()
on virtually every value read, event poll etc.
Another idea is to use embedded list node structs (see include/list.h
in the kernel) in struct gpiod_line and chain the lines together with
struct gpiod_line_bulk containing the list head. That would mean only
being able to store each line in a single bulk object. This is
obviously too limiting.
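For reference, a minimal sketch of what that would look like (the list
helpers are simplified stand-ins for the kernel's include/list.h, and
all names here are illustrative, not proposed API):

```c
/* Simplified stand-in for the kernel's list.h; illustrative only. */
struct list_node {
        struct list_node *next, *prev;
};

static void list_init(struct list_node *node)
{
        node->next = node->prev = node;
}

/*
 * Adding a node to a new head implicitly unlinks it from its old list:
 * with the node embedded in struct gpiod_line, a line can belong to at
 * most one bulk at any given time.
 */
static void list_move(struct list_node *head, struct list_node *node)
{
        node->next->prev = node->prev;
        node->prev->next = node->next;
        node->next = head->next;
        node->prev = head;
        head->next->prev = node;
        head->next = node;
}

struct gpiod_line_bulk {
        struct list_node head; /* list of lines in this bulk */
};

struct gpiod_line {
        /* ... line state ... */
        struct list_node node; /* the ONE node this line gets */
};
```

Because each line embeds exactly one node, moving a line into a second
bulk silently unlinks it from the first - hence the limitation.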
An idea I think is relatively straightforward without completely
changing the current interface is making struct gpiod_line_bulk look
something like this:
struct gpiod_line_bulk {
        unsigned int num_lines;
        uint64_t lines;
};
Where lines would be a bitmap with set bits corresponding to offsets
of lines that are part of this bulk. We'd then provide a function that
would allow the user to get the line without it being updated (so
there's no ioctl() call that could fail). The only limit that we'd
need to introduce here is making it impossible to store lines from
different chips in a single line bulk object. This doesn't make sense
anyway so I'm fine with this.
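As a rough illustration of how such a bitmap bulk could behave (the
helpers below are made up for this sketch and not proposed API):

```c
#include <stdbool.h>
#include <stdint.h>

struct gpiod_line_bulk {
        unsigned int num_lines;
        uint64_t lines; /* bit N set => line with offset N is in the bulk */
};

/* Hypothetical helper: returns false if the offset can't be represented. */
static bool line_bulk_add_offset(struct gpiod_line_bulk *bulk,
                                 unsigned int offset)
{
        if (offset > 63)
                return false; /* the bitmap only covers offsets 0-63 */

        if (!(bulk->lines & (UINT64_C(1) << offset))) {
                bulk->lines |= UINT64_C(1) << offset;
                bulk->num_lines++;
        }

        return true;
}

static bool line_bulk_contains(const struct gpiod_line_bulk *bulk,
                               unsigned int offset)
{
        return offset <= 63 && (bulk->lines & (UINT64_C(1) << offset));
}
```

The whole structure fits in 16 bytes no matter how many lines it
aggregates.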
What do you think? Do you have any other ideas?
Best regards,
Bartosz Golaszewski
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [libgpiod] Rethinking struct gpiod_line_bulk
2020-10-12 15:15 [libgpiod] Rethinking struct gpiod_line_bulk Bartosz Golaszewski
@ 2020-10-12 15:21 ` Andy Shevchenko
2020-10-13 0:52 ` Kent Gibson
1 sibling, 0 replies; 17+ messages in thread
From: Andy Shevchenko @ 2020-10-12 15:21 UTC (permalink / raw)
To: Bartosz Golaszewski, Geert Uytterhoeven
Cc: linux-gpio, Kent Gibson, Andy Shevchenko
+Cc: Geert as he might give some input to the last paragraphs.
To me, at a shallow glance, it looks fine.
On Mon, Oct 12, 2020 at 6:17 PM Bartosz Golaszewski
<bgolaszewski@baylibre.com> wrote:
>
> Hi!
>
> One of the things I'd like to address in libgpiod v2.0 is excessive
> stack usage with struct gpiod_line_bulk. This structure is pretty big
> right now: it's an array 64 pointers + 4 bytes size. That amounts to
> 260 bytes on 32-bit and 516 bytes on 64-bit architectures
> respectively. It's also used everywhere as all functions dealing with
> single lines eventually end up calling bulk counterparts.
>
> I have some ideas for making this structure smaller and I thought I'd
> run them by you.
>
> The most obvious approach would be to make struct gpiod_line_bulk
> opaque and dynamically allocated. I don't like this idea due to the
> amount of error checking this would involve and also calling malloc()
> on virtually every value read, event poll etc.
>
> Another idea is to use embedded list node structs (see include/list.h
> in the kernel) in struct gpiod_line and chain the lines together with
> struct gpiod_line_bulk containing the list head. That would mean only
> being able to store each line in a single bulk object. This is
> obviously too limiting.
>
> An idea I think it relatively straightforward without completely
> changing the current interface is making struct gpiod_line_bulk look
> something like this:
>
> struct gpiod_line_bulk {
> unsigned int num_lines;
> uint64_t lines;
> };
>
> Where lines would be a bitmap with set bits corresponding to offsets
> of lines that are part of this bulk. We'd then provide a function that
> would allow the user to get the line without it being updated (so
> there's no ioctl() call that could fail). The only limit that we'd
> need to introduce here is making it impossible to store lines from
> different chips in a single line bulk object. This doesn't make sense
> anyway so I'm fine with this.
>
> What do you think? Do you have any other ideas?
>
> Best regards,
> Bartosz Golaszewski
--
With Best Regards,
Andy Shevchenko
* Re: [libgpiod] Rethinking struct gpiod_line_bulk
2020-10-12 15:15 [libgpiod] Rethinking struct gpiod_line_bulk Bartosz Golaszewski
2020-10-12 15:21 ` Andy Shevchenko
@ 2020-10-13 0:52 ` Kent Gibson
2020-10-13 7:45 ` Bartosz Golaszewski
1 sibling, 1 reply; 17+ messages in thread
From: Kent Gibson @ 2020-10-13 0:52 UTC (permalink / raw)
To: Bartosz Golaszewski; +Cc: linux-gpio, Andy Shevchenko
On Mon, Oct 12, 2020 at 05:15:25PM +0200, Bartosz Golaszewski wrote:
> Hi!
>
> One of the things I'd like to address in libgpiod v2.0 is excessive
> stack usage with struct gpiod_line_bulk. This structure is pretty big
> right now: it's an array 64 pointers + 4 bytes size. That amounts to
> 260 bytes on 32-bit and 516 bytes on 64-bit architectures
> respectively. It's also used everywhere as all functions dealing with
> single lines eventually end up calling bulk counterparts.
>
> I have some ideas for making this structure smaller and I thought I'd
> run them by you.
>
> The most obvious approach would be to make struct gpiod_line_bulk
> opaque and dynamically allocated. I don't like this idea due to the
> amount of error checking this would involve and also calling malloc()
> on virtually every value read, event poll etc.
>
> Another idea is to use embedded list node structs (see include/list.h
> in the kernel) in struct gpiod_line and chain the lines together with
> struct gpiod_line_bulk containing the list head. That would mean only
> being able to store each line in a single bulk object. This is
> obviously too limiting.
>
I don't think I've ever gotten my head fully around the libgpiod API,
or all its use cases, and I'm not clear on why this is too limiting.
What is the purpose of the gpiod_line_bulk, and how does that differ from the
gpio_v2_line_request?
> An idea I think it relatively straightforward without completely
> changing the current interface is making struct gpiod_line_bulk look
> something like this:
>
> struct gpiod_line_bulk {
> unsigned int num_lines;
> uint64_t lines;
> };
>
> Where lines would be a bitmap with set bits corresponding to offsets
> of lines that are part of this bulk. We'd then provide a function that
> would allow the user to get the line without it being updated (so
> there's no ioctl() call that could fail). The only limit that we'd
> need to introduce here is making it impossible to store lines from
> different chips in a single line bulk object. This doesn't make sense
> anyway so I'm fine with this.
>
> What do you think? Do you have any other ideas?
>
Doesn't that place a strict range limit on offset values, 0-63?
The uAPI limits the number of offsets requested to 64, not their value.
Otherwise I'd've used a bitmap there as well.
Or is there some other mapping happening in the background that I'm
missing?
Cheers,
Kent.
* Re: [libgpiod] Rethinking struct gpiod_line_bulk
2020-10-13 0:52 ` Kent Gibson
@ 2020-10-13 7:45 ` Bartosz Golaszewski
2020-10-13 8:53 ` Kent Gibson
0 siblings, 1 reply; 17+ messages in thread
From: Bartosz Golaszewski @ 2020-10-13 7:45 UTC (permalink / raw)
To: Kent Gibson
Cc: Bartosz Golaszewski, linux-gpio, Andy Shevchenko,
Geert Uytterhoeven
On Tue, Oct 13, 2020 at 2:53 AM Kent Gibson <warthog618@gmail.com> wrote:
>
> On Mon, Oct 12, 2020 at 05:15:25PM +0200, Bartosz Golaszewski wrote:
> > Hi!
> >
> > One of the things I'd like to address in libgpiod v2.0 is excessive
> > stack usage with struct gpiod_line_bulk. This structure is pretty big
> > right now: it's an array 64 pointers + 4 bytes size. That amounts to
> > 260 bytes on 32-bit and 516 bytes on 64-bit architectures
> > respectively. It's also used everywhere as all functions dealing with
> > single lines eventually end up calling bulk counterparts.
> >
> > I have some ideas for making this structure smaller and I thought I'd
> > run them by you.
> >
> > The most obvious approach would be to make struct gpiod_line_bulk
> > opaque and dynamically allocated. I don't like this idea due to the
> > amount of error checking this would involve and also calling malloc()
> > on virtually every value read, event poll etc.
> >
> > Another idea is to use embedded list node structs (see include/list.h
> > in the kernel) in struct gpiod_line and chain the lines together with
> > struct gpiod_line_bulk containing the list head. That would mean only
> > being able to store each line in a single bulk object. This is
> > obviously too limiting.
> >
>
> I don't think I've ever gotten my head fully around the libgpiod API,
> or all its use cases, and I'm not clear on why this is too limiting.
>
For instance: we pass one bulk object to gpiod_line_event_wait_bulk()
containing the lines to poll and use another to store the lines for
which events were detected. Lines would need to live in two bulks.
> What is the purpose of the gpiod_line_bulk, and how does that differ from the
> gpio_v2_line_request?
>
struct gpiod_line_bulk simply aggregates lines so that we can easily
operate on multiple lines at once. Just a convenience helper
basically.
> > An idea I think it relatively straightforward without completely
> > changing the current interface is making struct gpiod_line_bulk look
> > something like this:
> >
> > struct gpiod_line_bulk {
> > unsigned int num_lines;
> > uint64_t lines;
> > };
> >
> > Where lines would be a bitmap with set bits corresponding to offsets
> > of lines that are part of this bulk. We'd then provide a function that
> > would allow the user to get the line without it being updated (so
> > there's no ioctl() call that could fail). The only limit that we'd
> > need to introduce here is making it impossible to store lines from
> > different chips in a single line bulk object. This doesn't make sense
> > anyway so I'm fine with this.
> >
> > What do you think? Do you have any other ideas?
> >
>
> Doesn't that place a strict range limit on offset values, 0-63?
> The uAPI limits the number of offsets requested to 64, not their value.
> Otherwise I'd've used a bitmap there as well.
>
> Or is there some other mapping happening in the background that I'm
> missing?
>
Nah, you're right of course. The structure should actually look more like:
struct gpiod_line_bulk {
        struct gpiod_chip *owner;
        unsigned int num_lines;
        uint64_t lines;
};
And the 'lines' bitmap should actually refer to offsets at which the
owning chip stores the line pointers in its own 'lines' array - up to
64 lines.
But we'd still have to sanitize the values when adding lines to a bulk
object and probably check the return value. I'm wondering if there's a
better way to store group references to lines on the stack but I'm out
of ideas.
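The sanitization could look roughly like this (types are reduced to
stubs and line_bulk_add_line() is a hypothetical helper, not actual
libgpiod API):

```c
#include <stdint.h>

struct gpiod_chip { int unused; }; /* reduced to a stub for this sketch */

struct gpiod_line {
        struct gpiod_chip *chip;  /* chip this line belongs to */
        unsigned int chip_index;  /* slot in the chip's own 'lines' array */
};

struct gpiod_line_bulk {
        struct gpiod_chip *owner;
        unsigned int num_lines;
        uint64_t lines; /* bit N set => owner's lines[N] is in the bulk */
};

/* Hypothetical helper: returns 0 on success, -1 if the line can't be added. */
static int line_bulk_add_line(struct gpiod_line_bulk *bulk,
                              struct gpiod_line *line)
{
        if (line->chip_index > 63)
                return -1; /* the bitmap only covers indices 0-63 */

        if (!bulk->owner)
                bulk->owner = line->chip;
        else if (bulk->owner != line->chip)
                return -1; /* no mixing lines from different chips */

        if (!(bulk->lines & (UINT64_C(1) << line->chip_index))) {
                bulk->lines |= UINT64_C(1) << line->chip_index;
                bulk->num_lines++;
        }

        return 0;
}
```

The caller still has to check the return value, which is exactly the
error handling being discussed.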
Bartosz
* Re: [libgpiod] Rethinking struct gpiod_line_bulk
2020-10-13 7:45 ` Bartosz Golaszewski
@ 2020-10-13 8:53 ` Kent Gibson
2020-10-13 12:05 ` Bartosz Golaszewski
2020-10-19 13:31 ` Bartosz Golaszewski
0 siblings, 2 replies; 17+ messages in thread
From: Kent Gibson @ 2020-10-13 8:53 UTC (permalink / raw)
To: Bartosz Golaszewski
Cc: Bartosz Golaszewski, linux-gpio, Andy Shevchenko,
Geert Uytterhoeven
On Tue, Oct 13, 2020 at 09:45:04AM +0200, Bartosz Golaszewski wrote:
> On Tue, Oct 13, 2020 at 2:53 AM Kent Gibson <warthog618@gmail.com> wrote:
> >
> > On Mon, Oct 12, 2020 at 05:15:25PM +0200, Bartosz Golaszewski wrote:
> > > Hi!
> > >
> > > One of the things I'd like to address in libgpiod v2.0 is excessive
> > > stack usage with struct gpiod_line_bulk. This structure is pretty big
> > > right now: it's an array 64 pointers + 4 bytes size. That amounts to
> > > 260 bytes on 32-bit and 516 bytes on 64-bit architectures
> > > respectively. It's also used everywhere as all functions dealing with
> > > single lines eventually end up calling bulk counterparts.
> > >
> > > I have some ideas for making this structure smaller and I thought I'd
> > > run them by you.
> > >
> > > The most obvious approach would be to make struct gpiod_line_bulk
> > > opaque and dynamically allocated. I don't like this idea due to the
> > > amount of error checking this would involve and also calling malloc()
> > > on virtually every value read, event poll etc.
> > >
> > > Another idea is to use embedded list node structs (see include/list.h
> > > in the kernel) in struct gpiod_line and chain the lines together with
> > > struct gpiod_line_bulk containing the list head. That would mean only
> > > being able to store each line in a single bulk object. This is
> > > obviously too limiting.
> > >
> >
> > I don't think I've ever gotten my head fully around the libgpiod API,
> > or all its use cases, and I'm not clear on why this is too limiting.
> >
>
> For instance: we pass one bulk object to gpiod_line_event_wait_bulk()
> containing the lines to poll and use another to store the lines for
> which events were detected. Lines would need to live in two bulks.
>
Ahh, ok. So you want to keep that? I prefer a streaming interface, but
I guess some prefer the select/poll style?
> > What is the purpose of the gpiod_line_bulk, and how does that differ from the
> > gpio_v2_line_request?
> >
>
> struct gpiod_line_bulk simply aggregates lines so that we can easily
> operate on multiple lines at once. Just a convenience helper
> basically.
>
> > > An idea I think it relatively straightforward without completely
> > > changing the current interface is making struct gpiod_line_bulk look
> > > something like this:
> > >
> > > struct gpiod_line_bulk {
> > > unsigned int num_lines;
> > > uint64_t lines;
> > > };
> > >
> > > Where lines would be a bitmap with set bits corresponding to offsets
> > > of lines that are part of this bulk. We'd then provide a function that
> > > would allow the user to get the line without it being updated (so
> > > there's no ioctl() call that could fail). The only limit that we'd
> > > need to introduce here is making it impossible to store lines from
> > > different chips in a single line bulk object. This doesn't make sense
> > > anyway so I'm fine with this.
> > >
> > > What do you think? Do you have any other ideas?
> > >
> >
> > Doesn't that place a strict range limit on offset values, 0-63?
> > The uAPI limits the number of offsets requested to 64, not their value.
> > Otherwise I'd've used a bitmap there as well.
> >
> > Or is there some other mapping happening in the background that I'm
> > missing?
> >
>
> Nah, you're right of course. The structure should actually look more like:
>
> struct gpiod_line_bulk {
> struct gpiod_chip *owner;
> unsigned int num_lines;
> uint64_t lines;
> };
>
> And the 'lines' bitmap should actually refer to offsets at which the
> owning chip stores the line pointers in its own 'lines' array - up to
> 64 lines.
>
> But we'd still have to sanitize the values when adding lines to a bulk
> object and probably check the return value. I'm wondering if there's a
> better way to store group references to lines on the stack but I'm out
> of ideas.
>
So you are proposing keeping the bulk of the bulk in the background and
passing around a flyweight in its place. Makes sense.
Cheers,
Kent.
* Re: [libgpiod] Rethinking struct gpiod_line_bulk
2020-10-13 8:53 ` Kent Gibson
@ 2020-10-13 12:05 ` Bartosz Golaszewski
2020-10-13 12:27 ` Kent Gibson
2020-10-19 13:31 ` Bartosz Golaszewski
1 sibling, 1 reply; 17+ messages in thread
From: Bartosz Golaszewski @ 2020-10-13 12:05 UTC (permalink / raw)
To: Kent Gibson
Cc: Bartosz Golaszewski, linux-gpio, Andy Shevchenko,
Geert Uytterhoeven
On Tue, Oct 13, 2020 at 10:53 AM Kent Gibson <warthog618@gmail.com> wrote:
>
> On Tue, Oct 13, 2020 at 09:45:04AM +0200, Bartosz Golaszewski wrote:
> > On Tue, Oct 13, 2020 at 2:53 AM Kent Gibson <warthog618@gmail.com> wrote:
> > >
> > > On Mon, Oct 12, 2020 at 05:15:25PM +0200, Bartosz Golaszewski wrote:
> > > > Hi!
> > > >
> > > > One of the things I'd like to address in libgpiod v2.0 is excessive
> > > > stack usage with struct gpiod_line_bulk. This structure is pretty big
> > > > right now: it's an array 64 pointers + 4 bytes size. That amounts to
> > > > 260 bytes on 32-bit and 516 bytes on 64-bit architectures
> > > > respectively. It's also used everywhere as all functions dealing with
> > > > single lines eventually end up calling bulk counterparts.
> > > >
> > > > I have some ideas for making this structure smaller and I thought I'd
> > > > run them by you.
> > > >
> > > > The most obvious approach would be to make struct gpiod_line_bulk
> > > > opaque and dynamically allocated. I don't like this idea due to the
> > > > amount of error checking this would involve and also calling malloc()
> > > > on virtually every value read, event poll etc.
> > > >
> > > > Another idea is to use embedded list node structs (see include/list.h
> > > > in the kernel) in struct gpiod_line and chain the lines together with
> > > > struct gpiod_line_bulk containing the list head. That would mean only
> > > > being able to store each line in a single bulk object. This is
> > > > obviously too limiting.
> > > >
> > >
> > > I don't think I've ever gotten my head fully around the libgpiod API,
> > > or all its use cases, and I'm not clear on why this is too limiting.
> > >
> >
> > For instance: we pass one bulk object to gpiod_line_event_wait_bulk()
> > containing the lines to poll and use another to store the lines for
> > which events were detected. Lines would need to live in two bulks.
> >
>
> Ahh, ok. So you want to keep that? I prefer a streaming interface, but
> I guess some prefer the select/poll style?
>
Yeah I wanted to keep it. Why? We allow plugging into external event
loops by providing a helper for accessing the underlying file
descriptor but I think we still should have some basic wrappers for
poll(). What exactly are you referring to as "streaming interface"?
> > > What is the purpose of the gpiod_line_bulk, and how does that differ from the
> > > gpio_v2_line_request?
> > >
> >
> > struct gpiod_line_bulk simply aggregates lines so that we can easily
> > operate on multiple lines at once. Just a convenience helper
> > basically.
> >
> > > > An idea I think it relatively straightforward without completely
> > > > changing the current interface is making struct gpiod_line_bulk look
> > > > something like this:
> > > >
> > > > struct gpiod_line_bulk {
> > > > unsigned int num_lines;
> > > > uint64_t lines;
> > > > };
> > > >
> > > > Where lines would be a bitmap with set bits corresponding to offsets
> > > > of lines that are part of this bulk. We'd then provide a function that
> > > > would allow the user to get the line without it being updated (so
> > > > there's no ioctl() call that could fail). The only limit that we'd
> > > > need to introduce here is making it impossible to store lines from
> > > > different chips in a single line bulk object. This doesn't make sense
> > > > anyway so I'm fine with this.
> > > >
> > > > What do you think? Do you have any other ideas?
> > > >
> > >
> > > Doesn't that place a strict range limit on offset values, 0-63?
> > > The uAPI limits the number of offsets requested to 64, not their value.
> > > Otherwise I'd've used a bitmap there as well.
> > >
> > > Or is there some other mapping happening in the background that I'm
> > > missing?
> > >
> >
> > Nah, you're right of course. The structure should actually look more like:
> >
> > struct gpiod_line_bulk {
> > struct gpiod_chip *owner;
> > unsigned int num_lines;
> > uint64_t lines;
> > };
> >
> > And the 'lines' bitmap should actually refer to offsets at which the
> > owning chip stores the line pointers in its own 'lines' array - up to
> > 64 lines.
> >
> > But we'd still have to sanitize the values when adding lines to a bulk
> > object and probably check the return value. I'm wondering if there's a
> > better way to store group references to lines on the stack but I'm out
> > of ideas.
> >
>
> So you are proposing keeping the bulk of the bulk in the background and
> passing around a flyweight in its place. Makes sense.
>
Precisely that.
Bartosz
* Re: [libgpiod] Rethinking struct gpiod_line_bulk
2020-10-13 12:05 ` Bartosz Golaszewski
@ 2020-10-13 12:27 ` Kent Gibson
2020-10-13 12:53 ` Bartosz Golaszewski
0 siblings, 1 reply; 17+ messages in thread
From: Kent Gibson @ 2020-10-13 12:27 UTC (permalink / raw)
To: Bartosz Golaszewski
Cc: Bartosz Golaszewski, linux-gpio, Andy Shevchenko,
Geert Uytterhoeven
On Tue, Oct 13, 2020 at 02:05:35PM +0200, Bartosz Golaszewski wrote:
> On Tue, Oct 13, 2020 at 10:53 AM Kent Gibson <warthog618@gmail.com> wrote:
> >
> > On Tue, Oct 13, 2020 at 09:45:04AM +0200, Bartosz Golaszewski wrote:
> > > On Tue, Oct 13, 2020 at 2:53 AM Kent Gibson <warthog618@gmail.com> wrote:
> > > >
> > > > On Mon, Oct 12, 2020 at 05:15:25PM +0200, Bartosz Golaszewski wrote:
> > > > > Hi!
[snip]
> > > > >
> > > > > in the kernel) in struct gpiod_line and chain the lines together with
> > > > > struct gpiod_line_bulk containing the list head. That would mean only
> > > > > being able to store each line in a single bulk object. This is
> > > > > obviously too limiting.
> > > > >
> > > >
> > > > I don't think I've ever gotten my head fully around the libgpiod API,
> > > > or all its use cases, and I'm not clear on why this is too limiting.
> > > >
> > >
> > > For instance: we pass one bulk object to gpiod_line_event_wait_bulk()
> > > containing the lines to poll and use another to store the lines for
> > > which events were detected. Lines would need to live in two bulks.
> > >
> >
> > Ahh, ok. So you want to keep that? I prefer a streaming interface, but
> > I guess some prefer the select/poll style?
> >
>
> Yeah I wanted to keep it. Why? We allow plugging into external event
> loops by providing a helper for accessing the underlying file
> descriptor but I think we still should have some basic wrappers for
> poll(). What exactly are you referring to as "streaming interface"?
>
By "streaming interface" in this context I mean reading the events from
the fd. That stream of events retains ordering where select/poll on a bulk
doesn't.
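A sketch of that streaming style, assuming the v2 uAPI event layout
(the struct below mirrors struct gpio_v2_line_event from <linux/gpio.h>
so the example is self-contained; read_one_event() is a made-up helper,
not libgpiod API):

```c
#include <stdint.h>
#include <unistd.h>

/* Mirrors struct gpio_v2_line_event from <linux/gpio.h>. */
struct line_event {
        uint64_t timestamp_ns;
        uint32_t id;       /* rising or falling edge */
        uint32_t offset;   /* which line fired - ordering is preserved */
        uint32_t seqno;
        uint32_t line_seqno;
        uint32_t padding[6];
};

/* Reads a single event from the request's file descriptor. */
static int read_one_event(int fd, struct line_event *ev)
{
        ssize_t rd = read(fd, ev, sizeof(*ev));

        return rd == (ssize_t)sizeof(*ev) ? 0 : -1;
}
```

Because every event carries the offset of the line that triggered it,
a reader consuming the fd directly never loses inter-line ordering,
unlike a poll-then-inspect pass over a bulk.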
Cheers,
Kent.
* Re: [libgpiod] Rethinking struct gpiod_line_bulk
2020-10-13 12:27 ` Kent Gibson
@ 2020-10-13 12:53 ` Bartosz Golaszewski
0 siblings, 0 replies; 17+ messages in thread
From: Bartosz Golaszewski @ 2020-10-13 12:53 UTC (permalink / raw)
To: Kent Gibson
Cc: Bartosz Golaszewski, linux-gpio, Andy Shevchenko,
Geert Uytterhoeven
On Tue, Oct 13, 2020 at 2:28 PM Kent Gibson <warthog618@gmail.com> wrote:
>
> On Tue, Oct 13, 2020 at 02:05:35PM +0200, Bartosz Golaszewski wrote:
> > On Tue, Oct 13, 2020 at 10:53 AM Kent Gibson <warthog618@gmail.com> wrote:
> > >
> > > On Tue, Oct 13, 2020 at 09:45:04AM +0200, Bartosz Golaszewski wrote:
> > > > On Tue, Oct 13, 2020 at 2:53 AM Kent Gibson <warthog618@gmail.com> wrote:
> > > > >
> > > > > On Mon, Oct 12, 2020 at 05:15:25PM +0200, Bartosz Golaszewski wrote:
> > > > > > Hi!
>
> [snip]
>
> > > > > >
> > > > > > in the kernel) in struct gpiod_line and chain the lines together with
> > > > > > struct gpiod_line_bulk containing the list head. That would mean only
> > > > > > being able to store each line in a single bulk object. This is
> > > > > > obviously too limiting.
> > > > > >
> > > > >
> > > > > I don't think I've ever gotten my head fully around the libgpiod API,
> > > > > or all its use cases, and I'm not clear on why this is too limiting.
> > > > >
> > > >
> > > > For instance: we pass one bulk object to gpiod_line_event_wait_bulk()
> > > > containing the lines to poll and use another to store the lines for
> > > > which events were detected. Lines would need to live in two bulks.
> > > >
> > >
> > > Ahh, ok. So you want to keep that? I prefer a streaming interface, but
> > > I guess some prefer the select/poll style?
> > >
> >
> > Yeah I wanted to keep it. Why? We allow plugging into external event
> > loops by providing a helper for accessing the underlying file
> > descriptor but I think we still should have some basic wrappers for
> > poll(). What exactly are you referring to as "streaming interface"?
> >
>
> By "streaming interface" in this context I mean reading the events from
> the fd. That stream of events retains ordering where select/poll on a bulk
> doesn't.
>
> Cheers,
> Kent.
>
Ah right! Yes, you're right. Let's ditch this then and return events
referencing lines on which they occurred.
Bartosz
* Re: [libgpiod] Rethinking struct gpiod_line_bulk
2020-10-13 8:53 ` Kent Gibson
2020-10-13 12:05 ` Bartosz Golaszewski
@ 2020-10-19 13:31 ` Bartosz Golaszewski
2020-10-19 16:21 ` Kent Gibson
1 sibling, 1 reply; 17+ messages in thread
From: Bartosz Golaszewski @ 2020-10-19 13:31 UTC (permalink / raw)
To: Kent Gibson
Cc: Bartosz Golaszewski, linux-gpio, Andy Shevchenko,
Geert Uytterhoeven
On Tue, Oct 13, 2020 at 10:53 AM Kent Gibson <warthog618@gmail.com> wrote:
>
[snip]
> >
> > struct gpiod_line_bulk {
> > struct gpiod_chip *owner;
> > unsigned int num_lines;
> > uint64_t lines;
> > };
> >
> > And the 'lines' bitmap should actually refer to offsets at which the
> > owning chip stores the line pointers in its own 'lines' array - up to
> > 64 lines.
> >
> > But we'd still have to sanitize the values when adding lines to a bulk
> > object and probably check the return value. I'm wondering if there's a
> > better way to store group references to lines on the stack but I'm out
> > of ideas.
> >
>
> So you are proposing keeping the bulk of the bulk in the background and
> passing around a flyweight in its place. Makes sense.
>
So this won't fly of course because a bitmap doesn't hold enough
information to reference an arbitrary number of lines in the chip in
any meaningful way.
I have another idea. I can live with struct gpiod_line_bulk being allocated
dynamically whenever users of the library use it - because it's quite
unlikely they'd do it all that often. What I'd like to avoid is
allocating a new bulk object whenever we want to package a single line
passed to e.g. gpiod_line_request() before we propagate it to
gpiod_line_request_bulk().
How about we define struct gpiod_line_bulk as:
struct gpiod_line_bulk {
        unsigned int num_lines;
        unsigned int max_num_lines;
        struct gpiod_line *lines[1];
};
And expose it in gpiod.h header?
That way we can still allocate it on the stack while using very little
memory - whenever packaging a single line - or we can allocate it
dynamically with the following interface:
struct gpiod_line_bulk *gpiod_line_bulk_new(unsigned int max_lines)
{
        struct gpiod_line_bulk *ret;

        if (max_lines < 1) {
                errno = EINVAL;
                return NULL;
        }

        ret = malloc(sizeof(struct gpiod_line_bulk) +
                     (max_lines - 1) * sizeof(struct gpiod_line *));
        if (!ret)
                return NULL;

        gpiod_line_bulk_init(ret, max_lines);

        return ret;
}
Or we could not expose it to users at all: make it completely opaque,
provide the needed accessors, and only allow internal users inside the
library to use the stack for single-line bulks.
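For illustration, the single-line-on-the-stack trick could look like
this (gpiod_line is stubbed out and line_bulk_append() is a
hypothetical helper):

```c
struct gpiod_line { int stub; }; /* stand-in for the real line object */

struct gpiod_line_bulk {
        unsigned int num_lines;
        unsigned int max_num_lines;
        struct gpiod_line *lines[1]; /* grows past 1 when heap-allocated */
};

/* Returns 0 on success, -1 when the bulk is already full. */
static int line_bulk_append(struct gpiod_line_bulk *bulk,
                            struct gpiod_line *line)
{
        if (bulk->num_lines >= bulk->max_num_lines)
                return -1;

        bulk->lines[bulk->num_lines++] = line;
        return 0;
}
```

A single-line wrapper can then declare
`struct gpiod_line_bulk bulk = { 0, 1, { NULL } };` on the stack,
with no allocation at all.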
Bartosz
* Re: [libgpiod] Rethinking struct gpiod_line_bulk
2020-10-19 13:31 ` Bartosz Golaszewski
@ 2020-10-19 16:21 ` Kent Gibson
2020-10-20 10:47 ` Bartosz Golaszewski
0 siblings, 1 reply; 17+ messages in thread
From: Kent Gibson @ 2020-10-19 16:21 UTC (permalink / raw)
To: Bartosz Golaszewski
Cc: Bartosz Golaszewski, linux-gpio, Andy Shevchenko,
Geert Uytterhoeven
On Mon, Oct 19, 2020 at 03:31:07PM +0200, Bartosz Golaszewski wrote:
> On Tue, Oct 13, 2020 at 10:53 AM Kent Gibson <warthog618@gmail.com> wrote:
> >
>
> [snip]
>
> > >
> > > struct gpiod_line_bulk {
> > > struct gpiod_chip *owner;
> > > unsigned int num_lines;
> > > uint64_t lines;
> > > };
> > >
> > > And the 'lines' bitmap should actually refer to offsets at which the
> > > owning chip stores the line pointers in its own 'lines' array - up to
> > > 64 lines.
> > >
> > > But we'd still have to sanitize the values when adding lines to a bulk
> > > object and probably check the return value. I'm wondering if there's a
> > > better way to store group references to lines on the stack but I'm out
> > > of ideas.
> > >
> >
> > So you are proposing keeping the bulk of the bulk in the background and
> > passing around a flyweight in its place. Makes sense.
> >
>
> So this won't fly of course because a bitmap doesn't hold enough
> information to reference an arbitrary number of lines in the chip in
> any meaningful way.
>
Yeah, that is what I was trying to get at earlier, but from your replies
I assumed you would have an array mapping from bitmap index to line
hidden somewhere behind the scenes on a per-bulk basis...
> I have another idea. I can live with struct gpiod_bulk being allocated
> dynamically whenever users of the library use it - because it's quite
> unlikely they'd do it all that often. What I'd like to avoid is
> allocating a new bulk object whenever we want to package a single line
> passed to e.g. gpiod_line_request() before we propagate it to
> gpiod_line_request_bulk().
>
> How about we define struct gpiod_line_bulk as:
>
> struct gpiod_line_bulk {
> unsigned int num_lines;
> unsigned int max_num_lines;
> struct gpiod_line *lines[1];
> };
>
> And expose it in gpiod.h header?
>
I don't have a problem with that definition, appropriately documented,
but I wouldn't want to expose that to the user, so my preference would
be to make it opaque, but....
... that will make the API more awkward to use, as the methods that
populate a bulk will now have to have that bulk constructed beforehand.
Or you change them to accept a struct gpiod_line_bulk ** to return the
bulk constructed by libgpiod. Either way the user will have to free the
bulk afterwards. But there are only a few of those methods.
And all the bulk accessor methods are currently inline - that would also
have to change if you go opaque, so you will get a code size and
performance hit as well.
Maybe keeping it exposed is the best - just heavily documented?
At least start out that way, since it is closest to what is already there.
Cheers,
Kent.
> That way we can still allocate it on the stack while using very little
> memory - whenever packaging a single line - or we can allocate it
> dynamically with the following interface:
>
> struct gpiod_line_bulk *gpiod_line_bulk_new(unsigned int max_lines)
> {
> struct gpiod_line_bulk *ret;
>
> if (max_lines < 1) {
> errno = EINVAL;
> return NULL;
> }
>
> ret = malloc(sizeof(struct gpiod_line_bulk) + (max_lines - 1) *
> sizeof(struct gpiod_line *));
> if (!ret)
> return NULL;
>
> gpiod_line_bulk_init(max_lines);
>
> return ret;
> }
>
> Or we can even not expose it to users, make it completely opaque,
> provide needed accessors and only allow internal users inside the
> library to use the stack for single line bulks.
>
> Bartosz
* Re: [libgpiod] Rethinking struct gpiod_line_bulk
2020-10-19 16:21 ` Kent Gibson
@ 2020-10-20 10:47 ` Bartosz Golaszewski
2020-10-20 11:05 ` Andy Shevchenko
2020-10-20 15:05 ` Kent Gibson
0 siblings, 2 replies; 17+ messages in thread
From: Bartosz Golaszewski @ 2020-10-20 10:47 UTC (permalink / raw)
To: Kent Gibson
Cc: Bartosz Golaszewski, linux-gpio, Andy Shevchenko,
Geert Uytterhoeven
On Mon, Oct 19, 2020 at 6:21 PM Kent Gibson <warthog618@gmail.com> wrote:
>
> On Mon, Oct 19, 2020 at 03:31:07PM +0200, Bartosz Golaszewski wrote:
> > On Tue, Oct 13, 2020 at 10:53 AM Kent Gibson <warthog618@gmail.com> wrote:
> > >
> >
> > [snip]
> >
> > > >
> > > > struct gpiod_line_bulk {
> > > > struct gpiod_chip *owner;
> > > > unsigned int num_lines;
> > > > uint64_t lines;
> > > > };
> > > >
> > > > And the 'lines' bitmap should actually refer to offsets at which the
> > > > owning chip stores the line pointers in its own 'lines' array - up to
> > > > 64 lines.
> > > >
> > > > But we'd still have to sanitize the values when adding lines to a bulk
> > > > object and probably check the return value. I'm wondering if there's a
> > > > better way to store group references to lines on the stack but I'm out
> > > > of ideas.
> > > >
> > >
> > > So you are proposing keeping the bulk of the bulk in the background and
> > > passing around a flyweight in its place. Makes sense.
> > >
> >
> > So this won't fly of course because a bitmap doesn't hold enough
> > information to reference an arbitrary number of lines in the chip in
> > any meaningful way.
> >
>
> Yeah, that is what I was trying to get at earlier, but from your replies
> I assumed you would have an array mapping from bitmap index to line
> hidden somewhere behind the scenes on a per-bulk basis...
>
> > I have another idea. I can live with struct gpiod_bulk being allocated
> > dynamically whenever users of the library use it - because it's quite
> > unlikely they'd do it all that often. What I'd like to avoid is
> > allocating a new bulk object whenever we want to package a single line
> > passed to e.g. gpiod_line_request() before we propagate it to
> > gpiod_line_request_bulk().
> >
> > How about we define struct gpiod_line_bulk as:
> >
> > struct gpiod_line_bulk {
> > unsigned int num_lines;
> > unsigned int max_num_lines;
> > struct gpiod_line *lines[1];
> > };
> >
> > And expose it in gpiod.h header?
> >
>
> I don't have a problem with that definition, appropriately documented,
> but I wouldn't want to expose that to the user, so my preference would
> be to make it opaque, but....
>
> ... that will make the API more awkward to use, as the methods that
> populate a bulk will now have to have that bulk constructed beforehand.
> Or you change them to accept a struct gpiod_line_bulk ** to return the
> bulk constructed by libgpiod. Either way the user will have to free the
> bulk afterwards. But there are only a few of those methods.
>
Or we can even make these methods return struct gpiod_line_bulk *
which is cleaner than passing struct gpiod_line_bulk ** as argument
IMO. I'm fine with that.
> And all the bulk accessor methods are currently inline - that would also
> have to change if you go opaque, so you will get a code size and
> performance hit as well.
This may indeed slightly increase the size of the library, but it will
decrease the size of user programs. Anyway: we're talking about
minuscule changes. The performance hit isn't much of a concern either -
function calls are fast and we're not adding new system calls that
could lead to more context switches.
>
> Maybe keeping it exposed is the best - just heavily documented?
> At least start out that way, since it is closest to what is already there.
>
I'm now actually leaning more towards making it opaque but I need to
find a way to make gpiod_line_bulk_foreach_line work with hidden bulk
struct.
Bartosz
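[Editorial note: one way to keep gpiod_line_bulk_foreach_line() working against a hidden struct is an index-based macro built purely on accessor functions. The toy types and accessor names below are illustrative assumptions, not the settled API - in the real opaque design users would only see the function declarations.]

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-ins; an opaque design hides these definitions entirely. */
struct gpiod_line { unsigned int offset; };

struct gpiod_line_bulk {
	unsigned int num_lines;
	struct gpiod_line *lines[4];
};

static unsigned int gpiod_line_bulk_num_lines(struct gpiod_line_bulk *bulk)
{
	return bulk->num_lines;
}

static struct gpiod_line *
gpiod_line_bulk_get_line(struct gpiod_line_bulk *bulk, unsigned int index)
{
	return index < bulk->num_lines ? bulk->lines[index] : NULL;
}

/* The foreach never touches the struct layout directly - it only uses
 * the accessors, so it keeps working when the struct goes opaque and
 * the accessors become exported (non-inline) functions. */
#define gpiod_line_bulk_foreach_line(bulk, line, idx)			\
	for ((idx) = 0;							\
	     (idx) < gpiod_line_bulk_num_lines(bulk) &&			\
	     ((line) = gpiod_line_bulk_get_line((bulk), (idx)));	\
	     (idx)++)
```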
* Re: [libgpiod] Rethinking struct gpiod_line_bulk
2020-10-20 10:47 ` Bartosz Golaszewski
@ 2020-10-20 11:05 ` Andy Shevchenko
2020-10-20 15:05 ` Kent Gibson
1 sibling, 0 replies; 17+ messages in thread
From: Andy Shevchenko @ 2020-10-20 11:05 UTC (permalink / raw)
To: Bartosz Golaszewski
Cc: Kent Gibson, Bartosz Golaszewski, linux-gpio, Geert Uytterhoeven
On Tue, Oct 20, 2020 at 12:47:02PM +0200, Bartosz Golaszewski wrote:
> On Mon, Oct 19, 2020 at 6:21 PM Kent Gibson <warthog618@gmail.com> wrote:
...
> I'm now actually leaning more towards making it opaque but I need to
> find a way to make gpiod_line_bulk_foreach_line work with hidden bulk
> struct.
+1 for opaque if technically possible.
For the issue you describe, perhaps callbacks somewhere? (Although it
might not look so nice.)
--
With Best Regards,
Andy Shevchenko
* Re: [libgpiod] Rethinking struct gpiod_line_bulk
2020-10-20 10:47 ` Bartosz Golaszewski
2020-10-20 11:05 ` Andy Shevchenko
@ 2020-10-20 15:05 ` Kent Gibson
2020-10-20 15:53 ` Bartosz Golaszewski
1 sibling, 1 reply; 17+ messages in thread
From: Kent Gibson @ 2020-10-20 15:05 UTC (permalink / raw)
To: Bartosz Golaszewski
Cc: Bartosz Golaszewski, linux-gpio, Andy Shevchenko,
Geert Uytterhoeven
On Tue, Oct 20, 2020 at 12:47:02PM +0200, Bartosz Golaszewski wrote:
> On Mon, Oct 19, 2020 at 6:21 PM Kent Gibson <warthog618@gmail.com> wrote:
> >
> > On Mon, Oct 19, 2020 at 03:31:07PM +0200, Bartosz Golaszewski wrote:
> > > On Tue, Oct 13, 2020 at 10:53 AM Kent Gibson <warthog618@gmail.com> wrote:
> > > >
> > >
> > > [snip]
> > >
> > > > >
> > > > > struct gpiod_line_bulk {
> > > > > struct gpiod_chip *owner;
> > > > > unsigned int num_lines;
> > > > > uint64_t lines;
> > > > > };
> > > > >
> > > > > And the 'lines' bitmap should actually refer to offsets at which the
> > > > > owning chip stores the line pointers in its own 'lines' array - up to
> > > > > 64 lines.
> > > > >
> > > > > But we'd still have to sanitize the values when adding lines to a bulk
> > > > > object and probably check the return value. I'm wondering if there's a
> > > > > better way to store group references to lines on the stack but I'm out
> > > > > of ideas.
> > > > >
> > > >
> > > > So you are proposing keeping the bulk of the bulk in the background and
> > > > passing around a flyweight in its place. Makes sense.
> > > >
> > >
> > > So this won't fly of course because a bitmap doesn't hold enough
> > > information to reference an arbitrary number of lines in the chip in
> > > any meaningful way.
> > >
> >
> > Yeah, that is what I was trying to get at earlier, but from your replies
> > I assumed you would have an array mapping from bitmap index to line
> > hidden somewhere behind the scenes on a per-bulk basis...
> >
> > > I have another idea. I can live with struct gpiod_bulk being allocated
> > > dynamically whenever users of the library use it - because it's quite
> > > unlikely they'd do it all that often. What I'd like to avoid is
> > > allocating a new bulk object whenever we want to package a single line
> > > passed to e.g. gpiod_line_request() before we propagate it to
> > > gpiod_line_request_bulk().
> > >
> > > How about we define struct gpiod_line_bulk as:
> > >
> > > struct gpiod_line_bulk {
> > > unsigned int num_lines;
> > > unsigned int max_num_lines;
> > > struct gpiod_line *lines[1];
> > > };
> > >
> > > And expose it in gpiod.h header?
> > >
> >
> > I don't have a problem with that definition, appropriately documented,
> > but I wouldn't want to expose that to the user, so my preference would
> > be to make it opaque, but....
> >
> > ... that will make the API more awkward to use, as the methods that
> > populate a bulk will now have to have that bulk constructed beforehand.
> > Or you change them to accept a struct gpiod_line_bulk ** to return the
> > bulk constructed by libgpiod. Either way the user will have to free the
> > bulk afterwards. But there are only a few of those methods.
> >
>
> Or we can even make these methods return struct gpiod_line_bulk *
> which is cleaner than passing struct gpiod_line_bulk ** as argument
> IMO. I'm fine with that.
>
Returning NULL for the error cases, as per gpiod_chip_find_line()?
> > And all the bulk accessor methods are currently inline - that would also
> > have to change if you go opaque, so you will get a code size and
> > performance hit as well.
>
> This may indeed slightly increase the size of the library, but it will
> decrease the size of user programs. Anyway: we're talking about
> minuscule changes. The performance hit isn't much of a concern either -
> function calls are fast and we're not adding new system calls that
> could lead to more context switches.
>
Those inlines are just simple field accessors and mutators, and you'd hope
that has less overhead than a function call.
Opaque is certainly preferable - it reduces the API surface and you are
also free to change the implementation later.
Can we also add in range checking to functions that accept offsets while
you are at it? e.g. gpiod_line_bulk_get_line() should return NULL if the
offset is out of range. Or are we assuming that the user will always
ensure offset < num_lines first? If the latter can we at least document
the bad behaviour if offset >= num_lines?
> >
> > Maybe keeping it exposed is the best - just heavily documented?
> > At least start out that way, since it is closest to what is already there.
> >
>
> I'm now actually leaning more towards making it opaque but I need to
> find a way to make gpiod_line_bulk_foreach_line work with hidden bulk
> struct.
>
Why not just drop it in favour of gpiod_line_bulk_foreach_line_off()?
Cheers,
Kent.
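[Editorial note: the range checking Kent asks for could look like this sketch - out-of-range indices return NULL and set errno, in line with other libgpiod error paths. The simplified types are stand-ins, not the real internals.]

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

struct gpiod_line { unsigned int offset; };

/* Simplified stand-in for the (eventually opaque) bulk. */
struct gpiod_line_bulk {
	unsigned int num_lines;
	struct gpiod_line *lines[8];
};

/* Out-of-range indices return NULL and set errno. Callers who already
 * know index < num_lines can ignore the error path, which is why such
 * a function would not carry the warn_unused_result attribute. */
static struct gpiod_line *
gpiod_line_bulk_get_line(struct gpiod_line_bulk *bulk, unsigned int index)
{
	if (index >= bulk->num_lines) {
		errno = EINVAL;
		return NULL;
	}

	return bulk->lines[index];
}
```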
* Re: [libgpiod] Rethinking struct gpiod_line_bulk
2020-10-20 15:05 ` Kent Gibson
@ 2020-10-20 15:53 ` Bartosz Golaszewski
2020-10-20 22:24 ` Kent Gibson
0 siblings, 1 reply; 17+ messages in thread
From: Bartosz Golaszewski @ 2020-10-20 15:53 UTC (permalink / raw)
To: Kent Gibson
Cc: Bartosz Golaszewski, linux-gpio, Andy Shevchenko,
Geert Uytterhoeven
On Tue, Oct 20, 2020 at 5:06 PM Kent Gibson <warthog618@gmail.com> wrote:
>
[snip]
> > >
> > > I don't have a problem with that definition, appropriately documented,
> > > but I wouldn't want to expose that to the user, so my preference would
> > > be to make it opaque, but....
> > >
> > > ... that will make the API more awkward to use, as the methods that
> > > populate a bulk will now have to have that bulk constructed beforehand.
> > > Or you change them to accept a struct gpiod_line_bulk ** to return the
> > > bulk constructed by libgpiod. Either way the user will have to free the
> > > bulk afterwards. But there are only a few of those methods.
> > >
> >
> > Or we can even make these methods return struct gpiod_line_bulk *
> > which is cleaner than passing struct gpiod_line_bulk ** as argument
> > IMO. I'm fine with that.
> >
>
> Returning NULL for the error cases, as per gpiod_chip_find_line()?
>
Yes, definitely.
> > > And all the bulk accessor methods are currently inline - that would also
> > > have to change if you go opaque, so you will get a code size and
> > > performance hit as well.
> >
> > This may indeed slightly increase the size of the library, but it will
> > decrease the size of user programs. Anyway: we're talking about
> > minuscule changes. The performance hit isn't much of a concern either -
> > function calls are fast and we're not adding new system calls that
> > could lead to more context switches.
> >
>
> Those inlines are just simple field accessors and mutators, and you'd hope
> that has less overhead than a function call.
>
> Opaque is certainly preferable - it reduces the API surface and you are
> also free to change the implementation later.
>
> Can we also add in range checking to functions that accept offsets while
> you are at it? e.g. gpiod_line_bulk_get_line() should return NULL if the
> offset is out of range. Or are we assuming that the user will always
> ensure offset < num_lines first? If the latter can we at least document
> the bad behaviour if offset >= num_lines?
>
Yes, that's what I already have in my dev branch - functions that
return errors. But, I imagine, users wouldn't be forced to check these
return values if they are sure the supplied arguments are correct. If
we ever use the warn_unused_result attribute in the library, the bulk
functions would not be annotated with it.
> > >
> > > Maybe keeping it exposed is the best - just heavily documented?
> > > At least start out that way, since it is closest to what is already there.
> > >
> >
> > I'm now actually leaning more towards making it opaque but I need to
> > find a way to make gpiod_line_bulk_foreach_line work with hidden bulk
> > struct.
> >
>
> Why not just drop it in favour of gpiod_line_bulk_foreach_line_off()?
>
The one with the line being supplied to the user automatically is more
elegant. If anything, I'd prefer to drop
gpiod_line_bulk_foreach_line_off(). Callbacks as suggested by Andy are
a good idea - something like what GLib does in a lot of helpers for
lists etc.
Bartosz
* Re: [libgpiod] Rethinking struct gpiod_line_bulk
2020-10-20 15:53 ` Bartosz Golaszewski
@ 2020-10-20 22:24 ` Kent Gibson
2020-10-21 7:33 ` Bartosz Golaszewski
0 siblings, 1 reply; 17+ messages in thread
From: Kent Gibson @ 2020-10-20 22:24 UTC (permalink / raw)
To: Bartosz Golaszewski
Cc: Bartosz Golaszewski, linux-gpio, Andy Shevchenko,
Geert Uytterhoeven
On Tue, Oct 20, 2020 at 05:53:31PM +0200, Bartosz Golaszewski wrote:
> On Tue, Oct 20, 2020 at 5:06 PM Kent Gibson <warthog618@gmail.com> wrote:
> >
>
[snip]
> > >
> > > I'm now actually leaning more towards making it opaque but I need to
> > > find a way to make gpiod_line_bulk_foreach_line work with hidden bulk
> > > struct.
> > >
> >
> > Why not just drop it in favour of gpiod_line_bulk_foreach_line_off()?
> >
>
> The one with the line being supplied to the user automatically is more
> elegant. If anything, I'd prefer to drop
> gpiod_line_bulk_foreach_line_off(). Callbacks as suggested by Andy are
> a good idea - something like what GLib does in a lot of helpers for
> lists etc.
>
Not sure what you mean here - they both return the line, the difference
is how they store the loop state, with gpiod_line_bulk_foreach_line()
exposing the bulk->lines array via the lineptr. That is the source of
your problem if you go opaque - that array becomes hidden, as it
probably should be.
Cheers,
Kent.
* Re: [libgpiod] Rethinking struct gpiod_line_bulk
2020-10-20 22:24 ` Kent Gibson
@ 2020-10-21 7:33 ` Bartosz Golaszewski
2020-10-21 8:29 ` Kent Gibson
0 siblings, 1 reply; 17+ messages in thread
From: Bartosz Golaszewski @ 2020-10-21 7:33 UTC (permalink / raw)
To: Kent Gibson
Cc: Bartosz Golaszewski, linux-gpio, Andy Shevchenko,
Geert Uytterhoeven
On Wed, Oct 21, 2020 at 12:24 AM Kent Gibson <warthog618@gmail.com> wrote:
>
> On Tue, Oct 20, 2020 at 05:53:31PM +0200, Bartosz Golaszewski wrote:
> > On Tue, Oct 20, 2020 at 5:06 PM Kent Gibson <warthog618@gmail.com> wrote:
> > >
> >
> [snip]
> > > >
> > > > I'm now actually leaning more towards making it opaque but I need to
> > > > find a way to make gpiod_line_bulk_foreach_line work with hidden bulk
> > > > struct.
> > > >
> > >
> > > Why not just drop it in favour of gpiod_line_bulk_foreach_line_off()?
> > >
> >
> > The one with the line being supplied to the user automatically is more
> > elegant. If anything, I'd prefer to drop
> > gpiod_line_bulk_foreach_line_off(). Callbacks as suggested by Andy are
> > a good idea - something like what GLib does in a lot of helpers for
> > lists etc.
> >
>
> Not sure what you mean here - they both return the line, the difference
> is how they store the loop state, with gpiod_line_bulk_foreach_line()
> exposing the bulk->lines array via the lineptr. That is the source of
> your problem if you go opaque - that array becomes hidden, as it
> probably should be.
>
No idea what I meant either. :)
When using a function with a callback we no longer need the user to
supply the memory for storing the loop state - it can be stored in the
stack frame of said function - so the callback should only take the
line as argument (+ void * user data) and the user can store the loop
state however they like. This is how I see it.
Bartosz
* Re: [libgpiod] Rethinking struct gpiod_line_bulk
2020-10-21 7:33 ` Bartosz Golaszewski
@ 2020-10-21 8:29 ` Kent Gibson
0 siblings, 0 replies; 17+ messages in thread
From: Kent Gibson @ 2020-10-21 8:29 UTC (permalink / raw)
To: Bartosz Golaszewski
Cc: Bartosz Golaszewski, linux-gpio, Andy Shevchenko,
Geert Uytterhoeven
On Wed, Oct 21, 2020 at 09:33:03AM +0200, Bartosz Golaszewski wrote:
> On Wed, Oct 21, 2020 at 12:24 AM Kent Gibson <warthog618@gmail.com> wrote:
> >
> > On Tue, Oct 20, 2020 at 05:53:31PM +0200, Bartosz Golaszewski wrote:
> > > On Tue, Oct 20, 2020 at 5:06 PM Kent Gibson <warthog618@gmail.com> wrote:
> > > >
> > >
> > [snip]
> > > > >
> > > > > I'm now actually leaning more towards making it opaque but I need to
> > > > > find a way to make gpiod_line_bulk_foreach_line work with hidden bulk
> > > > > struct.
> > > > >
> > > >
> > > > Why not just drop it in favour of gpiod_line_bulk_foreach_line_off()?
> > > >
> > >
> > > The one with the line being supplied to the user automatically is more
> > > elegant. If anything, I'd prefer to drop
> > > gpiod_line_bulk_foreach_line_off(). Callbacks as suggested by Andy are
> > > a good idea - something like what GLib does in a lot of helpers for
> > > lists etc.
> > >
> >
> > Not sure what you mean here - they both return the line, the difference
> > is how they store the loop state, with gpiod_line_bulk_foreach_line()
> > exposing the bulk->lines array via the lineptr. That is the source of
> > your problem if you go opaque - that array becomes hidden, as it
> > probably should be.
> >
>
> No idea what I meant either. :)
>
> When using a function with a callback we no longer need the user to
> supply the memory for storing the loop state - it can be stored in the
> stack frame of said function - so the callback should only take the
> line as argument (+ void * user data) and the user can store the loop
> state however they like. This is how I see it.
>
That makes sense - if you are going opaque then the callback is cleaner.
Will the callback have any return code, say to trigger a break from the
loop?
Cheers,
Kent.
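[Editorial note: a callback-based iterator along the lines discussed - with a non-zero callback return breaking the loop, one possible answer to Kent's closing question - might be sketched like this. The names and break convention are assumptions modeled on GLib-style helpers, not the settled API.]

```c
#include <assert.h>
#include <stddef.h>

struct gpiod_line { unsigned int offset; };

/* Simplified stand-in for the (opaque) bulk. */
struct gpiod_line_bulk {
	unsigned int num_lines;
	struct gpiod_line *lines[8];
};

/* Loop state lives in this function's stack frame; the callback only
 * sees the line and a user pointer. A non-zero return stops the loop. */
typedef int (*gpiod_line_bulk_cb)(struct gpiod_line *line, void *data);

static void gpiod_line_bulk_foreach_line(struct gpiod_line_bulk *bulk,
					 gpiod_line_bulk_cb func, void *data)
{
	unsigned int i;

	for (i = 0; i < bulk->num_lines; i++) {
		if (func(bulk->lines[i], data))
			break;
	}
}

/* Example callback: count lines until one with offset >= limit appears. */
struct count_ctx { unsigned int limit; unsigned int count; };

static int count_until(struct gpiod_line *line, void *data)
{
	struct count_ctx *ctx = data;

	if (line->offset >= ctx->limit)
		return 1; /* stop iterating */
	ctx->count++;
	return 0;
}
```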
end of thread, other threads:[~2020-10-21 8:29 UTC | newest]
Thread overview: 17+ messages
2020-10-12 15:15 [libgpiod] Rethinking struct gpiod_line_bulk Bartosz Golaszewski
2020-10-12 15:21 ` Andy Shevchenko
2020-10-13 0:52 ` Kent Gibson
2020-10-13 7:45 ` Bartosz Golaszewski
2020-10-13 8:53 ` Kent Gibson
2020-10-13 12:05 ` Bartosz Golaszewski
2020-10-13 12:27 ` Kent Gibson
2020-10-13 12:53 ` Bartosz Golaszewski
2020-10-19 13:31 ` Bartosz Golaszewski
2020-10-19 16:21 ` Kent Gibson
2020-10-20 10:47 ` Bartosz Golaszewski
2020-10-20 11:05 ` Andy Shevchenko
2020-10-20 15:05 ` Kent Gibson
2020-10-20 15:53 ` Bartosz Golaszewski
2020-10-20 22:24 ` Kent Gibson
2020-10-21 7:33 ` Bartosz Golaszewski
2020-10-21 8:29 ` Kent Gibson