From: Bartosz Golaszewski <brgl@bgdev.pl>
To: Kent Gibson <warthog618@gmail.com>
Cc: Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-gpio <linux-gpio@vger.kernel.org>,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Geert Uytterhoeven <geert@linux-m68k.org>
Subject: Re: [libgpiod] Rethinking struct gpiod_line_bulk
Date: Tue, 13 Oct 2020 09:45:04 +0200	[thread overview]
Message-ID: <CAMRc=Mf_ZG5FqEAd0CSCqx_GeEG_4ghEXf8S3Sdws4+XOFV2Ag@mail.gmail.com> (raw)
In-Reply-To: <20201013005252.GA9387@sol>

On Tue, Oct 13, 2020 at 2:53 AM Kent Gibson <warthog618@gmail.com> wrote:
>
> On Mon, Oct 12, 2020 at 05:15:25PM +0200, Bartosz Golaszewski wrote:
> > Hi!
> >
> > One of the things I'd like to address in libgpiod v2.0 is excessive
> > stack usage with struct gpiod_line_bulk. This structure is pretty big
> > right now: it's an array of 64 pointers plus a 4-byte size field. That amounts to
> > 260 bytes on 32-bit and 516 bytes on 64-bit architectures
> > respectively. It's also used everywhere as all functions dealing with
> > single lines eventually end up calling bulk counterparts.
> >
> > I have some ideas for making this structure smaller and I thought I'd
> > run them by you.
> >
> > The most obvious approach would be to make struct gpiod_line_bulk
> > opaque and dynamically allocated. I don't like this idea due to the
> > amount of error checking this would involve and also calling malloc()
> > on virtually every value read, event poll etc.
> >
> > Another idea is to use embedded list node structs (see include/list.h
> > in the kernel) in struct gpiod_line and chain the lines together with
> > struct gpiod_line_bulk containing the list head. That would mean only
> > being able to store each line in a single bulk object. This is
> > obviously too limiting.
> >
>
> I don't think I've ever gotten my head fully around the libgpiod API,
> or all its use cases, and I'm not clear on why this is too limiting.
>

For instance: we pass one bulk object to gpiod_line_event_wait_bulk()
containing the lines to poll and use another to collect the lines on
which events were detected, so the same line needs to live in two bulk
objects at the same time.
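
To make that concrete, here's a rough sketch of the call I have in mind
(v1-style names written from memory, so treat the details as illustrative
rather than a definitive implementation):

#include <gpiod.h>
#include <time.h>

/* Sketch only - v1-style event wait, details may differ from the headers. */
int wait_for_edges(struct gpiod_line_bulk *watched)
{
	struct gpiod_line_bulk fired;
	struct timespec timeout = { .tv_sec = 1, .tv_nsec = 0 };
	int ret;

	gpiod_line_bulk_init(&fired);

	/*
	 * The same line ends up in both 'watched' and 'fired' - something
	 * a single embedded list node per line could not represent.
	 */
	ret = gpiod_line_event_wait_bulk(watched, &timeout, &fired);
	if (ret > 0) {
		/* handle events on the lines collected in 'fired' */
	}

	return ret;
}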

> What is the purpose of the gpiod_line_bulk, and how does that differ from the
> gpio_v2_line_request?
>

struct gpiod_line_bulk simply aggregates lines so that we can easily
operate on multiple lines at once. Just a convenience helper
basically.
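
For reference, in v1.x a bulk is just a fixed-size on-stack aggregate that
the request/read calls consume. Roughly (error handling trimmed for
brevity, written from memory):

#include <gpiod.h>

/* Rough v1.x-style usage of the bulk object. */
void read_two_lines(struct gpiod_chip *chip)
{
	struct gpiod_line_bulk bulk;	/* the big on-stack object in question */
	int values[2];

	gpiod_line_bulk_init(&bulk);
	gpiod_line_bulk_add(&bulk, gpiod_chip_get_line(chip, 3));
	gpiod_line_bulk_add(&bulk, gpiod_chip_get_line(chip, 5));

	if (gpiod_line_request_bulk_input(&bulk, "example") == 0)
		gpiod_line_get_value_bulk(&bulk, values);
}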

> > An idea I think is relatively straightforward without completely
> > changing the current interface is making struct gpiod_line_bulk look
> > something like this:
> >
> > struct gpiod_line_bulk {
> >     unsigned int num_lines;
> >     uint64_t lines;
> > };
> >
> > Where lines would be a bitmap with set bits corresponding to offsets
> > of lines that are part of this bulk. We'd then provide a function that
> > would allow the user to get the line without it being updated (so
> > there's no ioctl() call that could fail). The only limit that we'd
> > need to introduce here is making it impossible to store lines from
> > different chips in a single line bulk object. This doesn't make sense
> > anyway so I'm fine with this.
> >
> > What do you think? Do you have any other ideas?
> >
>
> Doesn't that place a strict range limit on offset values, 0-63?
> The uAPI limits the number of offsets requested to 64, not their value.
> Otherwise I'd've used a bitmap there as well.
>
> Or is there some other mapping happening in the background that I'm
> missing?
>

Nah, you're right of course. The structure should actually look more like:

struct gpiod_line_bulk {
    struct gpiod_chip *owner;
    unsigned int num_lines;
    uint64_t lines;
};

And the 'lines' bitmap would then refer to the indices at which the
owning chip stores the line pointers in its own 'lines' array - up to
64 lines per chip.
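
Iteration then becomes a walk over the set bits plus a lookup in the
chip's cache - something like the sketch below, where
chip_get_cached_line() is a made-up accessor for the chip's internal
'lines' array, just to illustrate the idea:

#include <stdint.h>

/* Hypothetical sketch: visit every line in the bulk with no allocation
 * and no ioctl(). */
static void bulk_foreach(struct gpiod_line_bulk *bulk,
			 void (*func)(struct gpiod_line *line, void *data),
			 void *data)
{
	uint64_t bits = bulk->lines;

	while (bits) {
		unsigned int idx = __builtin_ctzll(bits); /* lowest set bit */

		func(chip_get_cached_line(bulk->owner, idx), data);
		bits &= bits - 1; /* clear that bit and move on */
	}
}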

But we'd still have to sanitize the values when adding lines to a bulk
object and the caller would probably have to check the return value.
I'm wondering if there's a better way of storing groups of line
references on the stack, but I'm out of ideas.
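
For the record, the add path I'm worried about would have to look
roughly like this (line_chip_index() is a made-up accessor, the rest is
only a sketch of the validation, not a final implementation):

#include <errno.h>
#include <stdint.h>

/* Hypothetical: reject lines from another chip or indices that don't fit
 * in the 64-bit map, which is why callers must check the return value. */
static int bulk_add(struct gpiod_line_bulk *bulk, struct gpiod_line *line)
{
	unsigned int idx = line_chip_index(line); /* made-up accessor */

	if (gpiod_line_get_chip(line) != bulk->owner || idx >= 64) {
		errno = EINVAL;
		return -1;
	}

	if (!(bulk->lines & (UINT64_C(1) << idx))) {
		bulk->lines |= UINT64_C(1) << idx;
		bulk->num_lines++;
	}

	return 0;
}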

Bartosz

Thread overview: 17+ messages
2020-10-12 15:15 [libgpiod] Rethinking struct gpiod_line_bulk Bartosz Golaszewski
2020-10-12 15:21 ` Andy Shevchenko
2020-10-13  0:52 ` Kent Gibson
2020-10-13  7:45   ` Bartosz Golaszewski [this message]
2020-10-13  8:53     ` Kent Gibson
2020-10-13 12:05       ` Bartosz Golaszewski
2020-10-13 12:27         ` Kent Gibson
2020-10-13 12:53           ` Bartosz Golaszewski
2020-10-19 13:31       ` Bartosz Golaszewski
2020-10-19 16:21         ` Kent Gibson
2020-10-20 10:47           ` Bartosz Golaszewski
2020-10-20 11:05             ` Andy Shevchenko
2020-10-20 15:05             ` Kent Gibson
2020-10-20 15:53               ` Bartosz Golaszewski
2020-10-20 22:24                 ` Kent Gibson
2020-10-21  7:33                   ` Bartosz Golaszewski
2020-10-21  8:29                     ` Kent Gibson
