From: Kent Gibson <warthog618@gmail.com>
To: andy pugh <bodgesoc@gmail.com>
Cc: linux-gpio@vger.kernel.org
Subject: Re: [libgpiod] gpiod_line_get_value_bulk may be broken?
Date: Fri, 28 Jul 2023 05:55:27 +0800 [thread overview]
Message-ID: <ZMLnz25brQvcwBVW@sol> (raw)
In-Reply-To: <CAN1+YZX1m8iZPg1EM8ivqCft83hT1ERcmb2kxx53rNFA7NTJ3w@mail.gmail.com>
On Thu, Jul 27, 2023 at 10:17:05PM +0100, andy pugh wrote:
> On Thu, 27 Jul 2023 at 21:54, Kent Gibson <warthog618@gmail.com> wrote:
>
> > That is not how the line_bulk API is used.
> > You don't request the lines separately and then add them to the bulk,
> > you add them to the bulk then request them with
> > gpiod_line_request_bulk_input(), or one of the other
> > gpiod_line_request_bulk_XXX() functions.
>
> I did try that way first, but it didn't seem to be working for me.
> I am currently upgrading the system to Bookworm (gpiod v1.6) to try again.
>
If you can repeat it, and ideally provide a failing test case, then we can
take a look at it.
> > Btw, the primary use case for the bulk is for when you need to perform
> > operations on a set of lines as simultaneously as possible.
>
> I am trying to do things as quickly as possible on a predetermined set
> of lines.
> I am experimenting with gpiod as a replacement for an existing (and
> no-longer-working) driver that is part of LinuxCNC.
>
> I suspect that gpiod won't be fast enough; ideally I would like to be
> able to write to 15 IO lines in 15µs (because the code will run in a
> realtime thread which can't overrun).
> (There are other reasons that it might not work too, you can probably
> think of more than I can)
>
Depends on what Pi you are on. A Pi Zero would struggle, but on a Pi4
that is doable, of course depending on what else you are doing.
That is based on benchmarking libgpiod v2, but I would expect v1 to be
similar.
On a Pi it is significantly faster to go direct to hardware using
/dev/gpiomem, rather than going via the kernel as libgpiod does.
I do my best to avoid using gpiomem these days, but if you really need to
minimize CPU cycles or latency then that is another option.
Cheers,
Kent.
Thread overview: 25+ messages
2023-07-27 15:14 [libgpiod] gpiod_line_get_value_bulk may be broken? andy pugh
2023-07-27 20:53 ` Kent Gibson
2023-07-27 21:17 ` andy pugh
2023-07-27 21:55 ` Kent Gibson [this message]
2023-07-27 22:10 ` andy pugh
2023-07-27 22:36 ` Kent Gibson
2023-07-28 0:39 ` andy pugh
2023-07-28 1:07 ` andy pugh
2023-07-28 5:57 ` Kent Gibson
2023-07-28 19:01 ` andy pugh
2023-07-29 2:03 ` Kent Gibson
2023-08-05 22:55 ` andy pugh
2023-08-06 1:02 ` Kent Gibson
2023-08-06 9:13 ` andy pugh
2023-08-06 9:29 ` Kent Gibson
2023-08-10 0:17 ` andy pugh
2023-08-10 0:46 ` Kent Gibson
2023-08-10 22:07 ` andy pugh
2023-08-11 0:59 ` Kent Gibson
2023-08-11 1:26 ` andy pugh
2023-08-11 1:36 ` Kent Gibson
2023-08-14 22:25 ` How to use gpiod_line_set_flags andy pugh
2023-08-15 0:49 ` Kent Gibson
2023-08-15 18:03 ` andy pugh
2023-08-11 12:19 ` [libgpiod] gpiod_line_get_value_bulk may be broken? Bartosz Golaszewski