From: Jonathan Cameron <jic23@jic23.retrosnub.co.uk>
To: Martin Kelly <mkelly@xevo.com>
Cc: Jean-Baptiste Maneyrol <jmaneyrol@invensense.com>,
Jonathan Cameron <Jonathan.Cameron@huawei.com>,
Jonathan Cameron <jic23@kernel.org>,
"linux-iio@vger.kernel.org" <linux-iio@vger.kernel.org>
Subject: Re: [PATCH v3] imu: inv_mpu6050: interpolate missing timestamps
Date: Sat, 28 Apr 2018 16:10:21 +0100 [thread overview]
Message-ID: <20180428161021.11ef16ab@archlinux> (raw)
In-Reply-To: <f8750729-62fe-811f-f9ec-589d01931c5c@xevo.com>
On Thu, 26 Apr 2018 09:46:18 -0700
Martin Kelly <mkelly@xevo.com> wrote:
> On 04/26/2018 12:35 AM, Jean-Baptiste Maneyrol wrote:
> >
> >
> > On 25/04/2018 20:06, Martin Kelly wrote:
> >> On 04/06/2018 09:33 AM, Martin Kelly wrote:
> >>> On 04/06/2018 08:41 AM, Jonathan Cameron wrote:
> >>>>
> >>>>
> >>>> On 6 April 2018 16:21:05 BST, Jean-Baptiste Maneyrol
> >>>> <JManeyrol@invensense.com> wrote:
> >>>>> Hello,
> >>>>>
> >>>>>
> >>>>> there is just one problem, if I'm understanding correctly.
> >>>>>
> >>>>>
> >>>>> Reading the FIFO count register in the hard IRQ handler (when taking
> >>>>> the timestamp) is not possible, since I2C access uses a mutex. That's
> >>>>> why we use an IRQ thread for reading the FIFO content.
> >>>>
> >>>> Good point. Need more sleep or caffeine!
> >>>>
> >>>>
> >>>
> >>> I was about to reply with the same, as I started coding it up :). Too
> >>> bad, it was such a great plan!
> >>>
> >>> I have a little update: when switching to level-triggered interrupts,
> >>> the problem goes away for me, as do the bus errors I get at high
> >>> frequencies. I'm working on a patch to support interrupt types other
> >>> than rising edge, which is almost done.
> >>>
> >>> I also intend to look again at the bus errors for edge-driven
> >>> interrupts. Since they happen only at high frequency and go away with
> >>> level-driven interrupts (which must be acked and therefore prevent
> >>> reentrancy), I suspect there's a concurrency bug.
> >>>
> >>> That said, I think the question remains: since we can't get the FIFO
> >>> count from the hard IRQ handler, and since it could be some time
> >>> before the bottom-half thread is scheduled, I don't see any way to
> >>> accurately handle forward interpolation.
> >>>
> >>> Though we can't do forward interpolation, I think at least having
> >>> backward interpolation is better than filling in blank timestamps,
> >>> especially as we haven't seen an actual case where forward
> >>> interpolation is needed, while we have real use cases that require
> >>> backward interpolation (which can be easily verified with a logic
> >>> analyzer).
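For illustration, backward interpolation amounts to something like the
following sketch. All names and the API shape here are hypothetical, not
the driver's actual code:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Backward interpolation sketch: when the bottom half finds more than
 * one sample in the FIFO (interrupts were missed), spread the samples
 * backward from the one interrupt timestamp we do trust, using the
 * configured sampling period.
 */
static void backfill_timestamps(int64_t irq_ts, int64_t period_ns,
				unsigned int nb_samples, int64_t *out)
{
	unsigned int i;

	/* The newest sample gets the interrupt timestamp; each older
	 * sample is placed one period earlier. */
	for (i = 0; i < nb_samples; i++)
		out[i] = irq_ts - (int64_t)(nb_samples - 1 - i) * period_ns;
}
```

The key assumption is that the chip's sampling period is stable enough
over the short window of missed interrupts for the fixed-period spacing
to be accurate.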
> >>>
> >>> However, that's only my opinion. Jonathan, Jean-Baptiste, and others,
> >>> what do you think?
> >>>
> >>
> >> Hi,
> >>
> >> What can I do to help get closure on this? Is there any data I could
> >> gather that would help make a decision?
> >>
> >> On a troubled system, I think the interpolation is very useful, and
> >> it's awkward to do in userspace, as it involves keeping a history of
> >> past records, which can be inconvenient and not always accurate (e.g.
> >> if userspace doesn't read fast enough and we miss records). However,
> >> I certainly understand the concern about interpolation. Should we
> >> consider making the interpolation configurable via sysfs or
> >> device-tree?
> >>
> >> I'd be happy to hear other ideas too about better handling the case of
> >> missed interrupts.
> >
> > Hello,
> >
> > I have implemented a new timestamp mechanism that does something
> > similar, but in a more precise way.
> >
> > The main ideas are:
> > * check whether the interrupt timestamp is valid (compute the interval
> > between interrupt timestamps and check it against the set period
> > value, with a margin)
> > * use validated interrupt timestamps to measure the chip's internal
> > period from the system clock (to compute more accurate timestamps)
> > * use the interrupt timestamp for the data if it is valid
> > * if the interrupt timestamp is invalid (meaning the interrupt was
> > delayed or missing), compute the timestamp using the measured chip
> > period
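As a rough sketch of the mechanism described above (all names, the
moving-average update, and the ~4% margin are assumptions on my part,
not the actual patch):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative state: nominal period, measured chip period, and the
 * previous interrupt and sample timestamps, all in nanoseconds. */
struct ts_state {
	int64_t period;		/* set (nominal) period */
	int64_t chip_period;	/* measured chip internal period */
	int64_t last_irq_ts;	/* previous interrupt timestamp */
	int64_t last_ts;	/* timestamp given to previous sample */
};

static int64_t next_timestamp(struct ts_state *st, int64_t irq_ts)
{
	int64_t interval = irq_ts - st->last_irq_ts;
	int64_t margin = st->period / 25;	/* ~4% tolerance */
	int64_t ts;

	st->last_irq_ts = irq_ts;
	if (llabs(interval - st->period) <= margin) {
		/* Valid interrupt: refine the measured chip period and
		 * trust the interrupt timestamp directly. */
		st->chip_period = (st->chip_period + interval) / 2;
		ts = irq_ts;
	} else {
		/* Delayed or missing interrupt: extrapolate from the
		 * last timestamp using the measured chip period. */
		ts = st->last_ts + st->chip_period;
	}
	st->last_ts = ts;
	return ts;
}
```

The design point is that the system clock is only trusted when the
measured interrupt interval matches the configured period; otherwise
the chip's own (measured) rate drives the timestamps.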
> >
> > I will send the corresponding patch series as soon as my last clean-up
> > series has been accepted by Jonathan.
> >
> > Regards,
> > JB
>
> Excellent, I look forward to the patches. Jonathan, are you OK with this
> design approach?
It does sound like a rather complex solution, but it should be accurate.
We'll see when the patches are out; it will probably do the job, given
that I think we have concluded we want to hide this from userspace as
much as possible.
Thanks,
Jonathan
Thread overview: 12+ messages
2018-03-28 17:40 [PATCH v3] imu: inv_mpu6050: interpolate missing timestamps Martin Kelly
2018-03-30 10:36 ` Jonathan Cameron
2018-04-02 17:07 ` Martin Kelly
2018-04-06 15:15 ` Jonathan Cameron
2018-04-06 15:21 ` Jean-Baptiste Maneyrol
2018-04-06 15:25 ` Jean-Baptiste Maneyrol
2018-04-06 15:41 ` Jonathan Cameron
2018-04-06 16:33 ` Martin Kelly
2018-04-25 18:06 ` Martin Kelly
2018-04-26 7:35 ` Jean-Baptiste Maneyrol
2018-04-26 16:46 ` Martin Kelly
2018-04-28 15:10 ` Jonathan Cameron [this message]