From: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
To: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Cc: Hans Verkuil <hverkuil@xs4all.nl>,
Linux Media Mailing List <linux-media@vger.kernel.org>,
Mauro Carvalho Chehab <mchehab@infradead.org>,
Sakari Ailus <sakari.ailus@linux.intel.com>,
Aviv Greenberg <avivgr@gmail.com>
Subject: Re: per-frame camera metadata (again)
Date: Sun, 27 Dec 2015 01:47:54 +0200
Message-ID: <5520197.vJSVcNd1Sr@avalon>
In-Reply-To: <Pine.LNX.4.64.1512241123060.12474@axis700.grange>
Hi Guennadi,
On Thursday 24 December 2015 11:42:49 Guennadi Liakhovetski wrote:
> Hi Laurent,
>
> Let me put this at the top: so far it looks like we are converging on two
> possibilities:
>
> (1) a separate video-device node with a separate queue. No user-space
> visible changes are required apart from new FOURCC codes. In the kernel
> we'd have to add some subdev API between the bridge and the sensor drivers
> to let the sensor driver instruct the bridge driver to treat some of the
> data arriving over the camera interface as metadata.
The interface should be more generic and allow describing how multiple
channels (in terms of virtual channels and data types for CSI-2, for
instance) are multiplexed over a single physical link. I'm not sure how to
represent that at the media controller level; that's also a topic that needs
to be researched.
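To make this a bit more concrete, here's a very rough sketch of one way a
sensor subdev could describe what it transmits over a single CSI-2 link.
It's loosely modelled on the existing struct v4l2_mbus_frame_desc and the
.get_frame_desc pad operation, but the structures and the vc/dt fields below
are made up for the sake of discussion; they're not existing kernel API.

#include <linux/types.h>
#include <media/v4l2-subdev.h>

/* Hypothetical description of one stream carried over the CSI-2 link. */
struct csi2_stream_desc {
	u8 vc;		/* CSI-2 virtual channel carrying this stream */
	u8 dt;		/* CSI-2 data type, e.g. 0x12 for embedded data */
	u32 pixelcode;	/* media bus code, 0 for non-image data */
	u32 length;	/* per-frame payload size in bytes, if fixed */
};

/* Hypothetical description of everything multiplexed on one source pad. */
struct csi2_frame_desc {
	unsigned int num_streams;
	struct csi2_stream_desc streams[4];
};

/*
 * Hypothetically added to struct v4l2_subdev_pad_ops, so that the bridge
 * can ask the sensor what it transmits on a given source pad:
 *
 *	int (*get_csi2_frame_desc)(struct v4l2_subdev *sd, unsigned int pad,
 *				   struct csi2_frame_desc *desc);
 */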
> (2) parsing the metadata in the sensor subdevice driver to make it
> available as controls. This would only (properly) work with the request
> API, which is still a work in progress. Apart from the request API itself,
> no additional user-space visible changes would be required. The kernel
> subdevice API would have to be extended as above to specify the metadata
> location. Additionally, the bridge driver would have to pass the metadata
> buffer back to the subdevice driver for parsing.
>
> Since the request API isn't available yet, and even its latest version
> doesn't support per-request controls, it looks like only the former
> approach can be used immediately.
Both approaches require development; I'm certainly open to collaborating on
the request API to finalize it sooner :-)
> On Wed, 23 Dec 2015, Laurent Pinchart wrote:
>
> [snip]
>
> >>> My other use case (Android camera HAL v3 for Project Ara) mainly deals
> >>> with controls and meta-data, but I'll then likely pass the meta-data
> >>> blob to userspace as-is, as its format isn't always known to the
> >>> driver. I'm also concerned about efficiency but haven't had time to
> >>> perform measurements yet.
> >>
> >> Hm, why is it not known to the subdevice driver? Does the buffer layout
> >> depend on some external conditions? Maybe loaded firmware? But it should
> >> be possible to tell the driver, say, that the current metadata buffer
> >> layout has version N?
> >
> > My devices are class-compliant but can use a device-specific meta-data
> > format. The kernel driver knows about the device class only, knowledge
> > about any device-specific format is only available in userspace.
>
> So, unless you want to add camera-specific code to your class-driver
> (UVC?),
Not UVC, project Ara camera class.
> that's another argument against approach (2) above.
In my case that's correct, although I could still use the request API with a
single binary blob control.
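For what it's worth, here is a minimal sketch of how such a blob could be
exposed as a read-only U8 array (payload) control with the existing control
framework. The control ID, name and maximum size below are made up for
illustration; a g_volatile_ctrl op (not shown) would fill the payload per
frame.

#include <linux/videodev2.h>
#include <media/v4l2-ctrls.h>

/* Hypothetical driver-specific control ID, for illustration only. */
#define V4L2_CID_ARA_METADATA_BLOB	(V4L2_CID_USER_BASE | 0x1001)

static const struct v4l2_ctrl_config ara_metadata_blob_ctrl = {
	.id	= V4L2_CID_ARA_METADATA_BLOB,
	.name	= "Metadata Blob",
	.type	= V4L2_CTRL_TYPE_U8,
	.min	= 0x00,
	.max	= 0xff,
	.step	= 1,
	.def	= 0x00,
	.dims	= { 1024 },	/* assumed maximum blob size */
	.flags	= V4L2_CTRL_FLAG_READ_ONLY | V4L2_CTRL_FLAG_VOLATILE,
};

/* Called from the driver's control setup path. */
static int ara_register_blob_ctrl(struct v4l2_ctrl_handler *hdl)
{
	if (!v4l2_ctrl_new_custom(hdl, &ara_metadata_blob_ctrl, NULL))
		return hdl->error;
	return 0;
}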
> >> Those metadata buffers may well contain parameters that can also be
> >> obtained via controls. So, if we just send metadata buffers to the user
> >> as-is, we create duplication, which isn't nice.
> >
> > In my case there won't be any duplication as there will likely be no
> > control at all, but I agree with you in the general case.
> >
> >> Besides, the end user will want broken-down control values anyway. In
> >> the Android case, for example, the app gets individual controls, not
> >> opaque metadata buffers. Of course, one could create a vendor metadata
> >> tag "metadata blob," but that's not how Android does it so far.
> >>
> >> OTOH passing those buffers to the subdevice driver for parsing and
> >> returning them as an (extended) control also seems a bit ugly.
> >>
> >> What about the performance cost? If we pass all those parameters as a
> >> single extended control (as long as they are of the same class), the
> >> cost shouldn't be higher than dequeuing a buffer, should it? Let's not
> >> take the parsing cost and the control struct memory overhead into
> >> account for now.
> >
> > If you take nothing into account then the cost won't be higher ;-) It's
> > the parsing cost I was referring to, including the cost of updating the
> > control value from within the kernel.
>
> I mostly meant context-switching costs, i.e. switching between kernel and
> user space. If we had to extract all controls one by one, that overhead
> wouldn't be negligible, I guess.
Agreed.
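For reference, reading several controls of one class with a single
VIDIOC_G_EXT_CTRLS call costs only one kernel round trip. A minimal
userspace sketch (the two control IDs are just examples):

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Fetch exposure and focus together in one ioctl. */
static int read_frame_params(int fd, int32_t *exposure, int32_t *focus)
{
	struct v4l2_ext_control ctrls[2];
	struct v4l2_ext_controls arg;

	memset(ctrls, 0, sizeof(ctrls));
	memset(&arg, 0, sizeof(arg));

	ctrls[0].id = V4L2_CID_EXPOSURE_ABSOLUTE;
	ctrls[1].id = V4L2_CID_FOCUS_ABSOLUTE;

	arg.ctrl_class = V4L2_CTRL_CLASS_CAMERA;	/* both are camera-class */
	arg.count = 2;
	arg.controls = ctrls;

	if (ioctl(fd, VIDIOC_G_EXT_CTRLS, &arg) < 0)
		return -1;

	*exposure = ctrls[0].value;
	*focus = ctrls[1].value;
	return 0;
}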
> [snip]
>
> >>>> Right, our use cases so far don't send a lot of data as per-frame
> >>>> metadata; no idea what others do.
> >>>
> >>> What kind of hardware do you deal with that sends meta-data? And over
> >>> what kind of channel does it send it?
> >>
> >> A CSI-2 connected camera sensor.
> >
> > Is meta-data sent as embedded data lines with a different CSI-2 DT?
>
> A different data type, yes.
>
> So, all in all it looks like the only immediately available option, and
> possibly the only feasible option at all, is a separate buffer queue. Do
> we agree that a subdev API is needed to inform the bridge driver about the
> availability and location of the metadata?
As explained above I agree we need to extend the subdev API.
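To make the direction a bit more concrete, here's a rough bridge-side sketch
building on the hypothetical csi2_frame_desc structures earlier in this
mail: query the sensor, find the embedded-data stream, and remember its
VC/DT so the CSI-2 receiver can route it to the metadata video node's queue.
struct bridge_device and the get_csi2_frame_desc op are made up for
illustration; 0x12 is the CSI-2 "embedded 8-bit non-image data" type.

#define CSI2_DT_EMBEDDED_8B	0x12

static int bridge_setup_metadata(struct bridge_device *bridge,
				 struct v4l2_subdev *sensor)
{
	struct csi2_frame_desc desc = { 0 };
	unsigned int i;
	int ret;

	/* Assumes the hypothetical pad op sketched earlier in this mail. */
	ret = v4l2_subdev_call(sensor, pad, get_csi2_frame_desc, 0, &desc);
	if (ret)
		return ret;

	for (i = 0; i < desc.num_streams; i++) {
		if (desc.streams[i].dt != CSI2_DT_EMBEDDED_8B)
			continue;

		/* Route this VC/DT to the metadata capture queue. */
		bridge->meta_vc = desc.streams[i].vc;
		bridge->meta_length = desc.streams[i].length;
		return 0;
	}

	return -ENOENT;	/* the sensor advertises no embedded data */
}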
--
Regards,
Laurent Pinchart