* v4l2 device property framework in userspace
@ 2011-05-29 13:07 Martin Strubel
2011-05-30 7:32 ` Hans Verkuil
0 siblings, 1 reply; 14+ messages in thread
From: Martin Strubel @ 2011-05-29 13:07 UTC (permalink / raw)
To: linux-media
Hello,
I was wondering if it makes sense to raise a discussion about a few
aspects listed below - my apologies if this is old coffee, I
haven't been following this list for long.
Since older kernels didn't have the matching functionality, we (a few
loosely connected developers) had "hacked" a userspace framework to
address various extra features (multi-sensor heads, realtime stuff, and
special sensor properties). So our kernel driver (specific to the PPI
port of the Blackfin architecture) covers frame acquisition only;
all sensor-specific properties (which historically would rather have been
integrated into the v4l2 system) are controlled from userspace or over the
network using our netpp library (which was just released as open source).
The reasons for this were:
1. Hundreds of registers controlling various special properties on some SoC
sensors
2. The same software and kernel should work with all sorts of camera
configurations
3. I'm lazy and hate writing a lot of boring code (ioctls()...).
Also, we didn't want to bloat the kernel with property tables.
4. Some implementations did not have much to do with classic "video"
So nowadays we write or parse sensor properties into XML files and
generate a library for it that wraps all sensor raw entities (registers
and bits) into named entities for quick remote control and direct access
to peripherals on the embedded target during the prototyping phase (this
is what netpp does for us).
Now, the goal is to open-source the stuff from the Blackfin side, too (as
there seems to be no official v4l2 driver at the moment). Obviously, a
lot of work has been done on the upstream v4l2 side in the meantime, but
since I'm not completely into it yet, I'd like to ask the experts:
1. Can we do multi-sensor configurations on a tristated camera bus with
the current kernel framework?
2. Is there a preferred way to route ioctls() back to userspace
"property handlers", so that standard v4l2 ioctls() can be implemented
while special sensor properties are still accessible through userspace?
3. Has anyone measured latencies (or is aware of such measurements) with
respect to a process responding to a just-arrived video frame in an
RT_PREEMPT context? (I assume any RT_PREEMPT latency research could be
generalized to video, but I'm asking anyhow.)
4. For some applications it's mandatory to queue commands that are
committed to a sensor immediately during a frame blank. This makes
shared userspace and kernel access, for example to an SPI bus, rather
tricky. Can this be solved with the current (new) v4l2 framework?
Cheers,
- Martin
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: v4l2 device property framework in userspace
2011-05-29 13:07 v4l2 device property framework in userspace Martin Strubel
@ 2011-05-30 7:32 ` Hans Verkuil
2011-05-30 9:38 ` Martin Strubel
0 siblings, 1 reply; 14+ messages in thread
From: Hans Verkuil @ 2011-05-30 7:32 UTC (permalink / raw)
To: Martin Strubel; +Cc: linux-media
On Sunday, May 29, 2011 15:07:00 Martin Strubel wrote:
> Hello,
>
> I was wondering if it makes sense to raise a discussion about a few
> aspects listed below - my apology, if this might be old coffee, I
> haven't been following this list for long.
>
> Since older kernels didn't have the matching functionality, we (a few
> loosely connected developers) had "hacked" a userspace framework to
> address various extra features (multi-sensor heads, realtime stuff, and
> special sensor properties). So our kernel driver (specific to the PPI
> port of the Blackfin architecture) covers frame acquisition only;
> all sensor-specific properties (which historically would rather have been
> integrated into the v4l2 system) are controlled from userspace or over the
> network using our netpp library (which was just released as open source).
>
> The reasons for this were:
> 1. Hundreds of registers controlling various special properties on some SoC
> sensors
> 2. One software and kernel should work with all sorts of camera
> configuration
> 3. I'm lazy and hate to do a lot of boring code writing (ioctls()..).
> Also, we didn't want to bloat the kernel with property tables.
> 4. Some implementations did not have much to do with classic "video"
>
> So nowadays we write or parse sensor properties into XML files and
> generate a library for it that wraps all sensor raw entities (registers
> and bits) into named entities for quick remote control and direct access
> to peripherals on the embedded target during the prototyping phase (this
> is what netpp does for us).
>
> Now, the goal is to opensource stuff from the Blackfin-Side, too (as
> there seems to be no official v4l2 driver at the moment). Obviously, a
> lot of work has been done meanwhile on the upstream v4l2 side, but since
> I'm not completely into it yet, I'd like to ask the experts:
>
> 1. Can we do multi sensor configurations on a tristated camera bus with
> the current kernel framework?
Yes. As long as the sensors are implemented as sub-devices (see
Documentation/video4linux/v4l2-framework.txt) then you can add lots of custom
controls to those subdevs that can be exposed to userspace. Writing directly
to sensor registers from userspace is a no-go. If done correctly using the
control framework (see Documentation/video4linux/v4l2-controls.txt) this shouldn't
take a lot of code. The hardest part is probably documentation of those controls.
> 2. Is there a preferred way to route ioctls() back to userspace
> "property handlers", so that standard v4l2 ioctls() can be implemented
> while special sensor properties are still accessible through userspace?
As mentioned, sensor properties should be implemented as V4L2 controls.
> 3. Has anyone measured latencies (or is aware of such) with respect to
> process response to a just arrived video frame within the RT_PREEMPT
> context? (I assume any RT_PREEMPT latency research could be generalized
> to video, but asking anyhow)
I'm not aware of such measurements, but there is nothing special about video.
So it would be the same as any other process response to an interrupt.
> 4. For some applications it's mandatory to queue commands that are
> committed to a sensor immediately during a frame blank. This makes the
> shared userspace and kernel access for example to an SPI bus rather
> tricky. Can this be solved with the current (new) v4l2 framework?
That's why you want to always go through a kernel driver instead of mixing
kernel and userspace.
However, at the moment we do not have the ability to set and activate a
configuration at a specific time. It is something on our TODO list, though.
You are not the only one who wants this.
Regards,
Hans
* Re: v4l2 device property framework in userspace
2011-05-30 7:32 ` Hans Verkuil
@ 2011-05-30 9:38 ` Martin Strubel
2011-05-30 10:52 ` Hans Verkuil
0 siblings, 1 reply; 14+ messages in thread
From: Martin Strubel @ 2011-05-30 9:38 UTC (permalink / raw)
To: Hans Verkuil; +Cc: linux-media
Hi,
>
> Yes. As long as the sensors are implemented as sub-devices (see
> Documentation/video4linux/v4l2-framework.txt) then you can add lots of custom
> controls to those subdevs that can be exposed to userspace. Writing directly
> to sensor registers from userspace is a no-go. If done correctly using the
> control framework (see Documentation/video4linux/v4l2-controls.txt) this shouldn't
> take a lot of code. The hardest part is probably documentation of those controls.
>
Well, we could generate all the control handlers from XML by writing
appropriate style sheets, but the point is that there are by now a few
hundred registers covered by the current driver. Putting this
into the kernel would bloat it horribly, and that again is a no-go on
our embedded system.
Documentation is also generated per property, BTW (as long as the user
fills in the <info> node).
Just to outline again what we're doing: the access to the registers (at
least to the SPI control interface) is in fact in kernel space; just the
handlers (and remember, there are a few hundred of them) are not. This
keeps the kernel layer lean and mean.
For machine vision people, most of the typical v4l2 controls are
irrelevant, but for things like the video format we just pass ioctl calls
to userspace via kernel events, handle them, and pass the register
read/write sequence back to the kernel.
What problem do you see with doing it this way? There seem to be various
uio-based drivers out there for v4l2 devices.
For i2c, we access the registers even through /dev/i2c-X. So far I
see no problem with that; it turned out to provide better latencies (for
the video acquisition) in some scenarios. This does not allow switching
configs in real time, but that's a hacky task for i2c anyhow.
> ...
>
> That's why you want to always go through a kernel driver instead of mixing
> kernel and userspace.
>
> However, at the moment we do not have the ability to set and activate a
> configuration at a specific time. It is something on our TODO list, though.
> You are not the only one that wants this.
>
If we adapt our stuff to the new framework, it will likely be
open source, too. Just a few people will need to be convinced...
Cheers,
- Martin
* Re: v4l2 device property framework in userspace
2011-05-30 9:38 ` Martin Strubel
@ 2011-05-30 10:52 ` Hans Verkuil
2011-05-30 12:07 ` Martin Strubel
0 siblings, 1 reply; 14+ messages in thread
From: Hans Verkuil @ 2011-05-30 10:52 UTC (permalink / raw)
To: Martin Strubel; +Cc: linux-media
> Hi,
>
>>
>> Yes. As long as the sensors are implemented as sub-devices (see
>> Documentation/video4linux/v4l2-framework.txt) then you can add lots of
>> custom
>> controls to those subdevs that can be exposed to userspace. Writing
>> directly
>> to sensor registers from userspace is a no-go. If done correctly using
>> the
>> control framework (see Documentation/video4linux/v4l2-controls.txt) this
>> shouldn't
>> take a lot of code. The hardest part is probably documentation of those
>> controls.
>>
>
> Well, we could generate all the control handlers from XML by writing
> appropriate style sheets, but the point is that there are by now a few
> hundred registers covered by the current driver. Putting this
> into the kernel would bloat it horribly, and this again is a no-go on
> our embedded system.
> Documentation is also generated per property, BTW (as long as the user
> fills in the <info> node)
> Just to outline again what we're doing: The access to the registers (at
> least to the SPI control interface) is in fact in kernel space, just the
> handlers (and remember, there are a few 100s of them) are not. This
> keeps the kernel layer lean and mean.
Can you give examples of the sort of things that are in those registers?
Is that XML file available somewhere? Are there public datasheets?
BTW, you should need only a single control handler that looks up all
the relevant information in a table.
> For machine vision people, most of the typical v4l2 controls are
> irrelevant, but for things like video format, we just pass ioctl calls
> to user space via kernel events, handle them, and pass the register
> read/write sequence back to the kernel.
> What problem do you see doing it this way? There seem to be various uio
> based drivers out for v4l2 devices.
If V4L2 drivers want to go into the kernel, then it is highly unlikely we
want to allow uio drivers. Such drivers cannot be reused. A typical sensor
can be used by many vendors and products. By ensuring that access to the
sensor is standardized you ensure that anyone can use that sensor and that
fixes/improvements to that sensor will benefit everyone.
You don't have that with uio, and that's the reason we don't want it
(another reason is the possible abuse of uio, allowing closed source
drivers to be built on top of it).
Regards,
Hans
* Re: v4l2 device property framework in userspace
2011-05-30 10:52 ` Hans Verkuil
@ 2011-05-30 12:07 ` Martin Strubel
2011-05-30 12:53 ` Hans Verkuil
2011-05-31 6:38 ` Kim, HeungJun
0 siblings, 2 replies; 14+ messages in thread
From: Martin Strubel @ 2011-05-30 12:07 UTC (permalink / raw)
To: Hans Verkuil; +Cc: linux-media
Hi Hans,
>
> Can you give examples of the sort of things that are in those registers?
> Is that XML file available somewhere? Are there public datasheets?
>
If you mean the sensor datasheets, many of them are buried behind NDAs,
but people are writing open-sourced headers too... let's leave this to the
lawyers.
Here's an example: http://section5.ch/downloads/mt9d111.xml
The XSLT sheets to generate code from it are found in the netpp
distribution, see http://www.section5.ch/netpp. You might have to read
some of the documentation to get to the actual clues.
> BTW, you should need just a single control handler that just looks up all
> the relevant information in a table.
Right. It might not be too much work to write an appropriate XSLT for
that. In fact, the netpp TOKENs (think of them as a handle or proxy for a
property) are 32-bit values, so they could be used to hold ioctl request
codes. However, there are some enumeration/mapping (TOKEN -> Property)
issues to be sorted out.
In the standard implementation, a TOKEN is merely a mangled index, and
we generate a table with elements like:
{ "Enable" /* id250673 */, DC_BOOL,
  F_RW,
  DC_VAR, { .varp_bool = &g_context.hist_enable },
  0 /* no children */ },
So coding efforts could again be kept to a minimum (apart from the
horror of writing an XSLT), but you'd fill some bytes with the above
table data. For the kernel, the property names (the strings) should
probably be stripped and turned into ioctl request codes (?).
In some utopian future, it would be darn cool (for lazy people) if device
vendors provided register information in XML and the kernel would just
access it via generated property tables.
> If V4L2 drivers want to go into the kernel, then it is highly unlikely we
> want to allow uio drivers. Such drivers cannot be reused. A typical sensor
> can be used by many vendors and products. By ensuring that access to the
> sensor is standardized you ensure that anyone can use that sensor and that
> fixes/improvements to that sensor will benefit everyone.
>
> You don't have that with uio, and that's the reason we don't want it
> (other reasons are possible abuse of uio allowing closed source drivers
> being built on top of it).
>
Right. I'm aware that there's some discussion about the pros and cons of
uio, but I won't blame people for doing closed source drivers. Also, to
bring back the NDA topic from above: some vendors might get annoyed at
source code containing their 'protected' register information. But let's
keep this off topic for now.
Anyhow, with the framework we use I don't see many problems in terms of
reusability, because we generate most of the stuff. So you would be free
to put all sensor properties into a kernel module as well (provided
that the XML files are 'free'). But for our embedded stuff (or rapid
prototyping), I'd still want to see "userspace", also so that we can
quickly add a new sensor device or property without having to
recompile the kernel.
This is, BTW, a big issue for some embedded Linux device vendors.
So my question still stands: is there room for userspace handlers for
kernel events (ioctl requests)? Our current hack is to read events from
a char device and push them through netpp.
Greetings,
- Martin
* Re: v4l2 device property framework in userspace
2011-05-30 12:07 ` Martin Strubel
@ 2011-05-30 12:53 ` Hans Verkuil
2011-05-30 13:30 ` Martin Strubel
2011-05-31 6:38 ` Kim, HeungJun
1 sibling, 1 reply; 14+ messages in thread
From: Hans Verkuil @ 2011-05-30 12:53 UTC (permalink / raw)
To: Martin Strubel; +Cc: linux-media
> Hi Hans,
>
>>
>> Can you give examples of the sort of things that are in those registers?
>> Is that XML file available somewhere? Are there public datasheets?
>>
>
> If you mean the sensor datasheets, many of them are buried behind NDAs,
> but people are writing opensourced headers too...let's leave this to the
> lawyers.
>
> Here's an example: http://section5.ch/downloads/mt9d111.xml
> The XSLT sheets to generate code from it are found in the netpp
> distribution, see http://www.section5.ch/netpp. You might have to read
> some of the documentation to get to the actual clues.
The XML is basically just a dump of all the sensor registers, right?
So you are not talking about 'properties', but about reading/setting
registers directly. That's something that we do not support (or even want
to support) except for debugging (see the VIDIOC_DBG_G/S_REGISTER ioctls
which require root access).
>> BTW, you should need just a single control handler that just looks up
>> all
>> the relevant information in a table.
>
> Right. It might be not too much work to write an appropriate XSLT for
> that. In fact, the netpp TOKENs (see it as a handle or proxy for a
> property) are 32 bit values, so they could be used to hold ioctl request
> codes. However, there are some enumeration/mapping (TOKEN -> Property)
> issues to be sorted out.
> In the standard implementation, a TOKEN is merely a mangled index, and
> we generate a table with elements like:
>
> { "Enable" /* id250673 */, DC_BOOL,
> F_RW,
> DC_VAR, { .varp_bool = &g_context.hist_enable },
> 0 /* no children */ },
>
> So coding efforts could again be kept at a minimum (apart from the
> horror of writing an XSLT), but you'd fill some bytes with the above
> table data. For the kernel, the property names (the string) should be
> probably stripped and turned into ioctl request codes (?).
>
> For some utopia, it would be darn cool (for lazy people) if device
> vendors provided register information in XML and the kernel would just
> access them via generated property tables.
But this is not a driver. This is just a mapping of symbols to registers.
You are just moving the actual driver intelligence to userspace, making it
very hard to reuse. It's a no-go I'm afraid.
>> If V4L2 drivers want to go into the kernel, then it is highly unlikely
>> we
>> want to allow uio drivers. Such drivers cannot be reused. A typical
>> sensor
>> can be used by many vendors and products. By ensuring that access to the
>> sensor is standardized you ensure that anyone can use that sensor and
>> that
>> fixes/improvements to that sensor will benefit everyone.
>>
>> You don't have that with uio, and that's the reason we don't want it
>> (other reasons are possible abuse of uio allowing closed source drivers
>> being built on top of it).
>>
>
> Right. I'm aware that there's some discussion about pro/cons of uio, but
> I won't blame people for doing closed source drivers. Also, to bring
> back the above NDA topic, some vendors might be getting annoyed at
> source code containing their 'protected' register information. But let's
> keep this off topic for now.
>
> Anyhow, with the framework we use I don't see many problems in terms of
> reusability, because we generate most of the stuff.
As mentioned a list of registers does not make a driver. Someone has to do
the actual programming.
> So you would be free
> to put all sensor properties into a kernel module as well (provided,
> that the XML files are 'free'). But for our embedded stuff (or rapid
> prototyping), I'd still want to see "userspace", also for the reason to
> quickly add a new sensor device or property without the need to
> recompile the kernel
> This is BTW a big issue for some embedded linux device vendors.
> So my question is still up, if there's room for userspace handlers for
> kernel events (ioctl requests). Our current hack is, to read events from
> a char device and push them through netpp.
Well, no. The policy is to have kernel drivers, not userspace drivers.
It's not just for technical reasons, but also social ones: suppose you have
userspace drivers, who is going to maintain all those drivers? Ensure that
they remain in sync, that new features can be added, etc.? That whole
infrastructure already exists if you use kernel drivers.
Userspace drivers may be great in the short term and from the point of
view of a single company/user, but it's a lot less attractive in the long
term.
Anyway, using subdevices and judicious use of controls it really shouldn't
be that hard to create a kernel driver.
Regards,
Hans
* Re: v4l2 device property framework in userspace
2011-05-30 12:53 ` Hans Verkuil
@ 2011-05-30 13:30 ` Martin Strubel
2011-05-31 8:01 ` Hans Verkuil
0 siblings, 1 reply; 14+ messages in thread
From: Martin Strubel @ 2011-05-30 13:30 UTC (permalink / raw)
To: Hans Verkuil; +Cc: linux-media
Hi,
>
> The XML is basically just a dump of all the sensor registers, right?
>
There are two sections: the register tables and the property wrappers.
Property wrappers don't necessarily have to link to registers, but
that's covered in the docs.
> So you are not talking about 'properties', but about reading/setting
> registers directly. That's something that we do not support (or even want
> to support) except for debugging (see the VIDIOC_DBG_G/S_REGISTER ioctls
> which require root access).
>
I guess I should elaborate:
In the case where the hardware is "operation mode safe", i.e. you can
set a value (like the video format) in a register, you can wrap a property
or ioctl directly onto a register bit.
In the case where it's not safe, or more complex (i.e. you have to toggle
a bit to actually enable the mode), you'll have to write handlers. It's
up to you and the safety of the HW implementation, really.
To the user, it's always just "properties". What you do internally is
up to you. With a "non-intelligent" register design, you'll indeed have
to write quite some handler code.
Where the handler lives (userspace or kernel) is again up to you.
>
> But this is not a driver. This is just a mapping of symbols to registers.
> You are just moving the actual driver intelligence to userspace, making it
> very hard to reuse. It's a no-go I'm afraid.
>
Actually, no. IMHO, the kernel driver should not have all the
intelligence (some might argue this contradicts the actual concept of a
kernel). And for us it is even more reusable, because we can run the
same thing on a standalone 'OS' (no OS, really) and, for example, RTEMS.
So for the various OS specifics we only have one video acquisition
driver, which has no knowledge of the attached sensor. All generic v4l2
properties are again tunneled through userspace to the "sensor daemon".
I still don't see what is (technically) wrong with that.
For me, it works like a driver, plus it is way more flexible, as I don't
have to go through ioctls for special sensor properties.
>
> As mentioned a list of registers does not make a driver. Someone has to do
> the actual programming.
>
Sure. This aspect is covered by the netpp getters/setters.
>> recompile the kernel
>> This is BTW a big issue for some embedded linux device vendors.
>> So my question is still up, if there's room for userspace handlers for
>> kernel events (ioctl requests). Our current hack is, to read events from
>> a char device and push them through netpp.
>
> Well, no. The policy is to have kernel drivers and not userspace drivers.
>
> It's not just technical reasons, but also social reasons: suppose you have
> userspace drivers, who is going to maintain all those drivers? Ensure that
> they remain in sync, that new features can be added, etc.? That whole
> infrastructure already exists if you use kernel drivers.
In the past we had a lot more work with each kernel release, updating
our kernel stuff because the internals kept changing.
I don't see a problem maintaining the drivers if you have lean & mean
module interfaces. It does create a lot of work, though, if you have to
touch the code over and over again (and that for each sensor
implementation).
If companies have to pay more for "social reasons", they won't do it.
But again, this is argued about elsewhere...
I agree about the need for an infrastructure; that's why I'm raising
the discussion.
>
> Userspace drivers may be great in the short term and from the point of
> view of a single company/user, but it's a lot less attractive in the long
> term.
>
> Anyway, using subdevices and judicious use of controls it really shouldn't
> be that hard to create a kernel driver.
>
I don't know. Up to now (speaking of Linux v2.6.34) I couldn't be
convinced, and none of our customers could, either. I'm aware that
there are some crazy requirements from the machine vision domain that
are of no relevance to a typical Linux webcam.
Anyhow, if you don't want to support a userspace layer policy, that's no
problem for us. We can just release the "dumb" acquisition driver and
everyone can register their stuff on top.
Cheers,
- Martin
* Re: v4l2 device property framework in userspace
2011-05-30 12:07 ` Martin Strubel
2011-05-30 12:53 ` Hans Verkuil
@ 2011-05-31 6:38 ` Kim, HeungJun
1 sibling, 0 replies; 14+ messages in thread
From: Kim, HeungJun @ 2011-05-31 6:38 UTC (permalink / raw)
To: Martin Strubel; +Cc: Hans Verkuil, linux-media
Hi Martin,
I'm not an expert on V4L2 or this camera sensor either.
But, if you don't mind, I want to leave some comments about the registers,
and I hope they help you.
On 2011-05-30 9:07 PM, Martin Strubel wrote:
> Hi Hans,
>
>>
>> Can you give examples of the sort of things that are in those registers?
>> Is that XML file available somewhere? Are there public datasheets?
>>
>
> If you mean the sensor datasheets, many of them are buried behind NDAs,
> but people are writing opensourced headers too...let's leave this to the
> lawyers.
>
> Here's an example: http://section5.ch/downloads/mt9d111.xml
> The XSLT sheets to generate code from it are found in the netpp
> distribution, see http://www.section5.ch/netpp. You might have to read
> some of the documentation to get to the actual clues.
As you said, this camera obviously has a lot of registers.
But, IMHO, it can still be expressed as one subdev driver like any other.
Most registers can be absorbed and applied at start (or boot) time,
and most of the others can be covered by the current V4L2 APIs
(controls, cropping, buffers, power, etc.).
The question is which factors actually vary when the camera setting is
changed by the board, as you said. And in my brief estimation, that is
not much to catch. If there are things that cannot be expressed using
the current V4L2 APIs but are needed for your work or your board, you
can submit new APIs to the mailing list.
The best thing would be to collect such things (those that cannot be
expressed with the current V4L2 APIs) and submit new V4L2 APIs, because
there are many other people handling camera drivers like you, and they
may be thinking along similar lines.
For sure, new APIs would be welcomed.
Cheers,
Heungjun Kim
* Re: v4l2 device property framework in userspace
2011-05-30 13:30 ` Martin Strubel
@ 2011-05-31 8:01 ` Hans Verkuil
2011-05-31 8:27 ` Martin Strubel
0 siblings, 1 reply; 14+ messages in thread
From: Hans Verkuil @ 2011-05-31 8:01 UTC (permalink / raw)
To: Martin Strubel; +Cc: linux-media
On Monday, May 30, 2011 15:30:23 Martin Strubel wrote:
> Hi,
>
> >
> > The XML is basically just a dump of all the sensor registers, right?
> >
>
> There are two sections: The register tables, and the property wrappers.
> Property wrappers don't have to necessarily link to registers, but
> that's covered in the docs.
>
> > So you are not talking about 'properties', but about reading/setting
> > registers directly. That's something that we do not support (or even want
> > to support) except for debugging (see the VIDIOC_DBG_G/S_REGISTER ioctls
> > which require root access).
> >
>
> I guess I should elaborate:
> For the case, when the hardware is "operation mode safe", i.e. you can
> set a value (like video format) in a register, you can wrap a property
> or ioctl directly into a register bit.
> For the case when it's not safe or more complex (i.e. you have to toggle
> a bit to actually enable the mode), you'll have to write handlers. It's
> up to you and the safety of the HW implementation, really.
> To the user, it's always just "Properties". What you do internally, is
> up to you. With a "non intelligent" register design, you'll have to
> indeed write quite some handler code.
> Where the handler lives (userspace or kernel) is again up to you.
>
> >
> > But this is not a driver. This is just a mapping of symbols to registers.
> > You are just moving the actual driver intelligence to userspace, making it
> > very hard to reuse. It's a no-go I'm afraid.
> >
>
> Actually no. IMHO, the kernel driver should not have all the
> intelligence (some might argue this contradicts the actual concept of a
> kernel).
Userspace tells the driver what it should do and the driver decides how to do it.
That's how it works.
> And for us it is even more reusable, because we can run the
> same thing on a standalone 'OS' (no OS really) and for example RTEMS.
That is never a consideration for Linux. Hardware abstraction layers are not
allowed. Blame Linus, not me, although I completely agree with him on this.
> So for the various OS specifics, we only have one video acquisition
> driver which has no knowledge of the attached sensor. All generic v4l2
> properties again are tunneled through userspace to the "sensor daemon".
> I still don't see what is (technically) wrong with that.
It's the tunneling to a sensor daemon that is wrong. You can write a sensor
driver as a V4L2 subdevice driver and it is reusable by any 'video acquisition'
driver (aka V4L2 bridge/platform driver) without going through userspace and
requiring userspace daemons.
> For me, it works like a driver, plus it is way more flexible as I don't
> have to go through ioctls for special sensor properties.
You still have to go through the kernel to set registers. That's just as
expensive as an ioctl.
>
> >
> > As mentioned a list of registers does not make a driver. Someone has to do
> > the actual programming.
> >
>
> Sure. This aspect is covered by the netpp getters/setters.
>
> >> recompile the kernel
> >> This is BTW a big issue for some embedded linux device vendors.
> >> So my question is still up, if there's room for userspace handlers for
> >> kernel events (ioctl requests). Our current hack is, to read events from
> >> a char device and push them through netpp.
> >
> > Well, no. The policy is to have kernel drivers and not userspace drivers.
> >
> > It's not just technical reasons, but also social reasons: suppose you have
> > userspace drivers, who is going to maintain all those drivers? Ensure that
> > they remain in sync, that new features can be added, etc.? That whole
> > infrastructure already exists if you use kernel drivers.
>
> In the past we had a lot more work from each kernel release to update
> the kernel stuff because internals have been changing.
That's why you want to upstream drivers. Once a driver is upstream, this
work goes away.
> I don't see a problem maintaining the drivers, if you have lean & mean
> module interfaces. It creates a lot of work though, if you have to touch
> code over and over again (and this for each sensor implementation).
> If companies have to pay more for "social reasons", they won't do it.
> But again, this is argued about elsewhere..
>
> I agree about an infrastructure, that's why I'm raising the discussion.
>
> >
> > Userspace drivers may be great in the short term and from the point of
> > view of a single company/user, but it's a lot less attractive in the long
> > term.
> >
> > Anyway, using subdevices and judicious use of controls it really shouldn't
> > be that hard to create a kernel driver.
> >
>
> I don't know. Up to now (as of Linux 2.6.34) I couldn't be
> convinced, and none of our customers could, either. I'm aware that
> there are some crazy requirements from the machine vision domain that
> are of no relevance to a typical Linux webcam.
Note that much of the functionality you need didn't go into the kernel
until 2.6.39.
> Anyhow, if you don't want to support a userspace layer policy, it's no
> problem to us. We can just release the "dumb" acquisition driver and
> everyone can register his stuff on top.
Sure, no problem. It's open source after all. Just be aware that all the
maintenance effort is for you as long as you remain out of tree.
Regards,
Hans
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: v4l2 device property framework in userspace
2011-05-31 8:01 ` Hans Verkuil
@ 2011-05-31 8:27 ` Martin Strubel
2011-05-31 10:55 ` Hans Verkuil
0 siblings, 1 reply; 14+ messages in thread
From: Martin Strubel @ 2011-05-31 8:27 UTC (permalink / raw)
To: Hans Verkuil; +Cc: linux-media
>
> Userspace tells the driver what it should do and the driver decides how to do it.
> That's how it works.
Sounds a little religious. Not sure if you've been listening..
>
>> And for us it is even more reusable, because we can run the
>> same thing on a standalone 'OS' (no OS really) and for example RTEMS.
>
> That is never a consideration for linux. Hardware abstraction layers are not
> allowed. Blame Linus, not me, although I completely agree with him on this.
>
This is new to me. What should be the reason not to abstract hardware?
To give people a coding job?
>> So for the various OS specifics, we only have one video acquisition
>> driver which has no knowledge of the attached sensor. All generic v4l2
>> properties again are tunneled through userspace to the "sensor daemon".
>> I still don't see what is (technically) wrong with that.
>
> It's the tunneling to a sensor daemon that is wrong. You can write a sensor
> driver as a V4L subdevice driver and it is reusable by any 'video acquisition'
> driver (aka V4L2 bridge/platform driver) without going through userspace and
> requiring userspace daemons.
You keep saying it is wrong, but I have so far seen no technically firm
argument. Please tell me why.
>
>> For me, it works like a driver, plus it is way more flexible as I don't
>> have to go through ioctls for special sensor properties.
>
> You still have to go through the kernel to set registers. That's just as
> expensive as an ioctl.
>
Not sure if you understand: I do not have to implement or generate ioctl
handlers and call them. This is definitely less expensive in terms of
coding. All the register access is handled *automatically* using the HW
description layer.
>
> Sure, no problem. It's open source after all. Just be aware that all the
> maintenance effort is for you as long as you remain out of tree.
>
We have to do this anyhow. But we'd prefer to do it the way that
requires least maintenance as described.
We need to have a *solution*. Not some academic hack that is "not wrong".
I think we can end the discussion here. I was hoping for more
technically valuable input, really.
* Re: v4l2 device property framework in userspace
2011-05-31 8:27 ` Martin Strubel
@ 2011-05-31 10:55 ` Hans Verkuil
2011-05-31 11:33 ` Martin Strubel
0 siblings, 1 reply; 14+ messages in thread
From: Hans Verkuil @ 2011-05-31 10:55 UTC (permalink / raw)
To: Martin Strubel; +Cc: linux-media
On Tuesday, May 31, 2011 10:27:38 Martin Strubel wrote:
>
> >
> > Userspace tells the driver what it should do and the driver decides how to do it.
> > That's how it works.
>
> Sounds a little religious. Not sure if you've been listening..
Not religion, it's experience. I understand what you want to do and it is
just a bad idea in the long term. Mind you, it's great for prototyping and
experimentation. But if you want to get stable sensor support in the kernel,
then it has to conform to the rules. Having some sensor drivers in the kernel
and some in userspace will be a maintenance disaster.
>
> >
> >> And for us it is even more reusable, because we can run the
> >> same thing on a standalone 'OS' (no OS really) and for example RTEMS.
> >
> > That is never a consideration for linux. Hardware abstraction layers are not
> > allowed. Blame Linus, not me, although I completely agree with him on this.
> >
>
> This is new to me. What should be the reason not to abstract hardware?
> To give people a coding job?
Sorry, I used the wrong name. I meant OS abstraction layers.
>
> >> So for the various OS specifics, we only have one video acquisition
> >> driver which has no knowledge of the attached sensor. All generic v4l2
> >> properties again are tunneled through userspace to the "sensor daemon".
> >> I still don't see what is (technically) wrong with that.
> >
> > It's the tunneling to a sensor daemon that is wrong. You can write a sensor
> > driver as a V4L subdevice driver and it is reusable by any 'video acquisition'
> > driver (aka V4L2 bridge/platform driver) without going through userspace and
> > requiring userspace daemons.
> >
>
> You keep saying it is wrong, but I have so far seen no technically firm
> argument. Please tell me why.
Technically both are valid approaches. But doing this in userspace just adds
extra complexity. All sensor drivers are in the kernel and we are not going
to introduce userspace sensor drivers as that leads to a maintenance disaster.
Besides, how is your sensor driver supposed to work when used in a USB webcam?
Such a USB bridge driver expects a subdevice sensor driver. Since you use a
different API the two can't communicate. Hence no code reuse.
>
> >
> >> For me, it works like a driver, plus it is way more flexible as I don't
> >> have to go through ioctls for special sensor properties.
> >
> > You still have to go through the kernel to set registers. That's just as
> > expensive as an ioctl.
> >
>
>
> Not sure if you understand: I do not have to implement or generate ioctl
> handlers and call them. This is definitely less expensive in terms of
> coding. All the register access is handled *automatically* using the HW
> description layer.
Using what? /dev/i2c-X? That's using ioctls (I2C_RDWR).
>
> >
> > Sure, no problem. It's open source after all. Just be aware that all the
> > maintenance effort is for you as long as you remain out of tree.
> >
>
> We have to do this anyhow. But we'd prefer to do it the way that
> requires least maintenance as described.
> We need to have a *solution*. Not some academic hack that is "not wrong".
>
> I think we can end the discussion here. I was hoping for more
> technically valuable input, really.
>
Well, you clearly want *your* solution. I've been working in the v4l subsystem
for many years now ensuring that we can support the widest range of very
practical and non-academic hardware, both consumer hardware and embedded system
hardware, and working together with companies like TI, Samsung, Nokia, etc.
You (or your company/organization) designed a system without, as far as I am aware,
consulting the people responsible for the relevant kernel subsystem (V4L in this
case). And now you want to get your code in with a minimum of change. Sorry, that's
not the way it works.
Regards,
Hans
* Re: v4l2 device property framework in userspace
2011-05-31 10:55 ` Hans Verkuil
@ 2011-05-31 11:33 ` Martin Strubel
2011-05-31 12:58 ` Mauro Carvalho Chehab
2011-06-06 11:00 ` Sakari Ailus
0 siblings, 2 replies; 14+ messages in thread
From: Martin Strubel @ 2011-05-31 11:33 UTC (permalink / raw)
To: Hans Verkuil; +Cc: linux-media
Hi,
>
> Not religion, it's experience. I understand what you want to do and it is
> just a bad idea in the long term. Mind you, it's great for prototyping and
> experimentation. But if you want to get stable sensor support in the kernel,
> then it has to conform to the rules. Having some sensor drivers in the kernel
> and some in userspace will be a maintenance disaster.
Sorry, from our perspective the current v4l2 system *has* already been a
maintenance disaster. No offense, but that is exactly the reason why we
had to internally circumvent it.
You're free to use the system for early prototyping stage as well as for
a stable release (the framework is in fact running since 2006 in medical
imaging devices). It certainly cost us less maintenance so far than
syncing up to the changing v4l2 APIs.
>
> Besides, how is your sensor driver supposed to work when used in a USB webcam?
> Such a USB bridge driver expects a subdevice sensor driver. Since you use a
> different API the two can't communicate. Hence no code reuse.
>
I don't see a problem there either. Because you just put your register
access code into the kernel. That's merely a matter of two functions.
The sensor daemon doesn't really care *how* you access registers.
>>
>> Not sure if you understand: I do not have to implement or generate ioctl
>> handlers and call them. This is definitely less expensive in terms of
>> coding. All the register access is handled *automatically* using the HW
>> description layer.
>
> Using what? /dev/i2c-X? That's using ioctls (I2C_RDWR).
>
Yes. But I have to write exactly two wrappers for access. Not create
tables with ioctl reqcodes.
>
> Well, you clearly want *your* solution. I've been working in the v4l subsystem
> for many years now ensuring that we can support the widest range of very
> practical and non-academic hardware, both consumer hardware and embedded system
> hardware, and working together with companies like TI, Samsung, Nokia, etc.
Nope. I want any solution that does the job for our requirements. So far
it hasn't been doing it. It's not just getting an image from a sensor
and supporting v4l2 modes, but I think I've been mentioning the dirty
stuff already.
>
> You (or your company/organization) designed a system without as far as I am aware
> consulting the people responsible for the relevant kernel subsystem (V4L in this
> case). And now you want to get your code in with a minimum of change. Sorry, that's
> not the way it works.
>
Just that you understand: I'm not wanting to get our code into
somewhere. I'd rather avoid it, one reason being lengthy discussions :-)
Bottomline again: I'm trying to find a solution to avoid bloated and
potentially unstable kernel drivers. Why do you think we (and our
customers) spent the money to develop alternative solutions?
Cheers,
- Martin
* Re: v4l2 device property framework in userspace
2011-05-31 11:33 ` Martin Strubel
@ 2011-05-31 12:58 ` Mauro Carvalho Chehab
2011-06-06 11:00 ` Sakari Ailus
1 sibling, 0 replies; 14+ messages in thread
From: Mauro Carvalho Chehab @ 2011-05-31 12:58 UTC (permalink / raw)
To: Martin Strubel; +Cc: Hans Verkuil, linux-media
Hi Martin,
On 31-05-2011 08:33, Martin Strubel wrote:
> Hi,
>
>>
>> Not religion, it's experience. I understand what you want to do and it is
>> just a bad idea in the long term. Mind you, it's great for prototyping and
>> experimentation. But if you want to get stable sensor support in the kernel,
>> then it has to conform to the rules. Having some sensor drivers in the kernel
>> and some in userspace will be a maintenance disaster.
>
> Sorry, from our perspective the current v4l2 system *has* already been a
> maintenance disaster. No offense, but that is exactly the reason why we
> had to internally circumvent it.
> You're free to use the system for early prototyping stage as well as for
> a stable release (the framework is in fact running since 2006 in medical
> imaging devices).
It seems that you're completely missing why it is good for you to upstream
a driver. First of all, by using and contributing to Linux (and other open source
projects), you end up not needing to do all the work by yourself: a team of
experienced developers from several different companies work together to bring
the best solutions to a problem. This is not academic. You'll see very
few contributions from academia in the Linux Kernel. Most of the contributions come
from people working on and solving real problems. The direction taken in Linux comes
from those experiences.
If you take a look at the discussions on the mailing list, you'll see that patches
made by one company receive lots of contributions from other companies, in order
to improve them. This ensures that such a driver will perform better, have fewer bugs
(as other pairs of eyes are looking at it) and can be used by other drivers.
Also, once a driver is merged, if someone needs to change some API, the one that
made the change should also send fixes for the drivers that use those calls. That
means that there's no maintenance effort in the long term. We have very good examples
of how this works in the V4L subsystem: you'll find drivers written in 1999, whose
original maintainers stopped working on them a long time ago, and they still work with
real hardware. Such drivers even got userspace API improvements, like being ported
from V4L1 to V4L2.
So, if you upstream your drivers, you get these benefits.
On the other hand, if you're working with out-of-tree drivers, it is a maintenance
nightmare, as we're always bringing improvements to the Linux core APIs and to the
V4L core subsystem. So, it costs a lot of time and money to keep an out-of-tree
driver working in the long term.
There's one major requirement for you to upstream your code: you need to understand
the concepts used by the subsystem you're upstreaming to and adhere to the upstream
rules.
One such rule is that a kernel driver should provide the desired functionality
without needing a userspace driver. In other words, we don't want to have a kernel
wrapper for a real driver in userspace.
> It certainly cost us less maintenance so far than
> syncing up to the changing v4l2 APIs.
You're increasing your maintenance costs by not upstreaming.
>>
>> Besides, how is your sensor driver supposed to work when used in a USB webcam?
>> Such a USB bridge driver expects a subdevice sensor driver. Since you use a
>> different API the two can't communicate. Hence no code reuse.
>>
>
> I don't see a problem there either. Because you just put your register
> access code into the kernel. That's merely a matter of two functions.
> The sensor daemon doesn't really care *how* you access registers.
The problem is that it violates the rules of the game: to share the developed code
with others. If you're using non-standard interfaces to communicate between
the sensors and the bridge driver, only you benefit from it. In the end, someone
else will write different code for that sensor, and we'll end up having several
drivers doing the same thing. By having just one driver, the TCO (Total Cost of
Ownership) decreases, as the costs of writing and maintaining such a driver decrease.
>>>
>>> Not sure if you understand: I do not have to implement or generate ioctl
>>> handlers and call them. This is definitely less expensive in terms of
>>> coding. All the register access is handled *automatically* using the HW
>>> description layer.
>>
>> Using what? /dev/i2c-X? That's using ioctls (I2C_RDWR).
>>
>
> Yes. But I have to write exactly two wrappers for access. Not create
> tables with ioctl reqcodes.
V4L2 controls also need just two ioctls for reading/writing their values.
>>
>> Well, you clearly want *your* solution. I've been working in the v4l subsystem
>> for many years now ensuring that we can support the widest range of very
>> practical and non-academic hardware, both consumer hardware and embedded system
>> hardware, and working together with companies like TI, Samsung, Nokia, etc.
>
> Nope. I want any solution that does the job for our requirements. So far
> it hasn't been doing it. It's not just getting an image from a sensor
> and supporting v4l2 modes, but I think I've been mentioning the dirty
> stuff already.
I haven't yet seen any use case where V4L2 won't fit as-is, or with a few API
additions.
>> You (or your company/organization) designed a system without as far as I am aware
>> consulting the people responsible for the relevant kernel subsystem (V4L in this
>> case). And now you want to get your code in with a minimum of change. Sorry, that's
>> not the way it works.
>>
>
> Just that you understand: I'm not wanting to get our code into
> somewhere. I'd rather avoid it, one reason being lengthy discussions :-)
It is up to you if you want to increase your costs.
> Bottomline again: I'm trying to find a solution to avoid bloated and
> potentially unstable kernel drivers. Why do you think we (and our
> customers) spent the money to develop alternative solutions?
If you're doing the driver ports by yourself, without the help of those who have
a deep understanding of how things work in the Kernel (because they
wrote the Kernel code), you'll end up having unstable kernel/userspace
drivers. Probably not a wise way to spend your money.
From the experience I have analysing thousands of patches for drivers/media
that come from all sorts of different sources, companies that don't
have much experience upstreaming their work frequently do bad things that cause
driver instability. Several of those troubles are detected during the review
process. Eventually, a few of them make it into the main trees, and the Kernel
janitors and the security teams catch them. So, in the end, the driver becomes much
more reliable than the original one.
Thanks,
Mauro.
* Re: v4l2 device property framework in userspace
2011-05-31 11:33 ` Martin Strubel
2011-05-31 12:58 ` Mauro Carvalho Chehab
@ 2011-06-06 11:00 ` Sakari Ailus
1 sibling, 0 replies; 14+ messages in thread
From: Sakari Ailus @ 2011-06-06 11:00 UTC (permalink / raw)
To: Martin Strubel; +Cc: Hans Verkuil, linux-media
On Tue, May 31, 2011 at 01:33:32PM +0200, Martin Strubel wrote:
> Hi,
Hi Martin,
> >
> > Not religion, it's experience. I understand what you want to do and it is
> > just a bad idea in the long term. Mind you, it's great for prototyping and
> > experimentation. But if you want to get stable sensor support in the kernel,
> > then it has to conform to the rules. Having some sensor drivers in the kernel
> > and some in userspace will be a maintenance disaster.
>
> Sorry, from our perspective the current v4l2 system *has* already been a
> maintenance disaster. No offense, but that is exactly the reason why we
> had to internally circumvent it.
> You're free to use the system for early prototyping stage as well as for
> a stable release (the framework is in fact running since 2006 in medical
> imaging devices). It certainly cost us less maintenance so far than
> syncing up to the changing v4l2 APIs.
The V4L2 framework supports imaging devices far better nowadays than it
used to. If you use V4L2, you may benefit from the work that
others have been doing as well. V4L2 now does much for you that previously
had to be done in a driver-specific way.
As for your original question, I've had a roughly similar problem in
the past: a sensor that is controlled with sets of register lists. See
camera-firmware here:
<URL:http://gitorious.org/omap3camera/pages/Home>
The solution for this in the future is to make V4L2 understand these
parameters, such as low level sensor control, including cropping, blanking,
scaling, skipping, binning etc. in a generic way. Some of these parameters
will be configured through format parameters, some through separate ioctls
and many of them will be V4L2 controls. The V4L2 framework needs to be
extended a little to support some of the new functionality.
This way, you can even change your sensor to another from a different vendor
using a different driver with minimal effort on user space software as long
as both of the sensor drivers are implemented as V4L2 subdev drivers as Hans
explained. This allows the user space to stay more generic.
[clip]
> > You (or your company/organization) designed a system without as far as I am aware
> > consulting the people responsible for the relevant kernel subsystem (V4L in this
> > case). And now you want to get your code in with a minimum of change. Sorry, that's
> > not the way it works.
> >
>
> Just that you understand: I'm not wanting to get our code into
> somewhere. I'd rather avoid it, one reason being lengthy discussions :-)
> Bottomline again: I'm trying to find a solution to avoid bloated and
> potentially unstable kernel drivers. Why do you think we (and our
> customers) spent the money to develop alternative solutions?
Please keep in mind that others have requirements which may differ from
yours. V4L2 is intended to serve very different ones, not just yours.
The current V4L2 framework follows more generic principles which are
known to be good in general. This allows it to be something that you can
actually build on, not merely something you need to adapt to.
Please follow the development of V4L2 and participate in it. That way your
views and requirements may be taken into account. But do remember that
others have theirs as well.
Kind regards,
--
Sakari Ailus
sakari.ailus@iki.fi
Thread overview: 14+ messages
2011-05-29 13:07 v4l2 device property framework in userspace Martin Strubel
2011-05-30 7:32 ` Hans Verkuil
2011-05-30 9:38 ` Martin Strubel
2011-05-30 10:52 ` Hans Verkuil
2011-05-30 12:07 ` Martin Strubel
2011-05-30 12:53 ` Hans Verkuil
2011-05-30 13:30 ` Martin Strubel
2011-05-31 8:01 ` Hans Verkuil
2011-05-31 8:27 ` Martin Strubel
2011-05-31 10:55 ` Hans Verkuil
2011-05-31 11:33 ` Martin Strubel
2011-05-31 12:58 ` Mauro Carvalho Chehab
2011-06-06 11:00 ` Sakari Ailus
2011-05-31 6:38 ` Kim, HeungJun