* [PATCH RFC] V4L: Add s_rx_buffer subdev video operation
@ 2012-09-24 13:26 Sylwester Nawrocki
2012-09-24 13:44 ` Sakari Ailus
From: Sylwester Nawrocki @ 2012-09-24 13:26 UTC (permalink / raw)
To: linux-media
Cc: a.hajda, sakari.ailus, laurent.pinchart, hverkuil, kyungmin.park,
sw0312.kim, Sylwester Nawrocki
The s_rx_buffer callback allows the host to set a buffer for non-image
(meta) data at a subdev. This callback can be implemented by an image
sensor or a MIPI-CSI receiver, allowing the host to retrieve the
frame-embedded data from a subdev.
Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
---
include/media/v4l2-subdev.h | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/include/media/v4l2-subdev.h b/include/media/v4l2-subdev.h
index 22ab09e..28067ed 100644
--- a/include/media/v4l2-subdev.h
+++ b/include/media/v4l2-subdev.h
@@ -274,6 +274,10 @@ struct v4l2_subdev_audio_ops {
s_mbus_config: set a certain mediabus configuration. This operation is added
for compatibility with soc-camera drivers and should not be used by new
software.
+
+ s_rx_buffer: set a host allocated memory buffer for the subdev. The subdev
+ can adjust @size to a lower value and must not write more data to the
+ buffer starting at @data than the original value of @size.
*/
struct v4l2_subdev_video_ops {
int (*s_routing)(struct v4l2_subdev *sd, u32 input, u32 output, u32 config);
@@ -327,6 +331,8 @@ struct v4l2_subdev_video_ops {
struct v4l2_mbus_config *cfg);
int (*s_mbus_config)(struct v4l2_subdev *sd,
const struct v4l2_mbus_config *cfg);
+ int (*s_rx_buffer)(struct v4l2_subdev *sd, void *buf,
+ unsigned int *size);
};
/*
--
1.7.11.3
* Re: [PATCH RFC] V4L: Add s_rx_buffer subdev video operation
2012-09-24 13:26 [PATCH RFC] V4L: Add s_rx_buffer subdev video operation Sylwester Nawrocki
@ 2012-09-24 13:44 ` Sakari Ailus
2012-09-24 13:58 ` Laurent Pinchart
2012-09-24 16:51 ` Sylwester Nawrocki
From: Sakari Ailus @ 2012-09-24 13:44 UTC (permalink / raw)
To: Sylwester Nawrocki
Cc: linux-media, a.hajda, laurent.pinchart, hverkuil, kyungmin.park,
sw0312.kim
Hi Sylwester,
On Mon, Sep 24, 2012 at 03:26:53PM +0200, Sylwester Nawrocki wrote:
> The s_rx_buffer callback allows the host to set a buffer for non-image
> (meta) data at a subdev. This callback can be implemented by an image
> sensor or a MIPI-CSI receiver, allowing the host to retrieve the
> frame-embedded data from a subdev.
>
> Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
> ---
> include/media/v4l2-subdev.h | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/include/media/v4l2-subdev.h b/include/media/v4l2-subdev.h
> index 22ab09e..28067ed 100644
> --- a/include/media/v4l2-subdev.h
> +++ b/include/media/v4l2-subdev.h
> @@ -274,6 +274,10 @@ struct v4l2_subdev_audio_ops {
> s_mbus_config: set a certain mediabus configuration. This operation is added
> for compatibility with soc-camera drivers and should not be used by new
> software.
> +
> + s_rx_buffer: set a host allocated memory buffer for the subdev. The subdev
> + can adjust @size to a lower value and must not write more data to the
> + buffer starting at @data than the original value of @size.
> */
> struct v4l2_subdev_video_ops {
> int (*s_routing)(struct v4l2_subdev *sd, u32 input, u32 output, u32 config);
> @@ -327,6 +331,8 @@ struct v4l2_subdev_video_ops {
> struct v4l2_mbus_config *cfg);
> int (*s_mbus_config)(struct v4l2_subdev *sd,
> const struct v4l2_mbus_config *cfg);
> + int (*s_rx_buffer)(struct v4l2_subdev *sd, void *buf,
> + unsigned int *size);
> };
>
> /*
How about using a separate video buffer queue for the purpose? That would
provide a nice way to pass it to the user space where it's needed. It'd also
play nicely together with the frame layout descriptors.
Kind regards,
--
Sakari Ailus
e-mail: sakari.ailus@iki.fi XMPP: sailus@retiisi.org.uk
* Re: [PATCH RFC] V4L: Add s_rx_buffer subdev video operation
2012-09-24 13:44 ` Sakari Ailus
@ 2012-09-24 13:58 ` Laurent Pinchart
2012-09-24 17:05 ` Sylwester Nawrocki
2012-09-24 16:51 ` Sylwester Nawrocki
From: Laurent Pinchart @ 2012-09-24 13:58 UTC (permalink / raw)
To: Sakari Ailus
Cc: Sylwester Nawrocki, linux-media, a.hajda, hverkuil, kyungmin.park,
sw0312.kim
On Monday 24 September 2012 16:44:54 Sakari Ailus wrote:
> On Mon, Sep 24, 2012 at 03:26:53PM +0200, Sylwester Nawrocki wrote:
> > The s_rx_buffer callback allows the host to set a buffer for non-image
> > (meta) data at a subdev. This callback can be implemented by an image
> > sensor or a MIPI-CSI receiver, allowing the host to retrieve the
> > frame-embedded data from a subdev.
> >
> > Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
> > Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
> > ---
> >
> > include/media/v4l2-subdev.h | 6 ++++++
> > 1 file changed, 6 insertions(+)
> >
> > diff --git a/include/media/v4l2-subdev.h b/include/media/v4l2-subdev.h
> > index 22ab09e..28067ed 100644
> > --- a/include/media/v4l2-subdev.h
> > +++ b/include/media/v4l2-subdev.h
> > @@ -274,6 +274,10 @@ struct v4l2_subdev_audio_ops {
> >
> > s_mbus_config: set a certain mediabus configuration. This operation is added
> > for compatibility with soc-camera drivers and should not be used by new
> > software.
> > +
> > + s_rx_buffer: set a host allocated memory buffer for the subdev. The subdev
> > + can adjust @size to a lower value and must not write more data to the
> > + buffer starting at @data than the original value of @size.
> > */
> > struct v4l2_subdev_video_ops {
> > int (*s_routing)(struct v4l2_subdev *sd, u32 input, u32 output, u32 config);
> > @@ -327,6 +331,8 @@ struct v4l2_subdev_video_ops {
> > struct v4l2_mbus_config *cfg);
> > int (*s_mbus_config)(struct v4l2_subdev *sd,
> > const struct v4l2_mbus_config *cfg);
> > + int (*s_rx_buffer)(struct v4l2_subdev *sd, void *buf,
> > + unsigned int *size);
> > };
> >
> > /*
>
> How about using a separate video buffer queue for the purpose? That would
> provide a nice way to pass it to the user space where it's needed. It'd also
> play nicely together with the frame layout descriptors.
Besides, a void *buf wouldn't support DMA. Only subdevs that use PIO to
transfer metadata could be supported by this.
--
Regards,
Laurent Pinchart
* Re: [PATCH RFC] V4L: Add s_rx_buffer subdev video operation
2012-09-24 13:44 ` Sakari Ailus
2012-09-24 13:58 ` Laurent Pinchart
@ 2012-09-24 16:51 ` Sylwester Nawrocki
2012-09-24 18:26 ` Sakari Ailus
From: Sylwester Nawrocki @ 2012-09-24 16:51 UTC (permalink / raw)
To: Sakari Ailus
Cc: linux-media, a.hajda, laurent.pinchart, hverkuil, kyungmin.park,
sw0312.kim
Hi Sakari,
On 09/24/2012 03:44 PM, Sakari Ailus wrote:
> How about using a separate video buffer queue for the purpose? That would
> provide a nice way to pass it to the user space where it's needed. It'd also
> play nicely together with the frame layout descriptors.
It's tempting, but doing frame synchronisation in user space in this case
would have been painful, if at all possible in a reliable manner. It would
have significantly complicated applications and the drivers.
VIDIOC_STREAMON and VIDIOC_QBUF/DQBUF calls would have to be at least roughly
synchronized, and applications would have to know somehow which video nodes
need to be opened together. I guess things like that could be abstracted
in a library, but what do we really gain for such effort?
And now I can just ask the kernel for 2-planar buffers where everything is in
place..
Regards,
Sylwester
* Re: [PATCH RFC] V4L: Add s_rx_buffer subdev video operation
2012-09-24 13:58 ` Laurent Pinchart
@ 2012-09-24 17:05 ` Sylwester Nawrocki
From: Sylwester Nawrocki @ 2012-09-24 17:05 UTC (permalink / raw)
To: Laurent Pinchart
Cc: Sakari Ailus, linux-media, a.hajda, hverkuil, kyungmin.park,
sw0312.kim
Hi Laurent,
On 09/24/2012 03:58 PM, Laurent Pinchart wrote:
>> How about using a separate video buffer queue for the purpose? That would
>> provide a nice way to pass it to the user space where it's needed. It'd also
>> play nicely together with the frame layout descriptors.
>
> Beside, a void *buf wouldn't support DMA. Only subdevs that use PIO to
> transfer meta data could be supported by this.
I guess most MIPI-CSI2 receivers out there support capturing distinct
Data Types into separate DMA buffers. But not this one.
In case of a multi-context DMA engine I guess the MIPI-CSI2 receiver driver
would expose a video node anyway and wouldn't need such a callback
at all?
Perhaps using struct v4l2_subdev_core_ops::ioctl would be more
appropriate here?
--
Regards,
Sylwester
* Re: [PATCH RFC] V4L: Add s_rx_buffer subdev video operation
2012-09-24 16:51 ` Sylwester Nawrocki
@ 2012-09-24 18:26 ` Sakari Ailus
2012-09-25 9:48 ` Sylwester Nawrocki
From: Sakari Ailus @ 2012-09-24 18:26 UTC (permalink / raw)
To: Sylwester Nawrocki
Cc: linux-media, a.hajda, laurent.pinchart, hverkuil, kyungmin.park,
sw0312.kim
Hi Sylwester,
On Mon, Sep 24, 2012 at 06:51:41PM +0200, Sylwester Nawrocki wrote:
> Hi Sakari,
>
> On 09/24/2012 03:44 PM, Sakari Ailus wrote:
> > How about using a separate video buffer queue for the purpose? That would
> > provide a nice way to pass it to the user space where it's needed. It'd also
> > play nicely together with the frame layout descriptors.
>
> It's tempting, but doing frame synchronisation in user space in this case
> would have been painful, if at all possible in a reliable manner. It would
> have significantly complicated applications and the drivers.
Let's face it: applications that are interested in this information have to
do exactly the same frame number matching with the statistics buffers. Just
stitching the data to the same video buffer isn't a generic solution.
> VIDIOC_STREAMON and VIDIOC_QBUF/DQBUF calls would have to be at least roughly
> synchronized, and applications would have to know somehow which video nodes
> need to be opened together. I guess things like that could be abstracted
> in a library, but what do we really gain for such effort?
> And now I can just ask the kernel for 2-planar buffers where everything is in
> place..
That's equally good --- some hardware can only do that after all, but do you
need the callback in that case, if there's a single destination buffer
anyway? Wouldn't the frame layout descriptor have enough information to do
this?
Kind regards,
--
Sakari Ailus
e-mail: sakari.ailus@iki.fi XMPP: sailus@retiisi.org.uk
* Re: [PATCH RFC] V4L: Add s_rx_buffer subdev video operation
2012-09-24 18:26 ` Sakari Ailus
@ 2012-09-25 9:48 ` Sylwester Nawrocki
2012-09-26 20:49 ` Sakari Ailus
From: Sylwester Nawrocki @ 2012-09-25 9:48 UTC (permalink / raw)
To: Sakari Ailus
Cc: linux-media, a.hajda, laurent.pinchart, hverkuil, kyungmin.park,
sw0312.kim
Hi Sakari,
On 09/24/2012 08:26 PM, Sakari Ailus wrote:
> On Mon, Sep 24, 2012 at 06:51:41PM +0200, Sylwester Nawrocki wrote:
>> On 09/24/2012 03:44 PM, Sakari Ailus wrote:
>>> How about using a separate video buffer queue for the purpose? That would
>>> provide a nice way to pass it to the user space where it's needed. It'd also
>>> play nicely together with the frame layout descriptors.
>>
>> It's tempting, but doing frame synchronisation in user space in this case
>> would have been painful, if at all possible in a reliable manner. It would
>> have significantly complicated applications and the drivers.
>
> Let's face it: applications that are interested in this information have to
> do exactly the same frame number matching with the statistics buffers. Just
> stitching the data to the same video buffer isn't a generic solution.
Let me list the disadvantages of using a separate buffer queue:
1. significant complication of the driver:
- need to add video node support with all its memory and file ops,
- more complicated VIDIOC_STREAMON logic, the MIPI CSI receiver needs to
care about the data pipeline details (power, streaming,..);
2. more processing overhead due to handling a second /dev/video node;
3. much more complex device handling in user space.
All this for virtually nothing but the 2 x 4-byte integers we are interested
in within the Embedded Data stream.
And the advantages:
1. A more generic solution, no need to invent new fourccs for standard image
data formats with metadata (a new fourcc is needed anyway for the device-
specific image data, e.g. JPEG/YUV/.../YUV/JPEG/meta-data), and we can choose
to use multi-planar only for non-standard formats and a separate meta-data
buffer queue for the others;
2. Probably other host IF/ISP drivers would implement it this way, or would
they?
3. What else could be added?
Currently I don't see justification for using a separate video node as the
embedded frame data grabber. I don't expect it to be useful for us in
future, not for the ISPs that process sensor data separately from the host
CPUs. Moreover, this MIPI-CSIS device has a maximum buffer size of only 4 KiB,
which effectively limits its possible use cases.
I don't think there is a need to force host drivers to use either separate
buffer queues or the multi-planar API, especially in the case of non-standard
hybrid data formats. I'm ready to discuss the separate buffer queue approach
if we have a real use case for it. I don't think these two methods are
exclusive. Until then I would prefer not to live with an awkward solution.
>> VIDIOC_STREAMON and VIDIOC_QBUF/DQBUF calls would have to be at least roughly
>> synchronized, and applications would have to know somehow which video nodes
>> need to be opened together. I guess things like that could be abstracted
>> in a library, but what do we really gain for such effort?
>> And now I can just ask the kernel for 2-planar buffers where everything is in
>> place..
>
> That's equally good --- some hardware can only do that after all, but do you
> need the callback in that case, if there's a single destination buffer
> anyway? Wouldn't the frame layout descriptor have enough information to do
> this?
There are as many buffers as the user requested with REQBUFS. In the VSYNC
interrupt of one device a buffer is configured for the other device. With each
frame interrupt a different buffer is used, the one that the DMA engine
actually writes data to. Data copying happens from the MIPI-CSIS internal
ioremapped buffer to a buffer owned by the host interface driver. And the
callback is used for dynamically switching buffers at the subdev.
Regards,
--
Sylwester Nawrocki
Samsung Poland R&D Center
* Re: [PATCH RFC] V4L: Add s_rx_buffer subdev video operation
2012-09-25 9:48 ` Sylwester Nawrocki
@ 2012-09-26 20:49 ` Sakari Ailus
From: Sakari Ailus @ 2012-09-26 20:49 UTC (permalink / raw)
To: Sylwester Nawrocki
Cc: linux-media, a.hajda, laurent.pinchart, hverkuil, kyungmin.park,
sw0312.kim
Hi Sylwester,
Sylwester Nawrocki wrote:
> On 09/24/2012 08:26 PM, Sakari Ailus wrote:
>> On Mon, Sep 24, 2012 at 06:51:41PM +0200, Sylwester Nawrocki wrote:
>>> On 09/24/2012 03:44 PM, Sakari Ailus wrote:
>>>> How about using a separate video buffer queue for the purpose? That would
>>>> provide a nice way to pass it to the user space where it's needed. It'd also
>>>> play nicely together with the frame layout descriptors.
>>>
>>> It's tempting, but doing frame synchronisation in user space in this case
>>> would have been painful, if at all possible in a reliable manner. It would
>>> have significantly complicated applications and the drivers.
>>
>> Let's face it: applications that are interested in this information have to
>> do exactly the same frame number matching with the statistics buffers. Just
>> stitching the data to the same video buffer isn't a generic solution.
>
> Let me list disadvantages of using separate buffer queue:
>
> 1. significant complication of the driver:
> - need to add video node support with all it's memory and file ops,
That's not much code since the driver already does it once.
> - more complicated VIDIOC_STREAMON logic, MIPI CSI receiver needs to
> care about the data pipeline details (power, streaming,..);
The power management complication is almost non-existent; streaming likely
requires ensuring that the STREAMON ioctl has been issued on both nodes
before streaming is really started.
> 2. more processing overhead due second /dev/video handling;
True.
> 3. much more complex device handling in user space.
Somewhat, yes.
> All this for virtually nothing but the 2 x 4-byte integers we are interested
> in within the Embedded Data stream.
> And the advantages:
> 1. A more generic solution, no need to invent new fourccs for standard image
> data formats with metadata (a new fourcc is needed anyway for the device-
> specific image data, e.g. JPEG/YUV/.../YUV/JPEG/meta-data), and we can choose
> to use multi-planar only for non-standard formats and a separate meta-data
> buffer queue for the others;
> 2. Probably other host IF/ISP drivers would implement it this way, or would
> they?
> 3. What else could be added?
One more advantage is that you'll get the metadata to user space immediately
after it's been written to system memory; no need to wait for the rest of
the frame. That may be interesting sometimes.
> Currently I don't see justification for using a separate video node as the
> embedded frame data grabber. I don't expect it to be useful for us in
> future, not for the ISPs that process sensor data separately from the host
> CPUs. Moreover, this MIPI-CSIS device has a maximum buffer size of only 4 KiB,
> which effectively limits its possible use cases.
That's your hardware. Most CSI-2 receivers I've seen either write the
metadata to the same buffer or to a different memory area equivalent to
the video buffer where the image is stored.
> I don't think there is a need to force host drivers to use either separate
> buffer queues or the multi-planar API, especially in the case of non-standard
> hybrid data formats. I'm ready to discuss the separate buffer queue approach
> if we have a real use case for it. I don't think these two methods are exclusive.
I agree with that. It should be made possible for the user to decide
which one to use. The hardware may also limit the possibilities, since if it
just recognises a single buffer, there's not much that the software can
do in that case.
Multi-plane buffers are the only option in that case, I guess.
> Until then I would prefer not to live with an awkward solution.
I think it would be good to be able to have plane-specific formats on
multi-plane buffers. Thus we wouldn't end up having a new pixel format
out of every metadata / image format pair. And there will be many, many
of those and the applications certainly don't want to care about the
combinations themselves.
>>> VIDIOC_STREAMON and VIDIOC_QBUF/DQBUF calls would have to be at least roughly
>>> synchronized, and applications would have to know somehow which video nodes
>>> need to be opened together. I guess things like that could be abstracted
>>> in a library, but what do we really gain for such effort?
>>> And now I can just ask the kernel for 2-planar buffers where everything is in
>>> place..
>>
>> That's equally good --- some hardware can only do that after all, but do you
>> need the callback in that case, if there's a single destination buffer
>> anyway? Wouldn't the frame layout descriptor have enough information to do
>> this?
>
> There are as many buffers as the user requested with REQBUFS. In the VSYNC interrupt
I meant separately allocated and mapped memory areas related to a single
frame.
> of one device a buffer is configured for the other device. With each
> frame interrupt a different buffer is used, the one that the DMA engine
> actually writes data to. Data copying happens from the MIPI-CSIS internal
> ioremapped buffer to a buffer owned by the host interface driver. And the
> callback is used for dynamically switching buffers at the subdev.
So... your CSI-2 receiver has got a small internal memory where the
metadata can be written? That's certainly a novel solution. :-)
I still don't quite understand the need for the callback. First of all,
did I understand correctly that a driver for different hardware than
the one in whose memory the metadata actually resides would
copy the contents of this memory to a multi-plane video buffer that it
eventually passes to user space?
Kind regards,
--
Sakari Ailus
sakari.ailus@iki.fi