* RFC: Negotiating frame buffer size between sensor subdevs and bridge devices
@ 2011-07-28 17:04 Sylwester Nawrocki
From: Sylwester Nawrocki @ 2011-07-28 17:04 UTC (permalink / raw)
To: linux-media@vger.kernel.org
Hello,
Trying to capture images in JPEG format with a regular "image sensor ->
MIPI-CSI receiver -> host interface" hardware configuration, I've found there
is no standard way for the sensor subdev and the host driver to communicate
what exactly the required maximum buffer size to capture a frame is.
For the raw formats there is no issue, as the buffer size can easily be
determined from the pixel format and resolution (or from sizeimage set on
a video node).
Compressed data formats, however, are a bit more complicated: the required
memory buffer size depends on multiple factors, like the compression ratio,
the exact file header structure, etc.
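[Editor's note] To make the contrast concrete, here is a minimal standalone sketch; the 16bpp YUV 4:2:2 base and the 2:1 compression headroom are invented for illustration, not values from any spec:

```c
#include <assert.h>
#include <stdint.h>

/* Raw formats: the buffer size follows directly from resolution and
 * bits per pixel, so host and sensor cannot disagree about it. */
static uint32_t raw_sizeimage(uint32_t width, uint32_t height,
                              uint32_t bits_per_pixel)
{
	return width * height * bits_per_pixel / 8;
}

/* Compressed formats: the output size is unknown in advance, so the
 * host can only allocate for an assumed worst case.  The uncompressed
 * base and the 2:1 headroom below are purely illustrative. */
static uint32_t jpeg_worst_case(uint32_t width, uint32_t height)
{
	return width * height * 16 / 8 / 2;	/* 16bpp YUV 4:2:2, 2:1 */
}
```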
Often it is the sensor driver that has all the information required to
determine the size of the memory to be allocated; bridge/host devices
just do plain DMA without caring much about what is transferred. I know of
hardware which, for some pixel formats, once data capture is started,
writes to memory whatever amount of data the sensor is transmitting,
without any means to interrupt it on the host side. So it is critical
to ensure the buffer allocation is done right, according to the sensor's
requirements, to avoid a buffer overrun.
Here is a link to a somewhat related discussion I could find:
[1] http://www.mail-archive.com/linux-media@vger.kernel.org/msg27138.html
In order to let host drivers query or configure subdevs with the required
frame buffer size, one of the following changes could be made to the V4L2 API:
1. Add a 'sizeimage' field to struct v4l2_mbus_framefmt and make subdev
drivers optionally set/adjust it when setting or getting the format with
the set_fmt/get_fmt pad level ops (and s/g_mbus_fmt ?).
There could be two situations:
- the effective required frame buffer size is specified by the sensor and the
host driver relies on that value when allocating a buffer;
- the host driver forces some arbitrary buffer size and the sensor performs
whatever action is required to limit the amount of transmitted data to that
value.
Both cases could be covered similarly to how it's done with VIDIOC_S_FMT.
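[Editor's note] A standalone mock of how option 1's adjust semantics could behave; the field names mimic struct v4l2_mbus_framefmt, but the struct and function here are invented for illustration, not the mainline definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Mock of option 1: v4l2_mbus_framefmt grown by a 'sizeimage' field. */
struct mock_mbus_framefmt {
	uint32_t width;
	uint32_t height;
	uint32_t code;		/* media bus pixel code */
	uint32_t sizeimage;	/* proposed: max frame size in bytes */
};

/* Sensor-side set_fmt: like VIDIOC_S_FMT, the sensor adjusts the
 * host-requested value to what it can actually guarantee. */
static void sensor_adjust_fmt(struct mock_mbus_framefmt *fmt,
			      uint32_t sensor_max)
{
	if (fmt->sizeimage == 0 || fmt->sizeimage > sensor_max)
		fmt->sizeimage = sensor_max;	/* sensor dictates the size */
	/* else: the host forces a smaller size and the sensor must limit
	 * its output to fmt->sizeimage bytes per frame. */
}
```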
Introducing a 'sizeimage' field would make the media bus format struct look
more similar to struct v4l2_pix_format, and not quite in line with the meaning
of a media bus format, i.e. describing data on a physical bus, not in memory.
The other option I can think of is to create separate subdev video ops.
2. Add new s/g_sizeimage subdev video operations.
It would be best to make this an optional callback, though I'm not sure
whether that makes sense. It has the advantage of not polluting the user
space API. Although 'sizeimage' in user space might be useful for some
purposes, I rather tried to focus on in-kernel calls.
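[Editor's note] Option 2 could be sketched as an optional pair of ops; the op names, the mock struct, and the helper below are hypothetical, shown only to illustrate how a host might treat a subdev that does not implement the callback:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical optional ops, in the spirit of v4l2_subdev_video_ops. */
struct mock_video_ops {
	int (*g_sizeimage)(void *priv, uint32_t *size);
	int (*s_sizeimage)(void *priv, uint32_t size);
};

/* Host-side helper: a missing callback simply means the subdev has no
 * special buffer size requirement. */
static int host_query_sizeimage(const struct mock_video_ops *ops,
				void *priv, uint32_t *size)
{
	if (ops == NULL || ops->g_sizeimage == NULL)
		return -1;	/* optional op not implemented */
	return ops->g_sizeimage(priv, size);
}

/* Example sensor implementation reporting a fixed worst case. */
static int demo_g_sizeimage(void *priv, uint32_t *size)
{
	(void)priv;
	*size = 1048576;	/* assumed 1 MiB worst-case JPEG frame */
	return 0;
}
```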
Comments? Better ideas?
Thanks,
--
Sylwester Nawrocki
Samsung Poland R&D Center
* Re: RFC: Negotiating frame buffer size between sensor subdevs and bridge devices
From: Sakari Ailus @ 2011-08-16 22:25 UTC (permalink / raw)
To: Sylwester Nawrocki; +Cc: linux-media@vger.kernel.org
On Thu, Jul 28, 2011 at 07:04:11PM +0200, Sylwester Nawrocki wrote:
> Hello,
Hi Sylwester,
> Trying to capture images in JPEG format with regular "image sensor ->
> mipi-csi receiver -> host interface" H/W configuration I've found there
> is no standard way to communicate between the sensor subdev and the host
> driver what is exactly a required maximum buffer size to capture a frame.
>
> For the raw formats there is no issue as the buffer size can be easily
> determined from the pixel format and resolution (or sizeimage set on
> a video node).
> However compressed data formats are a bit more complicated, the required
> memory buffer size depends on multiple factors, like compression ratio,
> exact file header structure etc.
>
> Often it is at the sensor driver where all information required to
> determine size of the allocated memory is present. Bridge/host devices
> just do plain DMA without caring much what is transferred. I know of
> hardware which, for some pixel formats, once data capture is started,
> writes to memory whatever amount of data the sensor is transmitting,
> without any means to interrupt on the host side. So it is critical
> to assure the buffer allocation is done right, according to the sensor
> requirements, to avoid buffer overrun.
>
>
> Here is a link to somehow related discussion I could find:
> [1] http://www.mail-archive.com/linux-media@vger.kernel.org/msg27138.html
>
>
> In order to let the host drivers query or configure subdevs with required
> frame buffer size one of the following changes could be done at V4L2 API:
>
> 1. Add a 'sizeimage' field in struct v4l2_mbus_framefmt and make subdev
> drivers optionally set/adjust it when setting or getting the format with
> set_fmt/get_fmt pad level ops (and s/g_mbus_fmt ?)
> There could be two situations:
> - effective required frame buffer size is specified by the sensor and the
> host driver relies on that value when allocating a buffer;
> - the host driver forces some arbitrary buffer size and the sensor performs
> any required action to limit transmitted amount of data to that amount
> of data;
> Both cases could be covered similarly as it's done with VIDIOC_S_FMT.
>
> Introducing 'sizeimage' field is making the media bus format struct looking
> more similar to struct v4l2_pix_format and not quite in line with media bus
> format meaning, i.e. describing data on a physical bus, not in the memory.
> The other option I can think of is to create separate subdev video ops.
> 2. Add new s/g_sizeimage subdev video operations
>
> The best would be to make this an optional callback, not sure if it makes sense
> though. It has an advantage of not polluting the user space API. Although
> 'sizeimage' in user space might be useful for some purposes I rather tried to
> focus on "in-kernel" calls.
I prefer this second approach over the first one, since the maximum size of
the image in bytes really isn't a property of the bus.
How about a regular V4L2 control? You would also have minimum and maximum
values; I'm not quite sure whether this is a plus, though. :)
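[Editor's note] A minimal sketch of the control idea, reduced to the min/max range checking it would buy; the structure and error convention are invented stand-ins, not the real v4l2_ctrl framework:

```c
#include <assert.h>
#include <stdint.h>

/* Invented stand-in for a 'sizeimage' control with range checking. */
struct mock_ctrl {
	uint32_t min, max, val;
};

/* Setting the control fails outside the advertised range, so whoever
 * reads it back always sees a bounded, negotiated value. */
static int mock_ctrl_set(struct mock_ctrl *c, uint32_t v)
{
	if (v < c->min || v > c->max)
		return -1;	/* -ERANGE in a real driver */
	c->val = v;
	return 0;
}
```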
Btw. how does v4l2_mbus_framefmt suit compressed formats in general?
--
Sakari Ailus
sakari.ailus@iki.fi
* Re: RFC: Negotiating frame buffer size between sensor subdevs and bridge devices
From: Guennadi Liakhovetski @ 2011-08-17 17:43 UTC (permalink / raw)
To: Sakari Ailus; +Cc: Sylwester Nawrocki, linux-media@vger.kernel.org
On Wed, 17 Aug 2011, Sakari Ailus wrote:
> On Thu, Jul 28, 2011 at 07:04:11PM +0200, Sylwester Nawrocki wrote:
> > Hello,
>
> Hi Sylwester,
>
> > Trying to capture images in JPEG format with regular "image sensor ->
> > mipi-csi receiver -> host interface" H/W configuration I've found there
> > is no standard way to communicate between the sensor subdev and the host
> > driver what is exactly a required maximum buffer size to capture a frame.
> >
> > For the raw formats there is no issue as the buffer size can be easily
> > determined from the pixel format and resolution (or sizeimage set on
> > a video node).
> > However compressed data formats are a bit more complicated, the required
> > memory buffer size depends on multiple factors, like compression ratio,
> > exact file header structure etc.
> >
> > Often it is at the sensor driver where all information required to
> > determine size of the allocated memory is present. Bridge/host devices
> > just do plain DMA without caring much what is transferred. I know of
> > hardware which, for some pixel formats, once data capture is started,
> > writes to memory whatever amount of data the sensor is transmitting,
> > without any means to interrupt on the host side. So it is critical
> > to assure the buffer allocation is done right, according to the sensor
> > requirements, to avoid buffer overrun.
> >
> >
> > Here is a link to somehow related discussion I could find:
> > [1] http://www.mail-archive.com/linux-media@vger.kernel.org/msg27138.html
> >
> >
> > In order to let the host drivers query or configure subdevs with required
> > frame buffer size one of the following changes could be done at V4L2 API:
> >
> > 1. Add a 'sizeimage' field in struct v4l2_mbus_framefmt and make subdev
> > drivers optionally set/adjust it when setting or getting the format with
> > set_fmt/get_fmt pad level ops (and s/g_mbus_fmt ?)
> > There could be two situations:
> > - effective required frame buffer size is specified by the sensor and the
> > host driver relies on that value when allocating a buffer;
> > - the host driver forces some arbitrary buffer size and the sensor performs
> > any required action to limit transmitted amount of data to that amount
> > of data;
> > Both cases could be covered similarly as it's done with VIDIOC_S_FMT.
> >
> > Introducing 'sizeimage' field is making the media bus format struct looking
> > more similar to struct v4l2_pix_format and not quite in line with media bus
> > format meaning, i.e. describing data on a physical bus, not in the memory.
> > The other option I can think of is to create separate subdev video ops.
> > 2. Add new s/g_sizeimage subdev video operations
> >
> > The best would be to make this an optional callback, not sure if it makes sense
> > though. It has an advantage of not polluting the user space API. Although
> > 'sizeimage' in user space might be useful for some purposes I rather tried to
> > focus on "in-kernel" calls.
>
> I prefer this second approach over the first one, since the maximum size of
> the image in bytes really isn't a property of the bus.
Call that field 'framesamples' and it already fits quite well with the
notion of data on the bus rather than in memory. Wouldn't this work?
Thanks
Guennadi
>
> How about a regular V4L2 control? You would also have minimum and maximum
> values; I'm not quite sure whether this is a plus, though. :)
>
> Btw. how does v4l2_mbus_framefmt suit for compressed formats in general?
>
> --
> Sakari Ailus
> sakari.ailus@iki.fi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/
* Re: RFC: Negotiating frame buffer size between sensor subdevs and bridge devices
From: Sylwester Nawrocki @ 2011-08-17 20:22 UTC (permalink / raw)
To: Sakari Ailus; +Cc: Sylwester Nawrocki, linux-media@vger.kernel.org
On 08/17/2011 12:25 AM, Sakari Ailus wrote:
> On Thu, Jul 28, 2011 at 07:04:11PM +0200, Sylwester Nawrocki wrote:
>> Hello,
>
> Hi Sylwester,
Hi Sakari,
thanks for the comment ;)
>
>> Trying to capture images in JPEG format with regular "image sensor ->
>> mipi-csi receiver -> host interface" H/W configuration I've found there
>> is no standard way to communicate between the sensor subdev and the host
>> driver what is exactly a required maximum buffer size to capture a frame.
>>
>> For the raw formats there is no issue as the buffer size can be easily
>> determined from the pixel format and resolution (or sizeimage set on
>> a video node).
>> However compressed data formats are a bit more complicated, the required
>> memory buffer size depends on multiple factors, like compression ratio,
>> exact file header structure etc.
>>
>> Often it is at the sensor driver where all information required to
>> determine size of the allocated memory is present. Bridge/host devices
>> just do plain DMA without caring much what is transferred. I know of
>> hardware which, for some pixel formats, once data capture is started,
>> writes to memory whatever amount of data the sensor is transmitting,
>> without any means to interrupt on the host side. So it is critical
>> to assure the buffer allocation is done right, according to the sensor
>> requirements, to avoid buffer overrun.
>>
>>
>> Here is a link to somehow related discussion I could find:
>> [1] http://www.mail-archive.com/linux-media@vger.kernel.org/msg27138.html
>>
>>
>> In order to let the host drivers query or configure subdevs with required
>> frame buffer size one of the following changes could be done at V4L2 API:
>>
>> 1. Add a 'sizeimage' field in struct v4l2_mbus_framefmt and make subdev
>> drivers optionally set/adjust it when setting or getting the format with
>> set_fmt/get_fmt pad level ops (and s/g_mbus_fmt ?)
>> There could be two situations:
>> - effective required frame buffer size is specified by the sensor and the
>> host driver relies on that value when allocating a buffer;
>> - the host driver forces some arbitrary buffer size and the sensor performs
>> any required action to limit transmitted amount of data to that amount
>> of data;
>> Both cases could be covered similarly as it's done with VIDIOC_S_FMT.
>>
>> Introducing 'sizeimage' field is making the media bus format struct looking
>> more similar to struct v4l2_pix_format and not quite in line with media bus
>> format meaning, i.e. describing data on a physical bus, not in the memory.
>> The other option I can think of is to create separate subdev video ops.
>> 2. Add new s/g_sizeimage subdev video operations
>>
>> The best would be to make this an optional callback, not sure if it makes sense
>> though. It has an advantage of not polluting the user space API. Although
>> 'sizeimage' in user space might be useful for some purposes I rather tried to
>> focus on "in-kernel" calls.
>
> I prefer this second approach over the first one, since the maximum size of
> the image in bytes really isn't a property of the bus.
After thinking some more about it I came to a similar conclusion. I intended
to find a better name for the s/g_sizeimage callbacks and post a relevant
patch for consideration, although I haven't yet found time to carry on with
this.
>
> How about a regular V4L2 control? You would also have minimum and maximum
> values; I'm not quite sure whether this is a plus, though. :)
My intention was to have these calls used only internally in the kernel and
not to allow userspace to mess with them. After all, if anything had
interfered and the host driver had allocated too small a buffer, the system
would crash miserably due to a buffer overrun.
The final buffer size for a JFIF/EXIF file will depend on other factors, like
the main image resolution, the JPEG compression ratio, thumbnail inclusion and
its format/resolution, etc. I imagine we should rather be creating controls
for those parameters.
Also, the driver would most likely have to validate the buffer size during
the STREAMON call.
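[Editor's note] That STREAMON-time check could look roughly like this; a hypothetical helper, not taken from any driver:

```c
#include <assert.h>
#include <stdint.h>

/* Refuse to start streaming if any queued buffer is smaller than the
 * maximum frame size negotiated with the sensor, since the hardware
 * cannot stop the DMA mid-frame. */
static int check_buffers_at_streamon(const uint32_t *buf_sizes, int nbufs,
				     uint32_t required)
{
	int i;

	for (i = 0; i < nbufs; i++)
		if (buf_sizes[i] < required)
			return -1;	/* -EINVAL in a real driver */
	return 0;
}
```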
>
> Btw. how does v4l2_mbus_framefmt suit for compressed formats in general?
>
Well, there is really nothing particularly addressing compressed formats
in that struct. But we need to use it, as the compressed data flows through
the media bus in the same way as raw data.
It's rather hard to define the pixel codes using the existing convention, as
there is no simple relationship between the pixel data and what is transferred
on the bus.
Yet I haven't run into issues other than having no means to specify the whole
image size.
--
Regards,
Sylwester
* Re: RFC: Negotiating frame buffer size between sensor subdevs and bridge devices
From: Sakari Ailus @ 2011-08-18 20:32 UTC (permalink / raw)
To: Sylwester Nawrocki; +Cc: Sylwester Nawrocki, linux-media@vger.kernel.org
On Wed, Aug 17, 2011 at 10:22:26PM +0200, Sylwester Nawrocki wrote:
> On 08/17/2011 12:25 AM, Sakari Ailus wrote:
> > On Thu, Jul 28, 2011 at 07:04:11PM +0200, Sylwester Nawrocki wrote:
> >> Hello,
> >
> > Hi Sylwester,
>
> Hi Sakari,
>
> thanks for the comment ;)
Hi Sylwester!
You're welcome! :)
> >
> >> Trying to capture images in JPEG format with regular "image sensor ->
> >> mipi-csi receiver -> host interface" H/W configuration I've found there
> >> is no standard way to communicate between the sensor subdev and the host
> >> driver what is exactly a required maximum buffer size to capture a frame.
> >>
> >> For the raw formats there is no issue as the buffer size can be easily
> >> determined from the pixel format and resolution (or sizeimage set on
> >> a video node).
> >> However compressed data formats are a bit more complicated, the required
> >> memory buffer size depends on multiple factors, like compression ratio,
> >> exact file header structure etc.
> >>
> >> Often it is at the sensor driver where all information required to
> >> determine size of the allocated memory is present. Bridge/host devices
> >> just do plain DMA without caring much what is transferred. I know of
> >> hardware which, for some pixel formats, once data capture is started,
> >> writes to memory whatever amount of data the sensor is transmitting,
> >> without any means to interrupt on the host side. So it is critical
> >> to assure the buffer allocation is done right, according to the sensor
> >> requirements, to avoid buffer overrun.
> >>
> >>
> >> Here is a link to somehow related discussion I could find:
> >> [1] http://www.mail-archive.com/linux-media@vger.kernel.org/msg27138.html
> >>
> >>
> >> In order to let the host drivers query or configure subdevs with required
> >> frame buffer size one of the following changes could be done at V4L2 API:
> >>
> >> 1. Add a 'sizeimage' field in struct v4l2_mbus_framefmt and make subdev
> >> drivers optionally set/adjust it when setting or getting the format with
> >> set_fmt/get_fmt pad level ops (and s/g_mbus_fmt ?)
> >> There could be two situations:
> >> - effective required frame buffer size is specified by the sensor and the
> >> host driver relies on that value when allocating a buffer;
> >> - the host driver forces some arbitrary buffer size and the sensor performs
> >> any required action to limit transmitted amount of data to that amount
> >> of data;
> >> Both cases could be covered similarly as it's done with VIDIOC_S_FMT.
> >>
> >> Introducing 'sizeimage' field is making the media bus format struct looking
> >> more similar to struct v4l2_pix_format and not quite in line with media bus
> >> format meaning, i.e. describing data on a physical bus, not in the memory.
> >> The other option I can think of is to create separate subdev video ops.
> >> 2. Add new s/g_sizeimage subdev video operations
> >>
> >> The best would be to make this an optional callback, not sure if it makes sense
> >> though. It has an advantage of not polluting the user space API. Although
> >> 'sizeimage' in user space might be useful for some purposes I rather tried to
> >> focus on "in-kernel" calls.
> >
> > I prefer this second approach over the first one, since the maximum size of
> > the image in bytes really isn't a property of the bus.
>
> After thinking some more about it I came to similar conclusion. I intended to
> find some better name for s/g_sizeimage callbacks and post relevant patch
> for consideration.
> Although I haven't yet found some time to carry on with this.
That sounds like a possible solution to me as well. The upside would be that
v4l2_mbus_framefmt would be left to describe relatively low-level bus and
format properties.
That said, I'm no longer quite certain it should not be part of that
structure. Is the size always the same, or is this a maximum?
> > How about a regular V4L2 control? You would also have minimum and maximum
> > values; I'm not quite sure whether this is a plus, though. :)
>
> My intention was to have these calls used only internally in the kernel and
> not to allow userspace to mess with them. After all, if anything had
> interfered and the host driver had allocated too small a buffer, the system
> would crash miserably due to a buffer overrun.
User space wouldn't be allowed to do anything like that. E.g. the control
would become read-only during streaming, and the bridge driver would need to
check its value against the sizes of the video buffers. Although this might
not be relevant at all if there are no direct ways to affect the maximum size
of the resulting image.
> The final buffer size for a JFIF/EXIF file will depend on other factors, like
> main image resolution, JPEG compression ratio, the thumbnail inclusion and its
> format/resolution, etc. I imagine we should be rather creating controls
> for those parameters.
>
> Also the driver would most likely have to validate the buffer size during
> STREAMON call.
>
> >
> > Btw. how does v4l2_mbus_framefmt suit for compressed formats in general?
> >
>
> Well, there is really nothing particularly addressing the compressed formats
> in that struct. But we need to use it as the compressed data flows through
> the media bus in same way as the raw data.
> It's rather hard to define the pixel codes using existing convention as there
> is no simple relationship between the pixel data and what is transferred on
> the bus.
> Yet I haven't run into issues other than no means to specify the whole image
> size.
I've never dealt with compressed image formats in drivers in general, but I'd
suppose it might require taking this into account in the CSI-2 or the
parallel bus receivers.
How does this work in your case?
Is the image size actually used in programming the CSI-2 receiver? What
about the width and the height?
Cheers,
--
Sakari Ailus
sakari.ailus@iki.fi
* RE: RFC: Negotiating frame buffer size between sensor subdevs and bridge devices
From: Sylwester Nawrocki @ 2011-08-19 13:13 UTC (permalink / raw)
To: 'Sakari Ailus'; +Cc: linux-media
Hi Sakari,
On 08/18/2011 10:32 PM, Sakari Ailus wrote:
>>>> In order to let the host drivers query or configure subdevs with
>>>> required frame buffer size one of the following changes could be done
>>>> at V4L2 API:
>>>>
>>>> 1. Add a 'sizeimage' field in struct v4l2_mbus_framefmt and make subdev
>>>> drivers optionally set/adjust it when setting or getting the format
>>>> with set_fmt/get_fmt pad level ops (and s/g_mbus_fmt ?)
>>>> There could be two situations:
>>>> - effective required frame buffer size is specified by the sensor
>>>> and the host driver relies on that value when allocating a buffer;
>>>> - the host driver forces some arbitrary buffer size and the sensor
>>>> performs any required action to limit transmitted amount of data to
>>>> that amount of data;
>>>> Both cases could be covered similarly as it's done with VIDIOC_S_FMT.
>>>>
>>>> Introducing 'sizeimage' field is making the media bus format struct
>>>> looking more similar to struct v4l2_pix_format and not quite in line
>>>> with media bus format meaning, i.e. describing data on a physical
>>>> bus, not in the memory.
>>>> The other option I can think of is to create separate subdev video ops.
>>>> 2. Add new s/g_sizeimage subdev video operations
>>>>
>>>> The best would be to make this an optional callback, not sure if it
>>>> makes sense though. It has an advantage of not polluting the user
>>>> space API. Although 'sizeimage' in user space might be useful for
>>>> some purposes I rather tried to focus on "in-kernel" calls.
>>>
>>> I prefer this second approach over the first one since the maximum
>>> size of the image in bytes really isn't a property of the bus.
>>
>> After thinking some more about it I came to similar conclusion. I
>> intended to find some better name for s/g_sizeimage callbacks and
>> post relevant patch for consideration.
>> Although I haven't yet found some time to carry on with this.
>
> That sounds like a possible solution to me as well. The upside would be
> that v4l2_mbus_framefmt would be left to describe relatively low-level
> bus and format properties.
>
> That said, I'm no longer quite certain it should not be part of that
> structure. Is the size always the same, or is this a maximum?
The output size is not known in advance, due to the nature of compressed
formats, as you may know. So it's actually the maximum data size that we
need to agree on along the pipeline.
>
>>> How about a regular V4L2 control? You would also have minimum and
>>> maximum values; I'm not quite sure whether this is a plus, though.
>>> :)
>>
>> My intention was to have these calls used only internally in the
>> kernel and not to allow userspace to mess with them. After all, if
>> anything had interfered and the host driver had allocated too small
>> a buffer, the system would crash miserably due to a buffer overrun.
>
> The user space wouldn't be allowed to do anything like that. E.g. the
> control would become read-only during streaming and the bridge driver
> would need to check its value against the sizes of the video buffers.
> Although this might not be relevant at all if there are no direct ways
> to affect the maximum size of the resulting image.
Ok, makes sense. AFAIK, in most cases you will not be able to force an
exact size of the resulting image. The application may apply some high
threshold on the sensor (firmware), which could then be taken into account
when applying other controls.
Nevertheless, we need a consistent 'sizeimage' along the pipeline.
That said, if we used a control for that, it would be mostly GET on the
transmitter (sensor) and SET on the receiver, AFAIU.
But I would prefer to see it as a part of struct v4l2_mbus_framefmt.
>
>> The final buffer size for a JFIF/EXIF file will depend on other
>> factors, like main image resolution, JPEG compression ratio, the
>> thumbnail inclusion and its format/resolution, etc. I imagine we
>> should be rather creating controls for those parameters.
>>
>> Also the driver would most likely have to validate the buffer size
>> during STREAMON call.
>>
>>>
>>> Btw. how does v4l2_mbus_framefmt suit for compressed formats in general?
>>>
>>
>> Well, there is really nothing particularly addressing the compressed
>> formats in that struct. But we need to use it as the compressed data
>> flows through the media bus in same way as the raw data.
>> It's rather hard to define the pixel codes using existing convention
>> as there is no simple relationship between the pixel data and what is
>> transferred on the bus.
>> Yet I haven't run into issues other than no means to specify the
>> whole image size.
>
> I've never dealt with compressed image formats in drivers in general
> but I'd suppose it might require taking this into account in the CSI-2
> or the parallel bus receivers.
I guess it matters when the MIPI-CSI receiver driver independently allocates
the buffers to store the received data in. In that case we have to
pre-program the MIPI-CSI receiver subdev with the maximum required buffer
size. Thus it seems the 'sizeimage' must be passed on to user space...
What do you think?
Apart from the pixel resolution and format, the buffer size might depend on
other controls related to the compression process, so it might not be known
exactly at the time of the set_fmt call.
> How does this work in your case?
>
> Is the image size actually used in programming the CSI-2 receiver?
> What about the width and the height?
With the hardware I used to work with - no. The pixel width and height
only matter for the raw formats. But the MIPI-CSI slave device in
Samsung's SoCs is only a front-end to the capture engine, i.e. they're
hard-wired to each other.
For the User Defined 1 format, which I have been using for JPEG capture,
once streaming is started, any amount of data transmitted by the MIPI-CSI2
transmitter (the sensor) will be pushed into memory.
There is no means to pre-program an interrupt trigger after an arbitrary
amount of data has been received.
The CSI-2 standard does not define an exact frame structure for those
arbitrary byte-based formats. There are some more frame structure details
in the CSI-2 standard for the JPEG8 format, but I haven't seen any
hardware supporting that format yet.
Regards,
--
Sylwester Nawrocki
Samsung Poland R&D Center
* Re: RFC: Negotiating frame buffer size between sensor subdevs and bridge devices
From: Sylwester Nawrocki @ 2011-08-20 21:10 UTC (permalink / raw)
To: Guennadi Liakhovetski
Cc: Sakari Ailus, Sylwester Nawrocki, linux-media@vger.kernel.org, Hans Verkuil
On 08/17/2011 07:43 PM, Guennadi Liakhovetski wrote:
> On Wed, 17 Aug 2011, Sakari Ailus wrote:
>> On Thu, Jul 28, 2011 at 07:04:11PM +0200, Sylwester Nawrocki wrote:
...
>>> In order to let the host drivers query or configure subdevs with required
>>> frame buffer size one of the following changes could be done at V4L2 API:
>>>
>>> 1. Add a 'sizeimage' field in struct v4l2_mbus_framefmt and make subdev
>>> drivers optionally set/adjust it when setting or getting the format with
>>> set_fmt/get_fmt pad level ops (and s/g_mbus_fmt ?)
>>> There could be two situations:
>>> - effective required frame buffer size is specified by the sensor and the
>>> host driver relies on that value when allocating a buffer;
>>> - the host driver forces some arbitrary buffer size and the sensor performs
>>> any required action to limit transmitted amount of data to that amount
>>> of data;
>>> Both cases could be covered similarly as it's done with VIDIOC_S_FMT.
>>>
>>> Introducing 'sizeimage' field is making the media bus format struct looking
>>> more similar to struct v4l2_pix_format and not quite in line with media bus
>>> format meaning, i.e. describing data on a physical bus, not in the memory.
>>> The other option I can think of is to create separate subdev video ops.
>>> 2. Add new s/g_sizeimage subdev video operations
...
>> I prefer this second approach over the first one since the maximum size of
>> the image in bytes really isn't a property of the bus.
>
> Call that field 'framesamples' and it already fits quite well with the
> notion of data on the bus rather than in memory. Wouldn't this work?
Hmm... that might be exactly what we need.
That was also Hans' initial proposal when we recently discussed this.
At least such information should be sufficient for handling JPEG, where
the effective buffer size might be calculated from the media bus pixel code
and the number of samples per frame.
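[Editor's note] With a 'framesamples' field, the in-memory size would fall out of bus-level quantities alone; a minimal sketch, where the per-sample bit width (implied by the pixel code) is an assumption:

```c
#include <assert.h>
#include <stdint.h>

/* Derive the frame buffer size from the per-frame sample count carried
 * in the media bus format and the sample width implied by the media
 * bus pixel code.  Both inputs describe the bus, not memory. */
static uint32_t sizeimage_from_framesamples(uint32_t framesamples,
					    uint32_t bits_per_sample)
{
	return framesamples * (bits_per_sample / 8);
}
```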
--
Regards,
Sylwester