public inbox for linux-media@vger.kernel.org
* [Q] Interleaved formats on the media bus
@ 2012-01-31 11:23 Sylwester Nawrocki
  2012-02-01  1:44 ` Guennadi Liakhovetski
  2012-02-01 10:00 ` Sakari Ailus
  0 siblings, 2 replies; 30+ messages in thread
From: Sylwester Nawrocki @ 2012-01-31 11:23 UTC (permalink / raw)
  To: linux-media@vger.kernel.org
  Cc: Guennadi Liakhovetski, Sakari Ailus, Laurent Pinchart,
	HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

Hi all,

Some camera sensors generate data formats that cannot be described using
the current media bus pixel code naming convention.

For instance, interleaved JPEG data and raw VYUY. Moreover, interleaving
is rather vendor specific; IOW, I imagine there may be many ways to design
the interleaving algorithm.

I'm wondering how to handle this. For sure such an image format will need
a new vendor-specific fourcc. Should we also have a vendor-specific media bus code ?

I would like to avoid vendor specific media bus codes as much as possible.
For instance defining something like

V4L2_MBUS_FMT_VYUY_JPEG_1X8

for interleaved VYUY and JPEG data might do, except it doesn't tell anything
about how the data is interleaved.

So maybe we could add some code describing interleaving (xxxx)

V4L2_MBUS_FMT_xxxx_VYUY_JPEG_1X8

or just the sensor name instead ?
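For the fourcc side, here is a quick userspace sketch of how such a vendor-specific code could be defined with the standard v4l2_fourcc() packing from videodev2.h. The format name, the four characters and the media bus numeric value below are all made up for illustration:

```c
#include <stdint.h>

/* Standard V4L2 fourcc packing, as in linux/videodev2.h. */
#define v4l2_fourcc(a, b, c, d) \
	((uint32_t)(a) | ((uint32_t)(b) << 8) | \
	 ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

/* Hypothetical vendor-specific fourcc for interleaved VYUY + JPEG data. */
#define V4L2_PIX_FMT_VYUY_JPEG_ILV v4l2_fourcc('V', 'J', 'I', 'L')

/* Hypothetical media bus code; the numeric value is invented here. */
#define V4L2_MBUS_FMT_VYUY_JPEG_1X8 0x5001
```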

Thoughts ?


Regards,
-- 
Sylwester Nawrocki
Samsung Poland R&D Center

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [Q] Interleaved formats on the media bus
  2012-01-31 11:23 [Q] Interleaved formats on the media bus Sylwester Nawrocki
@ 2012-02-01  1:44 ` Guennadi Liakhovetski
  2012-02-01 10:44   ` Sylwester Nawrocki
  2012-02-01 10:00 ` Sakari Ailus
  1 sibling, 1 reply; 30+ messages in thread
From: Guennadi Liakhovetski @ 2012-02-01  1:44 UTC (permalink / raw)
  To: Sylwester Nawrocki
  Cc: linux-media@vger.kernel.org, Sakari Ailus, Laurent Pinchart,
	HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

Hi Sylwester

On Tue, 31 Jan 2012, Sylwester Nawrocki wrote:

> Hi all,
> 
> Some camera sensors generate data formats that cannot be described using
> the current media bus pixel code naming convention.
> For instance, interleaved JPEG data and raw VYUY. Moreover, interleaving
> is rather vendor specific; IOW, I imagine there may be many ways to design
> the interleaving algorithm.
> 
> I'm wondering how to handle this. For sure such an image format will need
> a new vendor-specific fourcc. Should we also have a vendor-specific media bus code ?
> 
> I would like to avoid vendor specific media bus codes as much as possible.
> For instance defining something like
> 
> V4L2_MBUS_FMT_VYUY_JPEG_1X8

Hmm... Are such sensors not sending this data over something like CSI-2 
with different channel IDs? In which case we just deal with two formats 
cleanly.

Otherwise - I'm a bit sceptical about defining a new format for each pair
of existing codes. Maybe we should rather try to describe the individual
formats and the way they are interleaved? In any case the end user will
want them separately, right? So, at some point they will want to know
which two formats the camera has sent.

No, I don't yet know how to describe this; proposals are welcome ;-)

> for interleaved VYUY and JPEG data might do, except it doesn't tell anything
> about how the data is interleaved.
> 
> So maybe we could add some code describing interleaving (xxxx)
> 
> V4L2_MBUS_FMT_xxxx_VYUY_JPEG_1X8
> 
> or just the sensor name instead ?

As I said above, I would describe the formats separately, along with the
way they are interleaved. BTW, this might be related to Laurent's recent
patches introducing the data layout in RAM and fixing the bytesperline and
sizeimage calculations.
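One possible shape for such a description - purely a sketch, nothing like this exists in V4L2, and all names are invented - would pair the component media bus codes with an opaque, vendor-defined packing identifier:

```c
#include <stdint.h>

/* Sketch: describe an interleaved stream by its component media bus codes
 * plus an opaque, vendor-defined packing identifier. All names invented. */
struct mbus_interleaved_desc {
	uint32_t components[2];  /* component mbus codes, e.g. VYUY and JPEG */
	uint32_t ncomponents;    /* number of valid entries above */
	uint32_t vendor_packing; /* "vendor-specific packing #N" */
};

/* Check whether a given component code is part of the interleaved stream. */
static int desc_has_component(const struct mbus_interleaved_desc *d,
			      uint32_t code)
{
	uint32_t i;

	for (i = 0; i < d->ncomponents; i++)
		if (d->components[i] == code)
			return 1;
	return 0;
}
```

User space could then learn the component formats even when the packing itself stays opaque.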

Thanks
Guennadi

> Thoughts ?
> 
> 
> Regards,
> -- 
> Sylwester Nawrocki
> Samsung Poland R&D Center

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


* Re: [Q] Interleaved formats on the media bus
  2012-01-31 11:23 [Q] Interleaved formats on the media bus Sylwester Nawrocki
  2012-02-01  1:44 ` Guennadi Liakhovetski
@ 2012-02-01 10:00 ` Sakari Ailus
  2012-02-01 11:41   ` Sylwester Nawrocki
  1 sibling, 1 reply; 30+ messages in thread
From: Sakari Ailus @ 2012-02-01 10:00 UTC (permalink / raw)
  To: Sylwester Nawrocki
  Cc: linux-media@vger.kernel.org, Guennadi Liakhovetski,
	Laurent Pinchart, HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

Hi Sylwester,

On Tue, Jan 31, 2012 at 12:23:21PM +0100, Sylwester Nawrocki wrote:
> Hi all,
> 
> Some camera sensors generate data formats that cannot be described using
> the current media bus pixel code naming convention.
> 
> For instance, interleaved JPEG data and raw VYUY. Moreover, interleaving
> is rather vendor specific; IOW, I imagine there may be many ways to design
> the interleaving algorithm.

Is that truly interleaved, or is that e.g. first yuv and then jpeg?
Interleaving the two sounds quite strange to me.

> I'm wondering how to handle this. For sure such an image format will need
> a new vendor-specific fourcc. Should we also have a vendor-specific media bus code ?
> 
> I would like to avoid vendor specific media bus codes as much as possible.
> For instance defining something like
> 
> V4L2_MBUS_FMT_VYUY_JPEG_1X8
> 
> for interleaved VYUY and JPEG data might do, except it doesn't tell anything
> about how the data is interleaved.
> 
> So maybe we could add some code describing interleaving (xxxx)
> 
> V4L2_MBUS_FMT_xxxx_VYUY_JPEG_1X8
> 
> or just the sensor name instead ?

If that format is truly vendor specific, I think a vendor- or sensor-specific
media bus code / 4cc would be the way to go. On the other hand, you must be
prepared to handle these formats in your ISP driver, too.

I'd guess that all the ISP would do with such formats is write them to
memory, since I don't see much use for either in ISPs --- both are typically
outputs of the ISP.

I think we will need to consider use cases where the sensors produce other
data than just the plain image: I've heard of a sensor producing both
(consecutively, I understand) and there are sensors that produce metadata as
well. For those, we need to specify the format of the full frame, not just
the image data part of it --- which we have called "frame" at least up to
this point.

If the ISP needs this kind of information from the sensor driver to be
able to handle this kind of data, i.e. to write the JPEG and YUV to separate
memory locations, I propose we start working on this now rather than
creating a single hardware-specific solution.

Cheers,

-- 
Sakari Ailus
e-mail: sakari.ailus@iki.fi	jabber/XMPP/Gmail: sailus@retiisi.org.uk


* Re: [Q] Interleaved formats on the media bus
  2012-02-01  1:44 ` Guennadi Liakhovetski
@ 2012-02-01 10:44   ` Sylwester Nawrocki
  0 siblings, 0 replies; 30+ messages in thread
From: Sylwester Nawrocki @ 2012-02-01 10:44 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: linux-media@vger.kernel.org, Sakari Ailus, Laurent Pinchart,
	HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

Hi Guennadi,

On 02/01/2012 02:44 AM, Guennadi Liakhovetski wrote:
>> V4L2_MBUS_FMT_VYUY_JPEG_1X8
> 
> Hmm... Are such sensors not sending this data over something like CSI-2 
> with different channel IDs? In which case we just deal with two formats 
> cleanly.

I think they could; it might just be a matter of proper firmware. But for now
all that is available is truly interleaved data, in chunks of a page or so.
For the full picture I should mention that such a frame also contains
embedded non-image data, at the end of the frame. But this can possibly be
handled with a separate buffer queue, as in the VBI case for instance.

> Otherwise - I'm a bit sceptical about defining a new format for each pair
> of existing codes. Maybe we should rather try to describe the individual
> formats and the way they are interleaved? In any case the end user will

Yes, sounds reasonable. However, the sensor-specific frame is transferred
as MIPI CSI-2 User Defined Data 1. So it should be possible to associate such
information with the format on the media bus, so that the bus receiver can
handle the data properly.

> want them separately, right? So, at some point they will want to know
> which two formats the camera has sent.

I'm afraid the data will have to be separated in user space. Moreover,
user space needs to know the resolutions of the YUV and JPEG frames.
But this could probably be queried at the relevant subdevs/pads.

> No, I don't yet know how to describe this; proposals are welcome ;-)

:-)

>> for interleaved VYUY and JPEG data might do, except it doesn't tell anything
>> about how the data is interleaved.
>>
>> So maybe we could add some code describing interleaving (xxxx)
>>
>> V4L2_MBUS_FMT_xxxx_VYUY_JPEG_1X8
>>
>> or just the sensor name instead ?
> 
> As I said above, I would describe the formats separately, along with the
> way they are interleaved. BTW, this might be related to Laurent's recent
> patches introducing the data layout in RAM and fixing the bytesperline and
> sizeimage calculations.

Yes, more or less. Except that, instead of honoring 'sizeimage', the sensor
needs to be queried for the required buffer size for the frame it sends out.
I'm currently doing it with a patch like this:

http://www.mail-archive.com/linux-media@vger.kernel.org/msg39780.html

But I'm planning to change it to use a new control instead.


--

Regards,
Sylwester


* Re: [Q] Interleaved formats on the media bus
  2012-02-01 10:00 ` Sakari Ailus
@ 2012-02-01 11:41   ` Sylwester Nawrocki
  2012-02-02  9:55     ` Laurent Pinchart
  2012-02-04 11:22     ` Sakari Ailus
  0 siblings, 2 replies; 30+ messages in thread
From: Sylwester Nawrocki @ 2012-02-01 11:41 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: linux-media@vger.kernel.org, Guennadi Liakhovetski,
	Laurent Pinchart, HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

Hi Sakari,

On 02/01/2012 11:00 AM, Sakari Ailus wrote:
>> Some camera sensors generate data formats that cannot be described using
>> the current media bus pixel code naming convention.
>>
>> For instance, interleaved JPEG data and raw VYUY. Moreover, interleaving
>> is rather vendor specific; IOW, I imagine there may be many ways to design
>> the interleaving algorithm.
> 
> Is that truly interleaved, or is that e.g. first yuv and then jpeg?
> Interleaving the two sounds quite strange to me.

It's truly interleaved. There might be some chance for yuv/jpeg one after
the other, but the interleaved format needs to be supported.

>> I'm wondering how to handle this. For sure such an image format will need
>> a new vendor-specific fourcc. Should we also have a vendor-specific media bus code ?
>>
>> I would like to avoid vendor specific media bus codes as much as possible.
>> For instance defining something like
>>
>> V4L2_MBUS_FMT_VYUY_JPEG_1X8
>>
>> for interleaved VYUY and JPEG data might do, except it doesn't tell anything
>> about how the data is interleaved.
>>
>> So maybe we could add some code describing interleaving (xxxx)
>>
>> V4L2_MBUS_FMT_xxxx_VYUY_JPEG_1X8
>>
>> or just the sensor name instead ?
> 
> If that format is truly vendor specific, I think a vendor or sensor specific
> media bus code / 4cc would be the way to go. On the other hand, you must be
> prepared to handle these formats in your ISP driver, too.

Yes, I don't see an issue in adding support for a new format in the
ISP/bridge driver; it needs to know anyway, e.g., what MIPI-CSI data type
corresponds to the data from the sensor.

> I'd guess that all the ISP would do to such formats is to write them to
> memory since I don't see much use for either in ISPs --- both typically are
> output of the ISP.

Yep, correct. In fact, in those cases the sensor has a complicated ISP built
in, so all the bridge has to do is pass the data over to user space.

Non-image data might need to be passed to user space as well.

> I think we will need to consider use cases where the sensors produce other
> data than just the plain image: I've heard of a sensor producing both
> (consecutively, I understand) and there are sensors that produce metadata as
> well. For those, we need to specify the format of the full frame, not just
> the image data part of it --- which we have called "frame" at least up to
> this point.

Yes; moreover, such formats partly determine the data layout in memory,
rather than just a format on the video bus.

> If the case is that the ISP needs this kind of information from the sensor
> driver to be able to handle this kind of data, i.e. to write the JPEG and
> YUV to separate memory locations, I'm proposing to start working on this

It's not the case here; it would involve unnecessary copying in kernel space.
Even in the case of wholly consecutive data planes, a contiguous buffer is
needed. And it's not easy to split, because the border between the data
planes cannot be arbitrarily aligned.

> now rather than creating a single hardware-specific solution.

Yes, I'm attempting a rather generic approach, if only because there are
multiple Samsung sensors that output hybrid data. I've seen a Sony sensor
doing that as well.


Regards
-- 
Sylwester Nawrocki
Samsung Poland R&D Center


* Re: [Q] Interleaved formats on the media bus
  2012-02-01 11:41   ` Sylwester Nawrocki
@ 2012-02-02  9:55     ` Laurent Pinchart
  2012-02-02 11:00       ` Guennadi Liakhovetski
  2012-02-02 11:14       ` Sylwester Nawrocki
  2012-02-04 11:22     ` Sakari Ailus
  1 sibling, 2 replies; 30+ messages in thread
From: Laurent Pinchart @ 2012-02-02  9:55 UTC (permalink / raw)
  To: Sylwester Nawrocki
  Cc: Sakari Ailus, linux-media@vger.kernel.org, Guennadi Liakhovetski,
	HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

Hi Sylwester,

On Wednesday 01 February 2012 12:41:44 Sylwester Nawrocki wrote:
> On 02/01/2012 11:00 AM, Sakari Ailus wrote:
> >> Some camera sensors generate data formats that cannot be described using
> >> the current media bus pixel code naming convention.
> >> 
> >> For instance, interleaved JPEG data and raw VYUY. Moreover, interleaving
> >> is rather vendor specific; IOW, I imagine there may be many ways to design
> >> the interleaving algorithm.
> > 
> > Is that truly interleaved, or is that e.g. first yuv and then jpeg?
> > Interleaving the two sounds quite strange to me.
> 
> It's truly interleaved. There might be some chance for yuv/jpeg one after
> the other, but the interleaved format needs to be supported.
> 
> >> I'm wondering how to handle this. For sure such an image format will
> >> need a new vendor-specific fourcc. Should we also have a vendor-specific
> >> media bus code ?
> >> 
> >> I would like to avoid vendor specific media bus codes as much as
> >> possible. For instance defining something like
> >> 
> >> V4L2_MBUS_FMT_VYUY_JPEG_1X8
> >> 
> >> for interleaved VYUY and JPEG data might do, except it doesn't tell
> >> anything about how the data is interleaved.
> >> 
> >> So maybe we could add some code describing interleaving (xxxx)
> >> 
> >> V4L2_MBUS_FMT_xxxx_VYUY_JPEG_1X8
> >> 
> >> or just the sensor name instead ?
> > 
> > If that format is truly vendor specific, I think a vendor or sensor
> > specific media bus code / 4cc would be the way to go. On the other hand,
> > you must be prepared to handle these formats in your ISP driver, too.
> 
> Yes, I don't see an issue in adding support for a new format in the
> ISP/bridge driver; it needs to know anyway, e.g., what MIPI-CSI data type
> corresponds to the data from the sensor.
> 
> > I'd guess that all the ISP would do to such formats is to write them to
> > memory since I don't see much use for either in ISPs --- both typically
> > are output of the ISP.
> 
> > Yep, correct. In fact, in those cases the sensor has a complicated ISP
> > built in, so all the bridge has to do is pass the data over to user space.
> 
> Also non-image data might need to be passed to user space as well.
> 
> > I think we will need to consider use cases where the sensors produce
> > other data than just the plain image: I've heard of a sensor producing
> > both (consecutively, I understand) and there are sensors that produce
> > metadata as well. For those, we need to specify the format of the full
> > frame, not just the image data part of it --- which we have called
> > "frame" at least up to this point.
> 
> Yes, moreover such formats partly determine data layout in memory, rather
> than really just a format on a video bus.
> 
> > If the case is that the ISP needs this kind of information from the
> > sensor driver to be able to handle this kind of data, i.e. to write the
> > JPEG and YUV to separate memory locations, I'm proposing to start
> > working on this
> 
> It's not the case here; it would involve unnecessary copying in kernel
> space. Even in the case of wholly consecutive data planes, a contiguous
> buffer is needed. And it's not easy to split, because the border between
> the data planes cannot be arbitrarily aligned.
> 
> > now rather than creating a single hardware-specific solution.
> 
> Yes, I'm attempting a rather generic approach, if only because there are
> multiple Samsung sensors that output hybrid data. I've seen a Sony sensor
> doing that as well.

Do all those sensors interleave the data in the same way ? This sounds quite
hackish and vendor-specific to me; I'm not sure we should try to generalize
that. Maybe vendor-specific media bus format codes would be the way to go. I
don't expect ISPs to understand the format; they will likely be configured in
pass-through mode. Instead of adding explicit support for all those weird
formats to all ISP drivers, it might make sense to add a "binary blob" media
bus code to be used by the ISP.

-- 
Regards,

Laurent Pinchart


* Re: [Q] Interleaved formats on the media bus
  2012-02-02  9:55     ` Laurent Pinchart
@ 2012-02-02 11:00       ` Guennadi Liakhovetski
  2012-02-04 11:36         ` Laurent Pinchart
  2012-02-02 11:14       ` Sylwester Nawrocki
  1 sibling, 1 reply; 30+ messages in thread
From: Guennadi Liakhovetski @ 2012-02-02 11:00 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Sylwester Nawrocki, Sakari Ailus, linux-media@vger.kernel.org,
	HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

Hi Laurent

On Thu, 2 Feb 2012, Laurent Pinchart wrote:

> Do all those sensors interleave the data in the same way ? This sounds quite 
> hackish and vendor-specific to me, I'm not sure if we should try to generalize 
> that. Maybe vendor-specific media bus format codes would be the way to go. I 
> don't expect ISPs to understand the format, they will likely be configured in 
> pass-through mode. Instead of adding explicit support for all those weird 
> formats to all ISP drivers, it might make sense to add a "binary blob" media 
> bus code to be used by the ISP.

Yes, I agree that those formats will just be forwarded as-is by ISPs, but
user space wants to know the contents, so it might be more useful to
provide information about the specific components, even if their packing
layout cannot be defined in a generic way with offsets and sizes. Even
saying "you're getting formats YUYV and JPEG in vendor-specific packing
#N" might be more useful than just "vendor-specific format #N".

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


* Re: [Q] Interleaved formats on the media bus
  2012-02-02  9:55     ` Laurent Pinchart
  2012-02-02 11:00       ` Guennadi Liakhovetski
@ 2012-02-02 11:14       ` Sylwester Nawrocki
  2012-02-04 11:34         ` Laurent Pinchart
  1 sibling, 1 reply; 30+ messages in thread
From: Sylwester Nawrocki @ 2012-02-02 11:14 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Sakari Ailus, linux-media@vger.kernel.org, Guennadi Liakhovetski,
	HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

Hi Laurent,

On 02/02/2012 10:55 AM, Laurent Pinchart wrote:
> Do all those sensors interleave the data in the same way ? This sounds quite 

No, each one uses its own interleaving method.

> hackish and vendor-specific to me, I'm not sure if we should try to generalize 
> that. Maybe vendor-specific media bus format codes would be the way to go. I 
> don't expect ISPs to understand the format, they will likely be configured in 
> pass-through mode. Instead of adding explicit support for all those weird 
> formats to all ISP drivers, it might make sense to add a "binary blob" media 
> bus code to be used by the ISP.

This could work, except that there is no way to match a fourcc with a media
bus code. Different fourccs would map to the same media bus code, making it
impossible for the bridge to handle multiple sensors, or one sensor
supporting multiple interleaved formats. Moreover, there is a need to map the
media bus code to the MIPI-CSI data ID. What if one sensor sends the "binary"
blob with MIPI-CSI "User Defined Data 1" and another with "User Defined Data 2" ?

Maybe we could create e.g. V4L2_MBUS_FMT_USER? codes, one for each MIPI-CSI
User Defined data identifier, but as I remember it was decided not to map
MIPI-CSI data codes directly onto media bus pixel codes.
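For reference, MIPI CSI-2 reserves data type codes 0x30 to 0x37 for the "User Defined 8-bit Data" types 1 to 8, so a hypothetical V4L2_MBUS_FMT_USER(n) family could map onto them as in this sketch (the V4L2 name is invented, only the CSI-2 DT values are from the spec):

```c
#include <stdint.h>

/* MIPI CSI-2: User Defined 8-bit Data Type 1..8 use DT codes 0x30..0x37.
 * Map a hypothetical V4L2_MBUS_FMT_USER(n) index onto the CSI-2 data type;
 * return 0 for an out-of-range index. */
static uint8_t csi2_user_defined_dt(unsigned int n)
{
	if (n < 1 || n > 8)
		return 0;
	return (uint8_t)(0x30 + n - 1);
}
```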


Thanks,
-- 
Sylwester Nawrocki
Samsung Poland R&D Center


* Re: [Q] Interleaved formats on the media bus
  2012-02-01 11:41   ` Sylwester Nawrocki
  2012-02-02  9:55     ` Laurent Pinchart
@ 2012-02-04 11:22     ` Sakari Ailus
  2012-02-04 11:30       ` Laurent Pinchart
  2012-02-04 15:26       ` Sylwester Nawrocki
  1 sibling, 2 replies; 30+ messages in thread
From: Sakari Ailus @ 2012-02-04 11:22 UTC (permalink / raw)
  To: Sylwester Nawrocki
  Cc: linux-media@vger.kernel.org, Guennadi Liakhovetski,
	Laurent Pinchart, HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

Hi Sylwester,

Sylwester Nawrocki wrote:
> On 02/01/2012 11:00 AM, Sakari Ailus wrote:
>> I'd guess that all the ISP would do to such formats is to write them to
>> memory since I don't see much use for either in ISPs --- both typically are
>> output of the ISP.
> 
> Yep, correct. In fact, in those cases the sensor has a complicated ISP built
> in, so all the bridge has to do is pass the data over to user space.
> 
> Also non-image data might need to be passed to user space as well.

How does one know in user space which part of the video buffer
contains JPEG data and which part is YUV? Does the data contain some
kind of header, or how is this done currently?

I'd be much in favour of using a separate channel ID, as Guennadi asked;
that way you could quite probably save one memory copy as well. But if
the hardware already exists and behaves badly, there's usually not much
you can do about it.

Cheers,

-- 
Sakari Ailus
sakari.ailus@iki.fi


* Re: [Q] Interleaved formats on the media bus
  2012-02-04 11:22     ` Sakari Ailus
@ 2012-02-04 11:30       ` Laurent Pinchart
  2012-02-04 15:38         ` Sylwester Nawrocki
  2012-02-04 15:26       ` Sylwester Nawrocki
  1 sibling, 1 reply; 30+ messages in thread
From: Laurent Pinchart @ 2012-02-04 11:30 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Sylwester Nawrocki, linux-media@vger.kernel.org,
	Guennadi Liakhovetski,
	HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

Hi Sakari,

On Saturday 04 February 2012 13:22:21 Sakari Ailus wrote:
> Sylwester Nawrocki wrote:
> > On 02/01/2012 11:00 AM, Sakari Ailus wrote:
> >> I'd guess that all the ISP would do to such formats is to write them to
> >> memory since I don't see much use for either in ISPs --- both typically
> >> are
> >> output of the ISP.
> > 
> > Yep, correct. In fact, in those cases the sensor has a complicated ISP
> > built in, so all the bridge has to do is pass the data over to user space.
> > 
> > Also non-image data might need to be passed to user space as well.
> 
> How does one know in user space which part of the video buffer
> contains JPEG data and which part is YUV? Does the data contain some
> kind of header, or how is this done currently?
> 
> I'd be much in favour of using a separate channel ID, as Guennadi asked;
> that way you could quite probably save one memory copy as well. But if
> the hardware already exists and behaves badly, there's usually not much
> you can do about it.

If I'm not mistaken, the sensor doesn't send data in separate channels but
interleaves it in a single channel (possibly with headers or fixed-size
packets - Sylwester, could you comment on that ?). That makes it pretty
difficult to do anything other than pass-through capture.

-- 
Regards,

Laurent Pinchart


* Re: [Q] Interleaved formats on the media bus
  2012-02-02 11:14       ` Sylwester Nawrocki
@ 2012-02-04 11:34         ` Laurent Pinchart
  2012-02-04 17:00           ` Sylwester Nawrocki
  0 siblings, 1 reply; 30+ messages in thread
From: Laurent Pinchart @ 2012-02-04 11:34 UTC (permalink / raw)
  To: Sylwester Nawrocki
  Cc: Sakari Ailus, linux-media@vger.kernel.org, Guennadi Liakhovetski,
	HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

Hi Sylwester,

On Thursday 02 February 2012 12:14:08 Sylwester Nawrocki wrote:
> On 02/02/2012 10:55 AM, Laurent Pinchart wrote:
> > Do all those sensors interleave the data in the same way ? This sounds
> > quite
> No, each one uses its own interleaving method.
> 
> > hackish and vendor-specific to me, I'm not sure if we should try to
> > generalize that. Maybe vendor-specific media bus format codes would be
> > the way to go. I don't expect ISPs to understand the format, they will
> > likely be configured in pass-through mode. Instead of adding explicit
> > support for all those weird formats to all ISP drivers, it might make
> > sense to add a "binary blob" media bus code to be used by the ISP.
> 
> This could work, except that there is no way to match a fourcc with a media
> bus code. Different fourccs would map to the same media bus code, making it
> impossible for the bridge to handle multiple sensors, or one sensor
> supporting multiple interleaved formats. Moreover, there is a need to map
> the media bus code to the MIPI-CSI data ID. What if one sensor sends the
> "binary" blob with MIPI-CSI "User Defined Data 1" and another with "User
> Defined Data 2" ?

My gut feeling is that the information should be retrieved from the sensor 
driver. This is all pretty vendor-specific, and adding explicit support for 
such sensors to each bridge driver wouldn't be very clean. Could the bridge 
query the sensor using a subdev operation ?
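Something along those lines, perhaps - a sketch of such a query operation, mocked up in plain C rather than against the real v4l2_subdev API, with all names and values invented for illustration:

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch: what the bridge could query from the sensor driver.
 * All names and values are invented for illustration. */
struct frame_blob_desc {
	uint8_t  csi2_data_type;  /* e.g. 0x30: User Defined 8-bit Data 1 */
	uint32_t max_frame_size;  /* worst-case frame size in bytes */
};

struct sensor_query_ops {
	int (*get_frame_desc)(void *priv, struct frame_blob_desc *desc);
};

/* Mock sensor driver implementing the operation. */
static int mock_get_frame_desc(void *priv, struct frame_blob_desc *desc)
{
	(void)priv;
	desc->csi2_data_type = 0x30;            /* User Defined Data 1 */
	desc->max_frame_size = 5 * 1024 * 1024; /* assumed worst case */
	return 0;
}
```

The bridge would call get_frame_desc() at stream-on time to size its DMA buffer and program the CSI-2 receiver, without hard-coding per-sensor knowledge.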

> Maybe we could create e.g. V4L2_MBUS_FMT_USER?, for each MIPI-CSI User
> Defined data identifier, but as I remember it was decided not to map
> MIPI-CSI data codes directly onto media bus pixel codes.

Would setting the format directly on the sensor subdev be an option ?

-- 
Regards,

Laurent Pinchart


* Re: [Q] Interleaved formats on the media bus
  2012-02-02 11:00       ` Guennadi Liakhovetski
@ 2012-02-04 11:36         ` Laurent Pinchart
  0 siblings, 0 replies; 30+ messages in thread
From: Laurent Pinchart @ 2012-02-04 11:36 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Sylwester Nawrocki, Sakari Ailus, linux-media@vger.kernel.org,
	HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

Hi Guennadi,

On Thursday 02 February 2012 12:00:57 Guennadi Liakhovetski wrote:
> On Thu, 2 Feb 2012, Laurent Pinchart wrote:
> > Do all those sensors interleave the data in the same way ? This sounds
> > quite hackish and vendor-specific to me, I'm not sure if we should try to
> > generalize that. Maybe vendor-specific media bus format codes would be
> > the way to go. I don't expect ISPs to understand the format, they will
> > likely be configured in pass-through mode. Instead of adding explicit
> > support for all those weird formats to all ISP drivers, it might make
> > sense to add a "binary blob" media bus code to be used by the ISP.
> 
> Yes, I agree that those formats will just be forwarded as-is by ISPs, but
> user space wants to know the contents, so it might be more useful to
> provide information about the specific components, even if their packing
> layout cannot be defined in a generic way with offsets and sizes. Even
> saying "you're getting formats YUYV and JPEG in vendor-specific packing
> #N" might be more useful than just "vendor-specific format #N".

That's right. A single media bus code might indeed not be the best option.
Vendor-specific blob codes (and 4CCs), then ?

-- 
Regards,

Laurent Pinchart


* Re: [Q] Interleaved formats on the media bus
  2012-02-04 11:22     ` Sakari Ailus
  2012-02-04 11:30       ` Laurent Pinchart
@ 2012-02-04 15:26       ` Sylwester Nawrocki
  2012-02-04 15:43         ` Sakari Ailus
  1 sibling, 1 reply; 30+ messages in thread
From: Sylwester Nawrocki @ 2012-02-04 15:26 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Sylwester Nawrocki, linux-media@vger.kernel.org,
	Guennadi Liakhovetski, Laurent Pinchart,
	HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

Hi Sakari,

On 02/04/2012 12:22 PM, Sakari Ailus wrote:
> Sylwester Nawrocki wrote:
>> On 02/01/2012 11:00 AM, Sakari Ailus wrote:
>>> I'd guess that all the ISP would do to such formats is to write them to
>>> memory since I don't see much use for either in ISPs --- both typically are
>>> output of the ISP.
>>
>> Yep, correct. In fact, in those cases the sensor has a complicated ISP built in,
>> so all the bridge has to do is pass the data over to user space.
>>
>> Also non-image data might need to be passed to user space as well.
> 
> How does one know in user space which part of the video buffer
> contains JPEG data and which part is YUV? Does the data contain some
> kind of header, or how is this done currently?

There is additional data appended to the image data. Part of it must
be retrieved outside of the main DMA channel. I somehow failed to mention in
the previous e-mails that the bridge is rather limited, probably because
of the way it has evolved over time. That is, it originally
supported only the parallel video bus, and a MIPI CSI-2 frontend was
added later. So it cannot split MIPI-CSI data channels into separate memory
buffers, AFAIK - at this stage. I think it just ignores the VC field of
the Data Identifier (DI), but it's just a guess for now.

If you look at the S5PV210 datasheet and the MIPI-CSIS device registers,
at the end of the IO region it has 4 x ~4 KiB internal buffers for
"non-image" data. These buffers must be emptied in the interrupt handler,
and I'm going to need this data in user space in order to decode the data
from the sensors.

Sounds like 2-plane buffers are the way to go: one plane for the interleaved
YUV/JPEG data and a second one for the "metadata".

I originally thought about a separate buffer queue in the MIPI-CSIS driver,
but it would likely have added unnecessary complication to applications.
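
For illustration, the 2-plane split could be planned roughly like this
(hypothetical helper, not part of any driver; the buffer names and the
metadata size, derived from the 4 x ~4 KiB internal buffers mentioned
above, are illustrative assumptions):

```c
#include <stddef.h>
#include <stdint.h>

#define CSIS_NONIMAGE_BUF_COUNT 4
#define CSIS_NONIMAGE_BUF_SIZE  (4 * 1024)   /* ~4 KiB each, per datasheet */

struct interleaved_frame_planes {
	size_t payload_size;   /* plane 0: interleaved YUV/JPEG */
	size_t metadata_size;  /* plane 1: appended non-image data */
};

/* Compute plane sizes for one frame: the payload plane must hold the
 * worst case of width * height * 2 bytes (YUYV) plus the JPEG budget,
 * while the metadata plane mirrors the four internal buffers. */
static struct interleaved_frame_planes
plan_planes(unsigned int width, unsigned int height, size_t jpeg_max)
{
	struct interleaved_frame_planes p;

	p.payload_size = (size_t)width * height * 2 + jpeg_max;
	p.metadata_size = CSIS_NONIMAGE_BUF_COUNT * CSIS_NONIMAGE_BUF_SIZE;
	return p;
}
```

An application dequeuing such a multi-plane buffer would then parse plane 1
to find out how plane 0 is laid out.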

> I'd be much in favour of using a separate channel ID as Guennadi asked;
> that way you could quite probably save one memory copy as well. But if
> the hardware already exists and behaves badly there's usually not much
> you can do about it.

As I explained above, I suspect that the sensor sends each image data type
on a separate channel (I'm not 100% sure), but the bridge is unable to DMA
them into separate memory regions.

Currently we have no support in V4L2 for specifying a separate image data 
format per MIPI-CSI2 channel. Maybe the solution is exactly that - 
adding support for virtual channels and the possibility to specify an image 
format separately for each channel ?
Still, there would be nothing telling how the channels are interleaved :-/
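
Such a per-channel format description could look roughly like this
(a hypothetical sketch, not an existing V4L2 API; all names are made up):

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_VIRTUAL_CHANNELS 4  /* CSI-2 has a 2-bit VC field */

struct vc_format {
	uint8_t  vc;          /* virtual channel, 0..3 */
	uint32_t mbus_code;   /* media bus pixel code carried on that VC */
};

struct csi2_frame_desc {
	struct vc_format entries[MAX_VIRTUAL_CHANNELS];
	size_t num_entries;
};

/* Look up the media bus code configured for a given virtual channel;
 * returns 0 when the channel carries no described stream. */
static uint32_t vc_to_mbus_code(const struct csi2_frame_desc *desc,
				uint8_t vc)
{
	size_t i;

	for (i = 0; i < desc->num_entries; i++)
		if (desc->entries[i].vc == vc)
			return desc->entries[i].mbus_code;
	return 0;
}
```

This still says nothing about how packet-level interleaving works, which is
exactly the gap noted above.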

--
Regards,
Sylwester

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [Q] Interleaved formats on the media bus
  2012-02-04 11:30       ` Laurent Pinchart
@ 2012-02-04 15:38         ` Sylwester Nawrocki
  0 siblings, 0 replies; 30+ messages in thread
From: Sylwester Nawrocki @ 2012-02-04 15:38 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Sakari Ailus, Sylwester Nawrocki, linux-media@vger.kernel.org,
	Guennadi Liakhovetski,
	HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

Hi Laurent,

On 02/04/2012 12:30 PM, Laurent Pinchart wrote:
>> I'd be much in favour of using a separate channel ID as Guennadi asked;
>> that way you could quite probably save one memory copy as well. But if
>> the hardware already exists and behaves badly there's usually not much
>> you can do about it.
> 
> If I'm not mistaken, the sensor doesn't send data in separate channels but

I suspect it might be sending data on separate virtual channels, but the
bridge won't understand that and will just return one data plane in memory.
The sensor might well send the data in one channel; I don't know myself yet.

In either case we end up with mixed data in memory that must be parsed,
which is likely best done in user space.
Also please see my previous answer to Sakari, there are some more details
there.

> interleaves them in a single channel (possibly with headers or fixed-size
> packets - Sylwester, could you comment on that ?). That makes it pretty
> difficult to do anything other than pass-through capture.

I'm not entirely sure the sensor doesn't send the data in separate virtual
channels. What's certain is that the bridge cannot DMA each channel into
separate memory buffers.

--

Regards,
Sylwester

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [Q] Interleaved formats on the media bus
  2012-02-04 15:26       ` Sylwester Nawrocki
@ 2012-02-04 15:43         ` Sakari Ailus
  2012-02-04 18:32           ` Sylwester Nawrocki
  2012-02-05  0:04           ` Guennadi Liakhovetski
  0 siblings, 2 replies; 30+ messages in thread
From: Sakari Ailus @ 2012-02-04 15:43 UTC (permalink / raw)
  To: Sylwester Nawrocki
  Cc: Sylwester Nawrocki, linux-media@vger.kernel.org,
	Guennadi Liakhovetski, Laurent Pinchart,
	HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

Hi Sylwester,

Sylwester Nawrocki wrote:
> On 02/04/2012 12:22 PM, Sakari Ailus wrote:
>> Sylwester Nawrocki wrote:
>>> On 02/01/2012 11:00 AM, Sakari Ailus wrote:
>>>> I'd guess that all the ISP would do to such formats is to write them to
>>>> memory since I don't see much use for either in ISPs --- both typically are
>>>> output of the ISP.
>>>
>>> Yep, correct. In fact in those cases the sensor has a complicated ISP built in,
>>> so everything a bridge has to do is to pass the data over to user space.
>>>
>>> Also non-image data might need to be passed to user space as well.
>>
>> How does one know in the user space which part of the video buffer
>> contains jpeg data and which part is yuv? Does the data contain some
>> kind of header, or how is this done currently?
> 
> There is additional data appended to the image data. Part of it must
> be retrieved outside of the main DMA channel. I somehow failed to mention in
> the previous e-mails that the bridge is rather limited, probably because
> of the way it has evolved over time. That is, it originally 
> supported only the parallel video bus and then a MIPI-CSI2 frontend was 
> added. So it cannot split MIPI-CSI data channels into separate memory 
> buffers, AFAIK - at this stage. I think it just ignores the VC field of 
> the Data Identifier (DI), but that's just a guess for now.
> 
> If you look at the S5PV210 datasheet and the MIPI-CSIS device registers,
> at the end of the IO region it has 4 x ~4kiB internal buffers for 
> "non-image" data. These buffers must be emptied in the interrupt handler 
> and I'm going to need this data in user space in order to decode data 
> from sensors.
> 
> Sounds like a 2-plane buffer is the way to go: one plane for the interleaved
> YUV/JPEG data and the second one for the "metadata".
> 
> I originally thought about a separate buffer queue in the MIPI-CSIS driver,
> but it would likely have added unnecessary complication to applications.
> 
>> I'd be much in favour of using a separate channel ID as Guennadi asked;
>> that way you could quite probably save one memory copy as well. But if
>> the hardware already exists and behaves badly there's usually not much
>> you can do about it.
> 
> As I explained above, I suspect that the sensor sends each image data type
> on a separate channel (I'm not 100% sure), but the bridge is unable to DMA
> them into separate memory regions.
> 
> Currently we have no support in V4L2 for specifying a separate image data 
> format per MIPI-CSI2 channel. Maybe the solution is exactly that - 
> adding support for virtual channels and the possibility to specify an image 
> format separately for each channel ?
> Still, there would be nothing telling how the channels are interleaved :-/

_If_ the sensor sends YUV and compressed JPEG data in separate CSI-2
channels then definitely the correct way to implement this is to take
this kind of setup into account in the frame format description --- we
do need that quite badly.

However, this doesn't really help you with your current problem, and
perhaps just creating a custom format for your sensor driver is the best
way to go for the time being. But when someone attaches this kind of
sensor to another CSI-2 receiver that can separate the data from
different channels, I think we should start working towards a
correct solution which this driver should also support.

With information on the frame format, the CSI-2 hardware could properly
write the data into two separate buffers. Possibly it should provide two
video nodes, but I'm not sure about that. A multi-plane buffer is
another option.

Cheers,

-- 
Sakari Ailus
sakari.ailus@iki.fi

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [Q] Interleaved formats on the media bus
  2012-02-04 11:34         ` Laurent Pinchart
@ 2012-02-04 17:00           ` Sylwester Nawrocki
  2012-02-05 13:30             ` Laurent Pinchart
  0 siblings, 1 reply; 30+ messages in thread
From: Sylwester Nawrocki @ 2012-02-04 17:00 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Sylwester Nawrocki, Sakari Ailus, linux-media@vger.kernel.org,
	Guennadi Liakhovetski,
	HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

Hi Laurent,

On 02/04/2012 12:34 PM, Laurent Pinchart wrote:
> On Thursday 02 February 2012 12:14:08 Sylwester Nawrocki wrote:
>> On 02/02/2012 10:55 AM, Laurent Pinchart wrote:
>>> Do all those sensors interleave the data in the same way ? This sounds
>>> quite
>> No, each one uses its own interleaving method.
>>
>>> hackish and vendor-specific to me, I'm not sure if we should try to
>>> generalize that. Maybe vendor-specific media bus format codes would be
>>> the way to go. I don't expect ISPs to understand the format, they will
>>> likely be configured in pass-through mode. Instead of adding explicit
>>> support for all those weird formats to all ISP drivers, it might make
>>> sense to add a "binary blob" media bus code to be used by the ISP.
>>
>> This could work, except that there is no way to match a fourcc with a media
>> bus code. Different fourccs would map to the same media bus code, making it
>> impossible for the bridge to handle multiple sensors or one sensor
>> supporting multiple interleaved formats. Moreover there is a need to map
>> the media bus code to the MIPI-CSI data ID. What if one sensor sends a "binary"
>> blob with MIPI-CSI "User Defined Data 1" and the other with "User Defined
>> Data 2" ?
> 
> My gut feeling is that the information should be retrieved from the sensor
> driver. This is all pretty vendor-specific, and adding explicit support for
> such sensors to each bridge driver wouldn't be very clean. Could the bridge

We have many standard pixel codes in include/linux/v4l2-mediabus.h, yet each
bridge driver supports only a subset of them. I wouldn't expect a sudden
need for all existing bridge drivers to support some strange interleaved 
image formats.

> query the sensor using a subdev operation ?

There is also a MIPI-CSI2 receiver in between that needs to be configured.
I.e. it must know that it processes the User Defined Data 1, which implies
certain pixel alignment, etc. So far media bus pixel codes have been 
the base information for handling such things.

>> Maybe we could create e.g. V4L2_MBUS_FMT_USER?, for each MIPI-CSI User
>> Defined data identifier, but as I remember it was decided not to map
>> MIPI-CSI data codes directly onto media bus pixel codes.
> 
> Would setting the format directly on the sensor subdev be an option ?

Do you mean setting a MIPI-CSI2 format ?
It should work as long as the bridge driver can identify the media bus code
given a fourcc. I can't recall a situation where the reverse lookup is
necessary, i.e. struct v4l2_mbus_framefmt::code -> fourcc. This would
fail, since e.g. JPEG and YUV/JPEG would both correspond to the User 1 format.

--

Regards,
Sylwester

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [Q] Interleaved formats on the media bus
  2012-02-04 15:43         ` Sakari Ailus
@ 2012-02-04 18:32           ` Sylwester Nawrocki
  2012-02-04 23:44             ` Guennadi Liakhovetski
  2012-02-05  0:04           ` Guennadi Liakhovetski
  1 sibling, 1 reply; 30+ messages in thread
From: Sylwester Nawrocki @ 2012-02-04 18:32 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Sylwester Nawrocki, linux-media@vger.kernel.org,
	Guennadi Liakhovetski, Laurent Pinchart,
	HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

Hi Sakari,

On 02/04/2012 04:43 PM, Sakari Ailus wrote:
>> As I explained above, I suspect that the sensor sends each image data type
>> on a separate channel (I'm not 100% sure), but the bridge is unable to DMA
>> them into separate memory regions.
>>
>> Currently we have no support in V4L2 for specifying a separate image data
>> format per MIPI-CSI2 channel. Maybe the solution is exactly that -
>> adding support for virtual channels and the possibility to specify an image
>> format separately for each channel ?
>> Still, there would be nothing telling how the channels are interleaved :-/
> 
> _If_ the sensor sends YUV and compressed JPEG data in separate CSI-2

As I've learned, MIPI-CSI2 specifies 3 data interleaving methods: at the 
packet, frame and virtual channel level. I'm almost certain I'm dealing 
with packet level interleaving now, but VC interleaving might need to be 
supported very soon.

> channels then definitely the correct way to implement this is to take
> this kind of setup into account in the frame format description --- we
> do need that quite badly.

Yeah, I will probably want to focus more on that after completing the
camera control work.

> However, this doesn't really help you with your current problem, and
> perhaps just creating a custom format for your sensor driver is the best
> way to go for the time being. But when someone attaches this kind of

Yes, this is what I started with. What do you think about creating media 
bus codes directly corresponding to the user defined MIPI-CSI data types ?

> sensor to another CSI-2 receiver that can separate the data from
> different channels, I think we should start working towards a
> correct solution which this driver should also support.

Sure. We would also include a description of bus receiver/transmitter 
capabilities, e.g. telling explicitly which interleaving methods are 
supported.

> With information on the frame format, the CSI-2 hardware could properly
> write the data into two separate buffers. Possibly it should provide two
> video nodes, but I'm not sure about that. A multi-plane buffer is
> another option.

Indeed. I think both solutions are equally correct and there should be no
need to restrict ourselves to one or the other. I would leave the decision
up to the driver authors, as one option will be more appropriate in some
cases than the other.

--

Thanks,
Sylwester

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [Q] Interleaved formats on the media bus
  2012-02-04 18:32           ` Sylwester Nawrocki
@ 2012-02-04 23:44             ` Guennadi Liakhovetski
  2012-02-05  0:36               ` Sylwester Nawrocki
  0 siblings, 1 reply; 30+ messages in thread
From: Guennadi Liakhovetski @ 2012-02-04 23:44 UTC (permalink / raw)
  To: Sylwester Nawrocki
  Cc: Sakari Ailus, Sylwester Nawrocki, linux-media@vger.kernel.org,
	Laurent Pinchart, HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

On Sat, 4 Feb 2012, Sylwester Nawrocki wrote:

> Hi Sakari,

[snip]

> Yes, this is what I started with. What do you think about creating media 
> bus codes directly corresponding to the user defined MIPI-CSI data types ?

We've discussed this before with Laurent, IIRC, and the decision was that, 
since a "typical" CSI-2 configuration includes a CSI-2 phy interfacing to 
a "standard" bridge that can also receive parallel data directly, and the 
phy normally has a 1-to-1 mapping from CSI-2 formats to mediabus codes, 
we can just as well directly use the respective mediabus codes to 
configure CSI-2 phys.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [Q] Interleaved formats on the media bus
  2012-02-04 15:43         ` Sakari Ailus
  2012-02-04 18:32           ` Sylwester Nawrocki
@ 2012-02-05  0:04           ` Guennadi Liakhovetski
  1 sibling, 0 replies; 30+ messages in thread
From: Guennadi Liakhovetski @ 2012-02-05  0:04 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Sylwester Nawrocki, Sylwester Nawrocki,
	linux-media@vger.kernel.org, Laurent Pinchart,
	HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

On Sat, 4 Feb 2012, Sakari Ailus wrote:

> Hi Sylwester,
> 
> Sylwester Nawrocki wrote:
> > On 02/04/2012 12:22 PM, Sakari Ailus wrote:

[snip]

> >> I'd be much in favour of using a separate channel ID as Guennadi asked;
> >> that way you could quite probably save one memory copy as well. But if
> >> the hardware already exists and behaves badly there's usually not much
> >> you can do about it.
> > 
> > As I explained above, I suspect that the sensor sends each image data type
> > on a separate channel (I'm not 100% sure), but the bridge is unable to DMA
> > them into separate memory regions.
> > 
> > Currently we have no support in V4L2 for specifying a separate image data 
> > format per MIPI-CSI2 channel. Maybe the solution is exactly that - 
> > adding support for virtual channels and the possibility to specify an image 
> > format separately for each channel ?
> > Still, there would be nothing telling how the channels are interleaved :-/
> 
> _If_ the sensor sends YUV and compressed JPEG data in separate CSI-2
> channels then definitely the correct way to implement this is to take
> this kind of setup into account in the frame format description --- we
> do need that quite badly.
> 
> However, this doesn't really help you with your current problem, and
> perhaps just creating a custom format for your sensor driver is the best
> way to go for the time being.

As far as I understand, the problem is not the sensor but the bridge. So, 
following your logic, you would have to create a new format for each sensor 
with similar capabilities if you want to connect it to this bridge. This 
doesn't seem like a good idea to me.

May I again do some shameless self-advertising: soc-camera has had to deal 
with this kind of problem for some time and we have a solution for it.

The problem is actually not _quite_ identical, it has nothing to do with 
interleaved formats, but I think, essentially, the problem is: how to 
configure bridges to process some generic (video) data when no specialised 
support for this data format is available or implemented yet. This is what 
we call a pass-through mode. All bridges I've met so far have the capability 
to receive and store in memory some generic video data, for which you 
basically just configure the number of bytes per line and the number of 
lines per frame.

The solution that we use in soc-camera is to define format descriptors 
that can be used to calculate those generic parameters for each supported 
format. I am talking about the mbus_fmt[] array and the 
soc_mbus_bytes_per_line() function in soc_mediabus.c. So, my suggestion 
would be to use something similar for this case too: use some high-level 
description for this format, including any channel information, that 
advanced bridges can use to properly separate the data, and a function 
that interprets that high-level description and provides the low-level 
details like bytes-per-line, necessary to configure bridges unable to 
handle the data natively. Ideally, of course, I would suggest converting 
that file to a generic API, usable by all V4L2 drivers (basically, just 
renaming a couple of structs and functions), and extending it to handle 
interleaved formats.
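
As a rough illustration of that idea, here is a minimal descriptor plus
helper, loosely inspired by soc_mbus_bytes_per_line() (the field names and
the simplified packing model are assumptions; the real soc_mediabus code
also handles padded packings):

```c
#include <stdint.h>

/* Per-format descriptor from which a generic bridge in pass-through
 * mode can derive its line configuration. */
struct mbus_fmt_desc {
	uint8_t bits_per_sample;    /* e.g. 8 for YUYV8_2X8 */
	uint8_t samples_per_pixel;  /* e.g. 2 for YUYV (luma + chroma) */
};

/* Bytes needed to store one line of 'width' pixels in memory. */
static int mbus_bytes_per_line(unsigned int width,
			       const struct mbus_fmt_desc *f)
{
	return width * f->bits_per_sample * f->samples_per_pixel / 8;
}
```

A bridge that knows nothing about the actual pixel layout only needs this
bytes-per-line value and the number of lines to DMA the frame to memory.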

> But when someone attaches this kind of
> sensor to another CSI-2 receiver that can separate the data from
> different channels, I think we should start working towards a
> correct solution which this driver should also support.

Exactly.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [Q] Interleaved formats on the media bus
  2012-02-04 23:44             ` Guennadi Liakhovetski
@ 2012-02-05  0:36               ` Sylwester Nawrocki
  0 siblings, 0 replies; 30+ messages in thread
From: Sylwester Nawrocki @ 2012-02-05  0:36 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Sakari Ailus, Sylwester Nawrocki, linux-media@vger.kernel.org,
	Laurent Pinchart, HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

On 02/05/2012 12:44 AM, Guennadi Liakhovetski wrote:
>> Yes, this is what I started with. What do you think about creating media

Actually now I have something like V4L2_MBUS_FMT_VYUY_JPEG_I1_1X8 
(I1 indicating the interleaving method), so it is not so tightly tied 
to a particular sensor.

>> bus codes directly corresponding to the user defined MIPI-CSI data types ?
> 
> We've discussed this before with Laurent, IIRC, and the decision was that,
> since a "typical" CSI-2 configuration includes a CSI-2 phy interfacing to
> a "standard" bridge that can also receive parallel data directly, and the
> phy normally has a 1-to-1 mapping from CSI-2 formats to mediabus codes,
> we can just as well directly use the respective mediabus codes to
> configure CSI-2 phys.

OK. The 1-to-1 mapping is true only for MIPI-CSI defined image formats AFAICS.
Let's take JPEG as an example: AFAIU there is nothing in the standard indicating
which User Defined Data Type should be used for JPEG. If some bridge/sensor pair
uses User1 for V4L2_MBUS_FMT_JPEG_1X8 and another uses User2, then there is no 
way to make any of these sensors work with any bridge without code modifications.
Looks like we would need a MIPI-CSI DT field in the format description data 
structure (something like struct soc_mbus_lookup).
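
Such a per-sensor lookup table could be sketched like this (the DT values
0x1E for YUV422 8-bit and 0x30/0x31 for User Defined 1/2 come from the
CSI-2 specification; the numeric mbus code values and the table itself are
placeholders for illustration):

```c
#include <stddef.h>
#include <stdint.h>

enum {
	MBUS_FMT_YUYV8_2X8     = 0x2008, /* placeholder value */
	MBUS_FMT_JPEG_1X8      = 0x4001, /* placeholder value */
	MBUS_FMT_VYUY_JPEG_1X8 = 0x5001, /* placeholder value */
};

struct mbus_csi2_lookup {
	uint32_t mbus_code;
	uint8_t  csi2_dt;    /* CSI-2 Data Type field */
};

/* Per-sensor table: this hypothetical sensor sends JPEG and the
 * interleaved format as User Defined Data 1 (0x30); another sensor's
 * table could map the same codes to 0x31 without bridge changes. */
static const struct mbus_csi2_lookup sensor_dt_map[] = {
	{ MBUS_FMT_YUYV8_2X8,     0x1e },
	{ MBUS_FMT_JPEG_1X8,      0x30 },
	{ MBUS_FMT_VYUY_JPEG_1X8, 0x30 },
};

static int mbus_to_csi2_dt(uint32_t code)
{
	size_t i;

	for (i = 0; i < sizeof(sensor_dt_map) / sizeof(sensor_dt_map[0]); i++)
		if (sensor_dt_map[i].mbus_code == code)
			return sensor_dt_map[i].csi2_dt;
	return -1;	/* unknown on this sensor */
}
```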

--

Thanks,
Sylwester

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [Q] Interleaved formats on the media bus
  2012-02-04 17:00           ` Sylwester Nawrocki
@ 2012-02-05 13:30             ` Laurent Pinchart
  2012-02-08 22:48               ` Sylwester Nawrocki
  0 siblings, 1 reply; 30+ messages in thread
From: Laurent Pinchart @ 2012-02-05 13:30 UTC (permalink / raw)
  To: Sylwester Nawrocki
  Cc: Sylwester Nawrocki, Sakari Ailus, linux-media@vger.kernel.org,
	Guennadi Liakhovetski,
	HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

Hi Sylwester,

On Saturday 04 February 2012 18:00:10 Sylwester Nawrocki wrote:
> On 02/04/2012 12:34 PM, Laurent Pinchart wrote:
> > On Thursday 02 February 2012 12:14:08 Sylwester Nawrocki wrote:
> >> On 02/02/2012 10:55 AM, Laurent Pinchart wrote:
> >>> Do all those sensors interleave the data in the same way ? This sounds
> >>> quite
> >> 
> >> No, each one uses its own interleaving method.
> >> 
> >>> hackish and vendor-specific to me, I'm not sure if we should try to
> >>> generalize that. Maybe vendor-specific media bus format codes would be
> >>> the way to go. I don't expect ISPs to understand the format, they will
> >>> likely be configured in pass-through mode. Instead of adding explicit
> >>> support for all those weird formats to all ISP drivers, it might make
> >>> sense to add a "binary blob" media bus code to be used by the ISP.
> >> 
> >> This could work, except that there is no way to match a fourcc with a media
> >> bus code. Different fourccs would map to the same media bus code, making it
> >> impossible for the bridge to handle multiple sensors or one sensor
> >> supporting multiple interleaved formats. Moreover there is a need to map
> >> the media bus code to the MIPI-CSI data ID. What if one sensor sends a "binary"
> >> blob with MIPI-CSI "User Defined Data 1" and the other with "User Defined
> >> Data 2" ?
> > 
> > My gut feeling is that the information should be retrieved from the sensor
> > driver. This is all pretty vendor-specific, and adding explicit support
> > for such sensors to each bridge driver wouldn't be very clean. Could the
> > bridge
> 
> We have many standard pixel codes in include/linux/v4l2-mediabus.h, yet each
> bridge driver supports only a subset of them. I wouldn't expect a sudden
> need for all existing bridge drivers to support some strange interleaved
> image formats.

Those media bus codes are standard, so implementing explicit support for them 
in bridge drivers is fine with me. What I want to avoid is adding explicit 
support for sensor-specific formats to bridges. There should be no dependency 
between the bridge and the sensor.

> > query the sensor using a subdev operation ?
> 
> There is also a MIPI-CSI2 receiver in between that needs to be configured.
> I.e. it must know that it processes the User Defined Data 1, which implies
> certain pixel alignment, etc. So far media bus pixel codes have been
> the base information for handling such things.

For CSI user-defined data types, I still think that the information required 
to configure the CSI receiver should come from the sensor. Only the sensor 
knows what user-defined data type it will generate.

> >> Maybe we could create e.g. V4L2_MBUS_FMT_USER?, for each MIPI-CSI User
> >> Defined data identifier, but as I remember it was decided not to map
> >> MIPI-CSI data codes directly onto media bus pixel codes.
> > 
> > Would setting the format directly on the sensor subdev be an option ?
> 
> Do you mean setting a MIPI-CSI2 format ?

No, I mean setting the media bus code on the sensor output pad to a vendor-
specific value.

> It should work as long as the bridge driver can identify the media bus code
> given a fourcc. I can't recall a situation where the reverse lookup is
> necessary, i.e. struct v4l2_mbus_framefmt::code -> fourcc. This would
> fail, since e.g. JPEG and YUV/JPEG would both correspond to the User 1 format.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [Q] Interleaved formats on the media bus
  2012-02-05 13:30             ` Laurent Pinchart
@ 2012-02-08 22:48               ` Sylwester Nawrocki
       [not found]                 ` <12779203.vQPWKN8eZf@avalon>
  0 siblings, 1 reply; 30+ messages in thread
From: Sylwester Nawrocki @ 2012-02-08 22:48 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Sylwester Nawrocki, Sakari Ailus, linux-media@vger.kernel.org,
	Guennadi Liakhovetski,
	HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

Hi Laurent,

On 02/05/2012 02:30 PM, Laurent Pinchart wrote:
> On Saturday 04 February 2012 18:00:10 Sylwester Nawrocki wrote:
>> On 02/04/2012 12:34 PM, Laurent Pinchart wrote:
>>> On Thursday 02 February 2012 12:14:08 Sylwester Nawrocki wrote:
>>>> On 02/02/2012 10:55 AM, Laurent Pinchart wrote:
>>>>> Do all those sensors interleave the data in the same way ? This sounds
>>>>> quite
>>>>
>>>> No, each one uses its own interleaving method.
>>>>
>>>>> hackish and vendor-specific to me, I'm not sure if we should try to
>>>>> generalize that. Maybe vendor-specific media bus format codes would be
>>>>> the way to go. I don't expect ISPs to understand the format, they will
>>>>> likely be configured in pass-through mode. Instead of adding explicit
>>>>> support for all those weird formats to all ISP drivers, it might make
>>>>> sense to add a "binary blob" media bus code to be used by the ISP.
>>>>
>>>> This could work, except that there is no way to match a fourcc with a media
>>>> bus code. Different fourccs would map to the same media bus code, making it
>>>> impossible for the bridge to handle multiple sensors or one sensor
>>>> supporting multiple interleaved formats. Moreover there is a need to map
>>>> the media bus code to the MIPI-CSI data ID. What if one sensor sends a "binary"
>>>> blob with MIPI-CSI "User Defined Data 1" and the other with "User Defined
>>>> Data 2" ?
>>>
>>> My gut feeling is that the information should be retrieved from the sensor
>>> driver. This is all pretty vendor-specific, and adding explicit support
>>> for such sensors to each bridge driver wouldn't be very clean. Could the
>>> bridge
>>
>> We have many standard pixel codes in include/linux/v4l2-mediabus.h, yet each
>> bridge driver supports only a subset of them. I wouldn't expect a sudden
>> need for all existing bridge drivers to support some strange interleaved
>> image formats.
> 
> Those media bus codes are standard, so implementing explicit support for them
> in bridge drivers is fine with me. What I want to avoid is adding explicit
> support for sensor-specific formats to bridges. There should be no dependency
> between the bridge and the sensor.

OK, I see your point. Naturally I agree here, even though sometimes the hardware
engineers make this process of getting rid of the dependencies more painful than
it really needs to be.

>>> query the sensor using a subdev operation ?
>>
>> There is also a MIPI-CSI2 receiver in between that needs to be configured.
>> I.e. it must know that it processes the User Defined Data 1, which implies
>> certain pixel alignment, etc. So far media bus pixel codes have been
>> the base information for handling such things.
> 
> For CSI user-defined data types, I still think that the information required
> to configure the CSI receiver should come from the sensor. Only the sensor
> knows what user-defined data type it will generate.

I agree. Should we have a separate callback in the sensor ops for this, or 
should it belong to a bigger data structure (like the "frame description" 
structure mentioned before) ? The latter might be more reasonable.

>>>> Maybe we could create e.g. V4L2_MBUS_FMT_USER?, for each MIPI-CSI User
>>>> Defined data identifier, but as I remember it was decided not to map
>>>> MIPI-CSI data codes directly onto media bus pixel codes.
>>>
>>> Would setting the format directly on the sensor subdev be an option ?
>>
>> Do you mean setting a MIPI-CSI2 format ?
> 
> No, I mean setting the media bus code on the sensor output pad to a vendor-
> specific value.

I'm afraid we need a vendor/sensor specific format identifier, since the sensor
produces a truly vendor specific format. In fact this format is made to overcome
hardware limitations of the video bridge. We can of course standardize things 
like: embedded (non-image) data presence and size at the beginning and end of 
a frame, MIPI-CSI2 data type, interleaving method (different data type and/or 
virtual channel), etc. But there will still be some quirks relevant to only 
one hardware type, and they would need to be distinguished in some way.

--

Regards,
Sylwester

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [Q] Interleaved formats on the media bus
       [not found]                 ` <12779203.vQPWKN8eZf@avalon>
@ 2012-02-10  8:42                   ` Guennadi Liakhovetski
  2012-02-10 10:19                     ` Sylwester Nawrocki
  0 siblings, 1 reply; 30+ messages in thread
From: Guennadi Liakhovetski @ 2012-02-10  8:42 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Sylwester Nawrocki, Sylwester Nawrocki, Sakari Ailus,
	linux-media@vger.kernel.org,
	HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

...thinking about this interleaved data, is there anything else left that 
the following scheme would fail to describe:

* The data is sent in repeated blocks (periods)
* Each block can be fully described by a list of format specifiers, each 
containing
** data format code
** number of alignment bytes
** number of data bytes

Can there actually be anything more complicated than that?
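
In code, the scheme above would amount to something like this (a
hypothetical transcription, not an existing kernel structure; names are
made up):

```c
#include <stddef.h>
#include <stdint.h>

/* One entry in the list of format specifiers describing a block. */
struct blk_fmt_spec {
	uint32_t fmt_code;     /* data format code, e.g. a media bus code */
	uint32_t align_bytes;  /* number of alignment (padding) bytes */
	uint32_t data_bytes;   /* number of data bytes */
};

/* Size in bytes of one period (block) of the interleaved stream:
 * the sum of alignment and data bytes over all specifiers. */
static size_t block_period_size(const struct blk_fmt_spec *specs, size_t n)
{
	size_t i, total = 0;

	for (i = 0; i < n; i++)
		total += specs[i].align_bytes + specs[i].data_bytes;
	return total;
}
```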

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [Q] Interleaved formats on the media bus
  2012-02-10  8:42                   ` Guennadi Liakhovetski
@ 2012-02-10 10:19                     ` Sylwester Nawrocki
  2012-02-10 10:31                       ` Sylwester Nawrocki
  2012-02-10 10:33                       ` Guennadi Liakhovetski
  0 siblings, 2 replies; 30+ messages in thread
From: Sylwester Nawrocki @ 2012-02-10 10:19 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Laurent Pinchart, Sylwester Nawrocki, Sakari Ailus,
	linux-media@vger.kernel.org,
	HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

On 02/10/2012 09:42 AM, Guennadi Liakhovetski wrote:
> ...thinking about this interleaved data, is there anything else left that
> the following scheme would fail to describe:
> 
> * The data is sent in repeated blocks (periods)

The data is sent in irregular chunks of varying size (a few hundred bytes,
for example).

> * Each block can be fully described by a list of format specifiers, each 
> containing
> ** data format code
> ** number of alignment bytes
> ** number of data bytes

Each frame would have its own list of such format specifiers, as the data
chunk sizes vary from frame to frame. Therefore the above is unfortunately
more frame metadata than a static frame description.

> Can there actually be anything more complicated than that?

There is embedded data at the end of the frame (it could also be at the
beginning) which describes the layout of the interleaved data.

Some data types would have padding bytes.

Even if we somehow find a way to describe the frame on the media bus using
a set of properties, it would be difficult to pass this information to user
space. A similar description would probably have to be exposed to
applications; currently everything is described in user space by a single
fourcc.


Regards,
-- 
Sylwester Nawrocki
Samsung Poland R&D Center

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [Q] Interleaved formats on the media bus
  2012-02-10 10:19                     ` Sylwester Nawrocki
@ 2012-02-10 10:31                       ` Sylwester Nawrocki
  2012-02-10 10:33                       ` Guennadi Liakhovetski
  1 sibling, 0 replies; 30+ messages in thread
From: Sylwester Nawrocki @ 2012-02-10 10:31 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Laurent Pinchart, Sylwester Nawrocki, Sakari Ailus,
	linux-media@vger.kernel.org,
	HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

On 02/10/2012 11:19 AM, Sylwester Nawrocki wrote:
> On 02/10/2012 09:42 AM, Guennadi Liakhovetski wrote:
> Even if we somehow find a way to describe the frame on media bus, using a set
> of properties, it would be difficult to pass this information to user space.
> A similar description would have to be probably exposed to applications, now
> everything is described in user space by a single fourcc..

OK, we could associate a fourcc with an entry in some static table, thus
avoiding vendor-specific media bus codes and leaving only a vendor/sensor
specific fourcc. But I'm still not sure we can come up with a capable
enough frame description.
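A minimal sketch of what such a static table lookup could look like (the fourcc and all other identifiers below are made up purely for illustration):

```c
#include <stddef.h>
#include <stdint.h>

#define FOURCC(a, b, c, d) \
	((uint32_t)(a) | ((uint32_t)(b) << 8) | \
	 ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

/* Hypothetical static frame description associated with a vendor fourcc */
struct frame_desc {
	uint32_t fourcc;          /* vendor/sensor specific pixel format */
	unsigned int num_streams; /* interleaved sub-streams in the frame */
	size_t meta_bytes;        /* embedded (meta) data size, if fixed */
};

static const struct frame_desc frame_desc_table[] = {
	/* e.g. interleaved YUV + JPEG with 4 KiB of embedded metadata */
	{ FOURCC('U', 'Y', 'J', 'G'), 2, 4096 },
};

/* Map a fourcc to its static frame description, or NULL if unknown */
static const struct frame_desc *frame_desc_lookup(uint32_t fourcc)
{
	size_t i, n = sizeof(frame_desc_table) / sizeof(frame_desc_table[0]);

	for (i = 0; i < n; i++)
		if (frame_desc_table[i].fourcc == fourcc)
			return &frame_desc_table[i];
	return NULL;
}
```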

Thanks,
-- 
Sylwester Nawrocki
Samsung Poland R&D Center

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [Q] Interleaved formats on the media bus
  2012-02-10 10:19                     ` Sylwester Nawrocki
  2012-02-10 10:31                       ` Sylwester Nawrocki
@ 2012-02-10 10:33                       ` Guennadi Liakhovetski
  2012-02-10 10:58                         ` Sylwester Nawrocki
  1 sibling, 1 reply; 30+ messages in thread
From: Guennadi Liakhovetski @ 2012-02-10 10:33 UTC (permalink / raw)
  To: Sylwester Nawrocki
  Cc: Laurent Pinchart, Sylwester Nawrocki, Sakari Ailus,
	linux-media@vger.kernel.org,
	HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

On Fri, 10 Feb 2012, Sylwester Nawrocki wrote:

> On 02/10/2012 09:42 AM, Guennadi Liakhovetski wrote:
> > ...thinking about this interleaved data, is there anything else left, that 
> > the following scheme would be failing to describe:
> > 
> > * The data is sent in repeated blocks (periods)
> 
> The data is sent in irregular chunks of varying size (few hundred of bytes
> for example).

Right, the data includes headers. How about sensors providing 
header-parsing callbacks?

Thanks
Guennadi

> 
> > * Each block can be fully described by a list of format specifiers, each 
> > containing
> > ** data format code
> > ** number of alignment bytes
> > ** number of data bytes
> 
> Each frame would have its own list of such format specifiers, as the data
> chunk sizes vary from frame to frame. Therefore the above is unfortunately
> more a frame meta data, rather than a static frame description.
> 
> > Can there actually be anything more complicated than that?
> 
> There is an embedded data at end of frame (could be also at the beginning)
> which describes layout of the interleaved data.
> 
> Some data types would have padding bytes.
> 
> Even if we somehow find a way to describe the frame on media bus, using a set
> of properties, it would be difficult to pass this information to user space.
> A similar description would have to be probably exposed to applications, now
> everything is described in user space by a single fourcc..
> 
> 
> Regards,
> -- 
> Sylwester Nawrocki
> Samsung Poland R&D Center
> 

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [Q] Interleaved formats on the media bus
  2012-02-10 10:33                       ` Guennadi Liakhovetski
@ 2012-02-10 10:58                         ` Sylwester Nawrocki
  2012-02-10 11:15                           ` Guennadi Liakhovetski
  0 siblings, 1 reply; 30+ messages in thread
From: Sylwester Nawrocki @ 2012-02-10 10:58 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Laurent Pinchart, Sylwester Nawrocki, Sakari Ailus,
	linux-media@vger.kernel.org,
	HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

On 02/10/2012 11:33 AM, Guennadi Liakhovetski wrote:
> On Fri, 10 Feb 2012, Sylwester Nawrocki wrote:
> 
>> On 02/10/2012 09:42 AM, Guennadi Liakhovetski wrote:
>>> ...thinking about this interleaved data, is there anything else left, that 
>>> the following scheme would be failing to describe:
>>>
>>> * The data is sent in repeated blocks (periods)
>>
>> The data is sent in irregular chunks of varying size (few hundred of bytes
>> for example).
> 
> Right, the data includes headers. How about sensors providing 
> header-parsing callbacks?

This implies processing the headers/footers in kernel space into some generic
format. It might work, but sometimes there might be an unwanted performance
loss. However, I wouldn't expect it to be that significant; it depends on
what the format of the embedded data from the sensor looks like. Processing
4 KiB of data could be acceptable.

I'm assuming here that we want to convert the frame-embedded (meta) data from
each sensor to some generic description format? It would then have to be
relatively simple, so as not to increase the frame header size unnecessarily.

--

Thanks
Sylwester

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [Q] Interleaved formats on the media bus
  2012-02-10 10:58                         ` Sylwester Nawrocki
@ 2012-02-10 11:15                           ` Guennadi Liakhovetski
  2012-02-10 11:35                             ` Sylwester Nawrocki
  0 siblings, 1 reply; 30+ messages in thread
From: Guennadi Liakhovetski @ 2012-02-10 11:15 UTC (permalink / raw)
  To: Sylwester Nawrocki
  Cc: Laurent Pinchart, Sylwester Nawrocki, Sakari Ailus,
	linux-media@vger.kernel.org,
	HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

On Fri, 10 Feb 2012, Sylwester Nawrocki wrote:

> On 02/10/2012 11:33 AM, Guennadi Liakhovetski wrote:
> > On Fri, 10 Feb 2012, Sylwester Nawrocki wrote:
> > 
> >> On 02/10/2012 09:42 AM, Guennadi Liakhovetski wrote:
> >>> ...thinking about this interleaved data, is there anything else left, that 
> >>> the following scheme would be failing to describe:
> >>>
> >>> * The data is sent in repeated blocks (periods)
> >>
> >> The data is sent in irregular chunks of varying size (few hundred of bytes
> >> for example).
> > 
> > Right, the data includes headers. How about sensors providing 
> > header-parsing callbacks?
> 
> This implies processing of headers/footers in kernel space to some generic format.
> It might work, but sometimes there might be an unwanted performance loss. However
> I wouldn't expect it to be that significant, depends on how the format of an 
> embedded data from the sensor looks like. Processing 4KiB of data could be 
> acceptable.

In principle I agree - (ideally) no processing in the kernel _at all_.
Just pass the complete frame data as-is to user space. But if we need
any internal knowledge at all about the data, maybe callbacks would be a
better option than trying to develop a generic descriptor. Perhaps
something like "get me the location of the n'th block of data of format X."
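A rough sketch of what such a callback could look like, assuming a simple made-up [format, length, payload] chunk layout (a real sensor callback would parse its own proprietary header format here; nothing below is an existing API):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Location of one data block inside a captured frame */
struct block_loc {
	size_t offset;
	size_t length;
};

/* Hypothetical per-sensor callback: walk a chunk stream where each chunk
 * is [format: u32][length: u32][payload], and return the location of the
 * n'th block carrying the requested format. Returns 0 on success. */
static int find_block(const uint8_t *frame, size_t frame_len,
		      uint32_t format, unsigned int n, struct block_loc *loc)
{
	size_t pos = 0;

	while (pos + 8 <= frame_len) {
		uint32_t fmt, len;

		memcpy(&fmt, frame + pos, 4);
		memcpy(&len, frame + pos + 4, 4);
		if (pos + 8 + len > frame_len)
			break;	/* truncated chunk, stop scanning */
		if (fmt == format && n-- == 0) {
			loc->offset = pos + 8;
			loc->length = len;
			return 0;
		}
		pos += 8 + len;
	}
	return -1;	/* not found */
}
```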

Note that this does not (necessarily) have anything to do with the previous
discussion concerning the way the CSI receiver should be getting its
configuration.

Thanks
Guennadi

> I'm assuming here, we want to convert the frame embedded (meta) data for each 
> sensor to some generic description format ? It would have to be then relatively 
> simple, not to increase the frame header size unnecessarily.
> 
> --
> 
> Thanks
> Sylwester

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [Q] Interleaved formats on the media bus
  2012-02-10 11:15                           ` Guennadi Liakhovetski
@ 2012-02-10 11:35                             ` Sylwester Nawrocki
  2012-02-10 11:51                               ` Guennadi Liakhovetski
  0 siblings, 1 reply; 30+ messages in thread
From: Sylwester Nawrocki @ 2012-02-10 11:35 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Laurent Pinchart, Sylwester Nawrocki, Sakari Ailus,
	linux-media@vger.kernel.org,
	HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

On 02/10/2012 12:15 PM, Guennadi Liakhovetski wrote:
>>>> On 02/10/2012 09:42 AM, Guennadi Liakhovetski wrote:
>>>>> ...thinking about this interleaved data, is there anything else left, that 
>>>>> the following scheme would be failing to describe:
>>>>>
>>>>> * The data is sent in repeated blocks (periods)
>>>>
>>>> The data is sent in irregular chunks of varying size (few hundred of bytes
>>>> for example).
>>>
>>> Right, the data includes headers. How about sensors providing 
>>> header-parsing callbacks?
>>
>> This implies processing of headers/footers in kernel space to some generic 
>> format. It might work, but sometimes there might be an unwanted performance 
>> loss. However I wouldn't expect it to be that significant, depends on how 
>> the format of an embedded data from the sensor looks like. Processing 4KiB
>> of data could be acceptable.
> 
> In principle I agree - (ideally) no processing in the kernel _at all_. 
> Just pass the complete frame data as is to the user-space. But if we need 
> any internal knowledge at all about the data, maybe callbacks would be a 
> better option, than trying to develop a generic descriptor. Perhaps, 
> something like "get me the location of n'th block of data of format X."

Hmm, I was thinking only of processing the frame-embedded data into some
generic format. I find callbacks for extracting the data in the kernel
impractical; with a full HD video stream you may want to use some sort of
hardware-accelerated processing, using NEON for example. We can allow
that only by leaving the deinterleaving to user space.

> Notice, this does not (necessarily) have anything to do with the previous 
> discussion, concerning the way, how the CSI receiver should be getting its 
> configuration.

--

Regards,
Sylwester

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [Q] Interleaved formats on the media bus
  2012-02-10 11:35                             ` Sylwester Nawrocki
@ 2012-02-10 11:51                               ` Guennadi Liakhovetski
  0 siblings, 0 replies; 30+ messages in thread
From: Guennadi Liakhovetski @ 2012-02-10 11:51 UTC (permalink / raw)
  To: Sylwester Nawrocki
  Cc: Laurent Pinchart, Sylwester Nawrocki, Sakari Ailus,
	linux-media@vger.kernel.org,
	HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3,
	Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4, Hans Verkuil

On Fri, 10 Feb 2012, Sylwester Nawrocki wrote:

> On 02/10/2012 12:15 PM, Guennadi Liakhovetski wrote:
> >>>> On 02/10/2012 09:42 AM, Guennadi Liakhovetski wrote:
> >>>>> ...thinking about this interleaved data, is there anything else left, that 
> >>>>> the following scheme would be failing to describe:
> >>>>>
> >>>>> * The data is sent in repeated blocks (periods)
> >>>>
> >>>> The data is sent in irregular chunks of varying size (few hundred of bytes
> >>>> for example).
> >>>
> >>> Right, the data includes headers. How about sensors providing 
> >>> header-parsing callbacks?
> >>
> >> This implies processing of headers/footers in kernel space to some generic 
> >> format. It might work, but sometimes there might be an unwanted performance 
> >> loss. However I wouldn't expect it to be that significant, depends on how 
> >> the format of an embedded data from the sensor looks like. Processing 4KiB
> >> of data could be acceptable.
> > 
> > In principle I agree - (ideally) no processing in the kernel _at all_. 
> > Just pass the complete frame data as is to the user-space. But if we need 
> > any internal knowledge at all about the data, maybe callbacks would be a 
> > better option, than trying to develop a generic descriptor. Perhaps, 
> > something like "get me the location of n'th block of data of format X."
> 
> Hmm, I thought about only processing frame embedded data to some generic
> format. I find the callbacks for extracting the data in the kernel 
> impractical, with full HD video stream you may want to use some sort of
> hardware accelerated processing, like using NEON for example. We can 
> allow this only by leaving the deinterleave process to the user space.

Sorry for the confusion :-) I'm not proposing to implement this now, nor do
I have a specific use-case for it. This was just an abstract idea about how
we could do this _if_ we ever need any internal information about the data
anywhere outside of the sensor driver. So far this doesn't have any
practical implications.

OTOH - how do we parse the data in user space? The obvious way would be the
one that seems to be currently favoured here too - just use a
vendor-specific fourcc. OTOH, maybe it would be better to do something like
the above - define new (subdev) ioctls to parse headers and extract the
necessary data blocks. Yes, it adds overhead, but since user space has to
process the data "manually" anyway, maybe this could be tolerated.
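Just to illustrate the shape of the idea, such a query could carry something like the structure below (none of these names exist in V4L2; this is purely a hypothetical sketch of the ioctl payload):

```c
#include <stdint.h>

/* Hypothetical subdev ioctl payload: user space asks the driver where the
 * n'th block of a given format sits inside a buffer it just dequeued.
 * The driver would fill in offset/length after parsing the frame headers. */
struct v4l2_subdev_block_query {
	uint32_t buf_index; /* which dequeued buffer the query refers to */
	uint32_t format;    /* code of the wanted data type */
	uint32_t nth;       /* which occurrence of that format */
	uint32_t offset;    /* returned: byte offset within the buffer */
	uint32_t length;    /* returned: block length in bytes */
};

/* A matching ioctl number might then be defined along the lines of:
 * #define VIDIOC_SUBDEV_G_BLOCK _IOWR('V', 0xBD, struct v4l2_subdev_block_query)
 */
```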

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2012-02-10 11:52 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-01-31 11:23 [Q] Interleaved formats on the media bus Sylwester Nawrocki
2012-02-01  1:44 ` Guennadi Liakhovetski
2012-02-01 10:44   ` Sylwester Nawrocki
2012-02-01 10:00 ` Sakari Ailus
2012-02-01 11:41   ` Sylwester Nawrocki
2012-02-02  9:55     ` Laurent Pinchart
2012-02-02 11:00       ` Guennadi Liakhovetski
2012-02-04 11:36         ` Laurent Pinchart
2012-02-02 11:14       ` Sylwester Nawrocki
2012-02-04 11:34         ` Laurent Pinchart
2012-02-04 17:00           ` Sylwester Nawrocki
2012-02-05 13:30             ` Laurent Pinchart
2012-02-08 22:48               ` Sylwester Nawrocki
     [not found]                 ` <12779203.vQPWKN8eZf@avalon>
2012-02-10  8:42                   ` Guennadi Liakhovetski
2012-02-10 10:19                     ` Sylwester Nawrocki
2012-02-10 10:31                       ` Sylwester Nawrocki
2012-02-10 10:33                       ` Guennadi Liakhovetski
2012-02-10 10:58                         ` Sylwester Nawrocki
2012-02-10 11:15                           ` Guennadi Liakhovetski
2012-02-10 11:35                             ` Sylwester Nawrocki
2012-02-10 11:51                               ` Guennadi Liakhovetski
2012-02-04 11:22     ` Sakari Ailus
2012-02-04 11:30       ` Laurent Pinchart
2012-02-04 15:38         ` Sylwester Nawrocki
2012-02-04 15:26       ` Sylwester Nawrocki
2012-02-04 15:43         ` Sakari Ailus
2012-02-04 18:32           ` Sylwester Nawrocki
2012-02-04 23:44             ` Guennadi Liakhovetski
2012-02-05  0:36               ` Sylwester Nawrocki
2012-02-05  0:04           ` Guennadi Liakhovetski

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox