public inbox for linux-media@vger.kernel.org
From: Sakari Ailus <sakari.ailus@iki.fi>
To: Sylwester Nawrocki <snjw23@gmail.com>
Cc: Sylwester Nawrocki <s.nawrocki@samsung.com>,
	"linux-media@vger.kernel.org" <linux-media@vger.kernel.org>,
	Guennadi Liakhovetski <g.liakhovetski@gmx.de>,
	Laurent Pinchart <laurent.pinchart@ideasonboard.com>,
	"HeungJun Kim/Mobile S/W Platform Lab(DMC)/E3"
	<riverful.kim@samsung.com>,
	"Seung-Woo Kim/Mobile S/W Platform Lab(DMC)/E4"
	<sw0312.kim@samsung.com>, Hans Verkuil <hverkuil@xs4all.nl>
Subject: Re: [Q] Interleaved formats on the media bus
Date: Sat, 04 Feb 2012 17:43:45 +0200	[thread overview]
Message-ID: <4F2D5231.4000703@iki.fi> (raw)
In-Reply-To: <4F2D4E2D.1030107@gmail.com>

Hi Sylwester,

Sylwester Nawrocki wrote:
> On 02/04/2012 12:22 PM, Sakari Ailus wrote:
>> Sylwester Nawrocki wrote:
>>> On 02/01/2012 11:00 AM, Sakari Ailus wrote:
>>>> I'd guess that all the ISP would do to such formats is to write them to
>>>> memory since I don't see much use for either in ISPs --- both typically are
>>>> output of the ISP.
>>>
>>> Yep, correct. In fact, in those cases the sensor has a complicated ISP built
>>> in, so all the bridge has to do is pass the data on to user space.
>>>
>>> Non-image data might need to be passed to user space as well.
>>
>> How does one know in the user space which part of the video buffer
>> contains jpeg data and which part is yuv? Does the data contain some
>> kind of header, or how is this done currently?
> 
> There is additional data appended to the image data. Part of it must be
> retrieved outside of the main DMA channel. I somehow failed to mention in
> the previous e-mails that the bridge is rather limited, probably because
> of the way it has been evolving over time. That is, it originally
> supported only the parallel video bus, and a MIPI CSI-2 frontend was
> added later. So, AFAIK, it cannot split MIPI CSI-2 data channels into
> separate memory buffers - at this stage. I think it just ignores the VC
> field of the Data Identifier (DI), but that's just a guess for now.
> 
> If you look at the S5PV210 datasheet and the MIPI-CSIS device registers,
> at the end of the IO region it has 4 x ~4kiB internal buffers for 
> "non-image" data. These buffers must be emptied in the interrupt handler 
> and I'm going to need this data in user space in order to decode data 
> from sensors.
> 
> Sounds like a 2-plane buffer is the way to go: one plane for the
> interleaved YUV/JPEG data and the second one for the "metadata".
> 
> I originally thought about a separate buffer queue in the MIPI-CSIS driver,
> but that would likely have added unnecessary complication to applications.
> 
>> I'd be much in favour of using a separate channel ID, as Guennadi asked;
>> that way you could quite probably save one memory copy as well. But if
>> the hardware already exists and behaves badly there's usually not much
>> you can do about it.
> 
> As I explained above, I suspect the sensor sends each image data type
> on a separate channel (I'm not 100% sure), but the bridge is unable to
> DMA them into separate memory regions.
> 
> Currently we have no support in V4L2 for specifying a separate image data
> format per MIPI CSI-2 channel. Maybe the solution is exactly that -
> adding support for virtual channels and the possibility to specify an
> image format separately for each channel?
> Still, there would be nothing telling how the channels are interleaved :-/

_If_ the sensor sends YUV and compressed JPEG data in separate CSI-2
channels then definitely the correct way to implement this is to take
this kind of setup into account in the frame format description --- we
do need that quite badly.

However, this doesn't really help you with your current problem, and
perhaps just creating a custom format for your sensor driver is the best
way to go for the time being. But when someone attaches this kind of
sensor to another CSI-2 receiver that can separate the data from
different channels, I think we should start working towards a correct
solution, which this driver should also support.

With information on the frame format, the CSI-2 hardware could properly
write the data into two separate buffers. Possibly it should provide two
video nodes, but I'm not sure about that. A multi-plane buffer is
another option.

Cheers,

-- 
Sakari Ailus
sakari.ailus@iki.fi

Thread overview: 30+ messages
2012-01-31 11:23 [Q] Interleaved formats on the media bus Sylwester Nawrocki
2012-02-01  1:44 ` Guennadi Liakhovetski
2012-02-01 10:44   ` Sylwester Nawrocki
2012-02-01 10:00 ` Sakari Ailus
2012-02-01 11:41   ` Sylwester Nawrocki
2012-02-02  9:55     ` Laurent Pinchart
2012-02-02 11:00       ` Guennadi Liakhovetski
2012-02-04 11:36         ` Laurent Pinchart
2012-02-02 11:14       ` Sylwester Nawrocki
2012-02-04 11:34         ` Laurent Pinchart
2012-02-04 17:00           ` Sylwester Nawrocki
2012-02-05 13:30             ` Laurent Pinchart
2012-02-08 22:48               ` Sylwester Nawrocki
     [not found]                 ` <12779203.vQPWKN8eZf@avalon>
2012-02-10  8:42                   ` Guennadi Liakhovetski
2012-02-10 10:19                     ` Sylwester Nawrocki
2012-02-10 10:31                       ` Sylwester Nawrocki
2012-02-10 10:33                       ` Guennadi Liakhovetski
2012-02-10 10:58                         ` Sylwester Nawrocki
2012-02-10 11:15                           ` Guennadi Liakhovetski
2012-02-10 11:35                             ` Sylwester Nawrocki
2012-02-10 11:51                               ` Guennadi Liakhovetski
2012-02-04 11:22     ` Sakari Ailus
2012-02-04 11:30       ` Laurent Pinchart
2012-02-04 15:38         ` Sylwester Nawrocki
2012-02-04 15:26       ` Sylwester Nawrocki
2012-02-04 15:43         ` Sakari Ailus [this message]
2012-02-04 18:32           ` Sylwester Nawrocki
2012-02-04 23:44             ` Guennadi Liakhovetski
2012-02-05  0:36               ` Sylwester Nawrocki
2012-02-05  0:04           ` Guennadi Liakhovetski
