From: snjw23@gmail.com (Sylwester Nawrocki)
To: linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH v1 6/7] media: video: introduce face detection driver module
Date: Tue, 06 Dec 2011 23:01:07 +0100
Message-ID: <4EDE90A3.7050900@gmail.com>
In-Reply-To: <CACVXFVPrAro=3t-wpbR_cVahzcx7SCa2J=s2nyyKfQ6SG-i0VQ@mail.gmail.com>

On 12/06/2011 03:07 PM, Ming Lei wrote:
> Hi,
> 
> Thanks for your review.
> 
> On Tue, Dec 6, 2011 at 5:55 AM, Sylwester Nawrocki <snjw23@gmail.com> wrote:
>> Hi Ming,
>>
>> (I've pruned the Cc list, leaving just the mailing lists)
>>
>> On 12/02/2011 04:02 PM, Ming Lei wrote:
>>> This patch introduces a driver for face detection purposes.
>>>
>>> The driver is responsible for all v4l2 stuff, buffer management
>>> and other general things, and doesn't touch face detection hardware
>>> directly. Several interfaces are exported to low level drivers
>>> (such as the coming omap4 FD driver) which will communicate with the
>>> face detection hw module.
>>>
>>> So the driver will make driving face detection hw modules easier.
>>
>>
>> I would hold off for a moment on implementing a generic face detection
>> module based on the V4L2 video device interface. We need to
>> first define an API that would also be usable at the sub-device interface
>> level (http://linuxtv.org/downloads/v4l-dvb-apis/subdev.html).
> 
> If we can define a good/stable enough API between kernel and user space,
> I think the patches can be merged first. For internal kernel APIs, we should
> allow them to evolve as new hardware comes or new features are introduced.

I also don't see a problem in discussing it a bit more ;)

> 
> I understand the API you mentioned here should belong to the kernel internal
> API, correct me if I am wrong.

Yes, I meant the in-kernel design, i.e. a generic face detection kernel module
and an OMAP4 FDIF driver. It makes a lot of sense to separate common code
in this way, maybe even if only OMAP devices end up using it.
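
As a rough illustration of that split (the names below, e.g. fd_hw_ops and
fd_core_register, are hypothetical and only show the idea, they are not the
interface from your patches): the generic module would own the video node and
the videobuf2 queue and call into the hardware driver only through a small
ops table, which e.g. the OMAP4 FDIF driver fills in at probe() time.

/* hypothetical sketch of the core <-> hardware-driver boundary */
struct fd_device;		/* instance owned by the generic FD core */
struct fd_detection_result;	/* container for detected objects */

struct fd_hw_ops {
	/* start detection on one frame handed over by the core */
	int (*start)(struct fd_device *fd, void *vaddr, unsigned long size);
	/* collect the result once the hardware signals completion */
	int (*get_result)(struct fd_device *fd,
			  struct fd_detection_result *result);
};

/* a hardware driver would call this from its probe() */
int fd_core_register(struct fd_device *fd, const struct fd_hw_ops *ops);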

I'm sure now that the Samsung devices won't fit in a video output node based
driver design. They read image data in different ways and the FD result
format is also totally different.

> 
>> AFAICS OMAP4 FDIF processes only data stored in memory, thus it seems
>> reasonable to use the videodev interface for passing data to the kernel
>> from user space.
>>
>> But there might be face detection devices that accept data from other
>> H/W modules, e.g. transferred through SoC internal data buses between
>> image processing pipeline blocks. Thus any new interfaces need to be
>> designed with such devices in mind.
>>
>> Also the face detection hardware block might not have an input DMA
>> engine in it; the data could be fed from memory through some other
>> subsystem (e.g. resize/colour converter). Then the driver for that
>> subsystem would implement a video node.
> 
> I think the direct input image or frame data to FD should come from memory,
> no matter whether the actual data is from external H/W modules or input DMA,
> because FD will take a lot of time to detect faces in one image or frame and
> FD can't have enough memory to cache several images' or frames' worth of data.

Sorry, I cannot provide many details at the moment, but there exists hardware
that reads data from internal SoC buses, and even if it uses some sort of
cache memory, that memory doesn't necessarily have to be visible to the user.

Still, the FD result is associated with an image frame for such H/W, but not
necessarily with a memory buffer queued by a user application.

Approximately how long does it take the OMAP4 FDIF to process a single image?

> 
> If you have seen this kind of FD hardware design, please let me know.
> 
>> I'm for leaving the buffer handling details for individual drivers
>> and focusing on a standard interface for applications, i.e. new
> 
> I think whether the buffer handling details live in the generic FD driver or
> in individual drivers doesn't matter now, since it doesn't have an effect on
> the interfaces between kernel and user space.

I think you misunderstood me. I wasn't talking about the core/driver module
split, I meant we should not be making the user interface video node centric.

I think for the Samsung devices I'll need a capture video node for passing the
result to the user. So instead of associating the FD result with a buffer index
we could try to use the frame sequence number (struct v4l2_buffer.sequence,
http://linuxtv.org/downloads/v4l-dvb-apis/buffer.html#v4l2-buffer).

It might be much better, as the v4l2 events are associated with the frame
sequence. And if we use controls, then you get control events for free,
and each event carries a frame sequence number in it
(http://linuxtv.org/downloads/v4l-dvb-apis/vidioc-dqevent.html).
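
Just to sketch the buffer side of that at the application level (a minimal
sketch assuming a plain MMAP capture queue; how the FD result itself would
carry the matching sequence number is left to whatever interface we settle
on), the application would read the driver-filled frame sequence number from
the dequeued buffer and match the FD result against it:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int dqbuf_frame_sequence(int fd)
{
	struct v4l2_buffer buf;

	memset(&buf, 0, sizeof(buf));
	buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	buf.memory = V4L2_MEMORY_MMAP;

	/* the driver fills buf.sequence with the frame sequence number */
	if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0)
		return -1;

	return buf.sequence;
}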

-- 

Regards,
Sylwester
