* Asking advice for Camera/ISP driver framework design
[not found] <CAFhB-RACaxtkBuXsch5-giTBqCHR+s5_SP-sGeR=E1HVeGfQLQ@mail.gmail.com>
@ 2011-09-14 6:13 ` Cliff Cai
  2011-09-14 7:41 ` Scott Jiang
  ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Cliff Cai @ 2011-09-14 6:13 UTC (permalink / raw)
  To: linux-media

Dear guys,

I'm currently working on a camera/ISP Linux driver project. Of course, I want it to be a V4L2 driver, but I have a question about how to design the driver framework. Let me introduce the background of this ISP (image signal processor) a little bit.

1. The ISP has two output paths. The first one, called the main path, is used to transfer image data for taking pictures and recording; the other one, called the preview path, is used to transfer image data for previewing.
2. The two paths take the same image data input from the sensor, but their outputs are different: the output of the main path is a larger, high-quality image, while the output of the preview path is a smaller image.
3. The two output paths have independent DMA engines used to move image data to system memory.

The problem is that the V4L2 framework currently seems to support only one buffer queue, and in my case two buffer queues are obviously required. Any idea/advice for implementing this kind of V4L2 driver, or any other better solution?

Thanks a lot!
Cliff

^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: Asking advice for Camera/ISP driver framework design
  2011-09-14 6:13 ` Asking advice for Camera/ISP driver framework design Cliff Cai
@ 2011-09-14 7:41 ` Scott Jiang
  2011-09-15 10:20 ` Laurent Pinchart
  2011-09-15 17:14 ` Sakari Ailus
  2 siblings, 0 replies; 12+ messages in thread
From: Scott Jiang @ 2011-09-14 7:41 UTC (permalink / raw)
  To: Cliff Cai; +Cc: linux-media

2011/9/14 Cliff Cai <cliffcai.sh@gmail.com>:
> Dear guys,
>
> I'm currently working on a camera/ISP Linux driver project.Of course,I
> want it to be a V4L2 driver,but I got a problem about how to design
> the driver framework.
> let me introduce the background of this ISP(Image signal processor) a
> little bit.
> 1.The ISP has two output paths,first one called main path which is
> used to transfer image data for taking picture and recording,the other
> one called preview path which is used to transfer image data for
> previewing.
> 2.the two paths have the same image data input from sensor,but their
> outputs are different,the output of main path is high quality and
> larger image,while the output of preview path is smaller image.
> 3.the two output paths have independent DMA engines used to move image
> data to system memory.
>
> The problem is currently, the V4L2 framework seems only support one
> buffer queue,and in my case,obviously,two buffer queues are required.
> Any idea/advice for implementing such kind of V4L2 driver? or any
> other better solutions?
>
> Thanks a lot!
> Cliff
> --
> To unsubscribe from this list: send the line "unsubscribe linux-media" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>

Your chip sounds like the DaVinci ISP; the only difference is the DMA, so you can use the davinci drivers as a reference. If the two DMA interrupts don't arrive at the same time, I guess you must wait for both, because the source image is the same.

Scott

^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: Asking advice for Camera/ISP driver framework design
  2011-09-14 6:13 ` Asking advice for Camera/ISP driver framework design Cliff Cai
  2011-09-14 7:41 ` Scott Jiang
@ 2011-09-15 10:20 ` Laurent Pinchart
  2011-09-15 15:38 ` Cliff Cai
  2011-09-15 17:14 ` Sakari Ailus
  2 siblings, 1 reply; 12+ messages in thread
From: Laurent Pinchart @ 2011-09-15 10:20 UTC (permalink / raw)
  To: Cliff Cai; +Cc: linux-media

Hi Cliff,

On Wednesday 14 September 2011 08:13:32 Cliff Cai wrote:
> Dear guys,
>
> I'm currently working on a camera/ISP Linux driver project.Of course,I
> want it to be a V4L2 driver,but I got a problem about how to design
> the driver framework.
> let me introduce the background of this ISP(Image signal processor) a
> little bit.
> 1.The ISP has two output paths,first one called main path which is
> used to transfer image data for taking picture and recording,the other
> one called preview path which is used to transfer image data for
> previewing.
> 2.the two paths have the same image data input from sensor,but their
> outputs are different,the output of main path is high quality and
> larger image,while the output of preview path is smaller image.
> 3.the two output paths have independent DMA engines used to move image
> data to system memory.
>
> The problem is currently, the V4L2 framework seems only support one
> buffer queue,and in my case,obviously,two buffer queues are required.
> Any idea/advice for implementing such kind of V4L2 driver? or any
> other better solutions?

Your driver should create two video nodes, one for each stream. Each will have its own buffer queue.

The driver should also implement the media controller API to let applications discover that the video nodes are related and how they interact with the ISP.

--
Regards,

Laurent Pinchart

^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: Asking advice for Camera/ISP driver framework design 2011-09-15 10:20 ` Laurent Pinchart @ 2011-09-15 15:38 ` Cliff Cai 2011-09-15 17:10 ` Sakari Ailus 0 siblings, 1 reply; 12+ messages in thread From: Cliff Cai @ 2011-09-15 15:38 UTC (permalink / raw) To: Laurent Pinchart; +Cc: linux-media On Thu, Sep 15, 2011 at 6:20 PM, Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote: > Hi Cliff, > > On Wednesday 14 September 2011 08:13:32 Cliff Cai wrote: >> Dear guys, >> >> I'm currently working on a camera/ISP Linux driver project.Of course,I >> want it to be a V4L2 driver,but I got a problem about how to design >> the driver framework. >> let me introduce the background of this ISP(Image signal processor) a >> little bit. >> 1.The ISP has two output paths,first one called main path which is >> used to transfer image data for taking picture and recording,the other >> one called preview path which is used to transfer image data for >> previewing. >> 2.the two paths have the same image data input from sensor,but their >> outputs are different,the output of main path is high quality and >> larger image,while the output of preview path is smaller image. >> 3.the two output paths have independent DMA engines used to move image >> data to system memory. >> >> The problem is currently, the V4L2 framework seems only support one >> buffer queue,and in my case,obviously,two buffer queues are required. >> Any idea/advice for implementing such kind of V4L2 driver? or any >> other better solutions? > > Your driver should create two video nodes, one for each stream. They will each > have their own buffers queue. > > The driver should also implement the media controller API to let applications > discover that the video nodes are related and how they interact with the ISP. 
Hi Laurent,

As "Documentation/media-framework" says, one of the goals of the media device model is "Discovering a device internal topology, and configuring it at runtime". I'm just a bit confused about how applications can discover the related video nodes. Could you explain it a little more?

Thanks a lot!
Cliff

^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: Asking advice for Camera/ISP driver framework design 2011-09-15 15:38 ` Cliff Cai @ 2011-09-15 17:10 ` Sakari Ailus 2011-09-18 23:25 ` Laurent Pinchart 2011-09-26 10:55 ` Hans Verkuil 0 siblings, 2 replies; 12+ messages in thread From: Sakari Ailus @ 2011-09-15 17:10 UTC (permalink / raw) To: Cliff Cai; +Cc: Laurent Pinchart, linux-media Cliff Cai wrote: > On Thu, Sep 15, 2011 at 6:20 PM, Laurent Pinchart > <laurent.pinchart@ideasonboard.com> wrote: >> Hi Cliff, >> >> On Wednesday 14 September 2011 08:13:32 Cliff Cai wrote: >>> Dear guys, >>> >>> I'm currently working on a camera/ISP Linux driver project.Of course,I >>> want it to be a V4L2 driver,but I got a problem about how to design >>> the driver framework. >>> let me introduce the background of this ISP(Image signal processor) a >>> little bit. >>> 1.The ISP has two output paths,first one called main path which is >>> used to transfer image data for taking picture and recording,the other >>> one called preview path which is used to transfer image data for >>> previewing. >>> 2.the two paths have the same image data input from sensor,but their >>> outputs are different,the output of main path is high quality and >>> larger image,while the output of preview path is smaller image. >>> 3.the two output paths have independent DMA engines used to move image >>> data to system memory. >>> >>> The problem is currently, the V4L2 framework seems only support one >>> buffer queue,and in my case,obviously,two buffer queues are required. >>> Any idea/advice for implementing such kind of V4L2 driver? or any >>> other better solutions? >> >> Your driver should create two video nodes, one for each stream. They will each >> have their own buffers queue. >> >> The driver should also implement the media controller API to let applications >> discover that the video nodes are related and how they interact with the ISP. 
>
> Hi Laurent,
>
> As "Documentation/media-framework" says, one of the goals of media
> device model is "Discovering a device internal topology,and
> configuring it at runtime".I'm just a bit confused about how
> applications can discover the related video notes? Could you explain
> it a little more?

Hi Cliff,

The major and minor numbers of video nodes are provided to user space in struct media_entity_desc (defined in include/linux/media.h) by the MEDIA_IOC_ENUM_ENTITIES ioctl. The major and minor numbers define which device node corresponds to the video device; this isn't trivial for an application to do itself, so there's a library which makes it easier:

<URL:http://git.ideasonboard.org/?p=media-ctl.git;a=summary>

See src/media.h for the interface. An example of how to use this is available in src/main.c.

Entities whose type is MEDIA_ENT_T_DEVNODE_V4L are V4L2 device nodes.

Regards,

--
Sakari Ailus
sakari.ailus@iki.fi

^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: Asking advice for Camera/ISP driver framework design 2011-09-15 17:10 ` Sakari Ailus @ 2011-09-18 23:25 ` Laurent Pinchart 2011-09-26 10:55 ` Hans Verkuil 1 sibling, 0 replies; 12+ messages in thread From: Laurent Pinchart @ 2011-09-18 23:25 UTC (permalink / raw) To: Sakari Ailus; +Cc: Cliff Cai, linux-media Hi Cliff, On Thursday 15 September 2011 19:10:52 Sakari Ailus wrote: > Cliff Cai wrote: > > On Thu, Sep 15, 2011 at 6:20 PM, Laurent Pinchart wrote: > >> On Wednesday 14 September 2011 08:13:32 Cliff Cai wrote: > >>> Dear guys, > >>> > >>> I'm currently working on a camera/ISP Linux driver project.Of course,I > >>> want it to be a V4L2 driver,but I got a problem about how to design > >>> the driver framework. > >>> let me introduce the background of this ISP(Image signal processor) a > >>> little bit. > >>> 1.The ISP has two output paths,first one called main path which is > >>> used to transfer image data for taking picture and recording,the other > >>> one called preview path which is used to transfer image data for > >>> previewing. > >>> 2.the two paths have the same image data input from sensor,but their > >>> outputs are different,the output of main path is high quality and > >>> larger image,while the output of preview path is smaller image. > >>> 3.the two output paths have independent DMA engines used to move image > >>> data to system memory. > >>> > >>> The problem is currently, the V4L2 framework seems only support one > >>> buffer queue,and in my case,obviously,two buffer queues are required. > >>> Any idea/advice for implementing such kind of V4L2 driver? or any > >>> other better solutions? > >> > >> Your driver should create two video nodes, one for each stream. They > >> will each have their own buffers queue. > >> > >> The driver should also implement the media controller API to let > >> applications discover that the video nodes are related and how they > >> interact with the ISP. 
> > > > As "Documentation/media-framework" says, one of the goals of media > > device model is "Discovering a device internal topology,and > > configuring it at runtime".I'm just a bit confused about how > > applications can discover the related video notes? Could you explain > > it a little more? > > The major and minor numbers of video nodes are provided to the user > space in struct media_entity_desc (defined in include/linux/media.h) > using MEDIA_IOC_ENUM_ENTITIES IOCTL. The major and minor numbers define > which device node corresponds to the video device; this isn't trivial > for an application to do so there's a library which makes it easier: > > <URL:http://git.ideasonboard.org/?p=media-ctl.git;a=summary> > > See src/media.h for the interface. An example how to use this is > available in src/main.c. > > Entities the type of which is MEDIA_ENT_T_DEVNODE_V4L are V4L2 device > nodes. http://www.ideasonboard.org/media/omap3isp.ps shows a device topology example generated automatically from the output of the media-ctl tool. -- Regards, Laurent Pinchart ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: Asking advice for Camera/ISP driver framework design 2011-09-15 17:10 ` Sakari Ailus 2011-09-18 23:25 ` Laurent Pinchart @ 2011-09-26 10:55 ` Hans Verkuil 2011-09-26 16:03 ` Laurent Pinchart 1 sibling, 1 reply; 12+ messages in thread From: Hans Verkuil @ 2011-09-26 10:55 UTC (permalink / raw) To: Laurent Pinchart; +Cc: Sakari Ailus, Cliff Cai, linux-media On Thursday, September 15, 2011 19:10:52 Sakari Ailus wrote: > Cliff Cai wrote: > > On Thu, Sep 15, 2011 at 6:20 PM, Laurent Pinchart > > <laurent.pinchart@ideasonboard.com> wrote: > >> Hi Cliff, > >> > >> On Wednesday 14 September 2011 08:13:32 Cliff Cai wrote: > >>> Dear guys, > >>> > >>> I'm currently working on a camera/ISP Linux driver project.Of course,I > >>> want it to be a V4L2 driver,but I got a problem about how to design > >>> the driver framework. > >>> let me introduce the background of this ISP(Image signal processor) a > >>> little bit. > >>> 1.The ISP has two output paths,first one called main path which is > >>> used to transfer image data for taking picture and recording,the other > >>> one called preview path which is used to transfer image data for > >>> previewing. > >>> 2.the two paths have the same image data input from sensor,but their > >>> outputs are different,the output of main path is high quality and > >>> larger image,while the output of preview path is smaller image. > >>> 3.the two output paths have independent DMA engines used to move image > >>> data to system memory. > >>> > >>> The problem is currently, the V4L2 framework seems only support one > >>> buffer queue,and in my case,obviously,two buffer queues are required. > >>> Any idea/advice for implementing such kind of V4L2 driver? or any > >>> other better solutions? > >> > >> Your driver should create two video nodes, one for each stream. They will each > >> have their own buffers queue. 
> >> > >> The driver should also implement the media controller API to let applications > >> discover that the video nodes are related and how they interact with the ISP. > > > > Hi Laurent, > > > > As "Documentation/media-framework" says, one of the goals of media > > device model is "Discovering a device internal topology,and > > configuring it at runtime".I'm just a bit confused about how > > applications can discover the related video notes? Could you explain > > it a little more? > > Hi Cliff, > > The major and minor numbers of video nodes are provided to the user > space in struct media_entity_desc (defined in include/linux/media.h) > using MEDIA_IOC_ENUM_ENTITIES IOCTL. The major and minor numbers define > which device node corresponds to the video device; this isn't trivial > for an application to do so there's a library which makes it easier: > > <URL:http://git.ideasonboard.org/?p=media-ctl.git;a=summary> That reminds me: Laurent, this should really be moved to v4l-utils.git. Any progress on that? Regards, Hans > > See src/media.h for the interface. An example how to use this is > available in src/main.c. > > Entities the type of which is MEDIA_ENT_T_DEVNODE_V4L are V4L2 device nodes. > > Regards, > > ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: Asking advice for Camera/ISP driver framework design 2011-09-26 10:55 ` Hans Verkuil @ 2011-09-26 16:03 ` Laurent Pinchart 2011-09-26 16:38 ` Hans Verkuil 0 siblings, 1 reply; 12+ messages in thread From: Laurent Pinchart @ 2011-09-26 16:03 UTC (permalink / raw) To: Hans Verkuil; +Cc: Sakari Ailus, Cliff Cai, linux-media Hi Hans, On Monday 26 September 2011 12:55:05 Hans Verkuil wrote: > On Thursday, September 15, 2011 19:10:52 Sakari Ailus wrote: > > Cliff Cai wrote: > > > On Thu, Sep 15, 2011 at 6:20 PM, Laurent Pinchart wrote: > > >> On Wednesday 14 September 2011 08:13:32 Cliff Cai wrote: > > >>> Dear guys, > > >>> > > >>> I'm currently working on a camera/ISP Linux driver project.Of > > >>> course,I want it to be a V4L2 driver,but I got a problem about how > > >>> to design the driver framework. > > >>> let me introduce the background of this ISP(Image signal processor) a > > >>> little bit. > > >>> 1.The ISP has two output paths,first one called main path which is > > >>> used to transfer image data for taking picture and recording,the > > >>> other one called preview path which is used to transfer image data > > >>> for previewing. > > >>> 2.the two paths have the same image data input from sensor,but their > > >>> outputs are different,the output of main path is high quality and > > >>> larger image,while the output of preview path is smaller image. > > >>> 3.the two output paths have independent DMA engines used to move > > >>> image data to system memory. > > >>> > > >>> The problem is currently, the V4L2 framework seems only support one > > >>> buffer queue,and in my case,obviously,two buffer queues are required. > > >>> Any idea/advice for implementing such kind of V4L2 driver? or any > > >>> other better solutions? > > >> > > >> Your driver should create two video nodes, one for each stream. They > > >> will each have their own buffers queue. 
> > >> > > >> The driver should also implement the media controller API to let > > >> applications discover that the video nodes are related and how they > > >> interact with the ISP. > > > > > > Hi Laurent, > > > > > > As "Documentation/media-framework" says, one of the goals of media > > > device model is "Discovering a device internal topology,and > > > configuring it at runtime".I'm just a bit confused about how > > > applications can discover the related video notes? Could you explain > > > it a little more? > > > > Hi Cliff, > > > > The major and minor numbers of video nodes are provided to the user > > space in struct media_entity_desc (defined in include/linux/media.h) > > using MEDIA_IOC_ENUM_ENTITIES IOCTL. The major and minor numbers define > > which device node corresponds to the video device; this isn't trivial > > for an application to do so there's a library which makes it easier: > > > > <URL:http://git.ideasonboard.org/?p=media-ctl.git;a=summary> > > That reminds me: Laurent, this should really be moved to v4l-utils.git. > Any progress on that? There are several pending patches for media-ctl that I want to apply first. BTW, the MC API is not restricted to V4L devices. Wouldn't it be a bad signal for the MC API adoption to move media-ctl to v4l-utils ? -- Regards, Laurent Pinchart ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: Asking advice for Camera/ISP driver framework design 2011-09-26 16:03 ` Laurent Pinchart @ 2011-09-26 16:38 ` Hans Verkuil 0 siblings, 0 replies; 12+ messages in thread From: Hans Verkuil @ 2011-09-26 16:38 UTC (permalink / raw) To: Laurent Pinchart; +Cc: Sakari Ailus, Cliff Cai, linux-media On Monday, September 26, 2011 18:03:16 Laurent Pinchart wrote: > Hi Hans, > > On Monday 26 September 2011 12:55:05 Hans Verkuil wrote: > > On Thursday, September 15, 2011 19:10:52 Sakari Ailus wrote: > > > Cliff Cai wrote: > > > > On Thu, Sep 15, 2011 at 6:20 PM, Laurent Pinchart wrote: > > > >> On Wednesday 14 September 2011 08:13:32 Cliff Cai wrote: > > > >>> Dear guys, > > > >>> > > > >>> I'm currently working on a camera/ISP Linux driver project.Of > > > >>> course,I want it to be a V4L2 driver,but I got a problem about how > > > >>> to design the driver framework. > > > >>> let me introduce the background of this ISP(Image signal processor) a > > > >>> little bit. > > > >>> 1.The ISP has two output paths,first one called main path which is > > > >>> used to transfer image data for taking picture and recording,the > > > >>> other one called preview path which is used to transfer image data > > > >>> for previewing. > > > >>> 2.the two paths have the same image data input from sensor,but their > > > >>> outputs are different,the output of main path is high quality and > > > >>> larger image,while the output of preview path is smaller image. > > > >>> 3.the two output paths have independent DMA engines used to move > > > >>> image data to system memory. > > > >>> > > > >>> The problem is currently, the V4L2 framework seems only support one > > > >>> buffer queue,and in my case,obviously,two buffer queues are required. > > > >>> Any idea/advice for implementing such kind of V4L2 driver? or any > > > >>> other better solutions? > > > >> > > > >> Your driver should create two video nodes, one for each stream. They > > > >> will each have their own buffers queue. 
> > > >> > > > >> The driver should also implement the media controller API to let > > > >> applications discover that the video nodes are related and how they > > > >> interact with the ISP. > > > > > > > > Hi Laurent, > > > > > > > > As "Documentation/media-framework" says, one of the goals of media > > > > device model is "Discovering a device internal topology,and > > > > configuring it at runtime".I'm just a bit confused about how > > > > applications can discover the related video notes? Could you explain > > > > it a little more? > > > > > > Hi Cliff, > > > > > > The major and minor numbers of video nodes are provided to the user > > > space in struct media_entity_desc (defined in include/linux/media.h) > > > using MEDIA_IOC_ENUM_ENTITIES IOCTL. The major and minor numbers define > > > which device node corresponds to the video device; this isn't trivial > > > for an application to do so there's a library which makes it easier: > > > > > > <URL:http://git.ideasonboard.org/?p=media-ctl.git;a=summary> > > > > That reminds me: Laurent, this should really be moved to v4l-utils.git. > > Any progress on that? > > There are several pending patches for media-ctl that I want to apply first. > > BTW, the MC API is not restricted to V4L devices. Wouldn't it be a bad signal > for the MC API adoption to move media-ctl to v4l-utils ? Right now the MC is primarily used by V4L, so I think a public git repo on linuxtv.org is a better place for it. Perhaps called media-utils.git. Actually, what I would like to see is that the V4L, DVB and media utilities are all combined in one git repository. They are all closely related IMO. Regards, Hans ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: Asking advice for Camera/ISP driver framework design 2011-09-14 6:13 ` Asking advice for Camera/ISP driver framework design Cliff Cai 2011-09-14 7:41 ` Scott Jiang 2011-09-15 10:20 ` Laurent Pinchart @ 2011-09-15 17:14 ` Sakari Ailus 2011-09-16 2:44 ` Cliff Cai 2 siblings, 1 reply; 12+ messages in thread From: Sakari Ailus @ 2011-09-15 17:14 UTC (permalink / raw) To: Cliff Cai; +Cc: linux-media Cliff Cai wrote: > Dear guys, Hi Cliff, > I'm currently working on a camera/ISP Linux driver project.Of course,I > want it to be a V4L2 driver,but I got a problem about how to design > the driver framework. > let me introduce the background of this ISP(Image signal processor) a > little bit. > 1.The ISP has two output paths,first one called main path which is > used to transfer image data for taking picture and recording,the other > one called preview path which is used to transfer image data for > previewing. > 2.the two paths have the same image data input from sensor,but their > outputs are different,the output of main path is high quality and > larger image,while the output of preview path is smaller image. Is the ISP able to process images which already are in memory, or is this only from the sensor? > 3.the two output paths have independent DMA engines used to move image > data to system memory. > > The problem is currently, the V4L2 framework seems only support one > buffer queue,and in my case,obviously,two buffer queues are required. > Any idea/advice for implementing such kind of V4L2 driver? or any > other better solutions? Regards, -- Sakari Ailus sakari.ailus@iki.fi ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: Asking advice for Camera/ISP driver framework design
  2011-09-15 17:14 ` Sakari Ailus
@ 2011-09-16 2:44 ` Cliff Cai
  2011-09-16 20:23 ` Sakari Ailus
  0 siblings, 1 reply; 12+ messages in thread
From: Cliff Cai @ 2011-09-16 2:44 UTC (permalink / raw)
  To: Sakari Ailus; +Cc: linux-media

On Fri, Sep 16, 2011 at 1:14 AM, Sakari Ailus <sakari.ailus@iki.fi> wrote:
> Cliff Cai wrote:
>> Dear guys,
>
> Hi Cliff,
>
>> I'm currently working on a camera/ISP Linux driver project.Of course,I
>> want it to be a V4L2 driver,but I got a problem about how to design
>> the driver framework.
>> let me introduce the background of this ISP(Image signal processor) a
>> little bit.
>> 1.The ISP has two output paths,first one called main path which is
>> used to transfer image data for taking picture and recording,the other
>> one called preview path which is used to transfer image data for
>> previewing.
>> 2.the two paths have the same image data input from sensor,but their
>> outputs are different,the output of main path is high quality and
>> larger image,while the output of preview path is smaller image.
>
> Is the ISP able to process images which already are in memory, or is
> this only from the sensor?

Yes, it has another DMA to achieve this.

Cliff

>> 3.the two output paths have independent DMA engines used to move image
>> data to system memory.
>>
>> The problem is currently, the V4L2 framework seems only support one
>> buffer queue,and in my case,obviously,two buffer queues are required.
>> Any idea/advice for implementing such kind of V4L2 driver? or any
>> other better solutions?
>
> Regards,
>
> --
> Sakari Ailus
> sakari.ailus@iki.fi
>

^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: Asking advice for Camera/ISP driver framework design
  2011-09-16 2:44 ` Cliff Cai
@ 2011-09-16 20:23 ` Sakari Ailus
  0 siblings, 0 replies; 12+ messages in thread
From: Sakari Ailus @ 2011-09-16 20:23 UTC (permalink / raw)
  To: Cliff Cai; +Cc: linux-media

On Fri, Sep 16, 2011 at 10:44:00AM +0800, Cliff Cai wrote:
> On Fri, Sep 16, 2011 at 1:14 AM, Sakari Ailus <sakari.ailus@iki.fi> wrote:
> > Cliff Cai wrote:
> >> Dear guys,
> >
> > Hi Cliff,
> >
> >> I'm currently working on a camera/ISP Linux driver project.Of course,I
> >> want it to be a V4L2 driver,but I got a problem about how to design
> >> the driver framework.
> >> let me introduce the background of this ISP(Image signal processor) a
> >> little bit.
> >> 1.The ISP has two output paths,first one called main path which is
> >> used to transfer image data for taking picture and recording,the other
> >> one called preview path which is used to transfer image data for
> >> previewing.
> >> 2.the two paths have the same image data input from sensor,but their
> >> outputs are different,the output of main path is high quality and
> >> larger image,while the output of preview path is smaller image.
> >
> > Is the ISP able to process images which already are in memory, or is
> > this only from the sensor?
>
> yes,it has another DMA to achieve this.

If you wish to support this, there would need to be an additional video node.

What about the image processing performed by this ISP? Does it e.g. do scaling or cropping? These should also be configured using the V4L2 subdev interface. The OMAP 3 ISP is a good example of this; the technical reference manual is publicly available and the driver is exemplary.

Your original message hints that such functionality is available. It would be very helpful to know what kind of processing (scaling, pixel format conversion, crop, etc.) is supported by the ISP and what the exact data paths through it are. That defines what the media device graph implemented by the ISP driver should be.
If you could show a graphical representation of this, all the better. Kind regards, -- Sakari Ailus e-mail: sakari.ailus@iki.fi jabber/XMPP/Gmail: sailus@retiisi.org.uk ^ permalink raw reply [flat|nested] 12+ messages in thread
end of thread, other threads:[~2011-09-26 16:38 UTC | newest]
Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <CAFhB-RACaxtkBuXsch5-giTBqCHR+s5_SP-sGeR=E1HVeGfQLQ@mail.gmail.com>
2011-09-14 6:13 ` Asking advice for Camera/ISP driver framework design Cliff Cai
2011-09-14 7:41 ` Scott Jiang
2011-09-15 10:20 ` Laurent Pinchart
2011-09-15 15:38 ` Cliff Cai
2011-09-15 17:10 ` Sakari Ailus
2011-09-18 23:25 ` Laurent Pinchart
2011-09-26 10:55 ` Hans Verkuil
2011-09-26 16:03 ` Laurent Pinchart
2011-09-26 16:38 ` Hans Verkuil
2011-09-15 17:14 ` Sakari Ailus
2011-09-16 2:44 ` Cliff Cai
2011-09-16 20:23 ` Sakari Ailus