* [PATCH RFC 0/4] V4L2: Vendor specific media bus formats/ frame size control
@ 2012-08-23 9:51 Sylwester Nawrocki
2012-08-23 9:51 ` [PATCH RFC 1/4] V4L: Add V4L2_CID_FRAMESIZE image source class control Sylwester Nawrocki
` (5 more replies)
0 siblings, 6 replies; 23+ messages in thread
From: Sylwester Nawrocki @ 2012-08-23 9:51 UTC (permalink / raw)
To: linux-media
Cc: riverful.kim, sw0312.kim, sakari.ailus, g.liakhovetski,
laurent.pinchart, kyungmin.park, Sylwester Nawrocki
This patch series introduces a new image source class control - V4L2_CID_FRAMESIZE -
and a vendor/device specific media bus format section.
There was already a discussion WRT handling interleaved image data [1].
I'm not terribly happy with those vendor specific media bus formats, but I
couldn't find a better solution that would comply with the V4L2 API concepts
and would work reliably.
What could be improved is the lookup of a media bus code based on the fourcc;
it could probably be moved to some common module. But with only one driver it
currently might not make much sense to add it, especially since a lookup entry
had to be added to the private format info array in s5p-fimc anyway.
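For reference, such a common fourcc-to-media-bus-code helper could look roughly
like the sketch below. The function name and the table contents are only
illustrative (s5p-fimc currently keeps this mapping in its private format info
array), so treat it as a sketch rather than a proposed implementation:

#include <linux/kernel.h>
#include <linux/videodev2.h>
#include <linux/v4l2-mediabus.h>

/* Illustrative mapping only; a real table would cover more formats. */
static const struct {
	u32 fourcc;
	enum v4l2_mbus_pixelcode code;
} fourcc_to_mbus[] = {
	{ V4L2_PIX_FMT_JPEG,         V4L2_MBUS_FMT_JPEG_1X8 },
	{ V4L2_PIX_FMT_S5C_UYVY_JPG, V4L2_MBUS_FMT_S5C_UYVY_JPEG_1X8 },
};

/* Return the media bus code matching a fourcc, or 0 if none is known. */
static enum v4l2_mbus_pixelcode v4l2_fourcc_to_mbus_code(u32 fourcc)
{
	unsigned int i;

	for (i = 0; i < ARRAY_SIZE(fourcc_to_mbus); i++)
		if (fourcc_to_mbus[i].fourcc == fourcc)
			return fourcc_to_mbus[i].code;
	return 0;
}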
Comments ?
--
Regards,
Sylwester
Sylwester Nawrocki (4):
V4L: Add V4L2_CID_FRAMESIZE image source class control
V4L: Add V4L2_MBUS_FMT_S5C_UYVY_JPEG_1X8 media bus format
V4L: Add V4L2_PIX_FMT_S5C_UYVY_JPG fourcc definition
s5p-fimc: Add support for V4L2_PIX_FMT_S5C_UYVY_JPG fourcc
Documentation/DocBook/media/v4l/compat.xml | 4 +
Documentation/DocBook/media/v4l/controls.xml | 12 +++
Documentation/DocBook/media/v4l/pixfmt.xml | 10 +++
Documentation/DocBook/media/v4l/subdev-formats.xml | 45 +++++++++++
drivers/media/platform/s5p-fimc/fimc-capture.c | 86 +++++++++++++++++-----
drivers/media/platform/s5p-fimc/fimc-core.c | 16 +++-
drivers/media/platform/s5p-fimc/fimc-core.h | 26 +++++--
drivers/media/platform/s5p-fimc/fimc-reg.c | 3 +-
drivers/media/platform/s5p-fimc/mipi-csis.c | 6 +-
drivers/media/v4l2-core/v4l2-ctrls.c | 2 +
include/linux/v4l2-mediabus.h | 5 ++
include/linux/videodev2.h | 2 +
12 files changed, 191 insertions(+), 26 deletions(-)
--
1.7.11.3
[1] http://www.mail-archive.com/linux-media@vger.kernel.org/msg42707.html
^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH RFC 1/4] V4L: Add V4L2_CID_FRAMESIZE image source class control
2012-08-23 9:51 [PATCH RFC 0/4] V4L2: Vendor specific media bus formats/ frame size control Sylwester Nawrocki
@ 2012-08-23 9:51 ` Sylwester Nawrocki
2012-08-23 12:13 ` Sakari Ailus
2012-08-23 9:51 ` [PATCH RFC 2/4] V4L: Add V4L2_MBUS_FMT_S5C_UYVY_JPEG_1X8 media bus format Sylwester Nawrocki
` (4 subsequent siblings)
5 siblings, 1 reply; 23+ messages in thread
From: Sylwester Nawrocki @ 2012-08-23 9:51 UTC (permalink / raw)
To: linux-media
Cc: riverful.kim, sw0312.kim, sakari.ailus, g.liakhovetski,
laurent.pinchart, kyungmin.park, Sylwester Nawrocki
The V4L2_CID_FRAMESIZE control determines maximum number
of media bus samples transmitted within a single data frame.
It is useful for determining size of data buffer at the
receiver side.
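For illustration only, a sensor subdev driver could expose this control roughly
as in the sketch below. The driver state structure, the helper names and the
value range are assumptions of this sketch, not something defined by this patch:

#include <media/v4l2-ctrls.h>
#include <media/v4l2-subdev.h>

struct sensor_state {				/* hypothetical driver state */
	struct v4l2_subdev sd;
	struct v4l2_ctrl_handler ctrl_handler;
	struct v4l2_ctrl *framesize;
};

static int sensor_init_controls(struct sensor_state *state)
{
	struct v4l2_ctrl_handler *hdl = &state->ctrl_handler;

	v4l2_ctrl_handler_init(hdl, 1);
	/* v4l2_ctrl_fill() marks V4L2_CID_FRAMESIZE as read-only. */
	state->framesize = v4l2_ctrl_new_std(hdl, NULL, V4L2_CID_FRAMESIZE,
					     0, 8 * 1024 * 1024, 1,
					     1024 * 1024);
	if (hdl->error)
		return hdl->error;

	state->sd.ctrl_handler = hdl;
	return 0;
}

/* Called by the driver whenever the active format changes, e.g. in set_fmt. */
static void sensor_update_framesize(struct sensor_state *state, u32 size)
{
	v4l2_ctrl_s_ctrl(state->framesize, size);
}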
Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
---
Documentation/DocBook/media/v4l/controls.xml | 12 ++++++++++++
drivers/media/v4l2-core/v4l2-ctrls.c | 2 ++
include/linux/videodev2.h | 1 +
3 files changed, 15 insertions(+)
diff --git a/Documentation/DocBook/media/v4l/controls.xml b/Documentation/DocBook/media/v4l/controls.xml
index 93b9c68..ad5d4e5 100644
--- a/Documentation/DocBook/media/v4l/controls.xml
+++ b/Documentation/DocBook/media/v4l/controls.xml
@@ -4184,6 +4184,18 @@ interface and may change in the future.</para>
conversion.
</entry>
</row>
+ <row>
+ <entry spanname="id"><constant>V4L2_CID_FRAMESIZE</constant></entry>
+ <entry>integer</entry>
+ </row>
+ <row>
+ <entry spanname="descr">Maximum size of a data frame in media bus
+ sample units. This control determines maximum number of samples
+ transmitted per single compressed data frame. For generic raw
+ pixel formats the value of this control is undefined. This is
+ a read-only control.
+ </entry>
+ </row>
<row><entry></entry></row>
</tbody>
</tgroup>
diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c
index b6a2ee7..0043fd2 100644
--- a/drivers/media/v4l2-core/v4l2-ctrls.c
+++ b/drivers/media/v4l2-core/v4l2-ctrls.c
@@ -727,6 +727,7 @@ const char *v4l2_ctrl_get_name(u32 id)
case V4L2_CID_VBLANK: return "Vertical Blanking";
case V4L2_CID_HBLANK: return "Horizontal Blanking";
case V4L2_CID_ANALOGUE_GAIN: return "Analogue Gain";
+ case V4L2_CID_FRAMESIZE: return "Maximum Frame Size";
/* Image processing controls */
case V4L2_CID_IMAGE_PROC_CLASS: return "Image Processing Controls";
@@ -933,6 +934,7 @@ void v4l2_ctrl_fill(u32 id, const char **name, enum v4l2_ctrl_type *type,
case V4L2_CID_FLASH_STROBE_STATUS:
case V4L2_CID_AUTO_FOCUS_STATUS:
case V4L2_CID_FLASH_READY:
+ case V4L2_CID_FRAMESIZE:
*flags |= V4L2_CTRL_FLAG_READ_ONLY;
break;
}
diff --git a/include/linux/videodev2.h b/include/linux/videodev2.h
index 7a147c8..d3fd19f 100644
--- a/include/linux/videodev2.h
+++ b/include/linux/videodev2.h
@@ -1985,6 +1985,7 @@ enum v4l2_jpeg_chroma_subsampling {
#define V4L2_CID_VBLANK (V4L2_CID_IMAGE_SOURCE_CLASS_BASE + 1)
#define V4L2_CID_HBLANK (V4L2_CID_IMAGE_SOURCE_CLASS_BASE + 2)
#define V4L2_CID_ANALOGUE_GAIN (V4L2_CID_IMAGE_SOURCE_CLASS_BASE + 3)
+#define V4L2_CID_FRAMESIZE (V4L2_CID_IMAGE_SOURCE_CLASS_BASE + 4)
/* Image processing controls */
#define V4L2_CID_IMAGE_PROC_CLASS_BASE (V4L2_CTRL_CLASS_IMAGE_PROC | 0x900)
--
1.7.11.3
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [PATCH RFC 2/4] V4L: Add V4L2_MBUS_FMT_S5C_UYVY_JPEG_1X8 media bus format
2012-08-23 9:51 [PATCH RFC 0/4] V4L2: Vendor specific media bus formats/ frame size control Sylwester Nawrocki
2012-08-23 9:51 ` [PATCH RFC 1/4] V4L: Add V4L2_CID_FRAMESIZE image source class control Sylwester Nawrocki
@ 2012-08-23 9:51 ` Sylwester Nawrocki
2012-08-23 9:51 ` [PATCH RFC 3/4] V4L: Add V4L2_PIX_FMT_S5C_UYVY_JPG fourcc definition Sylwester Nawrocki
` (3 subsequent siblings)
5 siblings, 0 replies; 23+ messages in thread
From: Sylwester Nawrocki @ 2012-08-23 9:51 UTC (permalink / raw)
To: linux-media
Cc: riverful.kim, sw0312.kim, sakari.ailus, g.liakhovetski,
laurent.pinchart, kyungmin.park, Sylwester Nawrocki
This patch adds a media bus pixel code for the interleaved JPEG/UYVY
image format used by Samsung S5C73MX cameras. This interleaved image
data is transferred on the MIPI-CSI2 bus as User Defined Byte-based Data.
It also defines an experimental vendor and device specific media bus
formats section and adds the related DocBook documentation.
Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
---
Documentation/DocBook/media/v4l/compat.xml | 4 ++
Documentation/DocBook/media/v4l/subdev-formats.xml | 45 ++++++++++++++++++++++
include/linux/v4l2-mediabus.h | 5 +++
3 files changed, 54 insertions(+)
diff --git a/Documentation/DocBook/media/v4l/compat.xml b/Documentation/DocBook/media/v4l/compat.xml
index 98e8d08..5d2480b 100644
--- a/Documentation/DocBook/media/v4l/compat.xml
+++ b/Documentation/DocBook/media/v4l/compat.xml
@@ -2605,6 +2605,10 @@ ioctls.</para>
<listitem>
<para>Support for frequency band enumeration: &VIDIOC-ENUM-FREQ-BANDS; ioctl.</para>
</listitem>
+ <listitem>
+ <para>Vendor and device specific media bus pixel formats.
+ <xref linkend="v4l2-mbus-vendor-spec-fmts" />.</para>
+ </listitem>
</itemizedlist>
</section>
diff --git a/Documentation/DocBook/media/v4l/subdev-formats.xml b/Documentation/DocBook/media/v4l/subdev-formats.xml
index 49c532e..d7aa870 100644
--- a/Documentation/DocBook/media/v4l/subdev-formats.xml
+++ b/Documentation/DocBook/media/v4l/subdev-formats.xml
@@ -2565,5 +2565,50 @@
</tgroup>
</table>
</section>
+
+ <section id="v4l2-mbus-vendor-spec-fmts">
+ <title>Vendor and Device Specific Formats</title>
+
+ <note>
+ <title> Experimental </title>
+ <para>This is an <link linkend="experimental">experimental</link>
+interface and may change in the future.</para>
+ </note>
+
+ <para> This section lists complex data formats that are either vendor or
+ device specific. These formats comprise raw and compressed image data
+ and optional meta-data within a single frame.
+ </para>
+
+ <para>The following table lists the existing vendor and device specific
+ formats.</para>
+
+ <table pgwide="0" frame="none" id="v4l2-mbus-pixelcode-vendor-specific">
+ <title>Vendor and device specific formats</title>
+ <tgroup cols="3">
+ <colspec colname="id" align="left" />
+ <colspec colname="code" align="left"/>
+ <colspec colname="remarks" align="left"/>
+ <thead>
+ <row>
+ <entry>Identifier</entry>
+ <entry>Code</entry>
+ <entry>Comments</entry>
+ </row>
+ </thead>
+ <tbody valign="top">
+ <row id="V4L2-MBUS-FMT-S5C-UYVY-JPG-1X8">
+ <entry>V4L2_MBUS_FMT_S5C_UYVY_JPG_1X8</entry>
+ <entry>0x8001</entry>
+ <entry>
+ Interleaved raw UYVY and JPEG image format with embedded
meta-data, produced by S5C73M3 camera sensors.
+ </entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+ </section>
+
</section>
</section>
diff --git a/include/linux/v4l2-mediabus.h b/include/linux/v4l2-mediabus.h
index 5ea7f75..b98c566 100644
--- a/include/linux/v4l2-mediabus.h
+++ b/include/linux/v4l2-mediabus.h
@@ -92,6 +92,11 @@ enum v4l2_mbus_pixelcode {
/* JPEG compressed formats - next is 0x4002 */
V4L2_MBUS_FMT_JPEG_1X8 = 0x4001,
+
+ /* Vendor specific formats - next is 0x8002 */
+
+ /* S5C73M3 interleaved UYVY and JPEG */
+ V4L2_MBUS_FMT_S5C_UYVY_JPEG_1X8 = 0x8001,
};
/**
--
1.7.11.3
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [PATCH RFC 3/4] V4L: Add V4L2_PIX_FMT_S5C_UYVY_JPG fourcc definition
2012-08-23 9:51 [PATCH RFC 0/4] V4L2: Vendor specific media bus formats/ frame size control Sylwester Nawrocki
2012-08-23 9:51 ` [PATCH RFC 1/4] V4L: Add V4L2_CID_FRAMESIZE image source class control Sylwester Nawrocki
2012-08-23 9:51 ` [PATCH RFC 2/4] V4L: Add V4L2_MBUS_FMT_S5C_UYVY_JPEG_1X8 media bus format Sylwester Nawrocki
@ 2012-08-23 9:51 ` Sylwester Nawrocki
2012-08-23 9:51 ` [PATCH RFC 4/4] s5p-fimc: Add support for V4L2_PIX_FMT_S5C_UYVY_JPG fourcc Sylwester Nawrocki
` (2 subsequent siblings)
5 siblings, 0 replies; 23+ messages in thread
From: Sylwester Nawrocki @ 2012-08-23 9:51 UTC (permalink / raw)
To: linux-media
Cc: riverful.kim, sw0312.kim, sakari.ailus, g.liakhovetski,
laurent.pinchart, kyungmin.park, Sylwester Nawrocki
V4L2_PIX_FMT_S5C_UYVY_JPG is a single-planar image format generated by
the S5C73M3 camera. The image data consists of interleaved JPEG compressed
data and raw UYVY data, with meta-data appended at the end.
The meta-data is a list of offsets to UYVY data blocks of fixed size, which
may be followed by a JPEG data chunk. Whether a JPEG chunk is present
between two subsequent UYVY blocks can be figured out from the difference
between two offset values.
All numbers are 4-byte unsigned integers.
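For illustration, user space could walk the offset list roughly as in the
sketch below. The number of offsets, the UYVY block size and the location of
the meta-data within the buffer are assumptions of this sketch; they are not
specified by this patch:

#include <stddef.h>
#include <stdint.h>

struct chunk_cb {
	void (*uyvy)(const uint8_t *data, size_t len, void *priv);
	void (*jpeg)(const uint8_t *data, size_t len, void *priv);
};

/*
 * Walk a captured frame: each offset points at a fixed-size UYVY block;
 * a gap before the next offset is a JPEG data chunk.
 */
static void s5c_walk_frame(const uint8_t *buf, const uint32_t *offsets,
			   unsigned int num_offsets, uint32_t uyvy_blk_size,
			   const struct chunk_cb *cb, void *priv)
{
	unsigned int i;

	for (i = 0; i < num_offsets; i++) {
		uint32_t off = offsets[i];

		cb->uyvy(buf + off, uyvy_blk_size, priv);

		if (i + 1 < num_offsets &&
		    offsets[i + 1] > off + uyvy_blk_size)
			cb->jpeg(buf + off + uyvy_blk_size,
				 offsets[i + 1] - off - uyvy_blk_size, priv);
	}
}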
Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
---
Documentation/DocBook/media/v4l/pixfmt.xml | 10 ++++++++++
include/linux/videodev2.h | 1 +
2 files changed, 11 insertions(+)
diff --git a/Documentation/DocBook/media/v4l/pixfmt.xml b/Documentation/DocBook/media/v4l/pixfmt.xml
index e58934c..8492ffb 100644
--- a/Documentation/DocBook/media/v4l/pixfmt.xml
+++ b/Documentation/DocBook/media/v4l/pixfmt.xml
@@ -995,6 +995,16 @@ the other bits are set to 0.</entry>
<entry>Old 6-bit greyscale format. Only the most significant 6 bits of each byte are used,
the other bits are set to 0.</entry>
</row>
+ <row id="V4L2-PIX-FMT-S5C-UYVY-JPG">
+ <entry><constant>V4L2_PIX_FMT_S5C_UYVY_JPG</constant></entry>
+ <entry>'S5CJ'</entry>
+ <entry>A single planar image format generated by S5C73M3 camera.
+An interleaved JPEG compressed and raw UYVY data stream which is immediately
+followed by meta-data. The meta-data is a list of offsets to UYVY data blocks
+of fixed size, each of which may be followed by a JPEG data chunk. All numbers
+in the meta-data part are 4-byte
+unsigned integers.</entry>
+ </row>
</tbody>
</tgroup>
</table>
diff --git a/include/linux/videodev2.h b/include/linux/videodev2.h
index d3fd19f..fc7ee90 100644
--- a/include/linux/videodev2.h
+++ b/include/linux/videodev2.h
@@ -434,6 +434,7 @@ struct v4l2_pix_format {
#define V4L2_PIX_FMT_KONICA420 v4l2_fourcc('K', 'O', 'N', 'I') /* YUV420 planar in blocks of 256 pixels */
#define V4L2_PIX_FMT_JPGL v4l2_fourcc('J', 'P', 'G', 'L') /* JPEG-Lite */
#define V4L2_PIX_FMT_SE401 v4l2_fourcc('S', '4', '0', '1') /* se401 janggu compressed rgb */
+#define V4L2_PIX_FMT_S5C_UYVY_JPG v4l2_fourcc('S', '5', 'C', 'I') /* S5C73M3 interleaved UYVY/JPEG */
/*
* F O R M A T E N U M E R A T I O N
--
1.7.11.3
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [PATCH RFC 4/4] s5p-fimc: Add support for V4L2_PIX_FMT_S5C_UYVY_JPG fourcc
2012-08-23 9:51 [PATCH RFC 0/4] V4L2: Vendor specific media bus formats/ frame size control Sylwester Nawrocki
` (2 preceding siblings ...)
2012-08-23 9:51 ` [PATCH RFC 3/4] V4L: Add V4L2_PIX_FMT_S5C_UYVY_JPG fourcc definition Sylwester Nawrocki
@ 2012-08-23 9:51 ` Sylwester Nawrocki
2012-08-27 15:48 ` [PATCH RFC 0/4] V4L2: Vendor specific media bus formats/ frame size control Nicolas THERY
2012-09-14 15:00 ` how to crop/scale in mono-subdev camera sensor driver? Nicolas THERY
5 siblings, 0 replies; 23+ messages in thread
From: Sylwester Nawrocki @ 2012-08-23 9:51 UTC (permalink / raw)
To: linux-media
Cc: riverful.kim, sw0312.kim, sakari.ailus, g.liakhovetski,
laurent.pinchart, kyungmin.park, Sylwester Nawrocki
This patch adds support for the interleaved JPEG/UYVY
V4L2_PIX_FMT_S5C_UYVY_JPG image format.
To ensure the size of the allocated buffers is correct for a given subdev
configuration at VIDIOC_STREAMON time, an additional check is added to
the video data pipeline validation routine.
The FMT_FLAGS_COMPRESSED flag indicates that the buffer size must be
retrieved from a sensor subdev, by means of the V4L2_CID_FRAMESIZE
control.
Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
---
drivers/media/platform/s5p-fimc/fimc-capture.c | 86 ++++++++++++++++++++------
drivers/media/platform/s5p-fimc/fimc-core.c | 16 ++++-
drivers/media/platform/s5p-fimc/fimc-core.h | 26 ++++++--
drivers/media/platform/s5p-fimc/fimc-reg.c | 3 +-
drivers/media/platform/s5p-fimc/mipi-csis.c | 6 +-
5 files changed, 111 insertions(+), 26 deletions(-)
diff --git a/drivers/media/platform/s5p-fimc/fimc-capture.c b/drivers/media/platform/s5p-fimc/fimc-capture.c
index 8e413dd..ab062f3 100644
--- a/drivers/media/platform/s5p-fimc/fimc-capture.c
+++ b/drivers/media/platform/s5p-fimc/fimc-capture.c
@@ -349,6 +349,8 @@ static int queue_setup(struct vb2_queue *vq, const struct v4l2_format *pfmt,
unsigned int size = (wh * fmt->depth[i]) / 8;
if (pixm)
sizes[i] = max(size, pixm->plane_fmt[i].sizeimage);
+ else if (fimc_fmt_is_user_defined(fmt->color))
+ sizes[i] = frame->payload[i];
else
sizes[i] = max_t(u32, size, frame->payload[i]);
@@ -608,10 +610,10 @@ static struct fimc_fmt *fimc_capture_try_format(struct fimc_ctx *ctx,
u32 mask = FMT_FLAGS_CAM;
struct fimc_fmt *ffmt;
- /* Color conversion from/to JPEG is not supported */
+ /* Conversion from/to JPEG or User Defined format is not supported */
if (code && ctx->s_frame.fmt && pad == FIMC_SD_PAD_SOURCE &&
- fimc_fmt_is_jpeg(ctx->s_frame.fmt->color))
- *code = V4L2_MBUS_FMT_JPEG_1X8;
+ fimc_fmt_is_user_defined(ctx->s_frame.fmt->color))
+ *code = ctx->s_frame.fmt->mbus_code;
if (fourcc && *fourcc != V4L2_PIX_FMT_JPEG && pad != FIMC_SD_PAD_SINK)
mask |= FMT_FLAGS_M2M;
@@ -625,18 +627,19 @@ static struct fimc_fmt *fimc_capture_try_format(struct fimc_ctx *ctx,
*fourcc = ffmt->fourcc;
if (pad == FIMC_SD_PAD_SINK) {
- max_w = fimc_fmt_is_jpeg(ffmt->color) ?
+ max_w = fimc_fmt_is_user_defined(ffmt->color) ?
pl->scaler_dis_w : pl->scaler_en_w;
/* Apply the camera input interface pixel constraints */
v4l_bound_align_image(width, max_t(u32, *width, 32), max_w, 4,
height, max_t(u32, *height, 32),
FIMC_CAMIF_MAX_HEIGHT,
- fimc_fmt_is_jpeg(ffmt->color) ? 3 : 1,
+ fimc_fmt_is_user_defined(ffmt->color) ?
+ 3 : 1,
0);
return ffmt;
}
/* Can't scale or crop in transparent (JPEG) transfer mode */
- if (fimc_fmt_is_jpeg(ffmt->color)) {
+ if (fimc_fmt_is_user_defined(ffmt->color)) {
*width = ctx->s_frame.f_width;
*height = ctx->s_frame.f_height;
return ffmt;
@@ -681,7 +684,7 @@ static void fimc_capture_try_selection(struct fimc_ctx *ctx,
u32 max_sc_h, max_sc_v;
/* In JPEG transparent transfer mode cropping is not supported */
- if (fimc_fmt_is_jpeg(ctx->d_frame.fmt->color)) {
+ if (fimc_fmt_is_user_defined(ctx->d_frame.fmt->color)) {
r->width = sink->f_width;
r->height = sink->f_height;
r->left = r->top = 0;
@@ -844,6 +847,23 @@ static int fimc_pipeline_try_format(struct fimc_ctx *ctx,
return 0;
}
+static int get_sensor_frame_size(struct v4l2_subdev *sensor, __u32 *size)
+{
+ struct v4l2_ctrl *ctrl;
+
+ ctrl = v4l2_ctrl_find(sensor->ctrl_handler, V4L2_CID_FRAMESIZE);
+ if (ctrl == NULL)
+ return -ENXIO;
+
+ *size = v4l2_ctrl_g_ctrl(ctrl);
+
+ if (*size <= FIMC_MAX_JPEG_BUF_SIZE)
+ return 0;
+
+ v4l2_err(sensor->v4l2_dev, "Unsupported buffer size: %u\n", *size);
+ return -EINVAL;
+}
+
static int fimc_cap_g_fmt_mplane(struct file *file, void *fh,
struct v4l2_format *f)
{
@@ -862,7 +882,7 @@ static int fimc_cap_try_fmt_mplane(struct file *file, void *fh,
struct v4l2_mbus_framefmt mf;
struct fimc_fmt *ffmt = NULL;
- if (pix->pixelformat == V4L2_PIX_FMT_JPEG) {
+ if (fimc_jpeg_fourcc(pix->pixelformat)) {
fimc_capture_try_format(ctx, &pix->width, &pix->height,
NULL, &pix->pixelformat,
FIMC_SD_PAD_SINK);
@@ -876,25 +896,32 @@ static int fimc_cap_try_fmt_mplane(struct file *file, void *fh,
return -EINVAL;
if (!fimc->vid_cap.user_subdev_api) {
- mf.width = pix->width;
+ mf.width = pix->width;
mf.height = pix->height;
- mf.code = ffmt->mbus_code;
+ mf.code = ffmt->mbus_code;
fimc_md_graph_lock(fimc);
fimc_pipeline_try_format(ctx, &mf, &ffmt, false);
fimc_md_graph_unlock(fimc);
-
- pix->width = mf.width;
- pix->height = mf.height;
+ pix->width = mf.width;
+ pix->height = mf.height;
if (ffmt)
pix->pixelformat = ffmt->fourcc;
}
fimc_adjust_mplane_format(ffmt, pix->width, pix->height, pix);
+
+ if (ffmt->flags & FMT_FLAGS_COMPRESSED)
+ get_sensor_frame_size(fimc->pipeline.subdevs[IDX_SENSOR],
+ &pix->plane_fmt[0].sizeimage);
+
return 0;
}
-static void fimc_capture_mark_jpeg_xfer(struct fimc_ctx *ctx, bool jpeg)
+static void fimc_capture_mark_jpeg_xfer(struct fimc_ctx *ctx,
+ enum fimc_color_fmt color)
{
+ bool jpeg = fimc_fmt_is_user_defined(color);
+
ctx->scaler.enabled = !jpeg;
fimc_ctrls_activate(ctx, !jpeg);
@@ -917,7 +944,7 @@ static int fimc_capture_set_format(struct fimc_dev *fimc, struct v4l2_format *f)
return -EBUSY;
/* Pre-configure format at camera interface input, for JPEG only */
- if (pix->pixelformat == V4L2_PIX_FMT_JPEG) {
+ if (fimc_jpeg_fourcc(pix->pixelformat)) {
fimc_capture_try_format(ctx, &pix->width, &pix->height,
NULL, &pix->pixelformat,
FIMC_SD_PAD_SINK);
@@ -950,7 +977,15 @@ static int fimc_capture_set_format(struct fimc_dev *fimc, struct v4l2_format *f)
}
fimc_adjust_mplane_format(ff->fmt, pix->width, pix->height, pix);
- for (i = 0; i < ff->fmt->colplanes; i++)
+
+ if (ff->fmt->flags & FMT_FLAGS_COMPRESSED) {
+ ret = get_sensor_frame_size(fimc->pipeline.subdevs[IDX_SENSOR],
+ &pix->plane_fmt[0].sizeimage);
+ if (ret < 0)
+ return ret;
+ }
+
+ for (i = 0; i < ff->fmt->memplanes; i++)
ff->payload[i] = pix->plane_fmt[i].sizeimage;
set_frame_bounds(ff, pix->width, pix->height);
@@ -958,7 +993,7 @@ static int fimc_capture_set_format(struct fimc_dev *fimc, struct v4l2_format *f)
if (!(ctx->state & FIMC_COMPOSE))
set_frame_crop(ff, 0, 0, pix->width, pix->height);
- fimc_capture_mark_jpeg_xfer(ctx, fimc_fmt_is_jpeg(ff->fmt->color));
+ fimc_capture_mark_jpeg_xfer(ctx, ff->fmt->color);
/* Reset cropping and set format at the camera interface input */
if (!fimc->vid_cap.user_subdev_api) {
@@ -1060,6 +1095,21 @@ static int fimc_pipeline_validate(struct fimc_dev *fimc)
src_fmt.format.height != sink_fmt.format.height ||
src_fmt.format.code != sink_fmt.format.code)
return -EPIPE;
+
+ if (sd == fimc->pipeline.subdevs[IDX_SENSOR] &&
+ fimc_user_defined_mbus_fmt(src_fmt.format.code)) {
+ struct v4l2_plane_pix_format plane_fmt[FIMC_MAX_PLANES];
+ struct fimc_frame *frame = &vid_cap->ctx->d_frame;
+ unsigned int i;
+
+ ret = get_sensor_frame_size(sd, &plane_fmt[0].sizeimage);
+ if (ret < 0)
+ return -EPIPE;
+ for (i = 0; i < frame->fmt->memplanes; i++) {
+ if (frame->payload[i] < plane_fmt[i].sizeimage)
+ return -EPIPE;
+ }
+ }
}
return 0;
}
@@ -1421,7 +1471,7 @@ static int fimc_subdev_set_fmt(struct v4l2_subdev *sd,
/* Update RGB Alpha control state and value range */
fimc_alpha_ctrl_update(ctx);
- fimc_capture_mark_jpeg_xfer(ctx, fimc_fmt_is_jpeg(ffmt->color));
+ fimc_capture_mark_jpeg_xfer(ctx, ffmt->color);
ff = fmt->pad == FIMC_SD_PAD_SINK ?
&ctx->s_frame : &ctx->d_frame;
diff --git a/drivers/media/platform/s5p-fimc/fimc-core.c b/drivers/media/platform/s5p-fimc/fimc-core.c
index 1a44540..b3249b1 100644
--- a/drivers/media/platform/s5p-fimc/fimc-core.c
+++ b/drivers/media/platform/s5p-fimc/fimc-core.c
@@ -184,7 +184,16 @@ static struct fimc_fmt fimc_formats[] = {
.memplanes = 1,
.colplanes = 1,
.mbus_code = V4L2_MBUS_FMT_JPEG_1X8,
- .flags = FMT_FLAGS_CAM,
+ .flags = FMT_FLAGS_CAM | FMT_FLAGS_COMPRESSED,
+ }, {
+ .name = "S5C73MX interleaved JPEG/YUYV",
+ .fourcc = V4L2_PIX_FMT_S5C_UYVY_JPG,
+ .color = FIMC_FMT_UYVY_JPEG,
+ .depth = { 8 },
+ .memplanes = 1,
+ .colplanes = 1,
+ .mbus_code = V4L2_MBUS_FMT_S5C_UYVY_JPEG_1X8,
+ .flags = FMT_FLAGS_CAM | FMT_FLAGS_COMPRESSED,
},
};
@@ -698,6 +707,11 @@ int fimc_fill_format(struct fimc_frame *frame, struct v4l2_format *f)
if (frame->fmt->colplanes == 1) /* packed formats */
bpl = (bpl * frame->fmt->depth[0]) / 8;
pixm->plane_fmt[i].bytesperline = bpl;
+
+ if (frame->fmt->flags & FMT_FLAGS_COMPRESSED) {
+ pixm->plane_fmt[i].sizeimage = frame->payload[i];
+ continue;
+ }
pixm->plane_fmt[i].sizeimage = (frame->o_width *
frame->o_height * frame->fmt->depth[i]) / 8;
}
diff --git a/drivers/media/platform/s5p-fimc/fimc-core.h b/drivers/media/platform/s5p-fimc/fimc-core.h
index 808ccc6..8e1a9e8 100644
--- a/drivers/media/platform/s5p-fimc/fimc-core.h
+++ b/drivers/media/platform/s5p-fimc/fimc-core.h
@@ -40,6 +40,8 @@
#define SCALER_MAX_VRATIO 64
#define DMA_MIN_SIZE 8
#define FIMC_CAMIF_MAX_HEIGHT 0x2000
+#define FIMC_MAX_JPEG_BUF_SIZE SZ_8M
+#define FIMC_MAX_PLANES 3
/* indices to the clocks array */
enum {
@@ -83,7 +85,7 @@ enum fimc_datapath {
};
enum fimc_color_fmt {
- FIMC_FMT_RGB444 = 0x10,
+ FIMC_FMT_RGB444 = 0x10,
FIMC_FMT_RGB555,
FIMC_FMT_RGB565,
FIMC_FMT_RGB666,
@@ -95,14 +97,15 @@ enum fimc_color_fmt {
FIMC_FMT_CBYCRY422,
FIMC_FMT_CRYCBY422,
FIMC_FMT_YCBCR444_LOCAL,
- FIMC_FMT_JPEG = 0x40,
- FIMC_FMT_RAW8 = 0x80,
+ FIMC_FMT_RAW8 = 0x40,
FIMC_FMT_RAW10,
FIMC_FMT_RAW12,
+ FIMC_FMT_JPEG = 0x80,
+ FIMC_FMT_UYVY_JPEG = 0x100,
};
+#define fimc_fmt_is_user_defined(x) (!!((x) & 0x180))
#define fimc_fmt_is_rgb(x) (!!((x) & 0x10))
-#define fimc_fmt_is_jpeg(x) (!!((x) & 0x40))
#define IS_M2M(__strt) ((__strt) == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE || \
__strt == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE)
@@ -155,6 +158,7 @@ struct fimc_fmt {
#define FMT_FLAGS_M2M_OUT (1 << 2)
#define FMT_FLAGS_M2M (1 << 1 | 1 << 2)
#define FMT_HAS_ALPHA (1 << 3)
+#define FMT_FLAGS_COMPRESSED (1 << 4)
};
/**
@@ -272,7 +276,7 @@ struct fimc_frame {
u32 offs_v;
u32 width;
u32 height;
- unsigned long payload[VIDEO_MAX_PLANES];
+ unsigned int payload[VIDEO_MAX_PLANES];
struct fimc_addr paddr;
struct fimc_dma_offset dma_offset;
struct fimc_fmt *fmt;
@@ -576,6 +580,18 @@ static inline int tiled_fmt(struct fimc_fmt *fmt)
return fmt->fourcc == V4L2_PIX_FMT_NV12MT;
}
+static inline bool fimc_jpeg_fourcc(u32 pixelformat)
+{
+ return (pixelformat == V4L2_PIX_FMT_JPEG ||
+ pixelformat == V4L2_PIX_FMT_S5C_UYVY_JPG);
+}
+
+static inline bool fimc_user_defined_mbus_fmt(u32 code)
+{
+ return (code == V4L2_MBUS_FMT_JPEG_1X8 ||
+ code == V4L2_MBUS_FMT_S5C_UYVY_JPEG_1X8);
+}
+
/* Return the alpha component bit mask */
static inline int fimc_get_alpha_mask(struct fimc_fmt *fmt)
{
diff --git a/drivers/media/platform/s5p-fimc/fimc-reg.c b/drivers/media/platform/s5p-fimc/fimc-reg.c
index 0e3eb9c..db03152 100644
--- a/drivers/media/platform/s5p-fimc/fimc-reg.c
+++ b/drivers/media/platform/s5p-fimc/fimc-reg.c
@@ -625,7 +625,7 @@ int fimc_hw_set_camera_source(struct fimc_dev *fimc,
cfg |= FIMC_REG_CISRCFMT_ITU601_16BIT;
} /* else defaults to ITU-R BT.656 8-bit */
} else if (cam->bus_type == FIMC_MIPI_CSI2) {
- if (fimc_fmt_is_jpeg(f->fmt->color))
+ if (fimc_fmt_is_user_defined(f->fmt->color))
cfg |= FIMC_REG_CISRCFMT_ITU601_8BIT;
}
@@ -680,6 +680,7 @@ int fimc_hw_set_camera_type(struct fimc_dev *fimc,
tmp = FIMC_REG_CSIIMGFMT_YCBCR422_8BIT;
break;
case V4L2_MBUS_FMT_JPEG_1X8:
+ case V4L2_MBUS_FMT_S5C_UYVY_JPEG_1X8:
tmp = FIMC_REG_CSIIMGFMT_USER(1);
cfg |= FIMC_REG_CIGCTRL_CAM_JPEG;
break;
diff --git a/drivers/media/platform/s5p-fimc/mipi-csis.c b/drivers/media/platform/s5p-fimc/mipi-csis.c
index 2f73d9e..e479fe0 100644
--- a/drivers/media/platform/s5p-fimc/mipi-csis.c
+++ b/drivers/media/platform/s5p-fimc/mipi-csis.c
@@ -145,7 +145,11 @@ static const struct csis_pix_format s5pcsis_formats[] = {
.code = V4L2_MBUS_FMT_JPEG_1X8,
.fmt_reg = S5PCSIS_CFG_FMT_USER(1),
.data_alignment = 32,
- },
+ }, {
+ .code = V4L2_MBUS_FMT_S5C_UYVY_JPEG_1X8,
+ .fmt_reg = S5PCSIS_CFG_FMT_USER(1),
+ .data_alignment = 32,
+ }
};
#define s5pcsis_write(__csis, __r, __v) writel(__v, __csis->regs + __r)
--
1.7.11.3
^ permalink raw reply related [flat|nested] 23+ messages in thread
* Re: [PATCH RFC 1/4] V4L: Add V4L2_CID_FRAMESIZE image source class control
2012-08-23 9:51 ` [PATCH RFC 1/4] V4L: Add V4L2_CID_FRAMESIZE image source class control Sylwester Nawrocki
@ 2012-08-23 12:13 ` Sakari Ailus
2012-08-23 14:32 ` Sylwester Nawrocki
0 siblings, 1 reply; 23+ messages in thread
From: Sakari Ailus @ 2012-08-23 12:13 UTC (permalink / raw)
To: Sylwester Nawrocki
Cc: linux-media, riverful.kim, sw0312.kim, g.liakhovetski,
laurent.pinchart, kyungmin.park
Hi Sylwester,
Thanks for the patch.
On Thu, Aug 23, 2012 at 11:51:26AM +0200, Sylwester Nawrocki wrote:
> The V4L2_CID_FRAMESIZE control determines maximum number
> of media bus samples transmitted within a single data frame.
> It is useful for determining size of data buffer at the
> receiver side.
>
> Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
> ---
> Documentation/DocBook/media/v4l/controls.xml | 12 ++++++++++++
> drivers/media/v4l2-core/v4l2-ctrls.c | 2 ++
> include/linux/videodev2.h | 1 +
> 3 files changed, 15 insertions(+)
>
> diff --git a/Documentation/DocBook/media/v4l/controls.xml b/Documentation/DocBook/media/v4l/controls.xml
> index 93b9c68..ad5d4e5 100644
> --- a/Documentation/DocBook/media/v4l/controls.xml
> +++ b/Documentation/DocBook/media/v4l/controls.xml
> @@ -4184,6 +4184,18 @@ interface and may change in the future.</para>
> conversion.
> </entry>
> </row>
> + <row>
> + <entry spanname="id"><constant>V4L2_CID_FRAMESIZE</constant></entry>
> + <entry>integer</entry>
> + </row>
> + <row>
> + <entry spanname="descr">Maximum size of a data frame in media bus
> + sample units. This control determines maximum number of samples
> + transmitted per single compressed data frame. For generic raw
> + pixel formats the value of this control is undefined. This is
> + a read-only control.
> + </entry>
> + </row>
> <row><entry></entry></row>
> </tbody>
> </tgroup>
> diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c
> index b6a2ee7..0043fd2 100644
> --- a/drivers/media/v4l2-core/v4l2-ctrls.c
> +++ b/drivers/media/v4l2-core/v4l2-ctrls.c
> @@ -727,6 +727,7 @@ const char *v4l2_ctrl_get_name(u32 id)
> case V4L2_CID_VBLANK: return "Vertical Blanking";
> case V4L2_CID_HBLANK: return "Horizontal Blanking";
> case V4L2_CID_ANALOGUE_GAIN: return "Analogue Gain";
> + case V4L2_CID_FRAMESIZE: return "Maximum Frame Size";
I would put this to the image processing class, as the control isn't related
to image capture. Jpeg encoding (or image compression in general) after all
is related to image processing rather than capturing it.
> /* Image processing controls */
> case V4L2_CID_IMAGE_PROC_CLASS: return "Image Processing Controls";
> @@ -933,6 +934,7 @@ void v4l2_ctrl_fill(u32 id, const char **name, enum v4l2_ctrl_type *type,
> case V4L2_CID_FLASH_STROBE_STATUS:
> case V4L2_CID_AUTO_FOCUS_STATUS:
> case V4L2_CID_FLASH_READY:
> + case V4L2_CID_FRAMESIZE:
> *flags |= V4L2_CTRL_FLAG_READ_ONLY;
> break;
> }
> diff --git a/include/linux/videodev2.h b/include/linux/videodev2.h
> index 7a147c8..d3fd19f 100644
> --- a/include/linux/videodev2.h
> +++ b/include/linux/videodev2.h
> @@ -1985,6 +1985,7 @@ enum v4l2_jpeg_chroma_subsampling {
> #define V4L2_CID_VBLANK (V4L2_CID_IMAGE_SOURCE_CLASS_BASE + 1)
> #define V4L2_CID_HBLANK (V4L2_CID_IMAGE_SOURCE_CLASS_BASE + 2)
> #define V4L2_CID_ANALOGUE_GAIN (V4L2_CID_IMAGE_SOURCE_CLASS_BASE + 3)
> +#define V4L2_CID_FRAMESIZE (V4L2_CID_IMAGE_SOURCE_CLASS_BASE + 4)
>
> /* Image processing controls */
> #define V4L2_CID_IMAGE_PROC_CLASS_BASE (V4L2_CTRL_CLASS_IMAGE_PROC | 0x900)
Kind regards,
--
Sakari Ailus
e-mail: sakari.ailus@iki.fi XMPP: sailus@retiisi.org.uk
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH RFC 1/4] V4L: Add V4L2_CID_FRAMESIZE image source class control
2012-08-23 12:13 ` Sakari Ailus
@ 2012-08-23 14:32 ` Sylwester Nawrocki
2012-08-23 18:24 ` Sakari Ailus
0 siblings, 1 reply; 23+ messages in thread
From: Sylwester Nawrocki @ 2012-08-23 14:32 UTC (permalink / raw)
To: Sakari Ailus
Cc: linux-media, riverful.kim, sw0312.kim, g.liakhovetski,
laurent.pinchart, kyungmin.park
Hi Sakari,
On 08/23/2012 02:13 PM, Sakari Ailus wrote:
> Hi Sylwester,
>
> Thanks for the patch.
Thanks for your review.
> On Thu, Aug 23, 2012 at 11:51:26AM +0200, Sylwester Nawrocki wrote:
>> The V4L2_CID_FRAMESIZE control determines maximum number
>> of media bus samples transmitted within a single data frame.
>> It is useful for determining size of data buffer at the
>> receiver side.
>>
>> Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
>> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
>> ---
>> Documentation/DocBook/media/v4l/controls.xml | 12 ++++++++++++
>> drivers/media/v4l2-core/v4l2-ctrls.c | 2 ++
>> include/linux/videodev2.h | 1 +
>> 3 files changed, 15 insertions(+)
>>
>> diff --git a/Documentation/DocBook/media/v4l/controls.xml b/Documentation/DocBook/media/v4l/controls.xml
>> index 93b9c68..ad5d4e5 100644
>> --- a/Documentation/DocBook/media/v4l/controls.xml
>> +++ b/Documentation/DocBook/media/v4l/controls.xml
>> @@ -4184,6 +4184,18 @@ interface and may change in the future.</para>
>> conversion.
>> </entry>
>> </row>
>> + <row>
>> + <entry spanname="id"><constant>V4L2_CID_FRAMESIZE</constant></entry>
>> + <entry>integer</entry>
>> + </row>
>> + <row>
>> + <entry spanname="descr">Maximum size of a data frame in media bus
>> + sample units. This control determines maximum number of samples
>> + transmitted per single compressed data frame. For generic raw
>> + pixel formats the value of this control is undefined. This is
>> + a read-only control.
>> + </entry>
>> + </row>
>> <row><entry></entry></row>
>> </tbody>
>> </tgroup>
>> diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c
>> index b6a2ee7..0043fd2 100644
>> --- a/drivers/media/v4l2-core/v4l2-ctrls.c
>> +++ b/drivers/media/v4l2-core/v4l2-ctrls.c
>> @@ -727,6 +727,7 @@ const char *v4l2_ctrl_get_name(u32 id)
>> case V4L2_CID_VBLANK: return "Vertical Blanking";
>> case V4L2_CID_HBLANK: return "Horizontal Blanking";
>> case V4L2_CID_ANALOGUE_GAIN: return "Analogue Gain";
>> + case V4L2_CID_FRAMESIZE: return "Maximum Frame Size";
>
> I would put this to the image processing class, as the control isn't related
> to image capture. Jpeg encoding (or image compression in general) after all
> is related to image processing rather than capturing it.
All right, might make more sense that way. Let me move it to the image
processing class then. It probably also makes sense to name it
V4L2_CID_FRAME_SIZE, rather than V4L2_CID_FRAMESIZE.
>> /* Image processing controls */
>> case V4L2_CID_IMAGE_PROC_CLASS: return "Image Processing Controls";
>> @@ -933,6 +934,7 @@ void v4l2_ctrl_fill(u32 id, const char **name, enum v4l2_ctrl_type *type,
>> case V4L2_CID_FLASH_STROBE_STATUS:
>> case V4L2_CID_AUTO_FOCUS_STATUS:
>> case V4L2_CID_FLASH_READY:
>> + case V4L2_CID_FRAMESIZE:
>> *flags |= V4L2_CTRL_FLAG_READ_ONLY;
>> break;
>> }
>> diff --git a/include/linux/videodev2.h b/include/linux/videodev2.h
>> index 7a147c8..d3fd19f 100644
>> --- a/include/linux/videodev2.h
>> +++ b/include/linux/videodev2.h
>> @@ -1985,6 +1985,7 @@ enum v4l2_jpeg_chroma_subsampling {
>> #define V4L2_CID_VBLANK (V4L2_CID_IMAGE_SOURCE_CLASS_BASE + 1)
>> #define V4L2_CID_HBLANK (V4L2_CID_IMAGE_SOURCE_CLASS_BASE + 2)
>> #define V4L2_CID_ANALOGUE_GAIN (V4L2_CID_IMAGE_SOURCE_CLASS_BASE + 3)
>> +#define V4L2_CID_FRAMESIZE (V4L2_CID_IMAGE_SOURCE_CLASS_BASE + 4)
>>
>> /* Image processing controls */
>> #define V4L2_CID_IMAGE_PROC_CLASS_BASE (V4L2_CTRL_CLASS_IMAGE_PROC | 0x900)
--
Regards,
Sylwester
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH RFC 1/4] V4L: Add V4L2_CID_FRAMESIZE image source class control
2012-08-23 14:32 ` Sylwester Nawrocki
@ 2012-08-23 18:24 ` Sakari Ailus
2012-08-23 22:41 ` Laurent Pinchart
0 siblings, 1 reply; 23+ messages in thread
From: Sakari Ailus @ 2012-08-23 18:24 UTC (permalink / raw)
To: Sylwester Nawrocki
Cc: linux-media, riverful.kim, sw0312.kim, g.liakhovetski,
laurent.pinchart, kyungmin.park
Hi Sylwester,
Sylwester Nawrocki wrote:
> On 08/23/2012 02:13 PM, Sakari Ailus wrote:
>> Hi Sylwester,
>>
>> Thanks for the patch.
>
> Thanks for your review.
>
>> On Thu, Aug 23, 2012 at 11:51:26AM +0200, Sylwester Nawrocki wrote:
>>> The V4L2_CID_FRAMESIZE control determines maximum number
>>> of media bus samples transmitted within a single data frame.
>>> It is useful for determining size of data buffer at the
>>> receiver side.
>>>
>>> Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
>>> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
>>> ---
>>> Documentation/DocBook/media/v4l/controls.xml | 12 ++++++++++++
>>> drivers/media/v4l2-core/v4l2-ctrls.c | 2 ++
>>> include/linux/videodev2.h | 1 +
>>> 3 files changed, 15 insertions(+)
>>>
>>> diff --git a/Documentation/DocBook/media/v4l/controls.xml b/Documentation/DocBook/media/v4l/controls.xml
>>> index 93b9c68..ad5d4e5 100644
>>> --- a/Documentation/DocBook/media/v4l/controls.xml
>>> +++ b/Documentation/DocBook/media/v4l/controls.xml
>>> @@ -4184,6 +4184,18 @@ interface and may change in the future.</para>
>>> conversion.
>>> </entry>
>>> </row>
>>> + <row>
>>> + <entry spanname="id"><constant>V4L2_CID_FRAMESIZE</constant></entry>
>>> + <entry>integer</entry>
>>> + </row>
>>> + <row>
>>> + <entry spanname="descr">Maximum size of a data frame in media bus
>>> + sample units. This control determines maximum number of samples
>>> + transmitted per single compressed data frame. For generic raw
>>> + pixel formats the value of this control is undefined. This is
>>> + a read-only control.
>>> + </entry>
>>> + </row>
>>> <row><entry></entry></row>
>>> </tbody>
>>> </tgroup>
>>> diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c
>>> index b6a2ee7..0043fd2 100644
>>> --- a/drivers/media/v4l2-core/v4l2-ctrls.c
>>> +++ b/drivers/media/v4l2-core/v4l2-ctrls.c
>>> @@ -727,6 +727,7 @@ const char *v4l2_ctrl_get_name(u32 id)
>>> case V4L2_CID_VBLANK: return "Vertical Blanking";
>>> case V4L2_CID_HBLANK: return "Horizontal Blanking";
>>> case V4L2_CID_ANALOGUE_GAIN: return "Analogue Gain";
>>> + case V4L2_CID_FRAMESIZE: return "Maximum Frame Size";
>>
>> I would put this to the image processing class, as the control isn't related
>> to image capture. Jpeg encoding (or image compression in general) after all
>> is related to image processing rather than capturing it.
>
> All right, might make more sense that way. Let me move it to the image
> processing class then. It probably also makes sense to name it
> V4L2_CID_FRAME_SIZE, rather than V4L2_CID_FRAMESIZE.
Hmm. While we're at it, as the size is maximum --- it can be lower ---
how about V4L2_CID_MAX_FRAME_SIZE or V4L2_CID_MAX_FRAME_SAMPLES, as the
unit is samples?
Does sample in this context mean pixels for uncompressed formats and
bytes (octets) for compressed formats? It's important to define it as
we're also using the term "sample" to refer to data units transferred
over a parallel bus per a clock cycle.
On serial busses the former meaning is more obvious.
Kind regards,
--
Sakari Ailus
sakari.ailus@iki.fi
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH RFC 1/4] V4L: Add V4L2_CID_FRAMESIZE image source class control
2012-08-23 18:24 ` Sakari Ailus
@ 2012-08-23 22:41 ` Laurent Pinchart
2012-08-24 8:15 ` Sylwester Nawrocki
0 siblings, 1 reply; 23+ messages in thread
From: Laurent Pinchart @ 2012-08-23 22:41 UTC (permalink / raw)
To: Sakari Ailus
Cc: Sylwester Nawrocki, linux-media, riverful.kim, sw0312.kim,
g.liakhovetski, kyungmin.park
Hi Sylwester,
On Thursday 23 August 2012 21:24:12 Sakari Ailus wrote:
> Sylwester Nawrocki wrote:
> >> On Thu, Aug 23, 2012 at 11:51:26AM +0200, Sylwester Nawrocki wrote:
> >>> The V4L2_CID_FRAMESIZE control determines maximum number
> >>> of media bus samples transmitted within a single data frame.
> >>> It is useful for determining size of data buffer at the
> >>> receiver side.
> >>>
> >>> Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
> >>> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
> >>> ---
> >>>
> >>> Documentation/DocBook/media/v4l/controls.xml | 12 ++++++++++++
> >>> drivers/media/v4l2-core/v4l2-ctrls.c | 2 ++
> >>> include/linux/videodev2.h | 1 +
> >>> 3 files changed, 15 insertions(+)
> >>>
> >>> diff --git a/Documentation/DocBook/media/v4l/controls.xml
> >>> b/Documentation/DocBook/media/v4l/controls.xml index 93b9c68..ad5d4e5
> >>> 100644
> >>> --- a/Documentation/DocBook/media/v4l/controls.xml
> >>> +++ b/Documentation/DocBook/media/v4l/controls.xml
> >>> @@ -4184,6 +4184,18 @@ interface and may change in the future.</para>
> >>>
> >>> conversion.
> >>> </entry>
> >>>
> >>> </row>
> >>>
> >>> + <row>
> >>> + <entry
> >>> spanname="id"><constant>V4L2_CID_FRAMESIZE</constant></entry>
> >>> + <entry>integer</entry>
> >>> + </row>
> >>> + <row>
> >>> + <entry spanname="descr">Maximum size of a data frame in media bus
> >>> + sample units. This control determines maximum number of samples
> >>> + transmitted per single compressed data frame. For generic raw
> >>> + pixel formats the value of this control is undefined. This is
> >>> + a read-only control.
> >>> + </entry>
> >>> + </row>
> >>>
> >>> <row><entry></entry></row>
> >>>
> >>> </tbody>
> >>>
> >>> </tgroup>
> >>>
> >>> diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c
> >>> b/drivers/media/v4l2-core/v4l2-ctrls.c index b6a2ee7..0043fd2 100644
> >>> --- a/drivers/media/v4l2-core/v4l2-ctrls.c
> >>> +++ b/drivers/media/v4l2-core/v4l2-ctrls.c
> >>> @@ -727,6 +727,7 @@ const char *v4l2_ctrl_get_name(u32 id)
> >>>
> >>> case V4L2_CID_VBLANK: return "Vertical Blanking";
> >>> case V4L2_CID_HBLANK: return "Horizontal Blanking";
> >>> case V4L2_CID_ANALOGUE_GAIN: return "Analogue Gain";
> >>>
> >>> + case V4L2_CID_FRAMESIZE: return "Maximum Frame Size";
> >>
> >> I would put this to the image processing class, as the control isn't
> >> related to image capture. Jpeg encoding (or image compression in
> >> general) after all is related to image processing rather than capturing
> >> it.
> >
> > All right, might make more sense that way. Let me move it to the image
> > processing class then. It probably also makes sense to name it
> > V4L2_CID_FRAME_SIZE, rather than V4L2_CID_FRAMESIZE.
>
> Hmm. While we're at it, as the size is maximum --- it can be lower ---
> how about V4L2_CID_MAX_FRAME_SIZE or V4L2_CID_MAX_FRAME_SAMPLES, as the
> unit is samples?
>
> Does sample in this context mean pixels for uncompressed formats and
> bytes (octets) for compressed formats? It's important to define it as
> we're also using the term "sample" to refer to data units transferred
> over a parallel bus per a clock cycle.
I agree with Sakari here, I find the documentation quite vague, I wouldn't
understand what the control is meant for from the documentation only.
> On serial busses the former meaning is more obvious.
--
Regards,
Laurent Pinchart
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH RFC 1/4] V4L: Add V4L2_CID_FRAMESIZE image source class control
2012-08-23 22:41 ` Laurent Pinchart
@ 2012-08-24 8:15 ` Sylwester Nawrocki
2012-08-24 22:51 ` Sakari Ailus
0 siblings, 1 reply; 23+ messages in thread
From: Sylwester Nawrocki @ 2012-08-24 8:15 UTC (permalink / raw)
To: Laurent Pinchart, Sakari Ailus
Cc: linux-media, riverful.kim, sw0312.kim, g.liakhovetski,
kyungmin.park
Hi,
On 08/24/2012 12:41 AM, Laurent Pinchart wrote:
> On Thursday 23 August 2012 21:24:12 Sakari Ailus wrote:
>> Sylwester Nawrocki wrote:
>>>> On Thu, Aug 23, 2012 at 11:51:26AM +0200, Sylwester Nawrocki wrote:
>>>>> The V4L2_CID_FRAMESIZE control determines maximum number
>>>>> of media bus samples transmitted within a single data frame.
>>>>> It is useful for determining size of data buffer at the
>>>>> receiver side.
>>>>>
>>>>> Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
>>>>> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
>>>>> ---
>>>>>
>>>>> Documentation/DocBook/media/v4l/controls.xml | 12 ++++++++++++
>>>>> drivers/media/v4l2-core/v4l2-ctrls.c | 2 ++
>>>>> include/linux/videodev2.h | 1 +
>>>>> 3 files changed, 15 insertions(+)
>>>>>
>>>>> diff --git a/Documentation/DocBook/media/v4l/controls.xml
>>>>> b/Documentation/DocBook/media/v4l/controls.xml index 93b9c68..ad5d4e5
>>>>> 100644
>>>>> --- a/Documentation/DocBook/media/v4l/controls.xml
>>>>> +++ b/Documentation/DocBook/media/v4l/controls.xml
>>>>> @@ -4184,6 +4184,18 @@ interface and may change in the future.</para>
>>>>>
>>>>> conversion.
>>>>> </entry>
>>>>>
>>>>> </row>
>>>>>
>>>>> + <row>
>>>>> + <entry
>>>>> spanname="id"><constant>V4L2_CID_FRAMESIZE</constant></entry>
>>>>> + <entry>integer</entry>
>>>>> + </row>
>>>>> + <row>
>>>>> + <entry spanname="descr">Maximum size of a data frame in media bus
>>>>> + sample units. This control determines maximum number of samples
>>>>> + transmitted per single compressed data frame. For generic raw
>>>>> + pixel formats the value of this control is undefined. This is
>>>>> + a read-only control.
>>>>> + </entry>
>>>>> + </row>
>>>>>
>>>>> <row><entry></entry></row>
>>>>>
>>>>> </tbody>
>>>>>
>>>>> </tgroup>
>>>>>
>>>>> diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c
>>>>> b/drivers/media/v4l2-core/v4l2-ctrls.c index b6a2ee7..0043fd2 100644
>>>>> --- a/drivers/media/v4l2-core/v4l2-ctrls.c
>>>>> +++ b/drivers/media/v4l2-core/v4l2-ctrls.c
>>>>> @@ -727,6 +727,7 @@ const char *v4l2_ctrl_get_name(u32 id)
>>>>>
>>>>> case V4L2_CID_VBLANK: return "Vertical Blanking";
>>>>> case V4L2_CID_HBLANK: return "Horizontal Blanking";
>>>>> case V4L2_CID_ANALOGUE_GAIN: return "Analogue Gain";
>>>>>
>>>>> + case V4L2_CID_FRAMESIZE: return "Maximum Frame Size";
>>>>
>>>> I would put this to the image processing class, as the control isn't
>>>> related to image capture. Jpeg encoding (or image compression in
>>>> general) after all is related to image processing rather than capturing
>>>> it.
>>>
>>> All right, might make more sense that way. Let me move it to the image
>>> processing class then. It probably also makes sense to name it
>>> V4L2_CID_FRAME_SIZE, rather than V4L2_CID_FRAMESIZE.
>>
>> Hmm. While we're at it, as the size is maximum --- it can be lower ---
>> how about V4L2_CID_MAX_FRAME_SIZE or V4L2_CID_MAX_FRAME_SAMPLES, as the
>> unit is samples?
>
>> Does sample in this context mean pixels for uncompressed formats and
>> bytes (octets) for compressed formats? It's important to define it as
>> we're also using the term "sample" to refer to data units transferred
>> over a parallel bus per a clock cycle.
>
> I agree with Sakari here, I find the documentation quite vague, I wouldn't
> understand what the control is meant for from the documentation only.
I thought it was clear enough:
Maximum size of a data frame in media bus sample units.
^^^^^^^^^^^^^^^^^^^^^^^^^
So that means the unit is a number of bits clocked by a single clock
pulse on parallel video bus... :) But how is media bus sample defined
in case of CSI bus ? Looks like "media bus sample" is a useless term
for our purpose.
I thought it was better than just 8-bit byte, because the data receiver
(bridge) could layout data received from video bus in various ways in
memory, e.g. add some padding. OTOH, would any padding make sense
for compressed streams ? It would break the content, wouldn't it ?
So I would propose to use 8-bit byte as a unit for this control and
name it V4L2_CID_MAX_FRAME_SIZE. All in all it's not really tied
to the media bus.
>> On serial busses the former meaning is more obvious.
--
Regards,
Sylwester
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH RFC 1/4] V4L: Add V4L2_CID_FRAMESIZE image source class control
2012-08-24 8:15 ` Sylwester Nawrocki
@ 2012-08-24 22:51 ` Sakari Ailus
2012-08-26 19:22 ` Sylwester Nawrocki
0 siblings, 1 reply; 23+ messages in thread
From: Sakari Ailus @ 2012-08-24 22:51 UTC (permalink / raw)
To: Sylwester Nawrocki
Cc: Laurent Pinchart, linux-media, riverful.kim, sw0312.kim,
g.liakhovetski, kyungmin.park
Hi Sylwester,
On Fri, Aug 24, 2012 at 10:15:58AM +0200, Sylwester Nawrocki wrote:
> On 08/24/2012 12:41 AM, Laurent Pinchart wrote:
> > On Thursday 23 August 2012 21:24:12 Sakari Ailus wrote:
> >> Sylwester Nawrocki wrote:
> >>>> On Thu, Aug 23, 2012 at 11:51:26AM +0200, Sylwester Nawrocki wrote:
> >>>>> The V4L2_CID_FRAMESIZE control determines maximum number
> >>>>> of media bus samples transmitted within a single data frame.
> >>>>> It is useful for determining size of data buffer at the
> >>>>> receiver side.
> >>>>>
> >>>>> Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
> >>>>> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
> >>>>> ---
> >>>>>
> >>>>> Documentation/DocBook/media/v4l/controls.xml | 12 ++++++++++++
> >>>>> drivers/media/v4l2-core/v4l2-ctrls.c | 2 ++
> >>>>> include/linux/videodev2.h | 1 +
> >>>>> 3 files changed, 15 insertions(+)
> >>>>>
> >>>>> diff --git a/Documentation/DocBook/media/v4l/controls.xml
> >>>>> b/Documentation/DocBook/media/v4l/controls.xml index 93b9c68..ad5d4e5
> >>>>> 100644
> >>>>> --- a/Documentation/DocBook/media/v4l/controls.xml
> >>>>> +++ b/Documentation/DocBook/media/v4l/controls.xml
> >>>>> @@ -4184,6 +4184,18 @@ interface and may change in the future.</para>
> >>>>>
> >>>>> conversion.
> >>>>> </entry>
> >>>>>
> >>>>> </row>
> >>>>>
> >>>>> + <row>
> >>>>> + <entry
> >>>>> spanname="id"><constant>V4L2_CID_FRAMESIZE</constant></entry>
> >>>>> + <entry>integer</entry>
> >>>>> + </row>
> >>>>> + <row>
> >>>>> + <entry spanname="descr">Maximum size of a data frame in media bus
> >>>>> + sample units. This control determines maximum number of samples
> >>>>> + transmitted per single compressed data frame. For generic raw
> >>>>> + pixel formats the value of this control is undefined. This is
> >>>>> + a read-only control.
> >>>>> + </entry>
> >>>>> + </row>
> >>>>>
> >>>>> <row><entry></entry></row>
> >>>>>
> >>>>> </tbody>
> >>>>>
> >>>>> </tgroup>
> >>>>>
> >>>>> diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c
> >>>>> b/drivers/media/v4l2-core/v4l2-ctrls.c index b6a2ee7..0043fd2 100644
> >>>>> --- a/drivers/media/v4l2-core/v4l2-ctrls.c
> >>>>> +++ b/drivers/media/v4l2-core/v4l2-ctrls.c
> >>>>> @@ -727,6 +727,7 @@ const char *v4l2_ctrl_get_name(u32 id)
> >>>>>
> >>>>> case V4L2_CID_VBLANK: return "Vertical Blanking";
> >>>>> case V4L2_CID_HBLANK: return "Horizontal Blanking";
> >>>>> case V4L2_CID_ANALOGUE_GAIN: return "Analogue Gain";
> >>>>>
> >>>>> + case V4L2_CID_FRAMESIZE: return "Maximum Frame Size";
> >>>>
> >>>> I would put this to the image processing class, as the control isn't
> >>>> related to image capture. Jpeg encoding (or image compression in
> >>>> general) after all is related to image processing rather than capturing
> >>>> it.
> >>>
> >>> All right, might make more sense that way. Let me move it to the image
> >>> processing class then. It probably also makes sense to name it
> >>> V4L2_CID_FRAME_SIZE, rather than V4L2_CID_FRAMESIZE.
> >>
> >> Hmm. While we're at it, as the size is maximum --- it can be lower ---
> >> how about V4L2_CID_MAX_FRAME_SIZE or V4L2_CID_MAX_FRAME_SAMPLES, as the
> >> unit is samples?
> >
> >> Does sample in this context mean pixels for uncompressed formats and
> >> bytes (octets) for compressed formats? It's important to define it as
> >> we're also using the term "sample" to refer to data units transferred
> >> over a parallel bus per a clock cycle.
> >
> > I agree with Sakari here, I find the documentation quite vague, I wouldn't
> > understand what the control is meant for from the documentation only.
>
> I thought it was clear enough:
>
> Maximum size of a data frame in media bus sample units.
> ^^^^^^^^^^^^^^^^^^^^^^^^^
Oops. I somehow managed to miss that. My mistake.
> So that means the unit is a number of bits clocked by a single clock
> pulse on parallel video bus... :) But how is media bus sample defined
> in case of CSI bus ? Looks like "media bus sample" is a useless term
> for our purpose.
The CSI2 transmitters and receivers, as far as I understand, want to know a
lot more about the data that I think would be necessary. This doesn't only
involve the bits per sample (e.g. pixel for raw bayer formats) but some
information on the media bus code, too. I.e. if you're transferring 11 bit
pixels the bus has to know that.
I think it would have been better to use different media bus codes for
serial busses than on parallel busses that transfer the sample on a single
clock cycle. But that's out of the scope of this patch.
In respect to this the CCP2 AFAIR works mostly the same way.
> I thought it was better than just 8-bit byte, because the data receiver
> (bridge) could layout data received from video bus in various ways in
> memory, e.g. add some padding. OTOH, would any padding make sense
> for compressed streams ? It would break the content, wouldn't it ?
I guess it'd break if the padding is applied anywhere else than the end,
where I hope it's ok. I'm not that familiar with compressed formats, though.
The hardware typically has limitations on the DMA transaction width and that
can easily be e.g. 32 or 64 bytes, so some padding may easily be introduced
at the end of the compressed image.
> So I would propose to use 8-bit byte as a unit for this control and
> name it V4L2_CID_MAX_FRAME_SIZE. All in all it's not really tied
> to the media bus.
It took me quite a while to remember what the control really was for. ;)
How about using bytes on video nodes and bus and media bus code specific
extended samples (or how we should call pixels in uncompressed formats and
units of data in compressed formats?) on subdevs? The information how the
former is derived from the latter resides in the driver controlling the DMA
anyway.
As you proposed originally, this control is more relevant to subdevs so we
could also not define it for video nodes at all. Especially if the control
isn't needed: the information should be available from VIDIOC_TRY_FMT.
Kind regards,
--
Sakari Ailus
e-mail: sakari.ailus@iki.fi XMPP: sailus@retiisi.org.uk
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH RFC 1/4] V4L: Add V4L2_CID_FRAMESIZE image source class control
2012-08-24 22:51 ` Sakari Ailus
@ 2012-08-26 19:22 ` Sylwester Nawrocki
2012-08-27 19:28 ` Sakari Ailus
0 siblings, 1 reply; 23+ messages in thread
From: Sylwester Nawrocki @ 2012-08-26 19:22 UTC (permalink / raw)
To: Sakari Ailus
Cc: Sylwester Nawrocki, Laurent Pinchart, linux-media, riverful.kim,
sw0312.kim, g.liakhovetski, kyungmin.park
Hi Sakari,
On 08/25/2012 12:51 AM, Sakari Ailus wrote:
>>>>>>> --- a/drivers/media/v4l2-core/v4l2-ctrls.c
>>>>>>> +++ b/drivers/media/v4l2-core/v4l2-ctrls.c
>>>>>>> @@ -727,6 +727,7 @@ const char *v4l2_ctrl_get_name(u32 id)
>>>>>>>
>>>>>>> case V4L2_CID_VBLANK: return "Vertical Blanking";
>>>>>>> case V4L2_CID_HBLANK: return "Horizontal Blanking";
>>>>>>> case V4L2_CID_ANALOGUE_GAIN: return "Analogue Gain";
>>>>>>>
>>>>>>> + case V4L2_CID_FRAMESIZE: return "Maximum Frame Size";
>>>>>>
>>>>>> I would put this to the image processing class, as the control isn't
>>>>>> related to image capture. Jpeg encoding (or image compression in
>>>>>> general) after all is related to image processing rather than capturing
>>>>>> it.
>>>>>
>>>>> All right, might make more sense that way. Let me move it to the image
>>>>> processing class then. It probably also makes sense to name it
>>>>> V4L2_CID_FRAME_SIZE, rather than V4L2_CID_FRAMESIZE.
>>>>
>>>> Hmm. While we're at it, as the size is maximum --- it can be lower ---
>>>> how about V4L2_CID_MAX_FRAME_SIZE or V4L2_CID_MAX_FRAME_SAMPLES, as the
>>>> unit is samples?
>>>
>>>> Does sample in this context mean pixels for uncompressed formats and
>>>> bytes (octets) for compressed formats? It's important to define it as
>>>> we're also using the term "sample" to refer to data units transferred
>>>> over a parallel bus per a clock cycle.
>>>
>>> I agree with Sakari here, I find the documentation quite vague, I wouldn't
>>> understand what the control is meant for from the documentation only.
>>
>> I thought it was clear enough:
>>
>> Maximum size of a data frame in media bus sample units.
>> ^^^^^^^^^^^^^^^^^^^^^^^^^
>
> Oops. I somehow managed to miss that. My mistake.
>
>> So that means the unit is a number of bits clocked by a single clock
>> pulse on parallel video bus... :) But how is media bus sample defined
>> in case of CSI bus ? Looks like "media bus sample" is a useless term
>> for our purpose.
>
> The CSI2 transmitters and receivers, as far as I understand, want to know a
> lot more about the data that I think would be necessary. This doesn't only
> involve the bits per sample (e.g. pixel for raw bayer formats) but some
> information on the media bus code, too. I.e. if you're transferring 11 bit
> pixels the bus has to know that.
>
> I think it would have been better to use different media bus codes for
> serial busses than on parallel busses that transfer the sample on a single
> clock cycle. But that's out of the scope of this patch.
>
> In respect to this the CCP2 AFAIR works mostly the same way.
>
>> I thought it was better than just 8-bit byte, because the data receiver
>> (bridge) could layout data received from video bus in various ways in
>> memory, e.g. add some padding. OTOH, would any padding make sense
>> for compressed streams ? It would break the content, wouldn't it ?
>
> I guess it'd break if the padding is applied anywhere else than the end,
> where I hope it's ok. I'm not that familiar with compressed formats, though.
> The hardware typically has limitations on the DMA transaction width and that
> can easily be e.g. 32 or 64 bytes, so some padding may easily be introduced
> at the end of the compressed image.
The padding at the frame end is not a problem, since it would be a property
of a bridge. So the sizeimage value reported with VIDIOC_G_FMT, for example,
could easily be adjusted by a bridge driver to account for any padding.
>> So I would propose to use 8-bit byte as a unit for this control and
>> name it V4L2_CID_MAX_FRAME_SIZE. All in all it's not really tied
>> to the media bus.
>
> It took me quite a while to remember what the control really was for. ;)
:-) Yeah, looks like I have been going in circles with this issue.. ;)
> How about using bytes on video nodes and bus and media bus code specific
> extended samples (or how we should call pixels in uncompressed formats and
> units of data in compressed formats?) on subdevs? The information how the
> former is derived from the latter resides in the driver controlling the DMA
> anyway.
>
> As you proposed originally, this control is more relevant to subdevs so we
> could also not define it for video nodes at all. Especially if the control
> isn't needed: the information should be available from VIDIOC_TRY_FMT.
Yeah, I seem to have forgotten to add a note that this control would be
valid only on subdev nodes :/
OTOH, how do we handle cases where a subdev's controls are inherited by
a video node ? Referring to a media bus pixel code seems wrong in such
cases.
Also, for compressed formats, the only case where this control is needed,
the bus receivers/DMA just do transparent transfers, without actually
modifying the data stream.
The problem is that ideally this V4L2_CID_FRAMESIZE control shouldn't be
exposed to user space. It might be useful, for example, for checking what
the resulting file size would be on a subdev modelling a JPEG encoder, etc.
I agree that on video nodes ioctls like VIDIOC_[S/G/TRY]_FMT are sufficient
for retrieving that information.
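Just for illustration, querying such a control from user space on a subdev
node could look roughly like the sketch below. It assumes a kernel with this
RFC applied (so V4L2_CID_FRAMESIZE is defined in videodev2.h) and an
arbitrary /dev/v4l-subdev0 node:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_ext_control ctrl;
	struct v4l2_ext_controls ctrls;
	int fd = open("/dev/v4l-subdev0", O_RDWR);

	if (fd < 0)
		return 1;

	memset(&ctrl, 0, sizeof(ctrl));
	memset(&ctrls, 0, sizeof(ctrls));
	ctrl.id = V4L2_CID_FRAMESIZE;		/* ID proposed in this RFC */
	ctrls.ctrl_class = V4L2_CTRL_ID2CLASS(ctrl.id);
	ctrls.count = 1;
	ctrls.controls = &ctrl;

	/* Maximum frame size as reported by e.g. a JPEG encoder subdev. */
	if (ioctl(fd, VIDIOC_G_EXT_CTRLS, &ctrls) == 0)
		printf("max frame size: %d\n", ctrl.value);

	close(fd);
	return 0;
}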
--
Regards,
Sylwester
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH RFC 0/4] V4L2: Vendor specific media bus formats/ frame size control
2012-08-23 9:51 [PATCH RFC 0/4] V4L2: Vendor specific media bus formats/ frame size control Sylwester Nawrocki
` (3 preceding siblings ...)
2012-08-23 9:51 ` [PATCH RFC 4/4] s5p-fimc: Add support for V4L2_PIX_FMT_S5C_UYVY_JPG fourcc Sylwester Nawrocki
@ 2012-08-27 15:48 ` Nicolas THERY
2012-08-29 18:41 ` sakari.ailus
2012-08-29 21:51 ` Sylwester Nawrocki
2012-09-14 15:00 ` how to crop/scale in mono-subdev camera sensor driver? Nicolas THERY
5 siblings, 2 replies; 23+ messages in thread
From: Nicolas THERY @ 2012-08-27 15:48 UTC (permalink / raw)
To: Sylwester Nawrocki
Cc: linux-media@vger.kernel.org, riverful.kim@samsung.com,
sw0312.kim@samsung.com, sakari.ailus@iki.fi,
g.liakhovetski@gmx.de, laurent.pinchart@ideasonboard.com,
kyungmin.park@samsung.com, Jean-Marc VOLLE, Pierre-yves TALOUD,
Willy POISSON, Benjamin GAIGNARD
Hello,
On 2012-08-23 11:51, Sylwester Nawrocki wrote:
> This patch series introduces new image source class control - V4L2_CID_FRAMESIZE
> and vendor or device specific media bus format section.
>
> There was already a discussion WRT handling interleaved image data [1].
> I'm not terribly happy with those vendor specific media bus formats but I
> couldn't find better solution that would comply with the V4L2 API concepts
> and would work reliably.
What about Sakari's "Frame format descriptors" RFC[1] that would allow to
describe arbitrary pixel code combinations and provide required information
(virtual channel and data type) to the CSI receiver driver for configuring the
hardware?
Thanks in advance.
Best regards,
Nicolas
[1] http://www.mail-archive.com/linux-media@vger.kernel.org/msg43530.html
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH RFC 1/4] V4L: Add V4L2_CID_FRAMESIZE image source class control
2012-08-26 19:22 ` Sylwester Nawrocki
@ 2012-08-27 19:28 ` Sakari Ailus
2012-09-11 19:21 ` Sylwester Nawrocki
0 siblings, 1 reply; 23+ messages in thread
From: Sakari Ailus @ 2012-08-27 19:28 UTC (permalink / raw)
To: Sylwester Nawrocki
Cc: Sylwester Nawrocki, Laurent Pinchart, linux-media, riverful.kim,
sw0312.kim, g.liakhovetski, kyungmin.park, hverkuil
Hi Sylwester,
On Sun, Aug 26, 2012 at 09:22:12PM +0200, Sylwester Nawrocki wrote:
> On 08/25/2012 12:51 AM, Sakari Ailus wrote:
> >>>>>>> --- a/drivers/media/v4l2-core/v4l2-ctrls.c
> >>>>>>> +++ b/drivers/media/v4l2-core/v4l2-ctrls.c
> >>>>>>> @@ -727,6 +727,7 @@ const char *v4l2_ctrl_get_name(u32 id)
> >>>>>>>
> >>>>>>> case V4L2_CID_VBLANK: return "Vertical Blanking";
> >>>>>>> case V4L2_CID_HBLANK: return "Horizontal Blanking";
> >>>>>>> case V4L2_CID_ANALOGUE_GAIN: return "Analogue Gain";
> >>>>>>>
> >>>>>>> + case V4L2_CID_FRAMESIZE: return "Maximum Frame Size";
> >>>>>>
> >>>>>> I would put this to the image processing class, as the control isn't
> >>>>>> related to image capture. Jpeg encoding (or image compression in
> >>>>>> general) after all is related to image processing rather than capturing
> >>>>>> it.
> >>>>>
> >>>>> All right, might make more sense that way. Let me move it to the image
> >>>>> processing class then. It probably also makes sense to name it
> >>>>> V4L2_CID_FRAME_SIZE, rather than V4L2_CID_FRAMESIZE.
> >>>>
> >>>> Hmm. While we're at it, as the size is maximum --- it can be lower ---
> >>>> how about V4L2_CID_MAX_FRAME_SIZE or V4L2_CID_MAX_FRAME_SAMPLES, as the
> >>>> unit is samples?
> >>>
> >>>> Does sample in this context mean pixels for uncompressed formats and
> >>>> bytes (octets) for compressed formats? It's important to define it as
> >>>> we're also using the term "sample" to refer to data units transferred
> >>>> over a parallel bus per a clock cycle.
> >>>
> >>> I agree with Sakari here, I find the documentation quite vague, I wouldn't
> >>> understand what the control is meant for from the documentation only.
> >>
> >> I thought it was clear enough:
> >>
> >> Maximum size of a data frame in media bus sample units.
> >> ^^^^^^^^^^^^^^^^^^^^^^^^^
> >
> > Oops. I somehow managed to miss that. My mistake.
> >
> >> So that means the unit is a number of bits clocked by a single clock
> >> pulse on parallel video bus... :) But how is media bus sample defined
> >> in case of CSI bus ? Looks like "media bus sample" is a useless term
> >> for our purpose.
> >
> > The CSI2 transmitters and receivers, as far as I understand, want to know a
> > lot more about the data than I think would be necessary. This doesn't only
> > involve the bits per sample (e.g. pixel for raw bayer formats) but some
> > information on the media bus code, too. I.e. if you're transferring 11 bit
> > pixels the bus has to know that.
> >
> > I think it would have been better to use different media bus codes for
> > serial busses than on parallel busses that transfer the sample on a single
> > clock cycle. But that's out of the scope of this patch.
> >
> > In respect to this the CCP2 AFAIR works mostly the same way.
> >
> >> I thought it was better than just 8-bit byte, because the data receiver
> >> (bridge) could layout data received from video bus in various ways in
> >> memory, e.g. add some padding. OTOH, would any padding make sense
> >> for compressed streams ? It would break the content, wouldn't it ?
> >
> > I guess it'd break if the padding is applied anywhere other than at the end,
> > where I hope it's ok. I'm not that familiar with compressed formats, though.
> > The hardware typically has limitations on the DMA transaction width and that
> > can easily be e.g. 32 or 64 bytes, so some padding may easily be introduced
> > at the end of the compressed image.
>
> The padding at the frame end is not a problem, since it would be a property
> of a bridge. So the reported sizeimage value with VIDIOC_G_FMT, for example,
> could easily be adjusted by a bridge driver to account for any padding.
>
> >> So I would propose to use 8-bit byte as a unit for this control and
> >> name it V4L2_CID_MAX_FRAME_SIZE. All in all it's not really tied
> >> to the media bus.
> >
> > It took me quite a while to remember what the control really was for. ;)
>
> :-) Yeah, looks like I have been going in circles with this issue.. ;)
>
> > How about using bytes on video nodes and bus and media bus code specific
> > extended samples (or how we should call pixels in uncompressed formats and
> > units of data in compressed formats?) on subdevs? The information how the
> > former is derived from the latter resides in the driver controlling the DMA
> > anyway.
> >
> > As you proposed originally, this control is more relevant to subdevs so we
> > could also not define it for video nodes at all. Especially if the control
> > isn't needed: the information should be available from VIDIOC_TRY_FMT.
>
> Yeah, I seem to have forgotten to add a note that this control would be
> valid only on subdev nodes :/
> OTOH, how do we handle cases where a subdev's controls are inherited by
> a video node ? Referring to a media bus pixel code seems wrong in such
> cases.
I don't think this control ever should be visible on a video device if we
define it's only valid for subdevs. Even then, if it's implemented by a
subdev driver, its value refers to samples rather than bytes which would
make more sense on a video node (in case we'd define it there).
AFAIR Hans once suggested a flag to hide low-level controls from video nodes
and inherit the rest from subdevs; I think that would be a valid use case
for this flag. On the other hand, I see little or no meaningful use for
inheriting controls from subdevs on systems I'm the most familiar with, but
it may be more useful elsewhere. Most of the subdev controls are not
something a regular V4L2 application would like to touch as such; look at
the image processing controls, for example.
> Also for compressed formats, where this control is only needed, the bus
> receivers/DMA just do transparent transfers, without actually modifying
> the data stream.
That makes sense. I don't have the documentation in front of my eyes right
now, but what would you think about adding a note to the documentation that
the control is only valid for compressed formats? That would limit the
defined use of it to cases we (or you) know well?
> The problem is that ideally this V4L2_CID_FRAMESIZE control shouldn't be
> exposed to user space. It might be useful, for example for checking what
> would be resulting file size on a subdev modelling a JPEG encoder, etc.
> I agree on video nodes ioctls like VIDIOC_[S/G/TRY]_FMT are just sufficient
> for retrieving that information.
One case I could imagine where the user might want to touch the control is
to modify the maximum size of the compressed images, should the hardware
support that. But in that case, the user should be aware of how to convert
samples into bytes, and I don't see why a regular V4L2 application should
necessarily be aware of something potentially as hardware-dependent as that.
Cc Hans.
Kind regards,
--
Sakari Ailus
e-mail: sakari.ailus@iki.fi XMPP: sailus@retiisi.org.uk
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH RFC 0/4] V4L2: Vendor specific media bus formats/ frame size control
2012-08-27 15:48 ` [PATCH RFC 0/4] V4L2: Vendor specific media bus formats/ frame size control Nicolas THERY
@ 2012-08-29 18:41 ` sakari.ailus
2012-08-30 8:03 ` Nicolas THERY
2012-08-29 21:51 ` Sylwester Nawrocki
1 sibling, 1 reply; 23+ messages in thread
From: sakari.ailus @ 2012-08-29 18:41 UTC (permalink / raw)
To: Nicolas THERY
Cc: Sylwester Nawrocki, linux-media@vger.kernel.org,
riverful.kim@samsung.com, sw0312.kim@samsung.com,
g.liakhovetski@gmx.de, laurent.pinchart@ideasonboard.com,
kyungmin.park@samsung.com, Jean-Marc VOLLE, Pierre-yves TALOUD,
Willy POISSON, Benjamin GAIGNARD
Hi Nicolas,
On Mon, Aug 27, 2012 at 05:48:43PM +0200, Nicolas THERY wrote:
> Hello,
>
> On 2012-08-23 11:51, Sylwester Nawrocki wrote:
> > This patch series introduces new image source class control - V4L2_CID_FRAMESIZE
> > and vendor or device specific media bus format section.
> >
> > There was already a discussion WRT handling interleaved image data [1].
> > I'm not terribly happy with those vendor specific media bus formats but I
> > couldn't find better solution that would comply with the V4L2 API concepts
> > and would work reliably.
>
> What about Sakari's "Frame format descriptors" RFC[1] that would allow to
> describe arbitrary pixel code combinations and provide required information
> (virtual channel and data type) to the CSI receiver driver for configuring the
> hardware?
I think we'll need to continue that work as well; unfortunately I've had
higher priority things to do. Still, getting that right is complex and will
take time. The V4L2 pixel format for this sensor will likely be a
hardware-specific one for quite a while: the sensor in question sends
several frames of a single image in different formats at once, which doesn't
match V4L2's pixel format configuration that assumes a single format.
Kind regards,
--
Sakari Ailus
e-mail: sakari.ailus@iki.fi XMPP: sailus@retiisi.org.uk
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH RFC 0/4] V4L2: Vendor specific media bus formats/ frame size control
2012-08-27 15:48 ` [PATCH RFC 0/4] V4L2: Vendor specific media bus formats/ frame size control Nicolas THERY
2012-08-29 18:41 ` sakari.ailus
@ 2012-08-29 21:51 ` Sylwester Nawrocki
2012-08-30 8:06 ` Nicolas THERY
1 sibling, 1 reply; 23+ messages in thread
From: Sylwester Nawrocki @ 2012-08-29 21:51 UTC (permalink / raw)
To: Nicolas THERY
Cc: Sylwester Nawrocki, linux-media@vger.kernel.org,
riverful.kim@samsung.com, sw0312.kim@samsung.com,
sakari.ailus@iki.fi, g.liakhovetski@gmx.de,
laurent.pinchart@ideasonboard.com, kyungmin.park@samsung.com,
Jean-Marc VOLLE, Pierre-yves TALOUD, Willy POISSON,
Benjamin GAIGNARD
Hi Nicolas,
On 08/27/2012 05:48 PM, Nicolas THERY wrote:
> Hello,
>
> On 2012-08-23 11:51, Sylwester Nawrocki wrote:
>> This patch series introduces new image source class control - V4L2_CID_FRAMESIZE
>> and vendor or device specific media bus format section.
>>
>> There was already a discussion WRT handling interleaved image data [1].
>> I'm not terribly happy with those vendor specific media bus formats but I
>> couldn't find better solution that would comply with the V4L2 API concepts
>> and would work reliably.
>
> What about Sakari's "Frame format descriptors" RFC[1] that would allow to
> describe arbitrary pixel code combinations and provide required information
> (virtual channel and data type) to the CSI receiver driver for configuring the
> hardware?
Thanks for reminding me about this. The "Frame format descriptors" would not
necessarily solve the main problem which I tried to address in this RFC.
The sensor in question uses a single MIPI-CSI data type frame as a container
for multiple data planes, e.g. a JPEG compressed stream interleaved with YUV
image data, some optional padding and specific metadata describing the
interleaved image data. There is no MIPI-CSI2 virtual channel or data type
interleaving. Everything is transferred on a single VC and a single DT.
Such frames need a sensor-specific S/W algorithm to extract each component.
So it didn't look like the frame descriptors would be helpful here, since
all this needs to be mapped to a single fourcc. Not sure if defining a
"binary blob" fourcc and retrieving frame format information by some other
means would have been a way to go.
I also had some patches adopting the design from Sakari's RFC, for the case
where, in addition to the above frame format, a copy of the meta-data (as in
the frame footer) was captured and sent on a separate DT (Embedded Data).
And this was mapped to a 2-planar V4L2 pixel format. Even then I used a
sensor specific media bus code.
At the end of the day I switched to a single-planar format as it had all
that's needed to decode the data. And there were some H/W limitations on
using an additional DT.
The frame format descriptors might be worth working on, but this doesn't
look like a solution to my problem and it is going to take some time to get
it right, as Sakari pointed out.
--
Regards,
Sylwester
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH RFC 0/4] V4L2: Vendor specific media bus formats/ frame size control
2012-08-29 18:41 ` sakari.ailus
@ 2012-08-30 8:03 ` Nicolas THERY
0 siblings, 0 replies; 23+ messages in thread
From: Nicolas THERY @ 2012-08-30 8:03 UTC (permalink / raw)
To: sakari.ailus@iki.fi
Cc: Sylwester Nawrocki, linux-media@vger.kernel.org,
riverful.kim@samsung.com, sw0312.kim@samsung.com,
g.liakhovetski@gmx.de, laurent.pinchart@ideasonboard.com,
kyungmin.park@samsung.com, Jean-Marc VOLLE, Pierre-yves TALOUD,
Willy POISSON, Benjamin GAIGNARD
Hello,
Thanks for your reply. It's good to know your proposal is simply on the
back-burner.
Best regards,
Nicolas
On 2012-08-29 20:41, sakari.ailus@iki.fi wrote:
> Hi Nicolas,
>
> On Mon, Aug 27, 2012 at 05:48:43PM +0200, Nicolas THERY wrote:
>> Hello,
>>
>> On 2012-08-23 11:51, Sylwester Nawrocki wrote:
>>> This patch series introduces new image source class control - V4L2_CID_FRAMESIZE
>>> and vendor or device specific media bus format section.
>>>
>>> There was already a discussion WRT handling interleaved image data [1].
>>> I'm not terribly happy with those vendor specific media bus formats but I
>>> couldn't find better solution that would comply with the V4L2 API concepts
>>> and would work reliably.
>> What about Sakari's "Frame format descriptors" RFC[1] that would allow to
>> describe arbitrary pixel code combinations and provide required information
>> (virtual channel and data type) to the CSI receiver driver for configuring the
>> hardware?
> I think we'll need to continue that work as well; unfortunately I've had
> higher priority things to do. Still, getting that right is complex and will
> take time. The V4L2 pixel format for this sensor will likely be a
> hardware-specific one for quite a while: the sensor in question sends
> several frames of a single image in different formats at once, which doesn't
> match V4L2's pixel format configuration that assumes a single format.
>
> Kind regards,
>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH RFC 0/4] V4L2: Vendor specific media bus formats/ frame size control
2012-08-29 21:51 ` Sylwester Nawrocki
@ 2012-08-30 8:06 ` Nicolas THERY
0 siblings, 0 replies; 23+ messages in thread
From: Nicolas THERY @ 2012-08-30 8:06 UTC (permalink / raw)
To: Sylwester Nawrocki
Cc: Sylwester Nawrocki, linux-media@vger.kernel.org,
riverful.kim@samsung.com, sw0312.kim@samsung.com,
sakari.ailus@iki.fi, g.liakhovetski@gmx.de,
laurent.pinchart@ideasonboard.com, kyungmin.park@samsung.com,
Jean-Marc VOLLE, Pierre-yves TALOUD, Willy POISSON,
Benjamin GAIGNARD
Hello,
Thanks for your reply. I overlooked that this sensor packages multiple streams in a
single DT. It seems indeed that Sakari's RFC would not help.
Best regards,
On 2012-08-29 23:51, Sylwester Nawrocki wrote:
> Hi Nicolas,
>
> On 08/27/2012 05:48 PM, Nicolas THERY wrote:
>> Hello,
>>
>> On 2012-08-23 11:51, Sylwester Nawrocki wrote:
>>> This patch series introduces new image source class control - V4L2_CID_FRAMESIZE
>>> and vendor or device specific media bus format section.
>>>
>>> There was already a discussion WRT handling interleaved image data [1].
>>> I'm not terribly happy with those vendor specific media bus formats but I
>>> couldn't find better solution that would comply with the V4L2 API concepts
>>> and would work reliably.
>>
>> What about Sakari's "Frame format descriptors" RFC[1] that would allow to
>> describe arbitrary pixel code combinations and provide required information
>> (virtual channel and data type) to the CSI receiver driver for configuring the
>> hardware?
>
> Thanks for reminding about this. The "Frame format descriptors" would not
> necessarily solve the main problem which I tried to address in this RFC.
>
> The sensor in question uses single MIPI-CSI data type frame as a container
> for multiple data planes, e.g. JPEG compressed stream interleaved with YUV
> image data, some optional padding and a specific metadata describing the
> interleaved image data. There is no MIPI-CSI2 virtual channel or data type
> interleaving. Everything is transferred on single VC and single DT.
>
> Such frames need a sensor-specific S/W algorithm to extract each component.
>
> So it didn't look like the frame descriptors would be helpful here, since
> all this needs to be mapped to a single fourcc. Not sure if defining a
> "binary blob" fourcc and retrieving frame format information by some other
> means would have been a way to go.
>
> I also had some patches adopting design from Sakari's RFC, for the case where
> in addition to the above frame format there was captured a copy of meta-data,
> (as in the frame footer) sent on a separate DT (Embedded Data). And this was
> mapped to 2-planar V4L2 pixel format. Even then I used a sensor specific
> media bus code.
>
> In the end of the day I switched to a single-planar format as it had all
> what's needed to decode the data. And there were some H/W limitations on using
> additional DT.
>
> The frame format descriptors might be worth to work on, but this doesn't
> look like a solution to my problem and it is going to take some time to get
> it right, as Sakari pointed out.
>
> --
>
> Regards,
> Sylwester
>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH RFC 1/4] V4L: Add V4L2_CID_FRAMESIZE image source class control
2012-08-27 19:28 ` Sakari Ailus
@ 2012-09-11 19:21 ` Sylwester Nawrocki
2012-09-12 6:48 ` Hans Verkuil
0 siblings, 1 reply; 23+ messages in thread
From: Sylwester Nawrocki @ 2012-09-11 19:21 UTC (permalink / raw)
To: Sakari Ailus
Cc: Sylwester Nawrocki, Sylwester Nawrocki, Laurent Pinchart,
linux-media, riverful.kim, sw0312.kim, g.liakhovetski,
kyungmin.park, hverkuil
Hi Sakari,
On 08/27/2012 09:28 PM, Sakari Ailus wrote:
>>> How about using bytes on video nodes and bus and media bus code specific
>>> extended samples (or how we should call pixels in uncompressed formats and
>>> units of data in compressed formats?) on subdevs? The information how the
>>> former is derived from the latter resides in the driver controlling the DMA
>>> anyway.
>>>
>>> As you proposed originally, this control is more relevant to subdevs so we
>>> could also not define it for video nodes at all. Especially if the control
>>> isn't needed: the information should be available from VIDIOC_TRY_FMT.
>>
>> Yeah, I seem to have forgotten to add a note that this control would be
>> valid only on subdev nodes :/
>> OTOH, how do we handle cases where a subdev's controls are inherited by
>> a video node ? Referring to a media bus pixel code seems wrong in such
>> cases.
>
> I don't think this control ever should be visible on a video device if we
> define it's only valid for subdevs. Even then, if it's implemented by a
> subdev driver, its value refers to samples rather than bytes which would
> make more sense on a video node (in case we'd define it there).
I agree about not exposing such a control on video nodes. We don't seem to
have support for that in v4l2-core though.
Hmm, not sure how media bus data samples would make sense on a video node,
where the media bus is not really defined at the user space API level.
> AFAIR Hans once suggested a flag to hide low-level controls from video nodes
> and inherit the rest from subdevs; I think that would be a valid use case
> for this flag. On the other hand, I see little or no meaningful use for
Yeah, sounds interesting.
> inheriting controls from subdevs on systems I'm the most familiar with, but
> it may be more useful elsewhere. Most of the subdev controls are not
> something a regular V4L2 application would like to touch as such; look at
> the image processing controls, for example.
Yeah, but in general controls can be inherited and there could be an
ambiguity. Not all drivers involving subdevs use the subdev user space API.
>> Also for compressed formats, where this control is only needed, the bus
>> receivers/DMA just do transparent transfers, without actually modifying
>> the data stream.
>
> That makes sense. I don't have the documentation in front of my eyes right
> now, but what would you think about adding a note to the documentation that
> the control is only valid for compressed formats? That would limit the
> defined use of it to cases we (or you) know well?
I think that could be done; after all, nobody has ever needed such a control
for uncompressed/raw formats so far.
>> The problem is that ideally this V4L2_CID_FRAMESIZE control shouldn't be
>> exposed to user space. It might be useful, for example for checking what
>> would be resulting file size on a subdev modelling a JPEG encoder, etc.
>> I agree on video nodes ioctls like VIDIOC_[S/G/TRY]_FMT are just sufficient
>> for retrieving that information.
>
> One case I could imagine where the user might want to touch the control is
> to modify the maximum size of the compressed images, should the hardware
> support that. But in that case, the user should be aware of how to convert
> samples into bytes, and I don't see why a regular V4L2 application should
> necessarily be aware of something potentially as hardware-dependent as that.
Yes, V4L2_CID_FRAME_SIZE/SAMPLES looks like something only for a library
to play with. But for compressed formats the conversion between frame
samples and data octets in a memory buffer should be rather straightforward.
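A hypothetical library helper, just to illustrate the arithmetic (the bits
per sample would come from the media bus code in use; for the 8-bit JPEG
format added in this series the mapping is an identity):

/*
 * Hypothetical helper, not part of this series: convert a frame size
 * given in media bus samples to bytes, based on the bits per sample of
 * the bus format.  For V4L2_MBUS_FMT_S5C_UYVY_JPEG_1X8 (8 bits/sample)
 * bytes == samples.
 */
static inline unsigned int frame_samples_to_bytes(unsigned int samples,
						  unsigned int bits_per_sample)
{
	return samples * bits_per_sample / 8;
}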
That said, a V4L2_CID_FRAME_SAMPLES control would have been insufficient
where you want to query frame sizes for multiple logical streams within
a frame, e.g. a multi-planar frame transmitted on the MIPI-CSI2 bus with
different data types and received on the user space side as a multi-planar
v4l2 buffer. We would need to query a value for each plane, and a single
control won't do the job in such cases. Hence I'm inclined to drop the
idea of a V4L2_CID_FRAME_SAMPLES control and carry on with the frame format
descriptors approach. We probably can't/shouldn't have both solutions
in use.
--
Regards,
Sylwester
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH RFC 1/4] V4L: Add V4L2_CID_FRAMESIZE image source class control
2012-09-11 19:21 ` Sylwester Nawrocki
@ 2012-09-12 6:48 ` Hans Verkuil
0 siblings, 0 replies; 23+ messages in thread
From: Hans Verkuil @ 2012-09-12 6:48 UTC (permalink / raw)
To: Sylwester Nawrocki
Cc: Sakari Ailus, Sylwester Nawrocki, Laurent Pinchart, linux-media,
riverful.kim, sw0312.kim, g.liakhovetski, kyungmin.park
On Tue September 11 2012 21:21:03 Sylwester Nawrocki wrote:
> Hi Sakari,
>
> On 08/27/2012 09:28 PM, Sakari Ailus wrote:
> >>> How about using bytes on video nodes and bus and media bus code specific
> >>> extended samples (or how we should call pixels in uncompressed formats and
> >>> units of data in compressed formats?) on subdevs? The information how the
> >>> former is derived from the latter resides in the driver controlling the DMA
> >>> anyway.
> >>>
> >>> As you proposed originally, this control is more relevant to subdevs so we
> >>> could also not define it for video nodes at all. Especially if the control
> >>> isn't needed: the information should be available from VIDIOC_TRY_FMT.
> >>
> >> Yeah, I seem to have forgotten to add a note that this control would be
> >> valid only on subdev nodes :/
> >> OTOH, how do we handle cases where a subdev's controls are inherited by
> >> a video node ? Referring to a media bus pixel code seems wrong in such
> >> cases.
> >
> > I don't think this control ever should be visible on a video device if we
> > define it's only valid for subdevs. Even then, if it's implemented by a
> > subdev driver, its value refers to samples rather than bytes which would
> > make more sense on a video node (in case we'd define it there).
>
> I agree as for not exposing such a control on video nodes. We don't seem to
> have support for that in v4l2-core though.
Yes, there is: if you set the is_private flag of struct v4l2_ctrl then that
control will not be inherited by a video device and will only be visible on
the v4l2-subdevX node.
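For a driver registering it as a config-based control this is just a matter
of setting that flag before the control is added, roughly as below (a sketch
only: my_ctrl_ops, my_state and the ranges are placeholders, and the ID is
the one this RFC proposes):

#include <media/v4l2-ctrls.h>

static const struct v4l2_ctrl_config framesize_cfg = {
	.ops		= &my_ctrl_ops,		/* placeholder ops */
	.id		= V4L2_CID_FRAMESIZE,	/* ID proposed in this RFC */
	.name		= "Maximum Frame Size",
	.type		= V4L2_CTRL_TYPE_INTEGER,
	.min		= 0,
	.max		= 0x7fffffff,
	.step		= 1,
	.def		= 0,
	.is_private	= 1,	/* not inherited by the bridge video node */
};

static int my_init_controls(struct my_state *state)
{
	v4l2_ctrl_handler_init(&state->handler, 1);
	v4l2_ctrl_new_custom(&state->handler, &framesize_cfg, NULL);
	if (state->handler.error)
		return state->handler.error;
	state->sd.ctrl_handler = &state->handler;
	return 0;
}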
> Hmm, not sure how would media bus data samples make sense on a video node,
> where media bus is not quite defined at the user space API level.
It doesn't make sense.
> > AFAIR Hans once suggested a flag to hide low-level controls from video nodes
> > and inherit the rest from subdevs; I think that would be a valid use case
> > for this flag. On the other hand, I see little or no meaningful use for
>
> Yeah, sounds interesting.
>
> > inheriting controls from subdevs on systems I'm the most familiar with, but
> > it may be more useful elsewhere.
Inheriting controls is very common for video receivers: in many cases the video
receiver subdev is the one that implements common controls like brightness,
contrast, etc. You really want to inherit controls there.
Private controls are actually not used at the moment. They only make sense if
your driver has the Media Controller API as well so apps can discover the
subdev nodes.
The problem is that all too often I found that you want to control in the bridge
driver whether or not to honor the is_private flag. If you have just a simple
pipeline, then exposing those private controls on a video node actually makes
sense since you don't need to use the MC and open additional subdev nodes.
Currently no method to ignore the is_private flag exists (although it would
be easy to add).
My 'hide low-level controls' flag idea was something different: it would only
control whether such low-level controls would show up in a GUI.
It is basically just a hint for applications.
Regards,
Hans
> > Most of the subdev controls are not
> > something a regular V4L2 application would like to touch as such; look at
> > the image processing controls, for example.
>
> Yeah, but in general controls can be inherited and there could be an
> ambiguity. Not all drivers involving subdevs use the subdev user space API.
>
> >> Also for compressed formats, where this control is only needed, the bus
> >> receivers/DMA just do transparent transfers, without actually modifying
> >> the data stream.
> >
> > That makes sense. I don't have the documentation in front of my eyes right
> > now, but what would you think about adding a note to the documentation that
> > the control is only valid for compressed formats? That would limit the
> > defined use of it to cases we (or you) know well?
>
> I think that could be done, all in all nobody ever needed such a control
> for uncompressed/raw formats so far.
>
> >> The problem is that ideally this V4L2_CID_FRAMESIZE control shouldn't be
> >> exposed to user space. It might be useful, for example for checking what
> >> would be resulting file size on a subdev modelling a JPEG encoder, etc.
> >> I agree on video nodes ioctls like VIDIOC_[S/G/TRY]_FMT are just sufficient
> >> for retrieving that information.
> >
> > One case I could imagine where the user might want to touch the control is
> > to modify the maximum size of the compressed images, should the hardware
> > support that. But in that case, the user should be aware of how to convert
> > samples into bytes, and I don't see why a regular V4L2 application should
> > necessarily be aware of something potentially as hardware-dependent as that.
>
> Yes, V4L2_CID_FRAME_SIZE/SAMPLES looks like something only for a library
> to play with. But for compressed formats conversion framesamples <-> data
> octets in a memory buffer should be rather straightforward.
>
> That said, V4L2_CID_FRAME_SAMPLES control would have been insufficient
> where you want to query frame sizes for multiple logical streams within
> a frame, e.g. multi-planar frame transmitted on MIPI-CSI2 bus with
> different data types and received on user space side as a multi-planar
> v4l2 buffer. We would need to query a value for each plane and single
> control won't do the job in such cases. Hence I'm inclined to drop the
> idea of V4L2_CID_FRAME_SAMPLES control and carry on with the frame format
> descriptors approach. We probably can't/shouldn't have both solutions
> in use.
>
> --
>
> Regards,
> Sylwester
>
^ permalink raw reply [flat|nested] 23+ messages in thread
* how to crop/scale in mono-subdev camera sensor driver?
2012-08-23 9:51 [PATCH RFC 0/4] V4L2: Vendor specific media bus formats/ frame size control Sylwester Nawrocki
` (4 preceding siblings ...)
2012-08-27 15:48 ` [PATCH RFC 0/4] V4L2: Vendor specific media bus formats/ frame size control Nicolas THERY
@ 2012-09-14 15:00 ` Nicolas THERY
2012-09-14 21:02 ` Sylwester Nawrocki
5 siblings, 1 reply; 23+ messages in thread
From: Nicolas THERY @ 2012-09-14 15:00 UTC (permalink / raw)
To: linux-media@vger.kernel.org
Cc: Jean-Marc VOLLE, Pierre-yves TALOUD, Willy POISSON,
Benjamin GAIGNARD, Vincent ABRIOU
Hello,
I'm studying how to support cropping and scaling (binning, skipping, digital
scaling if any) for different models of camera sensors. There seem to
be (at least) two kinds of sensor drivers:
1) The smiapp driver has 2 or 3 subdevs: pixel array -> binning (-> scaling).
It gives clients full control over the various ways of cropping and scaling
thanks to the selection API.
2) The mt9p031 driver (and maybe others) has a single subdev. Clients use the
obsolete SUBDEV_S_CROP ioctl for selecting a region of interest in the pixel
array and SUBDEV_S_FMT for setting the source pad mbus size. If the mbus size
differs from the cropping rectangle size, scaling is enabled and the driver
decides internally how to combine skipping and binning to achieve the requested
scaling factors.
As SUBDEV_S_CROP is obsolete, I wonder whether it is okay to support cropping
and scaling in a mono-subdev sensor driver by (a) setting the cropping
rectangle with SUBDEV_S_SELECTION on the source pad, and (b) setting the
scaling factors via the source pad mbus size as in the mt9p031 driver?
Thanks in advance.
Best regards,
Nicolas
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: how to crop/scale in mono-subdev camera sensor driver?
2012-09-14 15:00 ` how to crop/scale in mono-subdev camera sensor driver? Nicolas THERY
@ 2012-09-14 21:02 ` Sylwester Nawrocki
2012-09-15 12:21 ` Sakari Ailus
0 siblings, 1 reply; 23+ messages in thread
From: Sylwester Nawrocki @ 2012-09-14 21:02 UTC (permalink / raw)
To: Nicolas THERY
Cc: linux-media@vger.kernel.org, Jean-Marc VOLLE, Pierre-yves TALOUD,
Willy POISSON, Benjamin GAIGNARD, Vincent ABRIOU, Sakari Ailus,
Laurent Pinchart
Hi,
On 09/14/2012 05:00 PM, Nicolas THERY wrote:
> Hello,
>
> I'm studying how to support cropping and scaling (binning, skipping, digital
> scaling if any) for different models for camera sensor drivers. There seems to
> be (at least) two kinds of sensor drivers:
>
> 1) The smiapp driver has 2 or 3 subdevs: pixel array -> binning (-> scaling).
> It gives clients full control over the various ways of cropping and scaling
> thanks to the selection API.
>
> 2) The mt9p031 driver (and maybe others) has a single subdev. Clients use the
> obsolete SUBDEV_S_CROP ioctl for selecting a region of interest in the pixel
> array and SUBDEV_S_FMT for setting the source pad mbus size. If the mbus size
> differs from the cropping rectangle size, scaling is enabled and the driver
> decides internally how to combine skipping and binning to achieve the requested
> scaling factors.
>
> As SUBDEV_S_CROP is obsolete, I wonder whether it is okay to support cropping
> and scaling in a mono-subdev sensor driver by (a) setting the cropping
> rectangle with SUBDEV_S_SELECTION on the source pad, and (b) setting the
> scaling factors via the source pad mbus size as in the mt9p031 driver?
Cc: Sakari, Laurent
AFAICT in a single subdev with one pad configuration your steps as above
are valid, i.e. crop rectangle can be set with
VIDIOC_SUBDEV_S_SELECTION(V4L2_SEL_TGT_CROP) and the sensor's output
resolution with VIDIOC_SUBDEV_S_FMT.
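E.g. roughly like below (only a sketch: the pad number, sizes and media bus
code are arbitrary, fd is an open file descriptor of the sensor subdev node):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <linux/v4l2-subdev.h>

/* Crop a 1600x1200 window and let the driver bin/skip down to 800x600. */
static int set_crop_and_scale(int fd)
{
	struct v4l2_subdev_selection sel;
	struct v4l2_subdev_format fmt;

	memset(&sel, 0, sizeof(sel));
	sel.which = V4L2_SUBDEV_FORMAT_ACTIVE;
	sel.pad = 0;
	sel.target = V4L2_SEL_TGT_CROP;
	sel.r.left = 0;
	sel.r.top = 0;
	sel.r.width = 1600;
	sel.r.height = 1200;
	if (ioctl(fd, VIDIOC_SUBDEV_S_SELECTION, &sel) < 0)
		return -1;

	memset(&fmt, 0, sizeof(fmt));
	fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
	fmt.pad = 0;
	fmt.format.width = 800;
	fmt.format.height = 600;
	fmt.format.code = V4L2_MBUS_FMT_SGRBG10_1X10;	/* example code */
	fmt.format.field = V4L2_FIELD_NONE;
	return ioctl(fd, VIDIOC_SUBDEV_S_FMT, &fmt);
}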
I guess documentation [1] wasn't clear enough about that ?
The subdev crop ioctls are deprecated in favour of the selection API, so
now the VIDIOC_SUBDEV_G/S_SELECTION ioctls and corresponding subdev ops need
to be used anywhere you would have used SUBDEV_G/S_CROP before.
This reminds me there are still a few drivers that need to be converted
to use set/get_selection subdev pad level ops, rather than set/get_crop.
[1] http://linuxtv.org/downloads/v4l-dvb-apis/subdev.html
--
Regards,
Sylwester
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: how to crop/scale in mono-subdev camera sensor driver?
2012-09-14 21:02 ` Sylwester Nawrocki
@ 2012-09-15 12:21 ` Sakari Ailus
0 siblings, 0 replies; 23+ messages in thread
From: Sakari Ailus @ 2012-09-15 12:21 UTC (permalink / raw)
To: Sylwester Nawrocki
Cc: Nicolas THERY, linux-media@vger.kernel.org, Jean-Marc VOLLE,
Pierre-yves TALOUD, Willy POISSON, Benjamin GAIGNARD,
Vincent ABRIOU, Laurent Pinchart
Hi Sylwester and Nicolas,
Sylwester Nawrocki wrote:
> Hi,
>
> On 09/14/2012 05:00 PM, Nicolas THERY wrote:
>> Hello,
>>
>> I'm studying how to support cropping and scaling (binning, skipping, digital
>> scaling if any) for different models for camera sensor drivers. There seems to
>> be (at least) two kinds of sensor drivers:
>>
>> 1) The smiapp driver has 2 or 3 subdevs: pixel array -> binning (-> scaling).
>> It gives clients full control over the various ways of cropping and scaling
>> thanks to the selection API.
>>
>> 2) The mt9p031 driver (and maybe others) has a single subdev. Clients use the
>> obsolete SUBDEV_S_CROP ioctl for selecting a region of interest in the pixel
>> array and SUBDEV_S_FMT for setting the source pad mbus size. If the mbus size
>> differs from the cropping rectangle size, scaling is enabled and the driver
>> decides internally how to combine skipping and binning to achieve the requested
>> scaling factors.
>>
>> As SUBDEV_S_CROP is obsolete, I wonder whether it is okay to support cropping
>> and scaling in a mono-subdev sensor driver by (a) setting the cropping
>> rectangle with SUBDEV_S_SELECTION on the source pad, and (b) setting the
>> scaling factors via the source pad mbus size as in the mt9p031 driver?
>
> Cc: Sakari, Laurent
>
> AFAICT in a single subdev with one pad configuration your steps as above
> are valid, i.e. crop rectangle can be set with
> VIDIOC_SUBDEV_S_SELECTION(V4L2_SEL_TGT_CROP) and the sensor's output
> resolution with VIDIOC_SUBDEV_S_FMT.
>
> I guess documentation [1] wasn't clear enough about that ?
>
> The subdev crop ioctls are deprecated in favour of the selection API, so
> now VIDIOC_SUBDEV_G/S_SELECTION ioctls and corresponding subdev ops needs
> to be used anywhere you would have used SUBDEV_G/S_CROP before.
>
> This reminds me there are still a few drivers that need to be converted
> to use set/get_selection subdev pad level ops, rather than set/get_crop.
>
> [1] http://linuxtv.org/downloads/v4l-dvb-apis/subdev.html
After the selection IOCTLs were implemented for the subdev API, it became
possible to explicitly configure cropping after scaling inside a single
subdev, among other improvements.
Before the introduction of the selection extensions, the subdev API had
never allowed explicitly performing scaling and cropping using a single
subdev with only a single source pad, but some sensor drivers still
implement it this way. It may well be technically possible to use the
source pad media bus format to configure scaling after cropping, but
that's against what's currently defined in the spec. The reason is that
we wanted to define explicitly which selection targets are used for
configuring cropping and scaling, and in which order that configuration
is expected to be done, so that the subdev can be configured without
having to know anything else about it except that it implements the
selection API. The scaler in the ISP can now be configured using exactly
the same API as the scaler in the sensor.
If you wish to expose the scaling configuration of the sensor using the
V4L2 subdev interface, then I suggest doing what the SMIA++ driver does:
multiple subdevs. See "Order of configuration and format propagation"
behind the above URL.
Kind regards,
--
Sakari Ailus
sakari.ailus@iki.fi
^ permalink raw reply [flat|nested] 23+ messages in thread
end of thread, other threads:[~2012-09-15 12:20 UTC | newest]
Thread overview: 23+ messages
-- links below jump to the message on this page --
2012-08-23 9:51 [PATCH RFC 0/4] V4L2: Vendor specific media bus formats/ frame size control Sylwester Nawrocki
2012-08-23 9:51 ` [PATCH RFC 1/4] V4L: Add V4L2_CID_FRAMESIZE image source class control Sylwester Nawrocki
2012-08-23 12:13 ` Sakari Ailus
2012-08-23 14:32 ` Sylwester Nawrocki
2012-08-23 18:24 ` Sakari Ailus
2012-08-23 22:41 ` Laurent Pinchart
2012-08-24 8:15 ` Sylwester Nawrocki
2012-08-24 22:51 ` Sakari Ailus
2012-08-26 19:22 ` Sylwester Nawrocki
2012-08-27 19:28 ` Sakari Ailus
2012-09-11 19:21 ` Sylwester Nawrocki
2012-09-12 6:48 ` Hans Verkuil
2012-08-23 9:51 ` [PATCH RFC 2/4] V4L: Add V4L2_MBUS_FMT_S5C_UYVY_JPEG_1X8 media bus format Sylwester Nawrocki
2012-08-23 9:51 ` [PATCH RFC 3/4] V4L: Add V4L2_PIX_FMT_S5C_UYVY_JPG fourcc definition Sylwester Nawrocki
2012-08-23 9:51 ` [PATCH RFC 4/4] s5p-fimc: Add support for V4L2_PIX_FMT_S5C_UYVY_JPG fourcc Sylwester Nawrocki
2012-08-27 15:48 ` [PATCH RFC 0/4] V4L2: Vendor specific media bus formats/ frame size control Nicolas THERY
2012-08-29 18:41 ` sakari.ailus
2012-08-30 8:03 ` Nicolas THERY
2012-08-29 21:51 ` Sylwester Nawrocki
2012-08-30 8:06 ` Nicolas THERY
2012-09-14 15:00 ` how to crop/scale in mono-subdev camera sensor driver? Nicolas THERY
2012-09-14 21:02 ` Sylwester Nawrocki
2012-09-15 12:21 ` Sakari Ailus