* [PATCH/RFC v2.1 0/2] Mem-to-mem device framework
@ 2009-12-23 13:17 Pawel Osciak
2009-12-23 13:17 ` [PATCH v2.1 1/2] V4L: Add memory-to-memory device helper framework for V4L2 Pawel Osciak
` (3 more replies)
0 siblings, 4 replies; 9+ messages in thread
From: Pawel Osciak @ 2009-12-23 13:17 UTC (permalink / raw)
To: linux-arm-kernel
Hello,
this is the second version of the proposed implementation of a memory-to-memory
device framework. Your comments are very welcome.
In v2.1:
I am very sorry for the resend, but somehow an orphaned endif found its way to
Kconfig during the rebase.
Changes since v1:
- v4l2_m2m_buf_queue() now requires m2m_ctx as its argument
- video_queue private data stores driver private data
- a new submenu in kconfig for mem-to-mem devices
- minor rebase leftovers cleanup
A second patch series followed v2 with a new driver for a real device -
Samsung S3C/S5P image rotator, utilizing this framework.
This series contains:
[PATCH v2.1 1/2] V4L: Add memory-to-memory device helper framework for V4L2.
[PATCH v2.1 2/2] V4L: Add a mem-to-mem V4L2 framework test device.
[EXAMPLE v2] Mem-to-mem userspace test application.
Previous discussion and RFC on this topic:
http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure/10668
A mem-to-mem device is a device that uses memory buffers passed by
userspace applications for both source and destination. This is
different from existing drivers that use memory buffers for only one
of those at once.
In terms of V4L2 such a device would be both of OUTPUT and CAPTURE type.
Although no such devices are present in the V4L2 framework, a demand for such
a model exists, e.g. for 'resizer devices'.
-------------------------------------------------------------------------------
Mem-to-mem devices
-------------------------------------------------------------------------------
In the previous discussion we concluded that we should use one video node with
two queues, an output (V4L2_BUF_TYPE_VIDEO_OUTPUT) queue for source buffers and
a capture queue (V4L2_BUF_TYPE_VIDEO_CAPTURE) for destination buffers.
Each instance has its own set of queues: 2 videobuf_queues, each with a ready
buffer queue, managed by the framework. Everything is encapsulated in the
queue context struct:
struct v4l2_m2m_queue_ctx {
struct videobuf_queue q;
/* ... */
/* Queue for buffers ready to be processed as soon as this
* instance receives access to the device */
struct list_head rdy_queue;
/* ... */
};
struct v4l2_m2m_ctx {
/* ... */
/* Capture (output to memory) queue context */
struct v4l2_m2m_queue_ctx cap_q_ctx;
/* Output (input from memory) queue context */
struct v4l2_m2m_queue_ctx out_q_ctx;
/* ... */
};
Streamon can be called for all instances and will not sleep if another instance
is streaming.
vidioc_querycap() should report V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_VIDEO_OUTPUT.
-------------------------------------------------------------------------------
Queuing and dequeuing buffers
-------------------------------------------------------------------------------
Applications can queue as many buffers as they want and are not required to
queue an equal number of source and destination buffers. If there are not enough
buffers of either type, a new transaction simply will not be scheduled.
-------------------------------------------------------------------------------
Source and destination formats
-------------------------------------------------------------------------------
Should be set per queue. A helper function for accessing queues based on the
passed buffer type - v4l2_m2m_get_vq() - is supplied. Most of the
format-handling code is normally located in drivers anyway. The only exception
is the "field" member of the videobuf_queue struct, which has to be set
directly. It breaks encapsulation a little, but nothing can be done about it.
-------------------------------------------------------------------------------
Scheduling
-------------------------------------------------------------------------------
Requirements/assumptions:
1. More than one instance can be open at the same time.
2. Each instance periodically receives exclusive access to the device, performs
an operation (or a series of operations) and yields the device back in a state
that allows other instances to use it.
3. When an instance gets access to the device, it performs a
"transaction"/"job". A transaction/job is defined as the shortest operation
that cannot/should not be further divided without having to restart it from
scratch, or without having to perform expensive reconfiguration of a device,
etc.
4. Transactions can use multiple source/destination buffers.
5. Only the driver can tell when it is ready to perform a transaction, so
an optional callback (job_ready()) is provided for that purpose.
There are three common requirements for a transaction to be ready to run:
- at least one source buffer ready
- at least one destination buffer ready
- streaming on
- (optional) driver-specific requirements (driver-specific callback function)
So when buffers are queued by qbuf() or streaming is turned on with
streamon(), the framework calls v4l2_m2m_try_schedule().
v4l2_m2m_try_schedule()
1. Checks for the above conditions.
2. Checks for driver-specific conditions by calling job_ready() callback, if
supplied.
3. If all the checks succeed, it calls v4l2_m2m_schedule() to schedule the
transaction.
v4l2_m2m_schedule()
1. Checks whether the transaction is already on job queue and schedules it
if not (by adding it to the job queue).
2. Calls v4l2_m2m_try_run().
v4l2_m2m_try_run()
1. Runs a job, by calling the device_run() callback, if one is pending and
none is currently running.
When the device_run() callback is called, the driver has to begin the
transaction. When it is finished, the driver has to call v4l2_m2m_job_finish().
v4l2_m2m_job_finish()
1. Removes the currently running transaction from the job queue and calls
v4l2_m2m_try_run to (possibly) run the next pending transaction.
There is also support for forced transaction aborting (when an application
gets killed). The framework calls job_abort() callback and the driver has
to abort the transaction as soon as possible and call v4l2_m2m_job_finish()
to indicate that the transaction has been aborted.
Additionally, some kind of timeout for transactions could be added to prevent
instances from claiming the device for too long.
-------------------------------------------------------------------------------
Acquiring ready buffers to process
-------------------------------------------------------------------------------
Ready buffers can be acquired using v4l2_m2m_next_src_buf()/
v4l2_m2m_next_dst_buf(). After the transaction they are removed from the queues
with v4l2_m2m_dst_buf_remove()/v4l2_m2m_src_buf_remove(). This is not
multi-buffer-transaction-safe. It will have to be modified, but ideally after
we decide how to handle multi-buffer transactions in videobuf core.
-------------------------------------------------------------------------------
poll()
-------------------------------------------------------------------------------
We cannot have poll() for multiple queues on one node, so we use poll() for the
destination queue only.
-------------------------------------------------------------------------------
mmap()
-------------------------------------------------------------------------------
Requirements:
- allow mapping buffers from different queues
- retain "magic" offset values so videobuf can still match buffers by offsets
The proposed solution involves querybuf() and mmap() multiplexers:
a) When a driver calls querybuf(), we have access to the type and we can
detect which queue to call videobuf_querybuf() on:
vq = v4l2_m2m_get_vq(m2m_ctx, buf->type);
ret = videobuf_querybuf(vq, buf);
The offsets returned from videobuf_querybuf() for one of the queues are further
offset by a predefined constant (DST_QUEUE_OFF_BASE). This way the driver
(and applications) receive different offsets for the same buffer indexes of
each queue:
if (buf->memory == V4L2_MEMORY_MMAP
&& vq->type == V4L2_BUF_TYPE_VIDEO_CAPTURE) {
buf->m.offset += DST_QUEUE_OFF_BASE;
}
b) When the application calls mmap() (via the driver), the offsets modified in
querybuf() are detected and the proper queue is chosen based on them. Finally,
the offsets are changed back to values recognizable by videobuf and passed to
videobuf_mmap_mapper() for the proper queues:
unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
struct videobuf_queue *vq;
if (offset < DST_QUEUE_OFF_BASE) {
vq = v4l2_m2m_get_src_vq(m2m_ctx);
} else {
vq = v4l2_m2m_get_dst_vq(m2m_ctx);
vma->vm_pgoff -= (DST_QUEUE_OFF_BASE >> PAGE_SHIFT);
}
return videobuf_mmap_mapper(vq, vma);
-------------------------------------------------------------------------------
Test device and a userspace application
-------------------------------------------------------------------------------
mem2mem_testdev.c is a test driver for the framework. It uses timers to
simulate interrupts and allows testing transactions with different numbers of
buffers and different transaction durations simultaneously.
process-vmalloc.c is a capture+output test application for the test device.
-------------------------------------------------------------------------------
Future work
-------------------------------------------------------------------------------
- read/write support
- transaction/abort timeouts
- extracting more common code to the framework? (e.g. per-queue format details,
transaction length, etc.)
Best regards
--
Pawel Osciak
Linux Platform Group
Samsung Poland R&D Center
* [PATCH v2.1 1/2] V4L: Add memory-to-memory device helper framework for V4L2.
2009-12-23 13:17 [PATCH/RFC v2.1 0/2] Mem-to-mem device framework Pawel Osciak
@ 2009-12-23 13:17 ` Pawel Osciak
2009-12-24 2:53 ` Andy Walls
2009-12-23 13:17 ` [PATCH v2.1 2/2] V4L: Add a mem-to-mem V4L2 framework test device Pawel Osciak
` (2 subsequent siblings)
3 siblings, 1 reply; 9+ messages in thread
From: Pawel Osciak @ 2009-12-23 13:17 UTC (permalink / raw)
To: linux-arm-kernel
A mem-to-mem device is a device that uses memory buffers passed by
userspace applications for both source and destination data. This is
different from existing drivers, which use memory buffers for only one
of those at once.
In terms of V4L2 such a device would be both of OUTPUT and CAPTURE type.
Although no such devices are present in the V4L2 framework, a demand for such
a model exists, e.g. for 'resizer devices'.
This patch also adds a separate kconfig submenu for mem-to-mem V4L devices.
Signed-off-by: Pawel Osciak <p.osciak@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
---
drivers/media/video/Kconfig | 14 +
drivers/media/video/Makefile | 2 +
drivers/media/video/v4l2-mem2mem.c | 671 ++++++++++++++++++++++++++++++++++++
include/media/v4l2-mem2mem.h | 153 ++++++++
4 files changed, 840 insertions(+), 0 deletions(-)
create mode 100644 drivers/media/video/v4l2-mem2mem.c
create mode 100644 include/media/v4l2-mem2mem.h
diff --git a/drivers/media/video/Kconfig b/drivers/media/video/Kconfig
index 2f83be7..4e97dcf 100644
--- a/drivers/media/video/Kconfig
+++ b/drivers/media/video/Kconfig
@@ -45,6 +45,10 @@ config VIDEO_TUNER
tristate
depends on MEDIA_TUNER
+config V4L2_MEM2MEM_DEV
+ tristate
+ depends on VIDEOBUF_GEN
+
#
# Multimedia Video device configuration
#
@@ -1075,3 +1079,13 @@ config USB_S2255
endif # V4L_USB_DRIVERS
endif # VIDEO_CAPTURE_DRIVERS
+
+menuconfig V4L_MEM2MEM_DRIVERS
+ bool "Memory-to-memory multimedia devices"
+ depends on VIDEO_V4L2
+ default n
+ ---help---
+ Say Y here to enable selecting drivers for V4L devices that
+ use system memory for both source and destination buffers, as opposed
+ to capture and output drivers, which use memory buffers for just
+ one of those.
diff --git a/drivers/media/video/Makefile b/drivers/media/video/Makefile
index 2af68ee..9fe7d40 100644
--- a/drivers/media/video/Makefile
+++ b/drivers/media/video/Makefile
@@ -115,6 +115,8 @@ obj-$(CONFIG_VIDEOBUF_VMALLOC) += videobuf-vmalloc.o
obj-$(CONFIG_VIDEOBUF_DVB) += videobuf-dvb.o
obj-$(CONFIG_VIDEO_BTCX) += btcx-risc.o
+obj-$(CONFIG_V4L2_MEM2MEM_DEV) += v4l2-mem2mem.o
+
obj-$(CONFIG_VIDEO_M32R_AR_M64278) += arv.o
obj-$(CONFIG_VIDEO_CX2341X) += cx2341x.o
diff --git a/drivers/media/video/v4l2-mem2mem.c b/drivers/media/video/v4l2-mem2mem.c
new file mode 100644
index 0000000..417ee2c
--- /dev/null
+++ b/drivers/media/video/v4l2-mem2mem.c
@@ -0,0 +1,671 @@
+/*
+ * Memory-to-memory device framework for Video for Linux 2.
+ *
+ * Helper functions for devices that use memory buffers for both source
+ * and destination.
+ *
+ * Copyright (c) 2009 Samsung Electronics Co., Ltd.
+ * Pawel Osciak, <p.osciak@samsung.com>
+ * Marek Szyprowski, <m.szyprowski@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version
+ */
+
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <media/videobuf-core.h>
+#include <media/v4l2-mem2mem.h>
+
+MODULE_DESCRIPTION("Mem to mem device framework for V4L2");
+MODULE_AUTHOR("Pawel Osciak, <p.osciak@samsung.com>");
+MODULE_LICENSE("GPL");
+
+static int debug;
+module_param(debug, int, 0644);
+
+#define dprintk(fmt, arg...) do {\
+ if (debug >= 1)\
+ printk(KERN_DEBUG "%s: " fmt, __func__, ## arg); } while (0)
+
+
+/* The instance is already queued on the jobqueue */
+#define TRANS_QUEUED (1 << 0)
+/* The instance is currently running in hardware */
+#define TRANS_RUNNING (1 << 1)
+
+
+/* Offset base for buffers on the destination queue - used to distinguish
+ * between source and destination buffers when mmapping - they receive the same
+ * offsets but for different queues */
+#define DST_QUEUE_OFF_BASE (TASK_SIZE / 2)
+
+
+struct v4l2_m2m_dev {
+ /* Currently running instance */
+ struct v4l2_m2m_ctx *curr_ctx;
+ /* Instances queued to run */
+ struct list_head jobqueue;
+ spinlock_t job_spinlock;
+
+ struct v4l2_m2m_ops *m2m_ops;
+};
+
+static inline
+struct v4l2_m2m_queue_ctx *get_queue_ctx(struct v4l2_m2m_ctx *m2m_ctx,
+ enum v4l2_buf_type type)
+{
+ switch (type) {
+ case V4L2_BUF_TYPE_VIDEO_CAPTURE:
+ return &m2m_ctx->cap_q_ctx;
+ case V4L2_BUF_TYPE_VIDEO_OUTPUT:
+ return &m2m_ctx->out_q_ctx;
+ default:
+ printk(KERN_ERR "Invalid buffer type\n");
+ return NULL;
+ }
+}
+
+/**
+ * v4l2_m2m_get_vq() - return videobuf_queue for the given type
+ */
+struct videobuf_queue *v4l2_m2m_get_vq(struct v4l2_m2m_ctx *m2m_ctx,
+ enum v4l2_buf_type type)
+{
+ struct v4l2_m2m_queue_ctx *q_ctx;
+
+ q_ctx = get_queue_ctx(m2m_ctx, type);
+ if (!q_ctx)
+ return NULL;
+
+ return &q_ctx->q;
+}
+EXPORT_SYMBOL(v4l2_m2m_get_vq);
+
+/**
+ * v4l2_m2m_get_src_vq() - return videobuf_queue for source buffers
+ */
+struct videobuf_queue *v4l2_m2m_get_src_vq(struct v4l2_m2m_ctx *m2m_ctx)
+{
+ return v4l2_m2m_get_vq(m2m_ctx, V4L2_BUF_TYPE_VIDEO_OUTPUT);
+}
+EXPORT_SYMBOL(v4l2_m2m_get_src_vq);
+
+/**
+ * v4l2_m2m_get_dst_vq() - return videobuf_queue for destination buffers
+ */
+struct videobuf_queue *v4l2_m2m_get_dst_vq(struct v4l2_m2m_ctx *m2m_ctx)
+{
+ return v4l2_m2m_get_vq(m2m_ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
+}
+EXPORT_SYMBOL(v4l2_m2m_get_dst_vq);
+
+/**
+ * v4l2_m2m_next_buf() - return next buffer from the list of ready buffers
+ */
+static void *v4l2_m2m_next_buf(struct v4l2_m2m_ctx *m2m_ctx,
+ enum v4l2_buf_type type)
+{
+ struct v4l2_m2m_queue_ctx *q_ctx;
+ struct videobuf_buffer *vb = NULL;
+
+ q_ctx = get_queue_ctx(m2m_ctx, type);
+
+ vb = list_entry(q_ctx->rdy_queue.next, struct videobuf_buffer, queue);
+ vb->state = VIDEOBUF_ACTIVE;
+
+ return vb;
+}
+
+/**
+ * v4l2_m2m_next_src_buf() - return next source buffer from the list of ready
+ * buffers
+ */
+inline void *v4l2_m2m_next_src_buf(struct v4l2_m2m_ctx *m2m_ctx)
+{
+ return v4l2_m2m_next_buf(m2m_ctx, V4L2_BUF_TYPE_VIDEO_OUTPUT);
+}
+EXPORT_SYMBOL(v4l2_m2m_next_src_buf);
+
+/**
+ * v4l2_m2m_next_dst_buf() - return next destination buffer from the list of
+ * ready buffers
+ */
+inline void *v4l2_m2m_next_dst_buf(struct v4l2_m2m_ctx *m2m_ctx)
+{
+ return v4l2_m2m_next_buf(m2m_ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
+}
+EXPORT_SYMBOL(v4l2_m2m_next_dst_buf);
+
+/**
+ * v4l2_m2m_buf_remove() - take off a buffer from the list of ready buffers and
+ * return it
+ */
+static void *v4l2_m2m_buf_remove(struct v4l2_m2m_ctx *m2m_ctx,
+ enum v4l2_buf_type type)
+{
+ struct v4l2_m2m_queue_ctx *q_ctx;
+ struct videobuf_buffer *vb = NULL;
+ unsigned long flags = 0;
+
+ q_ctx = get_queue_ctx(m2m_ctx, type);
+
+ spin_lock_irqsave(q_ctx->q.irqlock, flags);
+ vb = list_entry(q_ctx->rdy_queue.next, struct videobuf_buffer, queue);
+ list_del(&vb->queue);
+ q_ctx->num_rdy--;
+ spin_unlock_irqrestore(q_ctx->q.irqlock, flags);
+
+ return vb;
+}
+
+/**
+ * v4l2_m2m_src_buf_remove() - take off a source buffer from the list of ready
+ * buffers and return it
+ */
+void *v4l2_m2m_src_buf_remove(struct v4l2_m2m_ctx *m2m_ctx)
+{
+ return v4l2_m2m_buf_remove(m2m_ctx, V4L2_BUF_TYPE_VIDEO_OUTPUT);
+}
+EXPORT_SYMBOL(v4l2_m2m_src_buf_remove);
+
+/**
+ * v4l2_m2m_dst_buf_remove() - take off a destination buffer from the list of
+ * ready buffers and return it
+ */
+void *v4l2_m2m_dst_buf_remove(struct v4l2_m2m_ctx *m2m_ctx)
+{
+ return v4l2_m2m_buf_remove(m2m_ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
+
+}
+EXPORT_SYMBOL(v4l2_m2m_dst_buf_remove);
+
+
+/*
+ * Scheduling handlers
+ */
+
+/**
+ * v4l2_m2m_get_curr_priv() - return driver private data for the currently
+ * running instance or NULL if no instance is running
+ */
+void *v4l2_m2m_get_curr_priv(struct v4l2_m2m_dev *m2m_dev)
+{
+ if (!m2m_dev->curr_ctx)
+ return NULL;
+ else
+ return m2m_dev->curr_ctx->priv;
+}
+EXPORT_SYMBOL(v4l2_m2m_get_curr_priv);
+
+/**
+ * v4l2_m2m_try_run() - select next job to perform and run it if possible
+ *
+ * Get next transaction (if present) from the waiting jobs list and run it.
+ */
+static void v4l2_m2m_try_run(struct v4l2_m2m_dev *m2m_dev)
+{
+ unsigned long flags = 0;
+
+ spin_lock_irqsave(&m2m_dev->job_spinlock, flags);
+ if (NULL != m2m_dev->curr_ctx) {
+ spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
+ dprintk("Another instance is running, won't run now\n");
+ return;
+ }
+
+ if (list_empty(&m2m_dev->jobqueue)) {
+ spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
+ dprintk("No job pending\n");
+ return;
+ }
+
+ m2m_dev->curr_ctx = list_entry(m2m_dev->jobqueue.next,
+ struct v4l2_m2m_ctx, queue);
+ m2m_dev->curr_ctx->job_flags |= TRANS_RUNNING;
+ spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
+
+ m2m_dev->m2m_ops->device_run(m2m_dev->curr_ctx->priv);
+
+ return;
+}
+
+/**
+ * v4l2_m2m_schedule() - add an instance to the pending job queue
+ * @m2m_ctx: The instance to be added to the pending job queue
+ *
+ * Called when an instance is fully prepared to run a transaction, i.e. the
+ * instance will not sleep before finishing the transaction when run.
+ * If an instance is already on the queue, it will not be added for the second
+ * time and it is the responsibility of the instance to retry this at a later
+ * time.
+ */
+static void v4l2_m2m_schedule(struct v4l2_m2m_ctx *m2m_ctx)
+{
+ struct v4l2_m2m_dev *dev = m2m_ctx->m2m_dev;
+ unsigned long flags = 0;
+
+ spin_lock_irqsave(&dev->job_spinlock, flags);
+ if (!(m2m_ctx->job_flags & TRANS_QUEUED)) {
+ list_add_tail(&m2m_ctx->queue, &dev->jobqueue);
+ m2m_ctx->job_flags |= TRANS_QUEUED;
+ }
+ spin_unlock_irqrestore(&dev->job_spinlock, flags);
+
+ v4l2_m2m_try_run(dev);
+}
+
+/**
+ * v4l2_m2m_try_schedule() - check whether an instance is ready to be added to
+ * the pending job queue and add it if so.
+ * @m2m_ctx: m2m context assigned to the instance to be checked
+ *
+ * There are three basic requirements an instance has to meet to be able to run:
+ * 1) at least one source buffer has to be queued,
+ * 2) at least one destination buffer has to be queued,
+ * 3) streaming has to be on.
+ *
+ * There can also be additional, custom requirements. In such a case the driver
+ * should supply a custom method (job_ready in v4l2_m2m_ops) that should
+ * return 1 if the instance is ready.
+ * An example of the above could be an instance that requires more than one
+ * src/dst buffer per transaction.
+ */
+static void v4l2_m2m_try_schedule(struct v4l2_m2m_ctx *m2m_ctx)
+{
+ struct v4l2_m2m_dev *m2m_dev;
+ unsigned long flags = 0;
+
+ m2m_dev = m2m_ctx->m2m_dev;
+ dprintk("Trying to schedule a job for m2m_ctx: %p\n", m2m_ctx);
+
+ spin_lock_irqsave(&m2m_dev->job_spinlock, flags);
+ if (m2m_ctx->job_flags & TRANS_QUEUED) {
+ spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
+ dprintk("On job queue already\n");
+ return;
+ }
+ spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
+
+ /* Checking only one queue is enough, we always turn on both */
+ if (!m2m_ctx->out_q_ctx.q.streaming) {
+ dprintk("Streaming not on, will not schedule\n");
+ return;
+ }
+
+ if (list_empty(&m2m_ctx->out_q_ctx.rdy_queue)) {
+ dprintk("No input buffers available\n");
+ return;
+ }
+ if (list_empty(&m2m_ctx->cap_q_ctx.rdy_queue)) {
+ dprintk("No output buffers available\n");
+ return;
+ }
+
+ if (m2m_dev->m2m_ops->job_ready
+ && (!m2m_dev->m2m_ops->job_ready(m2m_ctx->priv))) {
+ dprintk("Driver not ready\n");
+ return;
+ }
+
+ dprintk("Instance ready to be scheduled\n");
+ v4l2_m2m_schedule(m2m_ctx);
+}
+
+/**
+ * v4l2_m2m_job_finish() - inform the framework that a job has been finished
+ * and have it clean up
+ *
+ * Called by a driver to yield back the device after it has finished with it.
+ * Should be called as soon as possible after reaching a state which allows
+ * other instances to take control of the device.
+ *
+ * TODO: An instance that fails to give back the device before a predefined
+ * amount of time may have its device ownership taken away forcibly.
+ */
+void v4l2_m2m_job_finish(struct v4l2_m2m_dev *m2m_dev,
+ struct v4l2_m2m_ctx *m2m_ctx)
+{
+ unsigned long flags = 0;
+
+ spin_lock_irqsave(&m2m_dev->job_spinlock, flags);
+ if (!m2m_dev->curr_ctx || m2m_dev->curr_ctx != m2m_ctx) {
+ spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
+ dprintk("Called by an instance not currently running\n");
+ return;
+ }
+
+ /*mutex_lock(&m2m_dev->dev_mutex);*/
+ list_del(&m2m_dev->curr_ctx->queue);
+ m2m_dev->curr_ctx->job_flags &= ~(TRANS_QUEUED | TRANS_RUNNING);
+ m2m_dev->curr_ctx = NULL;
+
+ spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
+
+ v4l2_m2m_try_run(m2m_dev);
+}
+EXPORT_SYMBOL(v4l2_m2m_job_finish);
+
+/**
+ * v4l2_m2m_reqbufs() - multi-queue-aware REQBUFS multiplexer
+ */
+int v4l2_m2m_reqbufs(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ struct v4l2_requestbuffers *reqbufs)
+{
+ struct videobuf_queue *vq;
+
+ vq = v4l2_m2m_get_vq(m2m_ctx, reqbufs->type);
+ return videobuf_reqbufs(vq, reqbufs);
+}
+EXPORT_SYMBOL_GPL(v4l2_m2m_reqbufs);
+
+/**
+ * v4l2_m2m_querybuf() - multi-queue-aware QUERYBUF multiplexer
+ *
+ * See v4l2_m2m_mmap() documentation for details.
+ */
+int v4l2_m2m_querybuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ struct v4l2_buffer *buf)
+{
+ struct videobuf_queue *vq;
+ int ret = 0;
+
+ vq = v4l2_m2m_get_vq(m2m_ctx, buf->type);
+ ret = videobuf_querybuf(vq, buf);
+
+ if (buf->memory == V4L2_MEMORY_MMAP
+ && vq->type == V4L2_BUF_TYPE_VIDEO_CAPTURE) {
+ buf->m.offset += DST_QUEUE_OFF_BASE;
+ }
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(v4l2_m2m_querybuf);
+
+/**
+ * v4l2_m2m_qbuf() - enqueue a source or destination buffer, depending on
+ * the type
+ */
+int v4l2_m2m_qbuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ struct v4l2_buffer *buf)
+{
+ struct videobuf_queue *vq;
+
+ vq = v4l2_m2m_get_vq(m2m_ctx, buf->type);
+ return videobuf_qbuf(vq, buf);
+}
+EXPORT_SYMBOL_GPL(v4l2_m2m_qbuf);
+
+/**
+ * v4l2_m2m_dqbuf() - dequeue a source or destination buffer, depending on
+ * the type
+ */
+int v4l2_m2m_dqbuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ struct v4l2_buffer *buf)
+{
+ struct videobuf_queue *vq;
+
+ vq = v4l2_m2m_get_vq(m2m_ctx, buf->type);
+ return videobuf_dqbuf(vq, buf, file->f_flags & O_NONBLOCK);
+}
+EXPORT_SYMBOL_GPL(v4l2_m2m_dqbuf);
+
+/**
+ * v4l2_m2m_streamon() - start streaming
+ */
+int v4l2_m2m_streamon(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ enum v4l2_buf_type type)
+{
+ int ret = 0;
+
+ /* These can fail only if the queues are in use,
+ * but they shouldn't be as we are managing instances manually */
+ ret = videobuf_streamon(&m2m_ctx->out_q_ctx.q);
+ if (ret) {
+ printk(KERN_ERR "Streamon on output queue failed\n");
+ return ret;
+ }
+
+ ret = videobuf_streamon(&m2m_ctx->cap_q_ctx.q);
+ if (ret) {
+ printk(KERN_ERR "Streamon on capture queue failed\n");
+ return ret;
+ }
+
+ v4l2_m2m_try_schedule(m2m_ctx);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(v4l2_m2m_streamon);
+
+/**
+ * v4l2_m2m_streamoff() - stop streaming
+ */
+int v4l2_m2m_streamoff(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ enum v4l2_buf_type type)
+{
+ /* streamoff() fails only when we are not streaming */
+ if (videobuf_streamoff(&m2m_ctx->out_q_ctx.q)
+ || videobuf_streamoff(&m2m_ctx->cap_q_ctx.q))
+ return -EINVAL;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(v4l2_m2m_streamoff);
+
+/**
+ * v4l2_m2m_poll() - poll replacement, for destination buffers only
+ *
+ * Call from driver's poll() function. Will poll the destination queue only.
+ */
+unsigned int v4l2_m2m_poll(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ struct poll_table_struct *wait)
+{
+ struct videobuf_queue *dst_q = NULL;
+ struct videobuf_buffer *vb = NULL;
+ unsigned int rc = 0;
+
+ dst_q = v4l2_m2m_get_dst_vq(m2m_ctx);
+
+ mutex_lock(&dst_q->vb_lock);
+
+ if (dst_q->streaming) {
+ if (!list_empty(&dst_q->stream))
+ vb = list_entry(dst_q->stream.next,
+ struct videobuf_buffer, stream);
+ }
+
+ if (!vb)
+ rc = POLLERR;
+
+ if (0 == rc) {
+ poll_wait(file, &vb->done, wait);
+ if (vb->state == VIDEOBUF_DONE || vb->state == VIDEOBUF_ERROR)
+ rc = POLLOUT | POLLRDNORM;
+ }
+
+ mutex_unlock(&dst_q->vb_lock);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(v4l2_m2m_poll);
+
+/**
+ * v4l2_m2m_mmap() - source and destination queues-aware mmap multiplexer
+ *
+ * Call from driver's mmap() function. Will handle mmap() for both queues
+ * seamlessly for videobuf, which will receive normal per-queue offsets and
+ * proper videobuf queue pointers. The differentiation is made outside videobuf
+ * by adding a predefined offset to buffers from one of the queues and
+ * subtracting it before passing it back to videobuf. Only drivers (and
+ * thus applications) receive modified offsets.
+ */
+int v4l2_m2m_mmap(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ struct vm_area_struct *vma)
+{
+ unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
+ struct videobuf_queue *vq;
+
+ if (offset < DST_QUEUE_OFF_BASE) {
+ vq = v4l2_m2m_get_src_vq(m2m_ctx);
+ } else {
+ vq = v4l2_m2m_get_dst_vq(m2m_ctx);
+ vma->vm_pgoff -= (DST_QUEUE_OFF_BASE >> PAGE_SHIFT);
+ }
+
+ return videobuf_mmap_mapper(vq, vma);
+}
+EXPORT_SYMBOL(v4l2_m2m_mmap);
+
+/**
+ * v4l2_m2m_init() - initialize per-driver m2m data
+ *
+ * Usually called from driver's probe() function.
+ */
+struct v4l2_m2m_dev *v4l2_m2m_init(struct v4l2_m2m_ops *m2m_ops)
+{
+ struct v4l2_m2m_dev *m2m_dev;
+
+ if (!m2m_ops)
+ return ERR_PTR(-EINVAL);
+
+ /*BUG_ON(!m2m_ops->job_ready);*/
+ BUG_ON(!m2m_ops->device_run);
+ BUG_ON(!m2m_ops->job_abort);
+
+ m2m_dev = kzalloc(sizeof *m2m_dev, GFP_KERNEL);
+ if (!m2m_dev)
+ return ERR_PTR(-ENOMEM);
+
+ m2m_dev->curr_ctx = NULL;
+ m2m_dev->m2m_ops = m2m_ops;
+ INIT_LIST_HEAD(&m2m_dev->jobqueue);
+ spin_lock_init(&m2m_dev->job_spinlock);
+
+ return m2m_dev;
+}
+EXPORT_SYMBOL_GPL(v4l2_m2m_init);
+
+/**
+ * v4l2_m2m_release() - cleans up and frees a m2m_dev structure
+ *
+ * Usually called from driver's remove() function.
+ */
+void v4l2_m2m_release(struct v4l2_m2m_dev *m2m_dev)
+{
+ kfree(m2m_dev);
+}
+EXPORT_SYMBOL_GPL(v4l2_m2m_release);
+
+/**
+ * v4l2_m2m_ctx_init() - allocate and initialize a m2m context
+ * @priv - driver's instance private data
+ * @m2m_dev - a previously initialized m2m_dev struct
+ * @vq_init - a callback for queue type-specific initialization function to be
+ * used for initializing videobuf_queues
+ *
+ * Usually called from driver's open() function.
+ */
+struct v4l2_m2m_ctx *v4l2_m2m_ctx_init(void *priv, struct v4l2_m2m_dev *m2m_dev,
+ void (*vq_init)(void *priv, struct videobuf_queue *,
+ enum v4l2_buf_type))
+{
+ struct v4l2_m2m_ctx *m2m_ctx;
+ struct v4l2_m2m_queue_ctx *out_q_ctx;
+ struct v4l2_m2m_queue_ctx *cap_q_ctx;
+
+ if (!vq_init)
+ return ERR_PTR(-EINVAL);
+
+ m2m_ctx = kzalloc(sizeof *m2m_ctx, GFP_KERNEL);
+ if (!m2m_ctx)
+ return ERR_PTR(-ENOMEM);
+
+ m2m_ctx->priv = priv;
+ m2m_ctx->m2m_dev = m2m_dev;
+
+ out_q_ctx = get_queue_ctx(m2m_ctx, V4L2_BUF_TYPE_VIDEO_OUTPUT);
+ cap_q_ctx = get_queue_ctx(m2m_ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
+
+ INIT_LIST_HEAD(&out_q_ctx->rdy_queue);
+ INIT_LIST_HEAD(&cap_q_ctx->rdy_queue);
+
+ /*spin_lock_init(&m2m_ctx->queue_lock);*/
+ INIT_LIST_HEAD(&m2m_ctx->queue);
+
+ vq_init(priv, &out_q_ctx->q, V4L2_BUF_TYPE_VIDEO_OUTPUT);
+ vq_init(priv, &cap_q_ctx->q, V4L2_BUF_TYPE_VIDEO_CAPTURE);
+ out_q_ctx->q.priv_data = cap_q_ctx->q.priv_data = priv;
+
+ return m2m_ctx;
+}
+EXPORT_SYMBOL_GPL(v4l2_m2m_ctx_init);
+
+/**
+ * v4l2_m2m_ctx_release() - release m2m context
+ *
+ * Usually called from driver's release() function.
+ */
+void v4l2_m2m_ctx_release(struct v4l2_m2m_ctx *m2m_ctx)
+{
+ struct v4l2_m2m_dev *m2m_dev;
+ struct videobuf_buffer *vb;
+ unsigned long flags = 0;
+
+ m2m_dev = m2m_ctx->m2m_dev;
+
+ spin_lock_irqsave(&m2m_dev->job_spinlock, flags);
+ if (m2m_ctx->job_flags & TRANS_RUNNING) {
+ spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
+ m2m_dev->m2m_ops->job_abort(m2m_ctx->priv);
+ dprintk("m2m_ctx %p running, will wait to complete", m2m_ctx);
+ vb = v4l2_m2m_next_dst_buf(m2m_ctx);
+ BUG_ON(NULL == vb);
+ wait_event(vb->done, vb->state != VIDEOBUF_ACTIVE
+ && vb->state != VIDEOBUF_QUEUED);
+ } else if (m2m_ctx->job_flags & TRANS_QUEUED) {
+ list_del(&m2m_ctx->queue);
+ m2m_ctx->job_flags &= ~(TRANS_QUEUED | TRANS_RUNNING);
+ spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
+ dprintk("m2m_ctx: %p had been on queue and was removed\n",
+ m2m_ctx);
+ } else {
+ /* Do nothing, was not on queue/running */
+ spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
+ }
+
+ videobuf_stop(&m2m_ctx->cap_q_ctx.q);
+ videobuf_stop(&m2m_ctx->out_q_ctx.q);
+
+ videobuf_mmap_free(&m2m_ctx->cap_q_ctx.q);
+ videobuf_mmap_free(&m2m_ctx->out_q_ctx.q);
+
+ kfree(m2m_ctx);
+}
+EXPORT_SYMBOL_GPL(v4l2_m2m_ctx_release);
+
+/**
+ * v4l2_m2m_buf_queue() - add a buffer to the proper ready buffers list.
+ *
+ * Call from within the buf_queue() videobuf_queue_ops callback.
+ */
+/* Locking: Caller holds q->irqlock */
+void v4l2_m2m_buf_queue(struct v4l2_m2m_ctx *m2m_ctx, struct videobuf_queue *vq,
+ struct videobuf_buffer *vb)
+{
+ struct v4l2_m2m_queue_ctx *q_ctx;
+
+ q_ctx = get_queue_ctx(m2m_ctx, vq->type);
+ if (!q_ctx)
+ return;
+
+ list_add_tail(&vb->queue, &q_ctx->rdy_queue);
+ q_ctx->num_rdy++;
+
+ vb->state = VIDEOBUF_QUEUED;
+
+ v4l2_m2m_try_schedule(m2m_ctx);
+}
+EXPORT_SYMBOL_GPL(v4l2_m2m_buf_queue);
+
diff --git a/include/media/v4l2-mem2mem.h b/include/media/v4l2-mem2mem.h
new file mode 100644
index 0000000..a5ac3ec
--- /dev/null
+++ b/include/media/v4l2-mem2mem.h
@@ -0,0 +1,153 @@
+/*
+ * Memory-to-memory device framework for Video for Linux 2.
+ *
+ * Helper functions for devices that use memory buffers for both source
+ * and destination.
+ *
+ * Copyright (c) 2009 Samsung Electronics Co., Ltd.
+ * Pawel Osciak, <p.osciak@samsung.com>
+ * Marek Szyprowski, <m.szyprowski@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version
+ */
+
+#ifndef _MEDIA_V4L2_MEM2MEM_H
+#define _MEDIA_V4L2_MEM2MEM_H
+
+#include <media/videobuf-core.h>
+
+/**
+ * struct v4l2_m2m_ops - mem-to-mem device driver callbacks
+ * @device_run: required. Begin the actual job (transaction) inside this
+ * callback.
+ * The job does NOT have to end before this callback returns
+ * (and that will usually be the case). When the job finishes,
+ * v4l2_m2m_job_finish() has to be called.
+ * @job_ready: optional. Should return 0 if the driver does not have a job
+ * fully prepared to run yet (i.e. it will not be able to finish a
+ * transaction without sleeping). If not provided, it will be
+ * assumed that one source and one destination buffer are all
+ * that is required for the driver to perform one full transaction.
+ * @job_abort: required. Informs the driver that it has to abort the currently
+ * running transaction as soon as possible (i.e. as soon as it can
+ * stop the device safely; e.g. in the next interrupt handler),
+ * even if the transaction would not have been finished by then.
+ * After the driver performs the necessary steps, it has to call
+ * v4l2_m2m_job_finish() (as if the transaction ended normally).
+ * This function does not have to (and will usually not) wait
+ * until the device enters a state in which it can be stopped.
+ */
+struct v4l2_m2m_ops {
+ void (*device_run)(void *priv);
+ int (*job_ready)(void *priv);
+ void (*job_abort)(void *priv);
+};
+
+struct v4l2_m2m_dev;
+
+struct v4l2_m2m_queue_ctx {
+/* private: internal use only */
+ struct videobuf_queue q;
+
+ /* Base value for offsets of mmaped buffers on this queue */
+ unsigned long offset_base;
+
+ /* Queue for buffers ready to be processed as soon as this
+ * instance receives access to the device */
+ struct list_head rdy_queue;
+ u8 num_rdy;
+};
+
+struct v4l2_m2m_ctx {
+/* private: internal use only */
+ struct v4l2_m2m_dev *m2m_dev;
+
+ /* Capture (output to memory) queue context */
+ struct v4l2_m2m_queue_ctx cap_q_ctx;
+
+ /* Output (input from memory) queue context */
+ struct v4l2_m2m_queue_ctx out_q_ctx;
+
+ /* For device job queue */
+ struct list_head queue;
+ unsigned long job_flags;
+
+ /* Instance private data */
+ void *priv;
+};
+
+void *v4l2_m2m_get_curr_priv(struct v4l2_m2m_dev *m2m_dev);
+
+struct videobuf_queue *v4l2_m2m_get_src_vq(struct v4l2_m2m_ctx *m2m_ctx);
+struct videobuf_queue *v4l2_m2m_get_dst_vq(struct v4l2_m2m_ctx *m2m_ctx);
+struct videobuf_queue *v4l2_m2m_get_vq(struct v4l2_m2m_ctx *m2m_ctx,
+ enum v4l2_buf_type type);
+
+void v4l2_m2m_job_finish(struct v4l2_m2m_dev *m2m_dev,
+ struct v4l2_m2m_ctx *m2m_ctx);
+
+void *v4l2_m2m_next_src_buf(struct v4l2_m2m_ctx *m2m_ctx);
+void *v4l2_m2m_next_dst_buf(struct v4l2_m2m_ctx *m2m_ctx);
+
+void *v4l2_m2m_src_buf_remove(struct v4l2_m2m_ctx *m2m_ctx);
+void *v4l2_m2m_dst_buf_remove(struct v4l2_m2m_ctx *m2m_ctx);
+
+
+int v4l2_m2m_reqbufs(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ struct v4l2_requestbuffers *reqbufs);
+
+int v4l2_m2m_querybuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ struct v4l2_buffer *buf);
+
+int v4l2_m2m_qbuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ struct v4l2_buffer *buf);
+int v4l2_m2m_dqbuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ struct v4l2_buffer *buf);
+
+int v4l2_m2m_streamon(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ enum v4l2_buf_type type);
+int v4l2_m2m_streamoff(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ enum v4l2_buf_type type);
+
+unsigned int v4l2_m2m_poll(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ struct poll_table_struct *wait);
+
+int v4l2_m2m_mmap(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ struct vm_area_struct *vma);
+
+struct v4l2_m2m_dev *v4l2_m2m_init(struct v4l2_m2m_ops *m2m_ops);
+void v4l2_m2m_release(struct v4l2_m2m_dev *m2m_dev);
+
+struct v4l2_m2m_ctx *v4l2_m2m_ctx_init(void *priv, struct v4l2_m2m_dev *m2m_dev,
+ void (*vq_init)(void *priv, struct videobuf_queue *,
+ enum v4l2_buf_type));
+void v4l2_m2m_ctx_release(struct v4l2_m2m_ctx *m2m_ctx);
+
+void v4l2_m2m_buf_queue(struct v4l2_m2m_ctx *m2m_ctx, struct videobuf_queue *vq,
+ struct videobuf_buffer *vb);
+
+/**
+ * v4l2_m2m_num_src_bufs_ready() - return the number of source buffers ready for
+ * use
+ */
+static inline
+unsigned int v4l2_m2m_num_src_bufs_ready(struct v4l2_m2m_ctx *m2m_ctx)
+{
+ return m2m_ctx->out_q_ctx.num_rdy;
+}
+
+/**
+ * v4l2_m2m_num_dst_bufs_ready() - return the number of destination buffers
+ * ready for use
+ */
+static inline
+unsigned int v4l2_m2m_num_dst_bufs_ready(struct v4l2_m2m_ctx *m2m_ctx)
+{
+ return m2m_ctx->cap_q_ctx.num_rdy;
+}
+
+#endif /* _MEDIA_V4L2_MEM2MEM_H */
+
--
1.6.4.2.253.g0b1fac
^ permalink raw reply related [flat|nested] 9+ messages in thread
* [PATCH v2.1 2/2] V4L: Add a mem-to-mem V4L2 framework test device.
2009-12-23 13:17 [PATCH/RFC v2.1 0/2] Mem-to-mem device framework Pawel Osciak
2009-12-23 13:17 ` [PATCH v2.1 1/2] V4L: Add memory-to-memory device helper framework for V4L2 Pawel Osciak
@ 2009-12-23 13:17 ` Pawel Osciak
2009-12-23 13:17 ` [EXAMPLE v2] Mem-to-mem userspace test application Pawel Osciak
2009-12-23 15:05 ` [PATCH/RFC v2.1 0/2] Mem-to-mem device framework Hans Verkuil
3 siblings, 0 replies; 9+ messages in thread
From: Pawel Osciak @ 2009-12-23 13:17 UTC (permalink / raw)
To: linux-arm-kernel
This is a virtual device driver for testing the mem-to-mem V4L2 framework.
It simulates a device that uses memory buffers for both source and
destination, processes the data and issues an "IRQ" (simulated by a timer).
The device is capable of multi-instance, multi-buffer-per-transaction
operation (via the mem2mem framework).
Signed-off-by: Pawel Osciak <p.osciak@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
---
drivers/media/video/Kconfig | 14 +
drivers/media/video/Makefile | 1 +
drivers/media/video/mem2mem_testdev.c | 1052 +++++++++++++++++++++++++++++++++
3 files changed, 1067 insertions(+), 0 deletions(-)
create mode 100644 drivers/media/video/mem2mem_testdev.c
diff --git a/drivers/media/video/Kconfig b/drivers/media/video/Kconfig
index 4e97dcf..4e7d703 100644
--- a/drivers/media/video/Kconfig
+++ b/drivers/media/video/Kconfig
@@ -1089,3 +1089,17 @@ menuconfig V4L_MEM2MEM_DRIVERS
use system memory for both source and destination buffers, as opposed
to capture and output drivers, which use memory buffers for just
one of those.
+
+if V4L_MEM2MEM_DRIVERS
+
+config VIDEO_MEM2MEM_TESTDEV
+ tristate "Virtual test device for mem2mem framework"
+ depends on VIDEO_DEV && VIDEO_V4L2
+ select VIDEOBUF_VMALLOC
+ select V4L2_MEM2MEM_DEV
+ default n
+ ---help---
+ This is a virtual test device for the memory-to-memory driver
+ framework.
+
+endif # V4L_MEM2MEM_DRIVERS
diff --git a/drivers/media/video/Makefile b/drivers/media/video/Makefile
index 9fe7d40..8667f1c 100644
--- a/drivers/media/video/Makefile
+++ b/drivers/media/video/Makefile
@@ -149,6 +149,7 @@ obj-$(CONFIG_VIDEO_IVTV) += ivtv/
obj-$(CONFIG_VIDEO_CX18) += cx18/
obj-$(CONFIG_VIDEO_VIVI) += vivi.o
+obj-$(CONFIG_VIDEO_MEM2MEM_TESTDEV) += mem2mem_testdev.o
obj-$(CONFIG_VIDEO_CX23885) += cx23885/
obj-$(CONFIG_VIDEO_OMAP2) += omap2cam.o
diff --git a/drivers/media/video/mem2mem_testdev.c b/drivers/media/video/mem2mem_testdev.c
new file mode 100644
index 0000000..ea54a68
--- /dev/null
+++ b/drivers/media/video/mem2mem_testdev.c
@@ -0,0 +1,1052 @@
+/*
+ * A virtual v4l2-mem2mem example device.
+ *
+ * This is a virtual device driver for testing the mem-to-mem V4L2 framework.
+ * It simulates a device that uses memory buffers for both source and
+ * destination, processes the data and issues an "IRQ" (simulated by a timer).
+ * The device is capable of multi-instance, multi-buffer-per-transaction
+ * operation (via the mem2mem framework).
+ *
+ * Copyright (c) 2009 Samsung Electronics Co., Ltd.
+ * Pawel Osciak, <p.osciak@samsung.com>
+ * Marek Szyprowski, <m.szyprowski@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version
+ */
+#include <linux/module.h>
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/version.h>
+#include <linux/timer.h>
+#include <linux/sched.h>
+
+#include <linux/platform_device.h>
+#include <media/v4l2-mem2mem.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-ioctl.h>
+#include <media/videobuf-vmalloc.h>
+
+#define MEM2MEM_TEST_MODULE_NAME "mem2mem-testdev"
+
+MODULE_DESCRIPTION("Virtual device for mem2mem framework testing");
+MODULE_AUTHOR("Pawel Osciak, <p.osciak@samsung.com>");
+MODULE_LICENSE("GPL");
+
+
+#define MIN_W 32
+#define MIN_H 32
+#define MAX_W 640
+#define MAX_H 480
+#define DIM_ALIGN_MASK 0x07 /* 8-alignment for dimensions */
+
+/* Flags that indicate a format can be used for capture/output */
+#define MEM2MEM_CAPTURE (1 << 0)
+#define MEM2MEM_OUTPUT (1 << 1)
+
+#define MEM2MEM_MAX_INSTANCES 10
+#define MEM2MEM_NAME "m2m-testdev"
+
+/* Per queue */
+#define MEM2MEM_DEF_NUM_BUFS 32
+/* In bytes, per queue */
+#define MEM2MEM_VID_MEM_LIMIT (16 * 1024 * 1024)
+
+/* Default transaction time in msec */
+#define MEM2MEM_DEF_TRANSTIME 1000
+/* Default number of buffers per transaction */
+#define MEM2MEM_DEF_TRANSLEN 1
+#define MEM2MEM_COLOR_STEP (0xff >> 4)
+#define MEM2MEM_NUM_TILES 10
+
+#define dprintk(dev, fmt, arg...) \
+ v4l2_dbg(1, 1, &dev->v4l2_dev, "%s: " fmt, __func__, ## arg)
+
+
+static void m2mtest_dev_release(struct device *dev)
+{}
+
+static struct platform_device m2mtest_pdev = {
+ .name = MEM2MEM_NAME,
+ .dev.release = m2mtest_dev_release,
+};
+
+struct m2mtest_fmt {
+ char *name;
+ u32 fourcc; /* v4l2 format id */
+ int depth;
+ /* Types the format can be used for */
+ u32 types;
+};
+
+static struct m2mtest_fmt formats[] = {
+ {
+ .name = "RGB565 (BE)",
+ .fourcc = V4L2_PIX_FMT_RGB565X, /* rrrrrggg gggbbbbb */
+ .depth = 16,
+ /* Both capture and output format */
+ .types = MEM2MEM_CAPTURE | MEM2MEM_OUTPUT,
+ },
+ {
+ .name = "4:2:2, packed, YUYV",
+ .fourcc = V4L2_PIX_FMT_YUYV,
+ .depth = 16,
+ /* Output-only format */
+ .types = MEM2MEM_OUTPUT,
+ },
+};
+
+/* Per-queue, driver-specific private data */
+struct m2mtest_q_data {
+ unsigned int width;
+ unsigned int height;
+ unsigned int sizeimage;
+ struct m2mtest_fmt *fmt;
+};
+
+enum {
+ V4L2_M2M_SRC = 0,
+ V4L2_M2M_DST = 1,
+};
+
+/* Source and destination queue data */
+static struct m2mtest_q_data q_data[2];
+
+static struct m2mtest_q_data *get_q_data(enum v4l2_buf_type type)
+{
+ switch (type) {
+ case V4L2_BUF_TYPE_VIDEO_OUTPUT:
+ return &q_data[V4L2_M2M_SRC];
+ case V4L2_BUF_TYPE_VIDEO_CAPTURE:
+ return &q_data[V4L2_M2M_DST];
+ default:
+ BUG();
+ return NULL;
+ }
+}
+
+
+#define V4L2_CID_TRANS_TIME_MSEC V4L2_CID_PRIVATE_BASE
+#define V4L2_CID_TRANS_NUM_BUFS (V4L2_CID_PRIVATE_BASE + 1)
+
+static struct v4l2_queryctrl m2mtest_ctrls[] = {
+ {
+ .id = V4L2_CID_TRANS_TIME_MSEC,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "Transaction time (msec)",
+ .minimum = 1,
+ .maximum = 10000,
+ .step = 100,
+ .default_value = 1000,
+ .flags = 0,
+ }, {
+ .id = V4L2_CID_TRANS_NUM_BUFS,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "Buffers per transaction",
+ .minimum = 1,
+ .maximum = MEM2MEM_DEF_NUM_BUFS,
+ .step = 1,
+ .default_value = 1,
+ .flags = 0,
+ },
+};
+
+#define NUM_FORMATS ARRAY_SIZE(formats)
+
+static struct m2mtest_fmt *find_format(struct v4l2_format *f)
+{
+ struct m2mtest_fmt *fmt;
+ unsigned int k;
+
+ for (k = 0; k < NUM_FORMATS; k++) {
+ fmt = &formats[k];
+ if (fmt->fourcc == f->fmt.pix.pixelformat)
+ break;
+ }
+
+ if (k == NUM_FORMATS)
+ return NULL;
+
+ return &formats[k];
+}
+
+struct m2mtest_dev {
+ struct v4l2_device v4l2_dev;
+ struct video_device *vfd;
+
+ atomic_t num_inst;
+ struct mutex dev_mutex;
+ spinlock_t irqlock;
+
+ struct timer_list timer;
+
+ struct v4l2_m2m_dev *m2m_dev;
+};
+
+struct m2mtest_ctx {
+ struct m2mtest_dev *dev;
+
+ /* Processed buffers in this transaction */
+ u8 num_processed;
+
+ /* Transaction length (i.e. how many buffers per transaction) */
+ u32 translen;
+ /* Transaction time (i.e. simulated processing time) in milliseconds */
+ u32 transtime;
+
+ /* Abort requested by m2m */
+ int aborting;
+
+ struct v4l2_m2m_ctx *m2m_ctx;
+};
+
+struct m2mtest_buffer {
+ /* vb must be first! */
+ struct videobuf_buffer vb;
+};
+
+static struct v4l2_queryctrl *get_ctrl(int id)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(m2mtest_ctrls); ++i) {
+ if (id == m2mtest_ctrls[i].id) {
+ return &m2mtest_ctrls[i];
+ }
+ }
+
+ return NULL;
+}
+
+static int device_process(struct m2mtest_ctx *ctx,
+ struct m2mtest_buffer *in_buf,
+ struct m2mtest_buffer *out_buf)
+{
+ struct m2mtest_dev *dev = ctx->dev;
+ u8 *p_in, *p_out;
+ int x, y, t, w;
+ int tile_w, bytes_left;
+ struct videobuf_queue *src_q;
+ struct videobuf_queue *dst_q;
+
+ src_q = v4l2_m2m_get_src_vq(ctx->m2m_ctx);
+ dst_q = v4l2_m2m_get_dst_vq(ctx->m2m_ctx);
+ p_in = videobuf_queue_to_vmalloc(src_q, &in_buf->vb);
+ p_out = videobuf_queue_to_vmalloc(dst_q, &out_buf->vb);
+ if (!p_in || !p_out) {
+ v4l2_err(&dev->v4l2_dev,
+ "Acquiring kernel pointers to buffers failed\n");
+ return 1;
+ }
+
+ if (in_buf->vb.size > out_buf->vb.size) {
+ v4l2_err(&dev->v4l2_dev, "Output buffer is too small\n");
+ return 1;
+ }
+
+ tile_w = (in_buf->vb.width * (q_data[V4L2_M2M_DST].fmt->depth >> 3))
+ / MEM2MEM_NUM_TILES;
+ bytes_left = in_buf->vb.bytesperline - tile_w * MEM2MEM_NUM_TILES;
+ w = 0;
+
+ for (y = 0; y < in_buf->vb.height; ++y) {
+ for (t = 0; t < MEM2MEM_NUM_TILES; ++t) {
+ if (w & 0x1) {
+ for (x = 0; x < tile_w; ++x)
+ *p_out++ = *p_in++ + MEM2MEM_COLOR_STEP;
+ } else {
+ for (x = 0; x < tile_w; ++x)
+ *p_out++ = *p_in++ - MEM2MEM_COLOR_STEP;
+ }
+ ++w;
+ }
+ p_in += bytes_left;
+ p_out += bytes_left;
+ }
+
+ return 0;
+}
+
+static void schedule_irq(struct m2mtest_dev *dev, int msec_timeout)
+{
+ dprintk(dev, "Scheduling a fake irq\n");
+ mod_timer(&dev->timer, jiffies + msecs_to_jiffies(msec_timeout));
+}
+
+/*
+ * mem2mem callbacks
+ */
+
+/**
+ * job_ready - check whether an instance is ready to be scheduled to run
+ */
+static int job_ready(void *priv)
+{
+ struct m2mtest_ctx *ctx = priv;
+
+ if (v4l2_m2m_num_src_bufs_ready(ctx->m2m_ctx) < ctx->translen
+ || v4l2_m2m_num_dst_bufs_ready(ctx->m2m_ctx) < ctx->translen) {
+ dprintk(ctx->dev, "Not enough buffers available\n");
+ return 0;
+ }
+
+ return 1;
+}
+
+static void job_abort(void *priv)
+{
+ struct m2mtest_ctx *ctx = priv;
+
+ /* Will cancel the transaction in the next interrupt handler */
+ ctx->aborting = 1;
+}
+
+/* device_run() - prepares and starts the device
+ *
+ * This simulates all the immediate preparations required
+ * before starting a device.
+ * This should be called by the framework when it decides to
+ * schedule a particular instance.
+ */
+static void device_run(void *priv)
+{
+ struct m2mtest_ctx *ctx = priv;
+ struct m2mtest_dev *dev = ctx->dev;
+ struct m2mtest_buffer *src_buf, *dst_buf;
+
+ src_buf = v4l2_m2m_next_src_buf(ctx->m2m_ctx);
+ dst_buf = v4l2_m2m_next_dst_buf(ctx->m2m_ctx);
+
+ device_process(ctx, src_buf, dst_buf);
+
+ /* Run a timer, which simulates a hardware irq */
+ schedule_irq(dev, ctx->transtime);
+}
+
+
+static void device_isr(unsigned long priv)
+{
+ struct m2mtest_dev *m2mtest_dev = (struct m2mtest_dev *)priv;
+ struct m2mtest_ctx *curr_ctx;
+ struct m2mtest_buffer *src_buf, *dst_buf;
+
+ curr_ctx = v4l2_m2m_get_curr_priv(m2mtest_dev->m2m_dev);
+
+ if (NULL == curr_ctx) {
+ printk(KERN_ERR
+ "Instance released before end of transaction\n");
+ return;
+ }
+
+ src_buf = v4l2_m2m_src_buf_remove(curr_ctx->m2m_ctx);
+ dst_buf = v4l2_m2m_dst_buf_remove(curr_ctx->m2m_ctx);
+ curr_ctx->num_processed++;
+
+ if (curr_ctx->num_processed == curr_ctx->translen
+ || curr_ctx->aborting) {
+ dprintk(curr_ctx->dev, "Finishing transaction\n");
+ curr_ctx->num_processed = 0;
+ v4l2_m2m_job_finish(m2mtest_dev->m2m_dev, curr_ctx->m2m_ctx);
+ src_buf->vb.state = dst_buf->vb.state = VIDEOBUF_DONE;
+ wake_up(&src_buf->vb.done);
+ wake_up(&dst_buf->vb.done);
+ } else {
+ src_buf->vb.state = dst_buf->vb.state = VIDEOBUF_DONE;
+ wake_up(&src_buf->vb.done);
+ wake_up(&dst_buf->vb.done);
+ device_run(curr_ctx);
+ }
+
+ return;
+}
+
+
+/*
+ * video ioctls
+ */
+
+static int vidioc_querycap(struct file *file, void *priv,
+ struct v4l2_capability *cap)
+{
+ strncpy(cap->driver, MEM2MEM_NAME, sizeof(cap->driver) - 1);
+ strncpy(cap->card, MEM2MEM_NAME, sizeof(cap->card) - 1);
+ cap->bus_info[0] = 0;
+ cap->version = KERNEL_VERSION(0, 1, 0);
+ cap->capabilities = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_VIDEO_OUTPUT
+ | V4L2_CAP_STREAMING;
+
+ return 0;
+}
+
+static int enum_fmt(struct v4l2_fmtdesc *f, u32 type)
+{
+ int i, num;
+ struct m2mtest_fmt *fmt;
+
+ num = 0;
+
+ for (i = 0; i < NUM_FORMATS; ++i) {
+ if (formats[i].types & type) {
+ /* Is this the index-th format of the requested type? */
+ if (num == f->index)
+ break;
+ /* Correct type but haven't reached our index yet,
+ * just increment per-type index */
+ ++num;
+ }
+ }
+
+ if (i < NUM_FORMATS) {
+ /* Format found */
+ fmt = &formats[i];
+ strncpy(f->description, fmt->name, sizeof(f->description) - 1);
+ f->pixelformat = fmt->fourcc;
+ return 0;
+ }
+
+ /* Format not found */
+ return -EINVAL;
+}
+
+static int vidioc_enum_fmt_vid_cap(struct file *file, void *priv,
+ struct v4l2_fmtdesc *f)
+{
+ return enum_fmt(f, MEM2MEM_CAPTURE);
+}
+
+static int vidioc_enum_fmt_vid_out(struct file *file, void *priv,
+ struct v4l2_fmtdesc *f)
+{
+ return enum_fmt(f, MEM2MEM_OUTPUT);
+}
+
+static int vidioc_g_fmt(struct m2mtest_ctx *ctx, struct v4l2_format *f)
+{
+ struct videobuf_queue *vq;
+ struct m2mtest_q_data *q_data;
+
+ vq = v4l2_m2m_get_vq(ctx->m2m_ctx, f->type);
+ q_data = get_q_data(f->type);
+
+ f->fmt.pix.width = q_data->width;
+ f->fmt.pix.height = q_data->height;
+ f->fmt.pix.field = vq->field;
+ f->fmt.pix.pixelformat = q_data->fmt->fourcc;
+ f->fmt.pix.bytesperline = (q_data->width * q_data->fmt->depth) >> 3;
+ f->fmt.pix.sizeimage = q_data->sizeimage;
+
+ return 0;
+}
+
+static int vidioc_g_fmt_vid_out(struct file *file, void *priv,
+ struct v4l2_format *f)
+{
+ return vidioc_g_fmt(priv, f);
+}
+
+static int vidioc_g_fmt_vid_cap(struct file *file, void *priv,
+ struct v4l2_format *f)
+{
+ return vidioc_g_fmt(priv, f);
+}
+
+static int vidioc_try_fmt(struct v4l2_format *f, struct m2mtest_fmt *fmt)
+{
+ enum v4l2_field field;
+
+ field = f->fmt.pix.field;
+
+ if (field == V4L2_FIELD_ANY)
+ field = V4L2_FIELD_NONE;
+ else if (V4L2_FIELD_NONE != field)
+ return -EINVAL;
+
+ /* V4L2 specification suggests the driver corrects the format struct
+ * if any of the dimensions is unsupported */
+ f->fmt.pix.field = field;
+
+ if (f->fmt.pix.height < MIN_H)
+ f->fmt.pix.height = MIN_H;
+ else if (f->fmt.pix.height > MAX_H)
+ f->fmt.pix.height = MAX_H;
+
+ if (f->fmt.pix.width < MIN_W)
+ f->fmt.pix.width = MIN_W;
+ else if (f->fmt.pix.width > MAX_W)
+ f->fmt.pix.width = MAX_W;
+
+ f->fmt.pix.width &= ~DIM_ALIGN_MASK;
+ f->fmt.pix.bytesperline = (f->fmt.pix.width * fmt->depth) >> 3;
+ f->fmt.pix.sizeimage = f->fmt.pix.height * f->fmt.pix.bytesperline;
+
+ return 0;
+}
+
+static int vidioc_try_fmt_vid_cap(struct file *file, void *priv,
+ struct v4l2_format *f)
+{
+ struct m2mtest_fmt *fmt;
+ struct m2mtest_ctx *ctx = priv;
+
+ fmt = find_format(f);
+ if (!fmt || !(fmt->types & MEM2MEM_CAPTURE)) {
+ v4l2_err(&ctx->dev->v4l2_dev,
+ "Fourcc format (0x%08x) invalid.\n",
+ f->fmt.pix.pixelformat);
+ return -EINVAL;
+ }
+
+ return vidioc_try_fmt(f, fmt);
+}
+
+static int vidioc_try_fmt_vid_out(struct file *file, void *priv,
+ struct v4l2_format *f)
+{
+ struct m2mtest_fmt *fmt;
+ struct m2mtest_ctx *ctx = priv;
+
+ fmt = find_format(f);
+ if (!fmt || !(fmt->types & MEM2MEM_OUTPUT)) {
+ v4l2_err(&ctx->dev->v4l2_dev,
+ "Fourcc format (0x%08x) invalid.\n",
+ f->fmt.pix.pixelformat);
+ return -EINVAL;
+ }
+
+ return vidioc_try_fmt(f, fmt);
+}
+
+static int vidioc_s_fmt(struct m2mtest_ctx *ctx, struct v4l2_format *f)
+{
+ struct m2mtest_q_data *q_data;
+ struct videobuf_queue *vq;
+ int ret = 0;
+
+ vq = v4l2_m2m_get_vq(ctx->m2m_ctx, f->type);
+ q_data = get_q_data(f->type);
+ if (!q_data)
+ return -EINVAL;
+
+ mutex_lock(&vq->vb_lock);
+
+ if (videobuf_queue_is_busy(vq)) {
+ v4l2_err(&ctx->dev->v4l2_dev,
+ "%s queue busy\n", __func__);
+ ret = -EBUSY;
+ goto out;
+ }
+
+ q_data->fmt = find_format(f);
+ q_data->width = f->fmt.pix.width;
+ q_data->height = f->fmt.pix.height;
+ q_data->sizeimage = q_data->width * q_data->height
+ * q_data->fmt->depth >> 3;
+ vq->field = f->fmt.pix.field;
+
+ dprintk(ctx->dev,
+ "Setting format for type %d, wxh: %dx%d, fmt: %d\n",
+ f->type, q_data->width, q_data->height, q_data->fmt->fourcc);
+
+out:
+ mutex_unlock(&vq->vb_lock);
+ return ret;
+}
+
+static int vidioc_s_fmt_vid_cap(struct file *file, void *priv,
+ struct v4l2_format *f)
+{
+ int ret;
+
+ ret = vidioc_try_fmt_vid_cap(file, priv, f);
+ if (ret)
+ return ret;
+
+ return vidioc_s_fmt(priv, f);
+}
+
+static int vidioc_s_fmt_vid_out(struct file *file, void *priv,
+ struct v4l2_format *f)
+{
+ int ret;
+
+ ret = vidioc_try_fmt_vid_out(file, priv, f);
+ if (ret)
+ return ret;
+
+ return vidioc_s_fmt(priv, f);
+}
+
+static int vidioc_reqbufs(struct file *file, void *priv,
+ struct v4l2_requestbuffers *reqbufs)
+{
+ struct m2mtest_ctx *ctx = priv;
+
+ return v4l2_m2m_reqbufs(file, ctx->m2m_ctx, reqbufs);
+}
+
+static int vidioc_querybuf(struct file *file, void *priv,
+ struct v4l2_buffer *buf)
+{
+ struct m2mtest_ctx *ctx = priv;
+
+ return v4l2_m2m_querybuf(file, ctx->m2m_ctx, buf);
+}
+
+static int vidioc_qbuf(struct file *file, void *priv, struct v4l2_buffer *buf)
+{
+ struct m2mtest_ctx *ctx = priv;
+
+ return v4l2_m2m_qbuf(file, ctx->m2m_ctx, buf);
+}
+
+static int vidioc_dqbuf(struct file *file, void *priv, struct v4l2_buffer *buf)
+{
+ struct m2mtest_ctx *ctx = priv;
+
+ return v4l2_m2m_dqbuf(file, ctx->m2m_ctx, buf);
+}
+
+static int vidioc_streamon(struct file *file, void *priv,
+ enum v4l2_buf_type type)
+{
+ struct m2mtest_ctx *ctx = priv;
+
+ return v4l2_m2m_streamon(file, ctx->m2m_ctx, type);
+}
+
+static int vidioc_streamoff(struct file *file, void *priv,
+ enum v4l2_buf_type type)
+{
+ struct m2mtest_ctx *ctx = priv;
+
+ return v4l2_m2m_streamoff(file, ctx->m2m_ctx, type);
+}
+
+static int vidioc_queryctrl(struct file *file, void *priv,
+ struct v4l2_queryctrl *qc)
+{
+ struct v4l2_queryctrl *c;
+
+ c = get_ctrl(qc->id);
+ if (!c)
+ return -EINVAL;
+
+ *qc = *c;
+ return 0;
+}
+
+static int vidioc_g_ctrl(struct file *file, void *priv,
+ struct v4l2_control *ctrl)
+{
+ struct m2mtest_ctx *ctx = priv;
+
+ switch (ctrl->id) {
+ case V4L2_CID_TRANS_TIME_MSEC:
+ ctrl->value = ctx->transtime;
+ break;
+
+ case V4L2_CID_TRANS_NUM_BUFS:
+ ctrl->value = ctx->translen;
+ break;
+
+ default:
+ v4l2_err(&ctx->dev->v4l2_dev, "Invalid control\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int check_ctrl_val(struct m2mtest_ctx *ctx, struct v4l2_control *ctrl)
+{
+ struct v4l2_queryctrl *c;
+
+ c = get_ctrl(ctrl->id);
+ if (!c)
+ return -EINVAL;
+
+ if (ctrl->value < c->minimum
+ || ctrl->value > c->maximum) {
+ v4l2_err(&ctx->dev->v4l2_dev, "Value out of range\n");
+ return -ERANGE;
+ }
+
+ return 0;
+}
+
+static int vidioc_s_ctrl(struct file *file, void *priv,
+ struct v4l2_control *ctrl)
+{
+ struct m2mtest_ctx *ctx = priv;
+ int ret = 0;
+
+ ret = check_ctrl_val(ctx, ctrl);
+ if (ret != 0)
+ return ret;
+
+ switch (ctrl->id) {
+ case V4L2_CID_TRANS_TIME_MSEC:
+ ctx->transtime = ctrl->value;
+ break;
+
+ case V4L2_CID_TRANS_NUM_BUFS:
+ ctx->translen = ctrl->value;
+ break;
+
+ default:
+ v4l2_err(&ctx->dev->v4l2_dev, "Invalid control\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+
+static const struct v4l2_ioctl_ops m2mtest_ioctl_ops = {
+ .vidioc_querycap = vidioc_querycap,
+
+ .vidioc_enum_fmt_vid_cap = vidioc_enum_fmt_vid_cap,
+ .vidioc_g_fmt_vid_cap = vidioc_g_fmt_vid_cap,
+ .vidioc_try_fmt_vid_cap = vidioc_try_fmt_vid_cap,
+ .vidioc_s_fmt_vid_cap = vidioc_s_fmt_vid_cap,
+
+ .vidioc_enum_fmt_vid_out = vidioc_enum_fmt_vid_out,
+ .vidioc_g_fmt_vid_out = vidioc_g_fmt_vid_out,
+ .vidioc_try_fmt_vid_out = vidioc_try_fmt_vid_out,
+ .vidioc_s_fmt_vid_out = vidioc_s_fmt_vid_out,
+
+ .vidioc_reqbufs = vidioc_reqbufs,
+ .vidioc_querybuf = vidioc_querybuf,
+
+ .vidioc_qbuf = vidioc_qbuf,
+ .vidioc_dqbuf = vidioc_dqbuf,
+
+ .vidioc_streamon = vidioc_streamon,
+ .vidioc_streamoff = vidioc_streamoff,
+
+ .vidioc_queryctrl = vidioc_queryctrl,
+ .vidioc_g_ctrl = vidioc_g_ctrl,
+ .vidioc_s_ctrl = vidioc_s_ctrl,
+};
+
+
+/*
+ * Queue operations
+ */
+
+static void m2mtest_buf_release(struct videobuf_queue *vq,
+ struct videobuf_buffer *vb)
+{
+ struct m2mtest_ctx *ctx = vq->priv_data;
+
+ dprintk(ctx->dev, "type: %d, index: %d, state: %d\n",
+ vq->type, vb->i, vb->state);
+
+ videobuf_vmalloc_free(vb);
+ vb->state = VIDEOBUF_NEEDS_INIT;
+}
+
+static int m2mtest_buf_setup(struct videobuf_queue *vq, unsigned int *count,
+ unsigned int *size)
+{
+ struct m2mtest_ctx *ctx = vq->priv_data;
+ struct m2mtest_q_data *q_data;
+
+ q_data = get_q_data(vq->type);
+
+ *size = q_data->width * q_data->height * q_data->fmt->depth >> 3;
+ dprintk(ctx->dev, "size:%d, w/h %d/%d, depth: %d\n",
+ *size, q_data->width, q_data->height, q_data->fmt->depth);
+
+ if (0 == *count)
+ *count = MEM2MEM_DEF_NUM_BUFS;
+
+ while (*size * *count > MEM2MEM_VID_MEM_LIMIT)
+ (*count)--;
+
+ v4l2_info(&ctx->dev->v4l2_dev,
+ "%d buffers of size %d set up.\n", *count, *size);
+
+ return 0;
+}
+
+static int m2mtest_buf_prepare(struct videobuf_queue *vq,
+ struct videobuf_buffer *vb,
+ enum v4l2_field field)
+{
+ struct m2mtest_ctx *ctx = vq->priv_data;
+ struct m2mtest_q_data *q_data;
+ int ret;
+
+ dprintk(ctx->dev, "type: %d, index: %d, state: %d\n",
+ vq->type, vb->i, vb->state);
+
+ q_data = get_q_data(vq->type);
+
+ if (vb->baddr) {
+ /* User-provided buffer */
+ if (vb->bsize < q_data->sizeimage) {
+ /* Buffer too small to fit a frame */
+ v4l2_err(&ctx->dev->v4l2_dev,
+ "User-provided buffer too small (%d < %d)\n",
+ vb->bsize, q_data->sizeimage);
+ return -EINVAL;
+ }
+ } else if (vb->state != VIDEOBUF_NEEDS_INIT
+ && vb->bsize < q_data->sizeimage) {
+ /* We provide the buffer, but it's already been inited
+ * and is too small */
+ return -EINVAL;
+ }
+
+ vb->width = q_data->width;
+ vb->height = q_data->height;
+ vb->bytesperline = (q_data->width * q_data->fmt->depth) >> 3;
+ vb->size = q_data->sizeimage;
+ vb->field = field;
+
+ if (VIDEOBUF_NEEDS_INIT == vb->state) {
+ ret = videobuf_iolock(vq, vb, NULL);
+ if (ret) {
+ v4l2_err(&ctx->dev->v4l2_dev,
+ "Iolock failed\n");
+ goto fail;
+ }
+ }
+
+ vb->state = VIDEOBUF_PREPARED;
+
+ return 0;
+fail:
+ m2mtest_buf_release(vq, vb);
+ return ret;
+}
+
+static void m2mtest_buf_queue(struct videobuf_queue *vq,
+ struct videobuf_buffer *vb)
+{
+ struct m2mtest_ctx *ctx = vq->priv_data;
+
+ v4l2_m2m_buf_queue(ctx->m2m_ctx, vq, vb);
+}
+
+static struct videobuf_queue_ops m2mtest_qops = {
+ .buf_setup = m2mtest_buf_setup,
+ .buf_prepare = m2mtest_buf_prepare,
+ .buf_queue = m2mtest_buf_queue,
+ .buf_release = m2mtest_buf_release,
+};
+
+static void queue_init(void *priv, struct videobuf_queue *vq,
+ enum v4l2_buf_type type)
+{
+ struct m2mtest_ctx *ctx = priv;
+
+ videobuf_queue_vmalloc_init(vq, &m2mtest_qops, ctx->dev->v4l2_dev.dev,
+ &ctx->dev->irqlock, type, V4L2_FIELD_NONE,
+ sizeof(struct m2mtest_buffer), priv);
+}
+
+
+/*
+ * File operations
+ */
+static int m2mtest_open(struct file *file)
+{
+ struct m2mtest_dev *dev = video_drvdata(file);
+ struct m2mtest_ctx *ctx = NULL;
+
+ atomic_inc(&dev->num_inst);
+
+ ctx = kzalloc(sizeof *ctx, GFP_KERNEL);
+ if (!ctx) {
+ atomic_dec(&dev->num_inst);
+ return -ENOMEM;
+ }
+
+ file->private_data = ctx;
+ ctx->dev = dev;
+ ctx->translen = MEM2MEM_DEF_TRANSLEN;
+ ctx->transtime = MEM2MEM_DEF_TRANSTIME;
+ ctx->num_processed = 0;
+
+ ctx->m2m_ctx = v4l2_m2m_ctx_init(ctx, dev->m2m_dev, queue_init);
+
+ if (IS_ERR(ctx->m2m_ctx)) {
+ kfree(ctx);
+ atomic_dec(&dev->num_inst);
+ return PTR_ERR(ctx->m2m_ctx);
+ }
+
+ dprintk(dev, "Created instance %p, m2m_ctx: %p\n", ctx, ctx->m2m_ctx);
+
+ return 0;
+}
+
+static int m2mtest_release(struct file *file)
+{
+ struct m2mtest_dev *dev = video_drvdata(file);
+ struct m2mtest_ctx *ctx = file->private_data;
+
+ dprintk(dev, "Releasing instance %p\n", ctx);
+
+ v4l2_m2m_ctx_release(ctx->m2m_ctx);
+ kfree(ctx);
+
+ atomic_dec(&dev->num_inst);
+
+ return 0;
+}
+
+static unsigned int m2mtest_poll(struct file *file,
+ struct poll_table_struct *wait)
+{
+ struct m2mtest_ctx *ctx = (struct m2mtest_ctx *)file->private_data;
+
+ return v4l2_m2m_poll(file, ctx->m2m_ctx, wait);
+}
+
+static int m2mtest_mmap(struct file *file, struct vm_area_struct *vma)
+{
+ struct m2mtest_ctx *ctx = (struct m2mtest_ctx *)file->private_data;
+
+ return v4l2_m2m_mmap(file, ctx->m2m_ctx, vma);
+}
+
+static const struct v4l2_file_operations m2mtest_fops = {
+ .owner = THIS_MODULE,
+ .open = m2mtest_open,
+ .release = m2mtest_release,
+ .poll = m2mtest_poll,
+ .ioctl = video_ioctl2,
+ .mmap = m2mtest_mmap,
+};
+
+static struct video_device m2mtest_videodev = {
+ .name = MEM2MEM_NAME,
+ .fops = &m2mtest_fops,
+ .ioctl_ops = &m2mtest_ioctl_ops,
+ .minor = -1,
+ .release = video_device_release,
+};
+
+static struct v4l2_m2m_ops m2m_ops = {
+ .device_run = device_run,
+ .job_ready = job_ready,
+ .job_abort = job_abort,
+};
+
+static int m2mtest_probe(struct platform_device *pdev)
+{
+ struct m2mtest_dev *dev;
+ struct video_device *vfd;
+ int ret;
+
+ dev = kzalloc(sizeof *dev, GFP_KERNEL);
+ if (!dev)
+ return -ENOMEM;
+
+ spin_lock_init(&dev->irqlock);
+
+ ret = v4l2_device_register(&pdev->dev, &dev->v4l2_dev);
+ if (ret)
+ goto free_dev;
+
+ atomic_set(&dev->num_inst, 0);
+ mutex_init(&dev->dev_mutex);
+
+ vfd = video_device_alloc();
+ if (!vfd) {
+ v4l2_err(&dev->v4l2_dev, "Failed to allocate video device\n");
+ ret = -ENOMEM;
+ goto unreg_dev;
+ }
+
+ *vfd = m2mtest_videodev;
+
+ ret = video_register_device(vfd, VFL_TYPE_GRABBER, 0);
+ if (ret) {
+ v4l2_err(&dev->v4l2_dev, "Failed to register video device\n");
+ goto rel_vdev;
+ }
+
+ video_set_drvdata(vfd, dev);
+ snprintf(vfd->name, sizeof(vfd->name), "%s", m2mtest_videodev.name);
+ dev->vfd = vfd;
+ v4l2_info(&dev->v4l2_dev, MEM2MEM_TEST_MODULE_NAME
+ " Device registered as /dev/video%d\n", vfd->num);
+
+ setup_timer(&dev->timer, device_isr, (unsigned long)dev);
+ platform_set_drvdata(pdev, dev);
+
+ dev->m2m_dev = v4l2_m2m_init(&m2m_ops);
+ if (IS_ERR(dev->m2m_dev)) {
+ v4l2_err(&dev->v4l2_dev, "Failed to init mem2mem device\n");
+ ret = PTR_ERR(dev->m2m_dev);
+ goto err_m2m;
+ }
+
+ return 0;
+
+err_m2m:
+ video_unregister_device(dev->vfd);
+rel_vdev:
+ video_device_release(vfd);
+unreg_dev:
+ v4l2_device_unregister(&dev->v4l2_dev);
+free_dev:
+ kfree(dev);
+
+ return ret;
+}
+
+static int m2mtest_remove(struct platform_device *pdev)
+{
+ struct m2mtest_dev *dev =
+ (struct m2mtest_dev *)platform_get_drvdata(pdev);
+
+ v4l2_info(&dev->v4l2_dev, "Removing " MEM2MEM_TEST_MODULE_NAME);
+ v4l2_m2m_release(dev->m2m_dev);
+ del_timer_sync(&dev->timer);
+ video_unregister_device(dev->vfd);
+ v4l2_device_unregister(&dev->v4l2_dev);
+ kfree(dev);
+
+ return 0;
+}
+
+static struct platform_driver m2mtest_pdrv = {
+ .probe = m2mtest_probe,
+ .remove = m2mtest_remove,
+ .driver = {
+ .name = MEM2MEM_NAME,
+ .owner = THIS_MODULE,
+ },
+};
+
+static void __exit m2mtest_exit(void)
+{
+ platform_driver_unregister(&m2mtest_pdrv);
+ platform_device_unregister(&m2mtest_pdev);
+}
+
+static int __init m2mtest_init(void)
+{
+ int ret;
+
+ ret = platform_device_register(&m2mtest_pdev);
+ if (ret)
+ return ret;
+
+ ret = platform_driver_register(&m2mtest_pdrv);
+ if (ret)
+ platform_device_unregister(&m2mtest_pdev);
+
+ return ret;
+}
+
+module_init(m2mtest_init);
+module_exit(m2mtest_exit);
+
--
1.6.4.2.253.g0b1fac
^ permalink raw reply related [flat|nested] 9+ messages in thread
* [EXAMPLE v2] Mem-to-mem userspace test application.
2009-12-23 13:17 [PATCH/RFC v2.1 0/2] Mem-to-mem device framework Pawel Osciak
2009-12-23 13:17 ` [PATCH v2.1 1/2] V4L: Add memory-to-memory device helper framework for V4L2 Pawel Osciak
2009-12-23 13:17 ` [PATCH v2.1 2/2] V4L: Add a mem-to-mem V4L2 framework test device Pawel Osciak
@ 2009-12-23 13:17 ` Pawel Osciak
2009-12-23 15:05 ` [PATCH/RFC v2.1 0/2] Mem-to-mem device framework Hans Verkuil
3 siblings, 0 replies; 9+ messages in thread
From: Pawel Osciak @ 2009-12-23 13:17 UTC (permalink / raw)
To: linux-arm-kernel
This is an example application for testing mem-to-mem framework using
mem2mem-testdev device.
It is intended to be executed multiple times in parallel to test multi-instance
operation and scheduling. Each process can be configured differently using
command-line arguments.
The application opens the video test device and the framebuffer, sets up parameters,
queues src/dst buffers and displays processed results on the framebuffer.
Configurable parameters: starting point on the framebuffer, width/height of
buffers, transaction length (in buffers), transaction duration, total number
of frames to be processed.
Tested on a 800x480 framebuffer with the following script:
#!/bin/bash
for i in {0..3}
do
((x=$i * 100))
./process-vmalloc 0 $(($i + 1)) $((2000 - $i * 500)) $((($i+1) * 4)) \
$x $x 100 100 &
done
Signed-off-by: Pawel Osciak <p.osciak@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
---
--- /dev/null 2009-11-17 07:51:25.574927259 +0100
+++ process-vmalloc.c 2009-11-26 11:00:26.000000000 +0100
@@ -0,0 +1,420 @@
+/**
+ * process-vmalloc.c
+ * Capture+output (process) V4L2 device tester.
+ *
+ * Pawel Osciak, p.osciak at samsung.com
+ * 2009, Samsung Electronics Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <assert.h>
+#include <time.h>
+#include <errno.h>
+
+#include <fcntl.h>
+#include <unistd.h>
+#include <sys/ioctl.h>
+#include <sys/types.h>
+#include <stdint.h>
+
+#include <linux/fb.h>
+#include <linux/videodev2.h>
+
+#include <sys/mman.h>
+
+#define V4L2_CID_TRANS_TIME_MSEC (V4L2_CID_PRIVATE_BASE)
+#define V4L2_CID_TRANS_NUM_BUFS (V4L2_CID_PRIVATE_BASE + 1)
+
+#define VIDEO_DEV_NAME "/dev/video0"
+#define FB_DEV_NAME "/dev/fb0"
+#define NUM_BUFS 4
+#define NUM_FRAMES 16
+
+#define perror_exit(cond, func)\
+ if (cond) {\
+ fprintf(stderr, "%s:%d: ", __func__, __LINE__);\
+ perror(func);\
+ exit(EXIT_FAILURE);\
+ }
+
+#define error_exit(cond, func)\
+ if (cond) {\
+ fprintf(stderr, "%s:%d: failed\n", func, __LINE__);\
+ exit(EXIT_FAILURE);\
+ }
+
+#define perror_ret(cond, func)\
+ if (cond) {\
+ fprintf(stderr, "%s:%d: ", __func__, __LINE__);\
+ perror(func);\
+ return ret;\
+ }
+
+#define memzero(x)\
+ memset(&(x), 0, sizeof (x));
+
+#define PROCESS_DEBUG 1
+#ifdef PROCESS_DEBUG
+#define debug(msg, ...)\
+ fprintf(stderr, "%s: " msg, __func__, ##__VA_ARGS__);
+#else
+#define debug(msg, ...)
+#endif
+
+static int vid_fd, fb_fd;
+static void *fb_addr;
+static char *p_src_buf[NUM_BUFS], *p_dst_buf[NUM_BUFS];
+static size_t src_buf_size[NUM_BUFS], dst_buf_size[NUM_BUFS];
+static uint32_t num_src_bufs = 0, num_dst_bufs = 0;
+
+/* Command-line params */
+int initial_delay = 0;
+int fb_x, fb_y, width, height;
+int translen = 1;
+/* For displaying multi-buffer transaction simulations, indicates current
+ * buffer in an ongoing transaction */
+int curr_buf = 0;
+int transtime = 1000;
+int num_frames = 0;
+off_t fb_off, fb_line_w, fb_buf_w;
+struct fb_var_screeninfo fbinfo;
+
+static void init_video_dev(void)
+{
+ int ret;
+ struct v4l2_capability cap;
+ struct v4l2_format fmt;
+ struct v4l2_control ctrl;
+
+ vid_fd = open(VIDEO_DEV_NAME, O_RDWR | O_NONBLOCK, 0);
+ perror_exit(vid_fd < 0, "open");
+
+ ctrl.id = V4L2_CID_TRANS_TIME_MSEC;
+ ctrl.value = transtime;
+ ret = ioctl(vid_fd, VIDIOC_S_CTRL, &ctrl);
+ perror_exit(ret != 0, "ioctl");
+
+ ctrl.id = V4L2_CID_TRANS_NUM_BUFS;
+ ctrl.value = translen;
+ ret = ioctl(vid_fd, VIDIOC_S_CTRL, &ctrl);
+ perror_exit(ret != 0, "ioctl");
+
+ ret = ioctl(vid_fd, VIDIOC_QUERYCAP, &cap);
+ perror_exit(ret != 0, "ioctl");
+
+ if (!(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE)) {
+ fprintf(stderr, "Device does not support capture\n");
+ exit(EXIT_FAILURE);
+ }
+ if (!(cap.capabilities & V4L2_CAP_VIDEO_OUTPUT)) {
+ fprintf(stderr, "Device does not support output\n");
+ exit(EXIT_FAILURE);
+ }
+
+ /* Set format for capture */
+ fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+ fmt.fmt.pix.width = width;
+ fmt.fmt.pix.height = height;
+ fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_RGB565X;
+ fmt.fmt.pix.field = V4L2_FIELD_ANY;
+
+ ret = ioctl(vid_fd, VIDIOC_S_FMT, &fmt);
+ perror_exit(ret != 0, "ioctl");
+
+ /* The same format for output */
+ fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
+ fmt.fmt.pix.width = width;
+ fmt.fmt.pix.height = height;
+ fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_RGB565X;
+ fmt.fmt.pix.field = V4L2_FIELD_ANY;
+
+ ret = ioctl(vid_fd, VIDIOC_S_FMT, &fmt);
+ perror_exit(ret != 0, "ioctl");
+}
+
+static void gen_src_buf(void *p, size_t size)
+{
+ uint8_t val;
+
+ val = rand() % 256;
+ memset(p, val, size);
+}
+
+static void gen_dst_buf(void *p, size_t size)
+{
+ /* White */
+ memset(p, 255, size);
+}
+
+static int read_frame(int last)
+{
+ struct v4l2_buffer buf;
+ int ret;
+ int j;
+ char *p_fb = fb_addr + fb_off;
+
+ memzero(buf);
+
+ buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
+ buf.memory = V4L2_MEMORY_MMAP;
+
+ ret = ioctl(vid_fd, VIDIOC_DQBUF, &buf);
+ debug("Dequeued source buffer, index: %d\n", buf.index);
+ if (ret) {
+ switch (errno) {
+ case EAGAIN:
+ debug("Got EAGAIN\n");
+ return 0;
+
+ case EIO:
+ debug("Got EIO\n");
+ return 0;
+
+ default:
+ perror("ioctl");
+ return 0;
+ }
+ }
+
+ /* Verify we've got a correct buffer */
+ assert(buf.index < num_src_bufs);
+
+ /* Enqueue back the buffer (note that the index is preserved) */
+ if (!last) {
+ gen_src_buf(p_src_buf[buf.index], src_buf_size[buf.index]);
+ buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
+ buf.memory = V4L2_MEMORY_MMAP;
+ ret = ioctl(vid_fd, VIDIOC_QBUF, &buf);
+ perror_ret(ret != 0, "ioctl");
+ }
+
+
+ buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+
+ debug("Dequeuing destination buffer\n");
+ ret = ioctl(vid_fd, VIDIOC_DQBUF, &buf);
+ if (ret) {
+ switch (errno) {
+ case EAGAIN:
+ debug("Got EAGAIN\n");
+ return 0;
+
+ case EIO:
+ debug("Got EIO\n");
+ return 0;
+
+ default:
+ perror("ioctl");
+ return 1;
+ }
+ }
+ debug("Dequeued dst buffer, index: %d\n", buf.index);
+ /* Verify we've got a correct buffer */
+ assert(buf.index < num_dst_bufs);
+
+ debug("Current buffer in the transaction: %d\n", curr_buf);
+ p_fb += curr_buf * (height / translen) * fb_line_w;
+ ++curr_buf;
+ if (curr_buf >= translen)
+ curr_buf = 0;
+
+ /* Display results */
+ for (j = 0; j < height / translen; ++j) {
+ memcpy(p_fb, (void *)p_dst_buf[buf.index], fb_buf_w);
+ p_fb += fb_line_w;
+ }
+
+ /* Enqueue back the buffer */
+ if (!last) {
+ gen_dst_buf(p_dst_buf[buf.index], dst_buf_size[buf.index]);
+ ret = ioctl(vid_fd, VIDIOC_QBUF, &buf);
+ perror_ret(ret != 0, "ioctl");
+ debug("Enqueued back dst buffer\n");
+ }
+
+ return 0;
+}
+
+void init_usage(int argc, char *argv[])
+{
+ if (argc != 9) {
+ printf("Usage: %s initial_delay bufs_per_transaction "
+ "trans_length_msec num_frames fb_offset_x fb_offset_y "
+ "width height\n", argv[0]);
+ exit(EXIT_FAILURE);
+ }
+
+ initial_delay = atoi(argv[1]);
+ translen = atoi(argv[2]);
+ transtime = atoi(argv[3]);
+ num_frames = atoi(argv[4]);
+ fb_x = atoi(argv[5]);
+ fb_y = atoi(argv[6]);
+ width = atoi(argv[7]);
+ height = atoi(argv[8]);
+ debug("NEW PROCESS: fb_x: %d, fb_y: %d, width: %d, height: %d, "
+ "translen: %d, transtime: %d, num_frames: %d\n",
+ fb_x, fb_y, width, height, translen, transtime, num_frames);
+}
+
+void init_fb(void)
+{
+ int ret;
+ size_t map_size;
+
+ fb_fd = open(FB_DEV_NAME, O_RDWR, 0);
+ perror_exit(fb_fd < 0, "open");
+
+ ret = ioctl(fb_fd, FBIOGET_VSCREENINFO, &fbinfo);
+ perror_exit(ret != 0, "ioctl");
+ debug("fbinfo: xres: %d, xres_virt: %d, yres: %d, yres_virt: %d\n",
+ fbinfo.xres, fbinfo.xres_virtual,
+ fbinfo.yres, fbinfo.yres_virtual);
+
+ fb_line_w = fbinfo.xres_virtual * (fbinfo.bits_per_pixel >> 3);
+ fb_off = fb_y * fb_line_w + fb_x * (fbinfo.bits_per_pixel >> 3);
+ fb_buf_w = width * (fbinfo.bits_per_pixel >> 3);
+ map_size = fb_line_w * fbinfo.yres_virtual;
+
+ fb_addr = mmap(0, map_size, PROT_WRITE | PROT_READ,
+ MAP_SHARED, fb_fd, 0);
+ perror_exit(fb_addr == MAP_FAILED, "mmap");
+}
+
+int main(int argc, char *argv[])
+{
+ int ret = 0;
+ int i;
+ struct v4l2_buffer buf;
+ struct v4l2_requestbuffers reqbuf;
+ enum v4l2_buf_type type;
+ int last = 0;
+
+ init_usage(argc, argv);
+ init_fb();
+
+ srand(time(NULL) ^ getpid());
+ sleep(initial_delay);
+
+ init_video_dev();
+
+ memzero(reqbuf);
+ reqbuf.count = NUM_BUFS;
+ reqbuf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
+ type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
+ reqbuf.memory = V4L2_MEMORY_MMAP;
+ ret = ioctl(vid_fd, VIDIOC_REQBUFS, &reqbuf);
+ perror_exit(ret != 0, "ioctl");
+ num_src_bufs = reqbuf.count;
+ debug("Got %d src buffers\n", num_src_bufs);
+
+ reqbuf.count = NUM_BUFS;
+ reqbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+ type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+ ret = ioctl(vid_fd, VIDIOC_REQBUFS, &reqbuf);
+ perror_exit(ret != 0, "ioctl");
+ num_dst_bufs = reqbuf.count;
+ debug("Got %d dst buffers\n", num_dst_bufs);
+
+ for (i = 0; i < num_src_bufs; ++i) {
+ buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
+ buf.memory = V4L2_MEMORY_MMAP;
+ buf.index = i;
+
+ ret = ioctl(vid_fd, VIDIOC_QUERYBUF, &buf);
+ perror_exit(ret != 0, "ioctl");
+ debug("QUERYBUF returned offset: %x\n", buf.m.offset);
+
+ src_buf_size[i] = buf.length;
+ p_src_buf[i] = mmap(NULL, buf.length,
+ PROT_READ | PROT_WRITE, MAP_SHARED,
+ vid_fd, buf.m.offset);
+ perror_exit(MAP_FAILED == p_src_buf[i], "mmap");
+ }
+
+ for (i = 0; i < num_dst_bufs; ++i) {
+ buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+ buf.memory = V4L2_MEMORY_MMAP;
+ buf.index = i;
+
+ ret = ioctl(vid_fd, VIDIOC_QUERYBUF, &buf);
+ perror_exit(ret != 0, "ioctl");
+ debug("QUERYBUF returned offset: %x\n", buf.m.offset);
+
+ dst_buf_size[i] = buf.length;
+ p_dst_buf[i] = mmap(NULL, buf.length,
+ PROT_READ | PROT_WRITE, MAP_SHARED,
+ vid_fd, buf.m.offset);
+ perror_exit(MAP_FAILED == p_dst_buf[i], "mmap");
+ }
+
+ for (i = 0; i < num_src_bufs; ++i) {
+
+ gen_src_buf(p_src_buf[i], src_buf_size[i]);
+
+ memzero(buf);
+ buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
+ buf.memory = V4L2_MEMORY_MMAP;
+ buf.index = i;
+
+ ret = ioctl(vid_fd, VIDIOC_QBUF, &buf);
+ perror_exit(ret != 0, "ioctl");
+ }
+
+ for (i = 0; i < num_dst_bufs; ++i) {
+ memzero(buf);
+ buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+ buf.memory = V4L2_MEMORY_MMAP;
+ buf.index = i;
+
+ ret = ioctl(vid_fd, VIDIOC_QBUF, &buf);
+ perror_exit(ret != 0, "ioctl");
+ }
+
+ ret = ioctl(vid_fd, VIDIOC_STREAMON, &type);
+ debug("STREAMON (%d): %d\n", VIDIOC_STREAMON, ret);
+ perror_exit(ret != 0, "ioctl");
+
+ while (num_frames) {
+ fd_set read_fds;
+ int r;
+
+ FD_ZERO(&read_fds);
+ FD_SET(vid_fd, &read_fds);
+
+ debug("Before select\n");
+ r = select(vid_fd + 1, &read_fds, NULL, NULL, 0);
+ perror_exit(r < 0, "select");
+ debug("After select\n");
+
+ if (num_frames == 1)
+ last = 1;
+ if (read_frame(last)) {
+ fprintf(stderr, "Read frame failed\n");
+ break;
+ }
+ --num_frames;
+ printf("FRAMES LEFT: %d\n", num_frames);
+ }
+
+
+ close(vid_fd);
+ close(fb_fd);
+
+ for (i = 0; i < num_src_bufs; ++i)
+ munmap(p_src_buf[i], src_buf_size[i]);
+
+ for (i = 0; i < num_dst_bufs; ++i)
+ munmap(p_dst_buf[i], dst_buf_size[i]);
+
+ return ret;
+}
+
* [PATCH/RFC v2.1 0/2] Mem-to-mem device framework
2009-12-23 13:17 [PATCH/RFC v2.1 0/2] Mem-to-mem device framework Pawel Osciak
` (2 preceding siblings ...)
2009-12-23 13:17 ` [EXAMPLE v2] Mem-to-mem userspace test application Pawel Osciak
@ 2009-12-23 15:05 ` Hans Verkuil
2009-12-28 14:49 ` Pawel Osciak
3 siblings, 1 reply; 9+ messages in thread
From: Hans Verkuil @ 2009-12-23 15:05 UTC (permalink / raw)
To: linux-arm-kernel
On Wednesday 23 December 2009 14:17:32 Pawel Osciak wrote:
> Hello,
>
> this is the second version of the proposed implementation for mem-to-mem memory
> device framework. Your comments are very welcome.
Hi Pawel,
Thank you for working on this! It's much appreciated. Now I've noticed that
patches regarding memory-to-memory and memory pool tend to get very few comments.
I suspect that the main reason is that these are SoC-specific features that do
not occur in consumer-type products. So most v4l developers do not have the
interest and motivation (and time!) to look into this.
I'm CC-ing this reply to developers from Intel, TI, Nokia and Renesas in the
hope that they will find some time to review and think about this since this will
affect all of them.
One thing that I am missing is a high-level overview of what we want. Currently
there are patches/RFCs floating around for memory-to-memory support, multiplanar
support and memory-pool support.
What I would like to see is a RFC that ties this all together from the point of
view of the public API. I.e. what are the requirements? Possibly solutions? Open
questions? Forget about how to implement it for the moment, that will follow
from the chosen solutions.
Note that I would suggest though that the memory-pool part is split into two
parts: how to actually allocate the memory is pretty much separate from how v4l
will use it. The actual allocation part is probably quite complex and might
even be hardware dependent and should be discussed separately. But how to use
it is something that can be discussed without needing to know how it was
allocated.
The lack of discussion in this area does worry me a bit. IMHO this is a very
important area that needs a lot more work. The initiative should be with the
SoC companies and right now it seems only Samsung is active.
BTW, what is the status of the multiplanar RFC? I later realized that that RFC
might be very useful for adding meta-data to buffers. There are several cases
where that is useful: sensors that provide meta-data when capturing a frame and
imagepipelines (particularly in memory-to-memory cases) that want to have all
parameters as part of the meta-data associated with the image. There may well
be more of those.
Regards,
Hans
>
> In v2.1:
> I am very sorry for the resend, but somehow an orphaned endif found its way to
> Kconfig during the rebase.
>
> Changes since v1:
> - v4l2_m2m_buf_queue() now requires m2m_ctx as its argument
> - video_queue private data stores driver private data
> - a new submenu in kconfig for mem-to-mem devices
> - minor rebase leftovers cleanup
>
> A second patch series followed v2 with a new driver for a real device -
> Samsung S3C/S5P image rotator, utilizing this framework.
>
>
> This series contains:
>
> [PATCH v2.1 1/2] V4L: Add memory-to-memory device helper framework for V4L2.
> [PATCH v2.1 2/2] V4L: Add a mem-to-mem V4L2 framework test device.
> [EXAMPLE v2] Mem-to-mem userspace test application.
>
>
> Previous discussion and RFC on this topic:
> http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure/10668
>
>
> A mem-to-mem device is a device that uses memory buffers passed by
> userspace applications for both source and destination. This is
> different from existing drivers that use memory buffers for only one
> of those at once.
> In terms of V4L2 such a device would be both of OUTPUT and CAPTURE type.
> Although no such devices are present in the V4L2 framework, a demand for such
> a model exists, e.g. for 'resizer devices'.
>
>
> -------------------------------------------------------------------------------
> Mem-to-mem devices
> -------------------------------------------------------------------------------
> In the previous discussion we concluded that we should use one video node with
> two queues, an output (V4L2_BUF_TYPE_VIDEO_OUTPUT) queue for source buffers and
> a capture queue (V4L2_BUF_TYPE_VIDEO_CAPTURE) for destination buffers.
>
>
> Each instance has its own set of queues: 2 videobuf_queues, each with a ready
> buffer queue, managed by the framework. Everything is encapsulated in the
> queue context struct:
>
> struct v4l2_m2m_queue_ctx {
> struct videobuf_queue q;
> /* ... */
> /* Queue for buffers ready to be processed as soon as this
> * instance receives access to the device */
> struct list_head rdy_queue;
> /* ... */
> };
>
> struct v4l2_m2m_ctx {
> /* ... */
> /* Capture (output to memory) queue context */
> struct v4l2_m2m_queue_ctx cap_q_ctx;
>
> /* Output (input from memory) queue context */
> struct v4l2_m2m_queue_ctx out_q_ctx;
> /* ... */
> };
>
> Streamon can be called for all instances and will not sleep if another instance
> is streaming.
>
> vidioc_querycap() should report V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_VIDEO_OUTPUT.
>
> -------------------------------------------------------------------------------
> Queuing and dequeuing buffers
> -------------------------------------------------------------------------------
> Applications can queue as many buffers as they want and it is not required to
> queue an equal number of source and destination buffers. If there are not enough
> buffers of any type, a new transaction will simply not be scheduled.
>
> -------------------------------------------------------------------------------
> Source and destination formats
> -------------------------------------------------------------------------------
> Should be set per queue. A helper function to access queues depending on the
> passed type - v4l2_m2m_get_vq() - is supplied. Most of the format-handling code
> is normally located in drivers anyway. The only exception is the "field" member
> of the videobuf_queue struct, which has to be set directly. It breaks
> encapsulation a little bit, but nothing can be done about it.
>
> -------------------------------------------------------------------------------
> Scheduling
> -------------------------------------------------------------------------------
> Requirements/assumptions:
> 1. More than one instance can be open at the same time.
> 2. Each instance periodically receives exclusive access to the device, performs
> an operation (operations) and yields back the device in a state that allows
> other instances to use it.
> 3. When an instance gets access to the device, it performs a
> "transaction"/"job". A transaction/job is defined as the shortest operation
> that cannot/should not be further divided without having to restart it from
> scratch, or without having to perform expensive reconfiguration of a device,
> etc.
> 4. Transactions can use multiple source/destination buffers.
> 5. Only a driver can tell when it is ready to perform a transaction, so
> an optional callback is provided for that purpose (job_ready()).
>
>
> There are three common requirements for a transaction to be ready to run:
> - at least one source buffer ready
> - at least one destination buffer ready
> - streaming on
> - (optional) driver-specific requirements (driver-specific callback function)
>
> So when buffers are queued by qbuf() or streaming is turned on with
> streamon(), the framework calls v4l2_m2m_try_schedule().
>
> v4l2_m2m_try_schedule()
> 1. Checks for the above conditions.
> 2. Checks for driver-specific conditions by calling job_ready() callback, if
> supplied.
> 3. If all the checks succeed, it calls v4l2_m2m_schedule() to schedule the
> transaction.
>
> v4l2_m2m_schedule()
> 1. Checks whether the transaction is already on job queue and schedules it
> if not (by adding it to the job queue).
> 2. Calls v4l2_m2m_try_run().
>
> v4l2_m2m_try_run()
> 1. Runs a job if one is pending and none is currently running, by calling
> device_run() callback.
>
> When the device_run() callback is called, the driver has to begin the
> transaction. When it is finished, the driver has to call v4l2_m2m_job_finish().
>
> v4l2_m2m_job_finish()
> 1. Removes the currently running transaction from the job queue and calls
> v4l2_m2m_try_run to (possibly) run the next pending transaction.
>
> There is also support for forced transaction aborting (when an application
> gets killed). The framework calls job_abort() callback and the driver has
> to abort the transaction as soon as possible and call v4l2_m2m_job_finish()
> to indicate that the transaction has been aborted.
>
>
> Additionally, some kind of timeout for transactions could be added to prevent
> instances from claiming the device for too long.
>
> -------------------------------------------------------------------------------
> Acquiring ready buffers to process
> -------------------------------------------------------------------------------
> Ready buffers can be acquired using v4l2_m2m_next_src_buf()/
> v4l2_m2m_next_dst_buf(). After the transaction they are removed from the queues
> with v4l2_m2m_dst_buf_remove()/v4l2_m2m_src_buf_remove(). This is not
> multi-buffer-transaction-safe. It will have to be modified, but ideally after
> we decide how to handle multi-buffer transactions in videobuf core.
>
> -------------------------------------------------------------------------------
> poll()
> -------------------------------------------------------------------------------
> We cannot have poll() for multiple queues on one node, so we use poll() for the
> destination queue only.
>
> -------------------------------------------------------------------------------
> mmap()
> -------------------------------------------------------------------------------
> Requirements:
> - allow mapping buffers from different queues
> - retain "magic" offset values so videobuf can still match buffers by offsets
>
> The proposed solution involves querybuf() and mmap() multiplexers:
>
> a) When a driver calls querybuf(), we have access to the type and we can
> detect which queue to call videobuf_querybuf() on:
>
> vq = v4l2_m2m_get_vq(m2m_ctx, buf->type);
> ret = videobuf_querybuf(vq, buf);
>
> The offsets returned from videobuf_querybuf() for one of the queues are further
> offset by a predefined constant (DST_QUEUE_OFF_BASE). This way the driver
> (and applications) receive different offsets for the same buffer indexes of
> each queue:
>
> if (buf->memory == V4L2_MEMORY_MMAP
> && vq->type == V4L2_BUF_TYPE_VIDEO_CAPTURE) {
> buf->m.offset += DST_QUEUE_OFF_BASE;
> }
>
>
> b) When the application calls mmap(), the driver detects the offsets which were
> modified in querybuf() and chooses the proper queue for them based on that.
> Finally, the modified offsets are passed to videobuf_mmap_mapper() for proper
> queues with their offsets changed back to values recognizable by videobuf:
>
> unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
> struct videobuf_queue *vq;
>
> if (offset < DST_QUEUE_OFF_BASE) {
> vq = v4l2_m2m_get_src_vq(m2m_ctx);
> } else {
> vq = v4l2_m2m_get_dst_vq(m2m_ctx);
> vma->vm_pgoff -= (DST_QUEUE_OFF_BASE >> PAGE_SHIFT);
> }
>
> return videobuf_mmap_mapper(vq, vma);
>
>
> -------------------------------------------------------------------------------
> Test device and a userspace application
> -------------------------------------------------------------------------------
> mem2mem_testdev.c is a test driver for the framework. It uses timers for fake
> interrupts and allows testing transactions with different numbers of buffers
> and transaction durations simultaneously.
>
> process-vmalloc.c is a capture+output test application for the test device.
>
> -------------------------------------------------------------------------------
> Future work
> -------------------------------------------------------------------------------
> - read/write support
> - transaction/abort timeouts
> - extracting more common code to the framework? (e.g. per-queue format details,
> transaction length, etc.)
>
>
> Best regards
> --
> Pawel Osciak
> Linux Platform Group
> Samsung Poland R&D Center
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-media" in
> the body of a message to majordomo at vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
--
Hans Verkuil - video4linux developer - sponsored by TANDBERG
* [PATCH v2.1 1/2] V4L: Add memory-to-memory device helper framework for V4L2.
2009-12-23 13:17 ` [PATCH v2.1 1/2] V4L: Add memory-to-memory device helper framework for V4L2 Pawel Osciak
@ 2009-12-24 2:53 ` Andy Walls
0 siblings, 0 replies; 9+ messages in thread
From: Andy Walls @ 2009-12-24 2:53 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, 2009-12-23 at 14:17 +0100, Pawel Osciak wrote:
> A mem-to-mem device is a device that uses memory buffers passed by
> userspace applications for both source and destination data. This is
> different from existing drivers, which use memory buffers for only one
> of those at once.
>
> In terms of V4L2 such a device would be both of OUTPUT and CAPTURE type.
> Although no such devices are present in the V4L2 framework, a demand for such
> a model exists, e.g. for 'resizer devices'.
>
> This patch also adds a separate kconfig submenu for mem-to-mem V4L devices.
>
> Signed-off-by: Pawel Osciak <p.osciak@samsung.com>
> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
> Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
Pawel,
I did find a few things that I want to mention. (If you think I'm wrong
on something feel free to say so, I was interrupted several times when
looking things over.)
> ---
> drivers/media/video/Kconfig | 14 +
> drivers/media/video/Makefile | 2 +
> drivers/media/video/v4l2-mem2mem.c | 671 ++++++++++++++++++++++++++++++++++++
> include/media/v4l2-mem2mem.h | 153 ++++++++
> 4 files changed, 840 insertions(+), 0 deletions(-)
> create mode 100644 drivers/media/video/v4l2-mem2mem.c
> create mode 100644 include/media/v4l2-mem2mem.h
>
> diff --git a/drivers/media/video/Kconfig b/drivers/media/video/Kconfig
> index 2f83be7..4e97dcf 100644
> --- a/drivers/media/video/Kconfig
> +++ b/drivers/media/video/Kconfig
> @@ -45,6 +45,10 @@ config VIDEO_TUNER
> tristate
> depends on MEDIA_TUNER
>
> +config V4L2_MEM2MEM_DEV
> + tristate
> + depends on VIDEOBUF_GEN
> +
> #
> # Multimedia Video device configuration
> #
> @@ -1075,3 +1079,13 @@ config USB_S2255
>
> endif # V4L_USB_DRIVERS
> endif # VIDEO_CAPTURE_DRIVERS
> +
> +menuconfig V4L_MEM2MEM_DRIVERS
> + bool "Memory-to-memory multimedia devices"
> + depends on VIDEO_V4L2
> + default n
> + ---help---
> + Say Y here to enable selecting drivers for V4L devices that
> + use system memory for both source and destination buffers, as opposed
> + to capture and output drivers, which use memory buffers for just
> + one of those.
> diff --git a/drivers/media/video/Makefile b/drivers/media/video/Makefile
> index 2af68ee..9fe7d40 100644
> --- a/drivers/media/video/Makefile
> +++ b/drivers/media/video/Makefile
> @@ -115,6 +115,8 @@ obj-$(CONFIG_VIDEOBUF_VMALLOC) += videobuf-vmalloc.o
> obj-$(CONFIG_VIDEOBUF_DVB) += videobuf-dvb.o
> obj-$(CONFIG_VIDEO_BTCX) += btcx-risc.o
>
> +obj-$(CONFIG_V4L2_MEM2MEM_DEV) += v4l2-mem2mem.o
> +
> obj-$(CONFIG_VIDEO_M32R_AR_M64278) += arv.o
>
> obj-$(CONFIG_VIDEO_CX2341X) += cx2341x.o
> diff --git a/drivers/media/video/v4l2-mem2mem.c b/drivers/media/video/v4l2-mem2mem.c
> new file mode 100644
> index 0000000..417ee2c
> --- /dev/null
> +++ b/drivers/media/video/v4l2-mem2mem.c
> @@ -0,0 +1,671 @@
> +/*
> + * Memory-to-memory device framework for Video for Linux 2.
> + *
> + * Helper functions for devices that use memory buffers for both source
> + * and destination.
> + *
> + * Copyright (c) 2009 Samsung Electronics Co., Ltd.
> + * Pawel Osciak, <p.osciak@samsung.com>
> + * Marek Szyprowski, <m.szyprowski@samsung.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by the
> + * Free Software Foundation; either version 2 of the
> + * License, or (at your option) any later version
> + */
> +
> +#include <linux/module.h>
> +#include <linux/sched.h>
> +#include <media/videobuf-core.h>
> +#include <media/v4l2-mem2mem.h>
> +
> +MODULE_DESCRIPTION("Mem to mem device framework for V4L2");
> +MODULE_AUTHOR("Pawel Osciak, <p.osciak@samsung.com>");
> +MODULE_LICENSE("GPL");
> +
> +static int debug;
> +module_param(debug, int, 0644);
> +
> +#define dprintk(fmt, arg...) do {\
> + if (debug >= 1)\
> + printk(KERN_DEBUG "%s: " fmt, __func__, ## arg); } while (0)
> +
> +
> +/* The instance is already queued on the jobqueue */
> +#define TRANS_QUEUED (1 << 0)
> +/* The instance is currently running in hardware */
> +#define TRANS_RUNNING (1 << 1)
> +
> +
> +/* Offset base for buffers on the destination queue - used to distinguish
> + * between source and destination buffers when mmapping - they receive the same
> + * offsets but for different queues */
> +#define DST_QUEUE_OFF_BASE (TASK_SIZE / 2)
> +
> +
> +struct v4l2_m2m_dev {
> + /* Currently running instance */
> + struct v4l2_m2m_ctx *curr_ctx;
> + /* Instances queued to run */
> + struct list_head jobqueue;
> + spinlock_t job_spinlock;
> +
> + struct v4l2_m2m_ops *m2m_ops;
> +};
> +
> +static inline
> +struct v4l2_m2m_queue_ctx *get_queue_ctx(struct v4l2_m2m_ctx *m2m_ctx,
> + enum v4l2_buf_type type)
> +{
> + switch (type) {
> + case V4L2_BUF_TYPE_VIDEO_CAPTURE:
> + return &m2m_ctx->cap_q_ctx;
> + case V4L2_BUF_TYPE_VIDEO_OUTPUT:
> + return &m2m_ctx->out_q_ctx;
> + default:
> + printk(KERN_ERR "Invalid buffer type\n");
> + return NULL;
> + }
> +}
The logic above is fine. I'm just surprised gcc doesn't gripe about
"control reaching the end of a non-void function". I guess gcc has
gotten smarter.
> +/**
> + * v4l2_m2m_get_vq() - return videobuf_queue for the given type
> + */
> +struct videobuf_queue *v4l2_m2m_get_vq(struct v4l2_m2m_ctx *m2m_ctx,
> + enum v4l2_buf_type type)
> +{
> + struct v4l2_m2m_queue_ctx *q_ctx;
> +
> + q_ctx = get_queue_ctx(m2m_ctx, type);
> + if (!q_ctx)
> + return NULL;
> +
> + return &q_ctx->q;
> +}
> +EXPORT_SYMBOL(v4l2_m2m_get_vq);
> +
> +/**
> + * v4l2_m2m_get_src_vq() - return videobuf_queue for source buffers
> + */
> +struct videobuf_queue *v4l2_m2m_get_src_vq(struct v4l2_m2m_ctx *m2m_ctx)
> +{
> + return v4l2_m2m_get_vq(m2m_ctx, V4L2_BUF_TYPE_VIDEO_OUTPUT);
> +}
> +EXPORT_SYMBOL(v4l2_m2m_get_src_vq);
> +
> +/**
> + * v4l2_m2m_get_dst_vq() - return videobuf_queue for destination buffers
> + */
> +struct videobuf_queue *v4l2_m2m_get_dst_vq(struct v4l2_m2m_ctx *m2m_ctx)
> +{
> + return v4l2_m2m_get_vq(m2m_ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
> +}
> +EXPORT_SYMBOL(v4l2_m2m_get_dst_vq);
> +
> +/**
> + * v4l2_m2m_next_buf() - return next buffer from the list of ready buffers
> + */
> +static void *v4l2_m2m_next_buf(struct v4l2_m2m_ctx *m2m_ctx,
> + enum v4l2_buf_type type)
> +{
> + struct v4l2_m2m_queue_ctx *q_ctx;
> + struct videobuf_buffer *vb = NULL;
> +
> + q_ctx = get_queue_ctx(m2m_ctx, type);
> +
> + vb = list_entry(q_ctx->rdy_queue.next, struct videobuf_buffer, queue);
> + vb->state = VIDEOBUF_ACTIVE;
> +
> + return vb;
> +}
I have some questions/concerns on this:
1. There is no protection on access to q_ctx->rdy_queue.next and this
function can be indirectly called from outside this file. In other
places "rdy_queue.next" access is protected with the q.irqlock. I know
on x86 all reads are atomic, so just reading the "next" is probably
safe, but I suspect you won't be using x86.
2. There is no check if the list is empty. In the case of an empty
list, "next" will actually point back to the list head, and vb will not
point to a valid videobuf_buffer object. The vb->state assignment will end
up corrupting something.
Maybe I'm missing something.
> +/**
> + * v4l2_m2m_next_src_buf() - return next source buffer from the list of ready
> + * buffers
> + */
> +inline void *v4l2_m2m_next_src_buf(struct v4l2_m2m_ctx *m2m_ctx)
> +{
> + return v4l2_m2m_next_buf(m2m_ctx, V4L2_BUF_TYPE_VIDEO_OUTPUT);
> +}
> +EXPORT_SYMBOL(v4l2_m2m_next_src_buf);
> +
> +/**
> + * v4l2_m2m_next_dst_buf() - return next destination buffer from the list of
> + * ready buffers
> + */
> +inline void *v4l2_m2m_next_dst_buf(struct v4l2_m2m_ctx *m2m_ctx)
> +{
> + return v4l2_m2m_next_buf(m2m_ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
> +}
> +EXPORT_SYMBOL(v4l2_m2m_next_dst_buf);
> +
> +/**
> + * v4l2_m2m_buf_remove() - take off a buffer from the list of ready buffers and
> + * return it
> + */
> +static void *v4l2_m2m_buf_remove(struct v4l2_m2m_ctx *m2m_ctx,
> + enum v4l2_buf_type type)
> +{
> + struct v4l2_m2m_queue_ctx *q_ctx;
> + struct videobuf_buffer *vb = NULL;
> + unsigned long flags = 0;
> +
> + q_ctx = get_queue_ctx(m2m_ctx, type);
> +
> + spin_lock_irqsave(q_ctx->q.irqlock, flags);
> + vb = list_entry(q_ctx->rdy_queue.next, struct videobuf_buffer, queue);
> + list_del(&vb->queue);
> + q_ctx->num_rdy--;
> + spin_unlock_irqrestore(q_ctx->q.irqlock, flags);
> +
> + return vb;
> +}
Again, there is no check for an empty list here, and it can be
indirectly called by callers external to this file.
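For illustration, one possible empty-safe shape for this function, performing the list_empty() check while still holding q.irqlock. This is an untested sketch against the patch above, not a verified fix:

```c
static void *v4l2_m2m_buf_remove(struct v4l2_m2m_ctx *m2m_ctx,
				 enum v4l2_buf_type type)
{
	struct v4l2_m2m_queue_ctx *q_ctx;
	struct videobuf_buffer *vb = NULL;
	unsigned long flags = 0;

	q_ctx = get_queue_ctx(m2m_ctx, type);
	if (!q_ctx)
		return NULL;

	spin_lock_irqsave(q_ctx->q.irqlock, flags);
	/* Only dereference ->next if there is actually an entry */
	if (!list_empty(&q_ctx->rdy_queue)) {
		vb = list_entry(q_ctx->rdy_queue.next,
				struct videobuf_buffer, queue);
		list_del(&vb->queue);
		q_ctx->num_rdy--;
	}
	spin_unlock_irqrestore(q_ctx->q.irqlock, flags);

	return vb;	/* NULL when the ready list was empty */
}
```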
> +/**
> + * v4l2_m2m_src_buf_remove() - take off a source buffer from the list of ready
> + * buffers and return it
> + */
> +void *v4l2_m2m_src_buf_remove(struct v4l2_m2m_ctx *m2m_ctx)
> +{
> + return v4l2_m2m_buf_remove(m2m_ctx, V4L2_BUF_TYPE_VIDEO_OUTPUT);
> +}
> +EXPORT_SYMBOL(v4l2_m2m_src_buf_remove);
> +
> +/**
> + * v4l2_m2m_dst_buf_remove() - take off a destination buffer from the list of
> + * ready buffers and return it
> + */
> +void *v4l2_m2m_dst_buf_remove(struct v4l2_m2m_ctx *m2m_ctx)
> +{
> + return v4l2_m2m_buf_remove(m2m_ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
> +}
> +EXPORT_SYMBOL(v4l2_m2m_dst_buf_remove);
> +
> +
> +/*
> + * Scheduling handlers
> + */
> +
> +/**
> + * v4l2_m2m_get_curr_priv() - return driver private data for the currently
> + * running instance or NULL if no instance is running
> + */
> +void *v4l2_m2m_get_curr_priv(struct v4l2_m2m_dev *m2m_dev)
> +{
> + if (!m2m_dev->curr_ctx)
> + return NULL;
> + else
> + return m2m_dev->curr_ctx->priv;
> +}
> +EXPORT_SYMBOL(v4l2_m2m_get_curr_priv);
Every other access to m2m_dev->curr_ctx is protected by the spin lock,
but for this externally callable function it is not. The curr_ctx could
be set back to NULL between the if() and the return in the else clause.
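A sketch of a lock-protected variant, reading curr_ctx->priv while holding job_spinlock so curr_ctx cannot be cleared between the test and the dereference (untested, field names taken from the patch above):

```c
void *v4l2_m2m_get_curr_priv(struct v4l2_m2m_dev *m2m_dev)
{
	unsigned long flags;
	void *ret = NULL;

	spin_lock_irqsave(&m2m_dev->job_spinlock, flags);
	if (m2m_dev->curr_ctx)
		ret = m2m_dev->curr_ctx->priv;
	spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);

	return ret;
}
```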
> +
> +/**
> + * v4l2_m2m_try_run() - select next job to perform and run it if possible
> + *
> + * Get next transaction (if present) from the waiting jobs list and run it.
> + */
> +static void v4l2_m2m_try_run(struct v4l2_m2m_dev *m2m_dev)
> +{
> + unsigned long flags = 0;
> +
> + spin_lock_irqsave(&m2m_dev->job_spinlock, flags);
> + if (NULL != m2m_dev->curr_ctx) {
> + spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
> + dprintk("Another instance is running, won't run now\n");
> + return;
> + }
> +
> + if (list_empty(&m2m_dev->jobqueue)) {
> + spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
> + dprintk("No job pending\n");
> + return;
> + }
> +
> + m2m_dev->curr_ctx = list_entry(m2m_dev->jobqueue.next,
> + struct v4l2_m2m_ctx, queue);
> + m2m_dev->curr_ctx->job_flags |= TRANS_RUNNING;
> + spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
> +
> + m2m_dev->m2m_ops->device_run(m2m_dev->curr_ctx->priv);
> +
> + return;
> +}
> +
> +/**
> + * v4l2_m2m_schedule() - add an instance to the pending job queue
> + * @m2m_ctx: The instance to be added to the pending job queue
> + *
> + * Called when an instance is fully prepared to run a transaction, i.e. the
> + * instance will not sleep before finishing the transaction when run.
> + * If an instance is already on the queue, it will not be added for the second
> + * time and it is the responsibility of the instance to retry this at a later
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> + * time.
> + */
> +static void v4l2_m2m_schedule(struct v4l2_m2m_ctx *m2m_ctx)
> +{
> + struct v4l2_m2m_dev *dev = m2m_ctx->m2m_dev;
> + unsigned long flags = 0;
> +
> + spin_lock_irqsave(&dev->job_spinlock, flags);
> + if (!(m2m_ctx->job_flags & TRANS_QUEUED)) {
> + list_add_tail(&m2m_ctx->queue, &dev->jobqueue);
> + m2m_ctx->job_flags |= TRANS_QUEUED;
> + }
> + spin_unlock_irqrestore(&dev->job_spinlock, flags);
> +
> + v4l2_m2m_try_run(dev);
> +}
So how is the calling instance supposed to know to retry when the
function returns void?
> +/**
> + * v4l2_m2m_try_schedule() - check whether an instance is ready to be added to
> + * the pending job queue and add it if so.
> + * @m2m_ctx: m2m context assigned to the instance to be checked
> + *
> + * There are three basic requirements an instance has to meet to be able to run:
> + * 1) at least one source buffer has to be queued,
> + * 2) at least one destination buffer has to be queued,
> + * 3) streaming has to be on.
> + *
> + * There can also be additional, custom requirements. In such case the driver
> + * should supply a custom method (job_ready in v4l2_m2m_ops) that should
> + * return 1 if the instance is ready.
> + * An example of the above could be an instance that requires more than one
> + * src/dst buffer per transaction.
> + */
> +static void v4l2_m2m_try_schedule(struct v4l2_m2m_ctx *m2m_ctx)
> +{
> + struct v4l2_m2m_dev *m2m_dev;
> + unsigned long flags = 0;
> +
> + m2m_dev = m2m_ctx->m2m_dev;
> + dprintk("Trying to schedule a job for m2m_ctx: %p\n", m2m_ctx);
> +
> + spin_lock_irqsave(&m2m_dev->job_spinlock, flags);
> + if (m2m_ctx->job_flags & TRANS_QUEUED) {
> + spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
> + dprintk("On job queue already\n");
> + return;
> + }
> + spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
> +
> + /* Checking only one queue is enough, we always turn on both */
> + if (!m2m_ctx->out_q_ctx.q.streaming) {
> + dprintk("Streaming not on, will not schedule\n");
> + return;
> + }
> +
> + if (list_empty(&m2m_ctx->out_q_ctx.rdy_queue)) {
> + dprintk("No input buffers available\n");
> + return;
> + }
> + if (list_empty(&m2m_ctx->cap_q_ctx.rdy_queue)) {
> + dprintk("No output buffers available\n");
> + return;
> + }
> +
> + if (m2m_dev->m2m_ops->job_ready
> + && (!m2m_dev->m2m_ops->job_ready(m2m_ctx->priv))) {
> + dprintk("Driver not ready\n");
> + return;
> + }
> +
> + dprintk("Instance ready to be scheduled\n");
> + v4l2_m2m_schedule(m2m_ctx);
> +}
Again, accesses to the rdy_queue objects are not protected.
Since you have a num_rdy variable, how about changing that from a u8
to an atomic_t and checking num_rdy when you need to check whether the
rdy_queue is empty or not?
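What that suggestion might look like as a fragment (a sketch only, not a tested change against this patch):

```c
/* In struct v4l2_m2m_queue_ctx: */
struct v4l2_m2m_queue_ctx {
	struct videobuf_queue q;
	unsigned long offset_base;
	struct list_head rdy_queue;
	atomic_t num_rdy;			/* was: u8 num_rdy */
};

/* ...and the emptiness checks in v4l2_m2m_try_schedule() become
 * lock-free reads: */
if (atomic_read(&m2m_ctx->out_q_ctx.num_rdy) == 0) {
	dprintk("No input buffers available\n");
	return;
}
```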
> +/**
> + * v4l2_m2m_job_finish() - inform the framework that a job has been finished
> + * and have it clean up
> + *
> + * Called by a driver to yield back the device after it has finished with it.
> + * Should be called as soon as possible after reaching a state which allows
> + * other instances to take control of the device.
> + *
> + * TODO: An instance that fails to give back the device before a predefined
> + * amount of time may have its device ownership taken away forcibly.
> + */
> +void v4l2_m2m_job_finish(struct v4l2_m2m_dev *m2m_dev,
> + struct v4l2_m2m_ctx *m2m_ctx)
> +{
> + unsigned long flags = 0;
> +
> + spin_lock_irqsave(&m2m_dev->job_spinlock, flags);
> + if (!m2m_dev->curr_ctx || m2m_dev->curr_ctx != m2m_ctx) {
> + spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
> + dprintk("Called by an instance not currently running\n");
> + return;
> + }
> +
> + /*mutex_lock(&m2m_dev->dev_mutex);*/
This looks like it should be deleted: m2m_dev doesn't have a dev_mutex
member. I don't think you would want to sleep while holding a spinlock
anyway.
> + list_del(&m2m_dev->curr_ctx->queue);
> + m2m_dev->curr_ctx->job_flags &= ~(TRANS_QUEUED | TRANS_RUNNING);
> + m2m_dev->curr_ctx = NULL;
> +
> + spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
> +
> + v4l2_m2m_try_run(m2m_dev);
> +}
> +EXPORT_SYMBOL(v4l2_m2m_job_finish);
> +
> +/**
> + * v4l2_m2m_reqbufs() - multi-queue-aware REQBUFS multiplexer
> + */
> +int v4l2_m2m_reqbufs(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
> + struct v4l2_requestbuffers *reqbufs)
> +{
> + struct videobuf_queue *vq;
> +
> + vq = v4l2_m2m_get_vq(m2m_ctx, reqbufs->type);
> + return videobuf_reqbufs(vq, reqbufs);
> +}
> +EXPORT_SYMBOL_GPL(v4l2_m2m_reqbufs);
> +
> +/**
> + * v4l2_m2m_querybuf() - multi-queue-aware QUERYBUF multiplexer
> + *
> + * See v4l2_m2m_mmap() documentation for details.
> + */
> +int v4l2_m2m_querybuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
> + struct v4l2_buffer *buf)
> +{
> + struct videobuf_queue *vq;
> + int ret = 0;
> +
> + vq = v4l2_m2m_get_vq(m2m_ctx, buf->type);
> + ret = videobuf_querybuf(vq, buf);
> +
> + if (buf->memory == V4L2_MEMORY_MMAP
> + && vq->type == V4L2_BUF_TYPE_VIDEO_CAPTURE) {
> + buf->m.offset += DST_QUEUE_OFF_BASE;
> + }
> +
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(v4l2_m2m_querybuf);
> +
> +/**
> + * v4l2_m2m_qbuf() - enqueue a source or destination buffer, depending on
> + * the type
> + */
> +int v4l2_m2m_qbuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
> + struct v4l2_buffer *buf)
> +{
> + struct videobuf_queue *vq;
> +
> + vq = v4l2_m2m_get_vq(m2m_ctx, buf->type);
> + return videobuf_qbuf(vq, buf);
> +}
> +EXPORT_SYMBOL_GPL(v4l2_m2m_qbuf);
> +
> +/**
> + * v4l2_m2m_dqbuf() - dequeue a source or destination buffer, depending on
> + * the type
> + */
> +int v4l2_m2m_dqbuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
> + struct v4l2_buffer *buf)
> +{
> + struct videobuf_queue *vq;
> +
> + vq = v4l2_m2m_get_vq(m2m_ctx, buf->type);
> + return videobuf_dqbuf(vq, buf, file->f_flags & O_NONBLOCK);
> +}
> +EXPORT_SYMBOL_GPL(v4l2_m2m_dqbuf);
> +
> +/**
> + * v4l2_m2m_streamon() - start streaming
> + */
> +int v4l2_m2m_streamon(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
> + enum v4l2_buf_type type)
> +{
> + int ret = 0;
> +
> + /* These can fail only if the queues are in use,
> + * but they shouldn't be as we are managing instances manually */
> + ret = videobuf_streamon(&m2m_ctx->out_q_ctx.q);
> + if (ret) {
> + printk(KERN_ERR "Streamon on output queue failed\n");
> + return ret;
> + }
> +
> + ret = videobuf_streamon(&m2m_ctx->cap_q_ctx.q);
> + if (ret) {
> + printk(KERN_ERR "Streamon on capture queue failed\n");
> + return ret;
> + }
> +
> + v4l2_m2m_try_schedule(m2m_ctx);
> +
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(v4l2_m2m_streamon);
> +
> +/**
> + * v4l2_m2m_streamoff() - stop streaming
> + */
> +int v4l2_m2m_streamoff(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
> + enum v4l2_buf_type type)
> +{
> + /* streamoff() fails only when we are not streaming */
> + if (videobuf_streamoff(&m2m_ctx->out_q_ctx.q)
> + || videobuf_streamoff(&m2m_ctx->cap_q_ctx.q))
> + return -EINVAL;
> +
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(v4l2_m2m_streamoff);
The second videobuf_streamoff() won't get called if the first one fails.
I don't know if that was intentional or not. It probably doesn't
matter.
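If the intent is that both queues should always be stopped, a sketch that calls videobuf_streamoff() on both unconditionally and reports failure if either call failed (untested against this tree):

```c
int v4l2_m2m_streamoff(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
		       enum v4l2_buf_type type)
{
	int ret_out = videobuf_streamoff(&m2m_ctx->out_q_ctx.q);
	int ret_cap = videobuf_streamoff(&m2m_ctx->cap_q_ctx.q);

	/* Both calls have run; fail if either queue was not streaming */
	return (ret_out || ret_cap) ? -EINVAL : 0;
}
```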
> +/**
> + * v4l2_m2m_poll() - poll replacement, for destination buffers only
> + *
> + * Call from driver's poll() function. Will poll the destination queue only.
> + */
> +unsigned int v4l2_m2m_poll(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
> + struct poll_table_struct *wait)
> +{
> + struct videobuf_queue *dst_q = NULL;
> + struct videobuf_buffer *vb = NULL;
> + unsigned int rc = 0;
> +
> + dst_q = v4l2_m2m_get_dst_vq(m2m_ctx);
> +
> + mutex_lock(&dst_q->vb_lock);
> +
> + if (dst_q->streaming) {
> + if (!list_empty(&dst_q->stream))
> + vb = list_entry(dst_q->stream.next,
> + struct videobuf_buffer, stream);
> + }
> +
> + if (!vb)
> + rc = POLLERR;
> +
> + if (0 == rc) {
> + poll_wait(file, &vb->done, wait);
> + if (vb->state == VIDEOBUF_DONE || vb->state == VIDEOBUF_ERROR)
> + rc = POLLOUT | POLLRDNORM;
> + }
> +
> + mutex_unlock(&dst_q->vb_lock);
> + return rc;
> +}
> +EXPORT_SYMBOL_GPL(v4l2_m2m_poll);
> +
> +/**
> + * v4l2_m2m_mmap() - source and destination queues-aware mmap multiplexer
> + *
> + * Call from driver's mmap() function. Will handle mmap() for both queues
> + * seamlessly for videobuf, which will receive normal per-queue offsets and
> + * proper videobuf queue pointers. The differentiation is made outside videobuf
> + * by adding a predefined offset to buffers from one of the queues and
> + * subtracting it before passing it back to videobuf. Only drivers (and
> + * thus applications) receive modified offsets.
> + */
> +int v4l2_m2m_mmap(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
> + struct vm_area_struct *vma)
> +{
> + unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
> + struct videobuf_queue *vq;
> +
> + if (offset < DST_QUEUE_OFF_BASE) {
> + vq = v4l2_m2m_get_src_vq(m2m_ctx);
> + } else {
> + vq = v4l2_m2m_get_dst_vq(m2m_ctx);
> + vma->vm_pgoff -= (DST_QUEUE_OFF_BASE >> PAGE_SHIFT);
> + }
> +
> + return videobuf_mmap_mapper(vq, vma);
> +}
> +EXPORT_SYMBOL(v4l2_m2m_mmap);
> +
> +/**
> + * v4l2_m2m_init() - initialize per-driver m2m data
> + *
> + * Usually called from driver's probe() function.
> + */
> +struct v4l2_m2m_dev *v4l2_m2m_init(struct v4l2_m2m_ops *m2m_ops)
> +{
> + struct v4l2_m2m_dev *m2m_dev;
> +
> + if (!m2m_ops)
> + return ERR_PTR(-EINVAL);
> +
> + /*BUG_ON(!m2m_ops->job_ready);*/
> + BUG_ON(!m2m_ops->device_run);
> + BUG_ON(!m2m_ops->job_abort);
> +
> + m2m_dev = kzalloc(sizeof *m2m_dev, GFP_KERNEL);
> + if (!m2m_dev)
> + return ERR_PTR(-ENOMEM);
> +
> + m2m_dev->curr_ctx = NULL;
> + m2m_dev->m2m_ops = m2m_ops;
> + INIT_LIST_HEAD(&m2m_dev->jobqueue);
> + spin_lock_init(&m2m_dev->job_spinlock);
> +
> + return m2m_dev;
> +}
> +EXPORT_SYMBOL_GPL(v4l2_m2m_init);
> +
> +/**
> + * v4l2_m2m_release() - cleans up and frees a m2m_dev structure
> + *
> + * Usually called from driver's remove() function.
> + */
> +void v4l2_m2m_release(struct v4l2_m2m_dev *m2m_dev)
> +{
> + kfree(m2m_dev);
> +}
> +EXPORT_SYMBOL_GPL(v4l2_m2m_release);
> +
> +/**
> + * v4l2_m2m_ctx_init() - allocate and initialize a m2m context
> + * @priv - driver's instance private data
> + * @m2m_dev - a previously initialized m2m_dev struct
> + * @vq_init - a callback for queue type-specific initialization function to be
> + * used for initializing videobuf_queues
> + *
> + * Usually called from driver's open() function.
> + */
> +struct v4l2_m2m_ctx *v4l2_m2m_ctx_init(void *priv, struct v4l2_m2m_dev *m2m_dev,
> + void (*vq_init)(void *priv, struct videobuf_queue *,
> + enum v4l2_buf_type))
> +{
> + struct v4l2_m2m_ctx *m2m_ctx;
> + struct v4l2_m2m_queue_ctx *out_q_ctx;
> + struct v4l2_m2m_queue_ctx *cap_q_ctx;
> +
> + if (!vq_init)
> + return ERR_PTR(-EINVAL);
> +
> + m2m_ctx = kzalloc(sizeof *m2m_ctx, GFP_KERNEL);
> + if (!m2m_ctx)
> + return ERR_PTR(-ENOMEM);
> +
> + m2m_ctx->priv = priv;
> + m2m_ctx->m2m_dev = m2m_dev;
> +
> + out_q_ctx = get_queue_ctx(m2m_ctx, V4L2_BUF_TYPE_VIDEO_OUTPUT);
> + cap_q_ctx = get_queue_ctx(m2m_ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
> +
> + INIT_LIST_HEAD(&out_q_ctx->rdy_queue);
> + INIT_LIST_HEAD(&cap_q_ctx->rdy_queue);
> +
> + /*spin_lock_init(&m2m_ctx->queue_lock);*/
> + INIT_LIST_HEAD(&m2m_ctx->queue);
> +
> + vq_init(priv, &out_q_ctx->q, V4L2_BUF_TYPE_VIDEO_OUTPUT);
> + vq_init(priv, &cap_q_ctx->q, V4L2_BUF_TYPE_VIDEO_CAPTURE);
> + out_q_ctx->q.priv_data = cap_q_ctx->q.priv_data = priv;
> +
> + return m2m_ctx;
> +}
> +EXPORT_SYMBOL_GPL(v4l2_m2m_ctx_init);
> +
> +/**
> + * v4l2_m2m_ctx_release() - release m2m context
> + *
> + * Usually called from driver's release() function.
> + */
> +void v4l2_m2m_ctx_release(struct v4l2_m2m_ctx *m2m_ctx)
> +{
> + struct v4l2_m2m_dev *m2m_dev;
> + struct videobuf_buffer *vb;
> + unsigned long flags = 0;
> +
> + m2m_dev = m2m_ctx->m2m_dev;
> +
> + spin_lock_irqsave(&m2m_dev->job_spinlock, flags);
> + if (m2m_ctx->job_flags & TRANS_RUNNING) {
> + spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
> + m2m_dev->m2m_ops->job_abort(m2m_ctx->priv);
> + dprintk("m2m_ctx %p running, will wait to complete", m2m_ctx);
> + vb = v4l2_m2m_next_dst_buf(m2m_ctx);
> + BUG_ON(NULL == vb);
> + wait_event(vb->done, vb->state != VIDEOBUF_ACTIVE
> + && vb->state != VIDEOBUF_QUEUED);
> + } else if (m2m_ctx->job_flags & TRANS_QUEUED) {
> + list_del(&m2m_ctx->queue);
> + m2m_ctx->job_flags &= ~(TRANS_QUEUED | TRANS_RUNNING);
> + spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
> + dprintk("m2m_ctx: %p had been on queue and was removed\n",
> + m2m_ctx);
> + } else {
> + /* Do nothing, was not on queue/running */
> + spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
> + }
> +
> + videobuf_stop(&m2m_ctx->cap_q_ctx.q);
> + videobuf_stop(&m2m_ctx->out_q_ctx.q);
> +
> + videobuf_mmap_free(&m2m_ctx->cap_q_ctx.q);
> + videobuf_mmap_free(&m2m_ctx->out_q_ctx.q);
> +
> + kfree(m2m_ctx);
> +}
> +EXPORT_SYMBOL_GPL(v4l2_m2m_ctx_release);
> +
> +/**
> + * v4l2_m2m_buf_queue() - add a buffer to the proper ready buffers list.
> + *
> + * Call from within the buf_queue() videobuf_queue_ops callback.
> + */
> +/* Locking: Caller holds q->irqlock */
If that's a condition that applies to calling other functions that
access the rdy_queue then that needs to be indicated in their comments
too.
That's all for now.
Regards,
Andy
> +void v4l2_m2m_buf_queue(struct v4l2_m2m_ctx *m2m_ctx, struct videobuf_queue *vq,
> + struct videobuf_buffer *vb)
> +{
> + struct v4l2_m2m_queue_ctx *q_ctx;
> +
> + q_ctx = get_queue_ctx(m2m_ctx, vq->type);
> + if (!q_ctx)
> + return;
> +
> + list_add_tail(&vb->queue, &q_ctx->rdy_queue);
> + q_ctx->num_rdy++;
> +
> + vb->state = VIDEOBUF_QUEUED;
> +
> + v4l2_m2m_try_schedule(m2m_ctx);
> +}
> +EXPORT_SYMBOL_GPL(v4l2_m2m_buf_queue);
> +
> diff --git a/include/media/v4l2-mem2mem.h b/include/media/v4l2-mem2mem.h
> new file mode 100644
> index 0000000..a5ac3ec
> --- /dev/null
> +++ b/include/media/v4l2-mem2mem.h
> @@ -0,0 +1,153 @@
> +/*
> + * Memory-to-memory device framework for Video for Linux 2.
> + *
> + * Helper functions for devices that use memory buffers for both source
> + * and destination.
> + *
> + * Copyright (c) 2009 Samsung Electronics Co., Ltd.
> + * Pawel Osciak, <p.osciak@samsung.com>
> + * Marek Szyprowski, <m.szyprowski@samsung.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by the
> + * Free Software Foundation; either version 2 of the
> + * License, or (at your option) any later version
> + */
> +
> +#ifndef _MEDIA_V4L2_MEM2MEM_H
> +#define _MEDIA_V4L2_MEM2MEM_H
> +
> +#include <media/videobuf-core.h>
> +
> +/**
> + * struct v4l2_m2m_ops - mem-to-mem device driver callbacks
> + * @device_run: required. Begin the actual job (transaction) inside this
> + * callback.
> + * The job does NOT have to end before this callback returns
> + * (and it will be the usual case). When the job finishes,
> + * v4l2_m2m_job_finish() has to be called.
> + * @job_ready: optional. Should return 0 if the driver does not have a job
> + * fully prepared to run yet (i.e. it will not be able to finish a
> + * transaction without sleeping). If not provided, it will be
> + * assumed that one source and one destination buffer are all
> + * that is required for the driver to perform one full transaction.
> + * @job_abort: required. Informs the driver that it has to abort the currently
> + * running transaction as soon as possible (i.e. as soon as it can
> + * stop the device safely; e.g. in the next interrupt handler),
> + * even if the transaction would not have been finished by then.
> + * After the driver performs the necessary steps, it has to call
> + * v4l2_m2m_job_finish() (as if the transaction ended normally).
> + * This function does not have to (and will usually not) wait
> + * until the device enters a state when it can be stopped.
> + */
> +struct v4l2_m2m_ops {
> + void (*device_run)(void *priv);
> + int (*job_ready)(void *priv);
> + void (*job_abort)(void *priv);
> +};
> +
> +struct v4l2_m2m_dev;
> +
> +struct v4l2_m2m_queue_ctx {
> +/* private: internal use only */
> + struct videobuf_queue q;
> +
> + /* Base value for offsets of mmaped buffers on this queue */
> + unsigned long offset_base;
> +
> + /* Queue for buffers ready to be processed as soon as this
> + * instance receives access to the device */
> + struct list_head rdy_queue;
> + u8 num_rdy;
> +};
> +
> +struct v4l2_m2m_ctx {
> +/* private: internal use only */
> + struct v4l2_m2m_dev *m2m_dev;
> +
> + /* Capture (output to memory) queue context */
> + struct v4l2_m2m_queue_ctx cap_q_ctx;
> +
> + /* Output (input from memory) queue context */
> + struct v4l2_m2m_queue_ctx out_q_ctx;
> +
> + /* For device job queue */
> + struct list_head queue;
> + unsigned long job_flags;
> +
> + /* Instance private data */
> + void *priv;
> +};
> +
> +void *v4l2_m2m_get_curr_priv(struct v4l2_m2m_dev *m2m_dev);
> +
> +struct videobuf_queue *v4l2_m2m_get_src_vq(struct v4l2_m2m_ctx *m2m_ctx);
> +struct videobuf_queue *v4l2_m2m_get_dst_vq(struct v4l2_m2m_ctx *m2m_ctx);
> +struct videobuf_queue *v4l2_m2m_get_vq(struct v4l2_m2m_ctx *m2m_ctx,
> + enum v4l2_buf_type type);
> +
> +void v4l2_m2m_job_finish(struct v4l2_m2m_dev *m2m_dev,
> + struct v4l2_m2m_ctx *m2m_ctx);
> +
> +void *v4l2_m2m_next_src_buf(struct v4l2_m2m_ctx *m2m_ctx);
> +void *v4l2_m2m_next_dst_buf(struct v4l2_m2m_ctx *m2m_ctx);
> +
> +void *v4l2_m2m_src_buf_remove(struct v4l2_m2m_ctx *m2m_ctx);
> +void *v4l2_m2m_dst_buf_remove(struct v4l2_m2m_ctx *m2m_ctx);
> +
> +
> +int v4l2_m2m_reqbufs(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
> + struct v4l2_requestbuffers *reqbufs);
> +
> +int v4l2_m2m_querybuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
> + struct v4l2_buffer *buf);
> +
> +int v4l2_m2m_qbuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
> + struct v4l2_buffer *buf);
> +int v4l2_m2m_dqbuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
> + struct v4l2_buffer *buf);
> +
> +int v4l2_m2m_streamon(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
> + enum v4l2_buf_type type);
> +int v4l2_m2m_streamoff(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
> + enum v4l2_buf_type type);
> +
> +unsigned int v4l2_m2m_poll(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
> + struct poll_table_struct *wait);
> +
> +int v4l2_m2m_mmap(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
> + struct vm_area_struct *vma);
> +
> +struct v4l2_m2m_dev *v4l2_m2m_init(struct v4l2_m2m_ops *m2m_ops);
> +void v4l2_m2m_release(struct v4l2_m2m_dev *m2m_dev);
> +
> +struct v4l2_m2m_ctx *v4l2_m2m_ctx_init(void *priv, struct v4l2_m2m_dev *m2m_dev,
> + void (*vq_init)(void *priv, struct videobuf_queue *,
> + enum v4l2_buf_type));
> +void v4l2_m2m_ctx_release(struct v4l2_m2m_ctx *m2m_ctx);
> +
> +void v4l2_m2m_buf_queue(struct v4l2_m2m_ctx *m2m_ctx, struct videobuf_queue *vq,
> + struct videobuf_buffer *vb);
> +
> +/**
> + * v4l2_m2m_num_src_bufs_ready() - return the number of source buffers ready for
> + * use
> + */
> +static inline
> +unsigned int v4l2_m2m_num_src_bufs_ready(struct v4l2_m2m_ctx *m2m_ctx)
> +{
> +	return m2m_ctx->out_q_ctx.num_rdy;
> +}
> +
> +/**
> + * v4l2_m2m_num_dst_bufs_ready() - return the number of destination buffers
> + * ready for use
> + */
> +static inline
> +unsigned int v4l2_m2m_num_dst_bufs_ready(struct v4l2_m2m_ctx *m2m_ctx)
> +{
> +	return m2m_ctx->cap_q_ctx.num_rdy;
> +}
> +
> +#endif /* _MEDIA_V4L2_MEM2MEM_H */
> +
* [PATCH/RFC v2.1 0/2] Mem-to-mem device framework
2009-12-23 15:05 ` [PATCH/RFC v2.1 0/2] Mem-to-mem device framework Hans Verkuil
@ 2009-12-28 14:49 ` Pawel Osciak
2009-12-31 4:50 ` Hiremath, Vaibhav
0 siblings, 1 reply; 9+ messages in thread
From: Pawel Osciak @ 2009-12-28 14:49 UTC (permalink / raw)
To: linux-arm-kernel
Hello Hans,
On Wednesday 23 December 2009 16:06:18 Hans Verkuil wrote:
> Thank you for working on this! It's much appreciated. Now I've noticed that
> patches regarding memory-to-memory and memory pool tend to get very few comments.
> I suspect that the main reason is that these are SoC-specific features that do
> not occur in consumer-type products. So most v4l developers do not have the
> interest and motivation (and time!) to look into this.
Thank you very much for your response. We were a bit surprised with the lack of
responses as there seemed to be a good number of people interested in this area.
I'm hoping that everybody interested would take a look at the test device posted
along with the patches. It's virtual, no specific hardware required, but it
demonstrates the concepts behind the framework, including transactions.
> One thing that I am missing is a high-level overview of what we want. Currently
> there are patches/RFCs floating around for memory-to-memory support, multiplanar
> support and memory-pool support.
>
> What I would like to see is a RFC that ties this all together from the point of
> view of the public API. I.e. what are the requirements? Possibly solutions? Open
> questions? Forget about how to implement it for the moment, that will follow
> from the chosen solutions.
Yes, that's true, sorry about that. We've been so into it after the memory pool
discussion and the V4L2 mini-summit that I neglected describing the big picture
behind this.
So to give a more high-level description, from the point of view of applications
and the V4L2 API:
---------------
Requirements:
---------------
(Some of the following were first posted by Laurent in:
http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure/10204).
1. Support for devices that take input data in a source buffer, take a separate
destination buffer, process the source data and put it in the destination buffer.
2. Allow sharing buffers between devices, effectively chaining them to form
video pipelines. An example of this could be a video decoder, fed with video
stream which returns raw frames, which then have to be postprocessed by another
device and displayed. This is the main scenario we need to have for our S3C/S5P
series SoCs. Of course, we'd like zero-copy.
3. Allow using more than one buffer by the device at the same time. This is not
supported by videobuf (e.g. we have to choose on which buffer we'd like
to sleep, and we do not always know that). This is not really a requirement
from the V4L2 API point of view, but has direct influence on how poll() and
blocking I/O works.
4. Multiplanar buffers. Our devices require them (see the RFC for more details:
http://article.gmane.org/gmane.linux.drivers.video-input-infrastructure/11212).
5. Solve problems with cache coherency on non-x86 architectures, especially in
videobuf for OUTPUT buffers. We need to flush the cache before starting the
transaction.
6. Reduce buffer queuing latency, e.g. by moving operations such as locking
out of qbuf.
Applications would like to queue a buffer and be able to fire up the device
as fast as possible.
7. Large buffer allocations, buffer preallocation, etc.
---------------
Solutions:
---------------
1. After a detailed discussion, we agreed in:
http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure/10668,
that we'd like the application to be able to queue/dequeue both OUTPUT (as source)
and CAPTURE (as destination) buffers on one video node. Activating the device
(after streamon) would take effect only if there are both types of buffers
available. The application would put source data into OUTPUT buffers and expect
to find it processed in dequeued CAPTURE buffers. Addressed by mem2mem framework.
2. I don't see anything to do here from the API's point of view. The application
would open two video nodes, e.g. video decoder and video postprocessor and queue
buffers dequeued from decoder on the postprocessor. To get the best performance,
this requires the buffers to be marked as non cached somehow to avoid unneeded
cache syncs.
3. Mem2mem addresses this partially by adding a "transaction" concept. It's
not bullet-proof though, as it assumes the buffers will be returned in the same
order as passed. Some videobuf limitations will have to be addressed here.
4. See my RFC. Patches in progress.
5. We have narrowed it down to an additional sync() before the operation
(i.e. in qbuf), but more issues may exist here. I have already added sync()
support for qbuf with minimal changes to videobuf and will be posting the
proposal soon. This also requires identifying the direction of the sync, but
we have found a way to do this without adding anything new (videobuf flags
are enough).
6. Later. We haven't done anything in this field.
7. We use our own allocator (see
http://thread.gmane.org/gmane.linux.ports.arm.kernel/56879), but we have a new
concept for that which we'd like to discuss separately later.
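To illustrate solution 1 above from the application's side, a hedged userspace sketch of the agreed model: one video node, where OUTPUT buffers carry the source data and CAPTURE buffers receive the result. Setup (REQBUFS, mmap, STREAMON) and error handling are omitted for brevity:

```c
struct v4l2_buffer src = { 0 }, dst = { 0 };

src.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;	/* source data for the device */
src.memory = V4L2_MEMORY_MMAP;
src.index = 0;

dst.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;	/* destination for the result */
dst.memory = V4L2_MEMORY_MMAP;
dst.index = 0;

ioctl(fd, VIDIOC_QBUF, &src);	/* queue a filled source buffer */
ioctl(fd, VIDIOC_QBUF, &dst);	/* queue an empty destination buffer */
/* The device runs once both queues have buffers and streaming is on. */
ioctl(fd, VIDIOC_DQBUF, &dst);	/* the processed frame comes back here */
```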
> Note that I would suggest though that the memory-pool part is split into two
> parts: how to actually allocate the memory is pretty much separate from how v4l
> will use it. The actual allocation part is probably quite complex and might
> even be hardware dependent and should be discussed separately. But how to use
> it is something that can be discussed without needing to know how it was
> allocated.
Exactly, this is the approach we have assumed right now. We'd like to introduce
each part of the infrastructure incrementally. The plan was to do exactly as
you said: leave the allocator-specific parts for later and for a separate
discussion.
We intend to follow with multi-planar buffers and then to focus on dma-contig,
as this is what our hardware requires.
> BTW, what is the status of the multiplanar RFC? I later realized that that RFC
> might be very useful for adding meta-data to buffers. There are several cases
> where that is useful: sensors that provide meta-data when capturing a frame and
> imagepipelines (particularly in memory-to-memory cases) that want to have all
> parameters as part of the meta-data associated with the image. There may well
> be more of those.
This got pushed back, but now after m2m it's become the next task on my list. I
expect to be posting patches in a week or two, hopefully.
I understand that you'd like to make the pointer in the union and the helper
struct more generic to use it to pass different types of information?
Best regards
--
Pawel Osciak
Linux Platform Group
Samsung Poland R&D Center
* [PATCH/RFC v2.1 0/2] Mem-to-mem device framework
2009-12-28 14:49 ` Pawel Osciak
@ 2009-12-31 4:50 ` Hiremath, Vaibhav
2010-02-03 8:44 ` Mauro Carvalho Chehab
0 siblings, 1 reply; 9+ messages in thread
From: Hiremath, Vaibhav @ 2009-12-31 4:50 UTC (permalink / raw)
To: linux-arm-kernel
> -----Original Message-----
> From: Pawel Osciak [mailto:p.osciak at samsung.com]
> Sent: Monday, December 28, 2009 8:19 PM
> To: 'Hans Verkuil'
> Cc: linux-media at vger.kernel.org; linux-samsung-soc at vger.kernel.org;
> linux-arm-kernel at lists.infradead.org; Marek Szyprowski;
> kyungmin.park at samsung.com; Hiremath, Vaibhav; Karicheri,
> Muralidharan; 'Guru Raj'; 'Xiaolin Zhang'; 'Magnus Damm'; 'Sakari
> Ailus'
> Subject: RE: [PATCH/RFC v2.1 0/2] Mem-to-mem device framework
>
> Hello Hans,
>
>
> On Wednesday 23 December 2009 16:06:18 Hans Verkuil wrote:
> > Thank you for working on this! It's much appreciated. Now I've
> noticed that
> > patches regarding memory-to-memory and memory pool tend to get
> very few comments.
> > I suspect that the main reason is that these are SoC-specific
> features that do
> > not occur in consumer-type products. So most v4l developers do not
> have the
> > interest and motivation (and time!) to look into this.
>
> Thank you very much for your response. We were a bit surprised by the lack
> of responses, as there seemed to be a good number of people interested in
> this area.
>
> I'm hoping that everybody interested will take a look at the test device
> posted along with the patches. It's virtual, no specific hardware required,
> but it demonstrates the concepts behind the framework, including
> transactions.
>
[Hiremath, Vaibhav] I was on vacation and resumed only today; I will go through this patch series this weekend and get back to you.
I have had a cursory look, and I would say it should be a really good starting point for us to support mem-to-mem devices.
Thanks,
Vaibhav
> > One thing that I am missing is a high-level overview of what we want.
> > Currently there are patches/RFCs floating around for memory-to-memory
> > support, multiplanar support and memory-pool support.
> >
> > What I would like to see is an RFC that ties this all together from the
> > point of view of the public API. I.e. what are the requirements? Possible
> > solutions? Open questions? Forget about how to implement it for the
> > moment; that will follow from the chosen solutions.
>
> Yes, that's true, sorry about that. We've been so into it after the memory
> pool discussion and the V4L2 mini-summit that I neglected describing the
> big picture behind this.
>
> So to give a more high-level description, from the point of view of
> applications and the V4L2 API:
>
> ---------------
> Requirements:
> ---------------
> (Some of the following were first posted by Laurent in:
> http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure/10204).
>
> 1. Support for devices that take input data in a source buffer, take a
> separate destination buffer, process the source data and put it in the
> destination buffer.
>
> 2. Allow sharing buffers between devices, effectively chaining them to form
> video pipelines. An example of this could be a video decoder, fed with a
> video stream, which returns raw frames that then have to be postprocessed
> by another device and displayed. This is the main scenario we need for our
> S3C/S5P series SoCs. Of course, we'd like zero-copy.
>
> 3. Allow the device to use more than one buffer at the same time. This is
> not supported by videobuf (e.g. we have to choose which buffer we'd like to
> sleep on, and we do not always know that). This is not really a requirement
> from the V4L2 API point of view, but it has a direct influence on how
> poll() and blocking I/O work.
>
> 4. Multiplanar buffers. Our devices require them (see the RFC for more
> details:
> http://article.gmane.org/gmane.linux.drivers.video-input-infrastructure/11212).
>
> 5. Solve problems with cache coherency on non-x86 architectures, especially
> in videobuf for OUTPUT buffers. We need to flush the cache before starting
> the transaction.
>
> 6. Reduce buffer queuing latency, e.g. move operations such as locking out
> of qbuf. Applications would like to queue a buffer and be able to fire up
> the device as fast as possible.
>
> 7. Large buffer allocations, buffer preallocation, etc.
>
>
> ---------------
> Solutions:
> ---------------
> 1. After a detailed discussion in:
> http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure/10668,
> we agreed that we'd like the application to be able to queue/dequeue both
> OUTPUT (as source) and CAPTURE (as destination) buffers on one video node.
> Activating the device (after streamon) would take effect only if both types
> of buffers are available. The application would put source data into OUTPUT
> buffers and expect to find it processed in dequeued CAPTURE buffers.
> Addressed by the mem2mem framework.
>
> 2. I don't see anything to do here from the API's point of view. The
> application would open two video nodes, e.g. a video decoder and a video
> postprocessor, and queue buffers dequeued from the decoder on the
> postprocessor. To get the best performance, this requires the buffers to be
> marked as non-cached somehow, to avoid unneeded cache syncs.
>
> 3. Mem2mem addresses this partially by adding a "transaction" concept. It's
> not bullet-proof though, as it assumes the buffers will be returned in the
> same order as they were passed. Some videobuf limitations will have to be
> addressed here.
>
> 4. See my RFC. Patches in progress.
>
> 5. We have narrowed it down to an additional sync() before the operation
> (i.e. in qbuf), but more issues may exist here. I have already added sync()
> support for qbuf with minimal changes to videobuf and will be posting the
> proposal soon. This also requires identifying the direction of the sync,
> but we have found a way to do this without adding anything new (videobuf
> flags are enough).
>
> 6. Later. We haven't done anything in this field.
>
> 7. We use our own allocator (see
> http://thread.gmane.org/gmane.linux.ports.arm.kernel/56879), but we have a
> new concept for that which we'd like to discuss separately later.
>
>
> > Note that I would suggest though that the memory-pool part is split into
> > two parts: how to actually allocate the memory is pretty much separate
> > from how v4l will use it. The actual allocation part is probably quite
> > complex and might even be hardware dependent and should be discussed
> > separately. But how to use it is something that can be discussed without
> > needing to know how it was allocated.
>
> Exactly, this is the approach we have assumed right now. We'd like to
> introduce each part of the infrastructure incrementally. The plan was to do
> exactly as you said: leave the allocator-specific parts for later and for a
> separate discussion.
> We intend to follow with multi-planar buffers and then to focus on
> dma-contig, as this is what our hardware requires.
>
> > BTW, what is the status of the multiplanar RFC? I later realized that
> > that RFC might be very useful for adding meta-data to buffers. There are
> > several cases where that is useful: sensors that provide meta-data when
> > capturing a frame, and image pipelines (particularly in memory-to-memory
> > cases) that want to have all parameters as part of the meta-data
> > associated with the image. There may well be more of those.
>
> This got pushed back, but now that m2m is done it has become the next task
> on my list. I expect to be posting patches in a week or two, hopefully.
> I understand that you'd like to make the pointer in the union and the
> helper struct more generic, so it can be used to pass different types of
> information?
>
>
> Best regards
> --
> Pawel Osciak
> Linux Platform Group
> Samsung Poland R&D Center
* [PATCH/RFC v2.1 0/2] Mem-to-mem device framework
2009-12-31 4:50 ` Hiremath, Vaibhav
@ 2010-02-03 8:44 ` Mauro Carvalho Chehab
0 siblings, 0 replies; 9+ messages in thread
From: Mauro Carvalho Chehab @ 2010-02-03 8:44 UTC (permalink / raw)
To: linux-arm-kernel
Hiremath, Vaibhav wrote:
>> -----Original Message-----
>> From: Pawel Osciak [mailto:p.osciak at samsung.com]
>> Sent: Monday, December 28, 2009 8:19 PM
>> To: 'Hans Verkuil'
>> Cc: linux-media at vger.kernel.org; linux-samsung-soc at vger.kernel.org;
>> linux-arm-kernel at lists.infradead.org; Marek Szyprowski;
>> kyungmin.park at samsung.com; Hiremath, Vaibhav; Karicheri,
>> Muralidharan; 'Guru Raj'; 'Xiaolin Zhang'; 'Magnus Damm'; 'Sakari
>> Ailus'
>> Subject: RE: [PATCH/RFC v2.1 0/2] Mem-to-mem device framework
>>
>> Hello Hans,
>>
>>
>> On Wednesday 23 December 2009 16:06:18 Hans Verkuil wrote:
>>> Thank you for working on this! It's much appreciated. Now I've noticed
>>> that patches regarding memory-to-memory and memory pool tend to get very
>>> few comments. I suspect that the main reason is that these are
>>> SoC-specific features that do not occur in consumer-type products. So
>>> most v4l developers do not have the interest and motivation (and time!)
>>> to look into this.
>> Thank you very much for your response. We were a bit surprised by the lack
>> of responses, as there seemed to be a good number of people interested in
>> this area.
>>
>> I'm hoping that everybody interested will take a look at the test device
>> posted along with the patches. It's virtual, no specific hardware
>> required, but it demonstrates the concepts behind the framework, including
>> transactions.
>>
> [Hiremath, Vaibhav] I was on vacation and resumed only today; I will go through this patch series this weekend and get back to you.
>
> I have had a cursory look, and I would say it should be a really good starting point for us to support mem-to-mem devices.
Hmm... it seems to me that those patches are still under discussion/analysis.
I'll mark them as RFC in Patchwork.
Please let me know once you SoC guys reach a consensus about it. Then,
please submit the final version to me.
Cheers,
Mauro
end of thread, other threads:[~2010-02-03 8:44 UTC | newest]
Thread overview: 9+ messages
2009-12-23 13:17 [PATCH/RFC v2.1 0/2] Mem-to-mem device framework Pawel Osciak
2009-12-23 13:17 ` [PATCH v2.1 1/2] V4L: Add memory-to-memory device helper framework for V4L2 Pawel Osciak
2009-12-24 2:53 ` Andy Walls
2009-12-23 13:17 ` [PATCH v2.1 2/2] V4L: Add a mem-to-mem V4L2 framework test device Pawel Osciak
2009-12-23 13:17 ` [EXAMPLE v2] Mem-to-mem userspace test application Pawel Osciak
2009-12-23 15:05 ` [PATCH/RFC v2.1 0/2] Mem-to-mem device framework Hans Verkuil
2009-12-28 14:49 ` Pawel Osciak
2009-12-31 4:50 ` Hiremath, Vaibhav
2010-02-03 8:44 ` Mauro Carvalho Chehab