* [PATCH/RFC v3 00/19] Common Display Framework
From: Laurent Pinchart @ 2013-08-09 23:02 UTC (permalink / raw)
To: dri-devel, linux-fbdev, linux-media
Hi everybody,
Here's the third RFC of the Common Display Framework. This is a resend; the
series I've sent earlier seems not to have made it to the vger mailing lists,
possibly due to a too long list of CCs (the other explanation being that CDF
has been delayed for so long that vger considers it as spam, which I really
hope isn't the case :-)). I've thus dropped the CCs, sorry about that.
I won't repeat all the background information from versions one and two
here, you can read it at http://lwn.net/Articles/512363/ and
http://lwn.net/Articles/526965/.
This RFC isn't final. Given the high interest in CDF and the urgent tasks that
kept delaying the next version of the patch set, I've decided to release v3
before completing all parts of the implementation. Known missing items are:
- documentation: kerneldoc and this cover letter should provide basic
information, more extensive documentation will likely make it to v4.
- pipeline configuration and control: generic code to configure and control
display pipelines (in a nutshell, translating high-level mode setting and
DPMS calls to low-level entity operations) is missing. Video and stream
control operations have been carried over from v2, but will need to be
revised for v4.
- DSI support: I still have no DSI hardware I can easily test the code on.
Special thanks go to
- Renesas for inviting me to LinuxCon Japan 2013 where I had the opportunity
to validate the CDF v3 concepts with Alexandre Courbot (NVidia) and Tomasz
Figa (Samsung).
- Tomi Valkeinen (TI) for taking the time to deeply brainstorm v3 with me.
- Linaro for inviting me to Linaro Connect Europe 2013; the discussions we had
there greatly helped move CDF forward.
- And of course all the developers who showed interest in CDF and spent time
sharing ideas, reviewing patches and testing code.
I have to confess I was a bit lost and discouraged after all the CDF-related
meetings during which we have discussed how to move from v2 to v3. With every
meeting I was hoping to run the implementation through the use cases of
various interested parties and narrow down the scope of the huge fuzzy beast
that CDF was. With every meeting the scope actually broadened, with no clear
path in sight anywhere.
Earlier this year I was about to drop one of the requirements on which I had
based CDF v2: sharing drivers between DRM/KMS and V4L2. With only two HDMI
transmitters as use cases for that feature (with only out-of-tree drivers so
far), I just thought the complexity involved wasn't worth it and that I should
implement CDF v3 as a DRM/KMS-only helper framework. However, a seemingly
unrelated discussion with Xilinx developers showed me that hybrid SoC-FPGA
platforms such as the Xilinx Zynq 7000 have a large library of IP cores that
can be used in camera capture pipelines and in display pipelines. The two use
cases suddenly became tens or even hundreds of use cases that I couldn't
ignore anymore.
CDF v3 is thus userspace API agnostic. It isn't tied to DRM/KMS or V4L2 and
can be used by any kernel subsystem, potentially including FBDEV (although I
won't personally write FBDEV support code, as I've already advocated for FBDEV
to be deprecated).
The code you are about to read is based on the concept of display entities
introduced in v2. Diagrams related to the explanations below are available at
http://ideasonboard.org/media/cdf/20130709-lce-cdf.pdf.
Display Entities
----------------
A display entity abstracts any hardware block that sources, processes or sinks
display-related video streams. It offers an abstract API, implemented by display
entity drivers, that is used by master drivers (such as the main display driver)
to query, configure and control display pipelines.
Display entities are connected to at least one video data bus, and optionally
to a control bus. The video data busses carry display-related video data out
of sources (such as a CRTC in a display controller) to sinks (such as a panel
or a monitor), optionally going through transmitters, encoders, decoders,
bridges or other similar devices. A CRTC or a panel will usually be connected
to a single data bus, while an encoder or a transmitter will be connected to
two data busses.
The simple linear display pipelines we find in most embedded platforms at the
moment are expected to grow more complex with time. CDF needs to accommodate
those needs from the start to be, if not future-proof, at least present-proof
at the time it gets merged into mainline. For this reason display
entities have data ports through which video streams flow in or out, with link
objects representing the connections between those ports. A typical entity in
a linear display pipeline will have one (for video source and video sink
entities such as CRTCs or panels) or two ports (for video processing entities
such as encoders), but more ports are allowed, and entities can be linked in
complex non-linear pipelines.
Readers might think that this model is extremely similar to the media
controller graph model. They would be right, and given my background this is
most probably not a coincidence. The CDF v3 implementation uses the in-kernel
media controller framework to model the graph of display entities, with the
display entity data structure inheriting from the media entity structure. The
display pipeline graph topology will be automatically exposed to userspace
through the media controller API as an added bonus. However, usage of the
media controller userspace API in applications is *not* mandatory, and the
current CDF implementation doesn't use the media controller link setup
userspace API to configure the display pipelines.
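In code terms that inheritance is plain structure embedding, along the lines
of the sketch below (the field names are illustrative, not the final layout):

        struct display_entity {
                struct media_entity entity;     /* the MC graph object */
                struct device *dev;             /* control bus device */
                const struct display_entity_control_ops *control;
                const struct display_entity_video_ops *video;
                /* ports, active configuration store, ... */
        };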
While some display entities don't require any configuration (DPI panels are a
good example), many of them are connected to a control bus accessible to the
CPU. Control requests can be sent on a dedicated control bus (such as I2C or
SPI) or multiplexed on a mixed control and data bus (such as DBI or DSI). To
support both options the CDF display entity model separates the control and
data busses into separate APIs.
Display entities are abstract objects that must be implemented by a real
device. The device sits on its control bus and is registered with the Linux
device core and matched with its driver using the control bus specific API.
The CDF doesn't create a display entity class or bus; display entity drivers
are thus standard Linux kernel drivers using existing busses. A DBI bus is added
as part of this patch set, but strictly speaking this isn't part of CDF.
When a display entity driver probes a device it must create an instance of the
display_entity structure, initialize it and add it to the CDF core entities
pool. The display entity exposes abstract operations through function
pointers, and the entity driver must implement those operations. Those
operations can act on either the whole entity or on a given port, depending on
the operation. They are divided into two groups, control operations and video
operations.
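As a rough sketch (display_entity_init() and display_entity_add(), like the
other names below, are illustrative, not the exact API from the patches), a
panel driver probe could look like this:

        static const struct display_entity_control_ops foo_panel_control_ops = {
                /* ... implemented by the driver ... */
        };

        struct foo_panel {
                struct display_entity entity;
                /* panel-specific state */
        };

        static int foo_panel_probe(struct spi_device *spi)
        {
                struct foo_panel *panel;
                int ret;

                panel = devm_kzalloc(&spi->dev, sizeof(*panel), GFP_KERNEL);
                if (panel == NULL)
                        return -ENOMEM;

                panel->entity.dev = &spi->dev;
                panel->entity.control = &foo_panel_control_ops;

                /* One sink port, no source port: a panel terminates the
                 * pipeline. */
                ret = display_entity_init(&panel->entity, 1, 0);
                if (ret < 0)
                        return ret;

                /* Add the entity to the CDF pool so notifiers can match it. */
                return display_entity_add(&panel->entity);
        }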
Control Operations
------------------
Control operations are called by upper-level drivers, usually in response to a
request originating from userspace. They query or control the display entity
state and operation. The currently defined control operations are:
- get_size(), to retrieve the entity physical size (applicable to panels only)
- get_modes(), to retrieve the video modes supported at an entity port
- get_params(), to retrieve the data bus parameters at an entity port
- set_state(), to control the state of the entity (off, standby or on)
- update(), to trigger a display update (for entities that implement manual
update, such as manual-update panels that store frames in their internal
frame buffer)
The last two operations have been carried from v2 and will be reworked.
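In code these roughly map to the following operations structure (a sketch;
the exact prototypes in the patches may differ and will change as noted
above):

        struct display_entity_control_ops {
                int (*get_size)(struct display_entity *ent,
                                unsigned int *width, unsigned int *height);
                int (*get_modes)(struct display_entity *ent, unsigned int port,
                                 const struct videomode **modes);
                int (*get_params)(struct display_entity *ent, unsigned int port,
                                  struct display_entity_interface_params *params);
                int (*set_state)(struct display_entity *ent,
                                 enum display_entity_state state);
                int (*update)(struct display_entity *ent);
        };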
Pipeline Control
----------------
The figure on page 4 shows the control model for a linear pipeline. This
differs significantly from CDF v2, where calls were forwarded from entity to
entity using a Russian dolls model. v3 removes the need for neighbour awareness
from entity drivers, simplifying them. The complexity of pipeline
configuration is moved to a central location called a pipeline controller
instead of being spread out to all drivers.
Pipeline controllers provide library functions that display drivers can use to
control a pipeline. Several controllers can be implemented to accommodate the
needs of various pipeline topologies and complexities, and display drivers can
even implement their own pipeline control algorithm if needed. I'm working on a
linear pipeline controller for the next version of the patch set.
While pipeline controllers are responsible for propagating a pipeline
configuration to all entity ports in the pipeline, entity drivers are
responsible for propagating the configuration inside entities, from sink
(input) to source (output) ports, as illustrated on page 5. The rationale
behind this is that
knowledge of the entity internals is located in the entity driver, while
knowledge of the pipeline belongs to the pipeline controller. The controller
will thus configure the pipeline by performing the following steps:
- apply a configuration on the sink ports of an entity
- read the configuration that the entity driver has propagated to its source
ports
- optionally, modify the source port configuration (to configure custom
timings, scaling or other parameters, if supported by the entity)
- propagate the source port configuration to the sink ports of the next
entities in the pipeline and start over (see the sketch below)
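The core loop of a linear pipeline controller would thus look something like
the following sketch (the entity iterator and the per-port get/set calls are
illustrative names, not the final API):

        static int linear_pipeline_set_mode(struct display_pipeline *pipe,
                                            const struct videomode *vm)
        {
                struct videomode cfg = *vm;
                struct display_entity *ent;
                int ret;

                for_each_pipeline_entity(pipe, ent) {  /* source to sink */
                        /* Apply the configuration on the entity's sink
                         * port. */
                        ret = display_entity_set_port_config(ent, PORT_SINK,
                                                             &cfg);
                        if (ret < 0)
                                return ret;

                        /* Read back what the entity driver propagated to
                         * its source port (optionally adjusting it here),
                         * and carry it to the next entity's sink port on
                         * the next iteration. */
                        ret = display_entity_get_port_config(ent, PORT_SOURCE,
                                                             &cfg);
                        if (ret < 0)
                                return ret;
                }

                return 0;
        }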
Besides modifying the active configuration, the entities API will allow trying
configurations without applying them to the hardware. As the configuration of a
port may depend on the configuration of the other ports, trying a configuration
must be done at the entity level instead of the port level. The implementation
will be based on the concept of configuration store objects that will store the
configuration of all ports for a given entity. Each entity will have a single
active configuration store, and test configuration stores will be created
dynamically to try a configuration on an entity. The get and set operations
implemented by the entity will receive a configuration store pointer, and active
and test code paths in entity drivers will be identical, except for applying the
configuration to the hardware for the active code path.
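The try/apply split would then be used roughly as follows (a sketch of the
planned API, which is not implemented yet; all names are illustrative):

        static int foo_try_then_apply(struct display_entity *ent,
                                      const struct videomode *vm)
        {
                struct display_entity_config *store;
                int ret;

                /* A test store, separate from the entity's active store. */
                store = display_entity_config_alloc(ent);
                if (IS_ERR(store))
                        return PTR_ERR(store);

                display_config_set_mode(store, 0 /* port */, vm);

                /* Validate the whole entity configuration without touching
                 * the hardware, and only then program the hardware. */
                ret = display_entity_try_config(ent, store);
                if (ret == 0)
                        ret = display_entity_apply_config(ent, store);

                display_entity_config_free(store);
                return ret;
        }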
Video Operations
----------------
Video operations control the video stream state on entity ports. The only
currently defined video operation is:
- set_stream(), to start (in continuous or single-shot mode) the video stream
on an entity port
The call model for video operations differs from the control operations model
described above. The set_stream() operation is called directly by downstream
entities on upstream entities (from a video data bus point of view).
Terminating entities in a pipeline (such as panels) will usually call the
set_stream() operation in their set_state() handler, and intermediate entities
will forward the set_stream() call upstream.
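A panel driver's set_state() handler could thus look like this sketch
(helper names such as display_entity_get_source() are illustrative):

        static int foo_panel_set_state(struct display_entity *ent,
                                       enum display_entity_state state)
        {
                struct foo_panel *panel = to_foo_panel(ent);
                /* The upstream source connected to the panel's single sink
                 * port. */
                struct display_entity *src = display_entity_get_source(ent, 0);

                switch (state) {
                case DISPLAY_ENTITY_STATE_ON:
                        foo_panel_power_on(panel);
                        return display_entity_set_stream(src,
                                        DISPLAY_ENTITY_STREAM_CONTINUOUS);
                case DISPLAY_ENTITY_STATE_OFF:
                        display_entity_set_stream(src,
                                        DISPLAY_ENTITY_STREAM_STOPPED);
                        foo_panel_power_off(panel);
                        return 0;
                default:
                        return -EINVAL;
                }
        }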
Integration
-----------
The figure on page 8 describes how a panel driver, implemented using CDF as a
display entity, interacts with the other components in the system. The use case
is a simple pipeline made of a display controller and a panel.
The display controller driver receives control requests from userspace through
DRM (or FBDEV) API calls. It processes the request and calls the panel driver
through the CDF control operations API. The panel driver will then issue
requests on its control bus (several possible control busses are shown in the
figure, panel drivers typically use only one of them) and call the video
operations of the display controller on its left side to control the video
stream.
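The glue on the DRM side can then stay small; a DPMS handler could be as
simple as the sketch below (illustrative code, not the actual rcar-du
implementation):

        static void foo_encoder_dpms(struct drm_encoder *encoder, int mode)
        {
                struct foo_encoder *enc = to_foo_encoder(encoder);

                /* Forward the DRM DPMS request to the CDF panel entity. */
                display_entity_set_state(enc->panel,
                                         mode == DRM_MODE_DPMS_ON ?
                                         DISPLAY_ENTITY_STATE_ON :
                                         DISPLAY_ENTITY_STATE_OFF);
        }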
Registration and Notification
-----------------------------
Due to possibly complex dependencies between entities we can't guarantee that
all entities part of the display pipeline will have been successfully probed
when the master display controller driver is probed. For instance a panel can
be a child of the DBI or DSI bus controlled by the display device, or use a
clock provided by that device. We can't defer the display device probe until
the panel is probed and also defer the panel device probe until the display
device is probed. For this reason we need a notification system that allows
entities to register themselves with the CDF core, and display controller
drivers to get notified when entities they need are available.
The notification system has been completely redesigned in v3. This version is
based on the V4L2 asynchronous probing notification code, with large parts of
the code shamelessly copied. This is an interim solution to let me play with
the notification code as needed by CDF. I'm not a fan of code duplication, and
will work on merging the CDF and V4L2 implementations at a later stage, when
CDF reaches a mature enough state.
CDF manages a pool of entities and a list of notifiers. Notifiers are
registered by master display drivers with an array of entity match
descriptors. When an entity is added to the CDF entities pool, all notifiers
are searched for a match. If a match is found, the corresponding notifier is
called to notify the master display driver.
The two currently supported match methods are platform match, which uses
device names, and DT match, which uses DT node pointers. More match methods
may be added later if needed. Two helper functions exist to build a notifier
from a list of platform device names (in the non-DT case) or a DT
representation of the display pipeline topology.
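Put together, a master display driver would register a notifier roughly as
follows (a sketch modelled on the V4L2 async code; the names are
illustrative):

        struct foo_display {
                struct display_entity_notifier notifier;
                /* ... */
        };

        static const struct display_entity_match foo_matches[] = {
                { .type = DISPLAY_MATCH_PLATFORM, .name = "panel-dpi.0" },
        };

        static int foo_display_probe(struct platform_device *pdev)
        {
                struct foo_display *disp;

                disp = devm_kzalloc(&pdev->dev, sizeof(*disp), GFP_KERNEL);
                if (disp == NULL)
                        return -ENOMEM;

                disp->notifier.matches = foo_matches;
                disp->notifier.num_matches = ARRAY_SIZE(foo_matches);
                disp->notifier.bound = foo_display_entity_bound; /* per match */
                disp->notifier.complete = foo_display_complete;  /* all found */

                /* Matches both entities already in the pool and future
                 * additions. */
                return display_entity_notifier_register(&disp->notifier);
        }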
Once all required entities have been successfully found, the master display
driver is responsible for creating media controller links between all entities
in the pipeline. Two helper functions are also available to automate that
process, one for the non-DT case and one for the DT case. Once again some
DT-related code has been copied from the V4L2 DT code; I will work on merging
the two in a future version.
Note that notification brings a different issue after registration, as display
controller and display entity drivers would take a reference to each other.
Those circular references would make driver unloading impossible. One possible
solution to this problem would be to simulate an unplug event for the display
entity, to force the display driver to release the display entities it uses. We
would need a userspace API for that though. Better solutions would of course
be welcome.
Device Tree Bindings
--------------------
CDF entities device tree bindings are not documented yet. They describe both
the graph topology and entity-specific information. The graph description uses
the V4L2 DT bindings (which are actually not V4L2-specific) specified at
Documentation/devicetree/bindings/media/video-interfaces.txt. Entity-specific
information will be described in individual DT bindings documentation. The DPI
panel driver uses the display timing bindings documented in
Documentation/devicetree/bindings/video/display-timing.txt.
Please note that most of the display entities on devices I own are just dumb
panels with no control bus, and are thus not the best candidates to design a
framework that needs to take complex panels' needs into account. This is why I
hope to see you use CDF with your display devices and tell me what needs to
be modified/improved/redesigned.
This patch set is split as follows:
- The first patch fixes a Kconfig namespace issue with the OMAP DSS panels. It
could already be applied independently of this series.
- Patches 02/19 to 07/19 add the CDF core, including the notification system
and the graph and OF helpers.
- Patch 08/19 adds a MIPI DBI bus. This isn't part of CDF strictly speaking,
but is needed for the DBI panel drivers.
- Patches 09/19 to 13/19 add panel drivers, a VGA DAC driver and a VGA
connector driver.
- Patches 14/19 to 18/19 add CDF-compliant reference board code and DT for the
Renesas Marzen and Lager boards.
- Patch 19/19 ports the Renesas R-Car Display Unit driver to CDF.
The patches are available in my git tree at
git://linuxtv.org/pinchartl/fbdev.git cdf/v3
http://git.linuxtv.org/pinchartl/fbdev.git/shortlog/refs/heads/cdf/v3
For convenience I've included modifications to the Renesas R-Car Display Unit
driver to use the CDF. You can read the code to see how the driver uses CDF to
interface panels. Please note that the rcar-du-drm implementation is still
work in progress, its set_stream operation implementation doesn't enable and
disable the video stream yet as it should.
As already mentioned in v2, I will appreciate all reviews, comments,
criticisms, ideas, remarks, ... If you can find a clever way to solve the
cyclic references issue described above I'll buy you a beer at the next
conference we will both attend. If you think the proposed solution is too
complex, or too simple, I'm all ears, but I'll have more arguments this time
than I had with v2.
Laurent Pinchart (19):
OMAPDSS: panels: Rename Kconfig options to OMAP2_DISPLAY_*
video: Add Common Display Framework core
video: display: Add video and stream control operations
video: display: Add display entity notifier
video: display: Graph helpers
video: display: OF support
video: display: Add pixel coding definitions
video: display: Add MIPI DBI bus support
video: panel: Add DPI panel support
video: panel: Add R61505 panel support
video: panel: Add R61517 panel support
video: display: Add VGA Digital to Analog Converter support
video: display: Add VGA connector support
ARM: shmobile: r8a7790: Add DU clocks for DT
ARM: shmobile: r8a7790: Add DU device node to device tree
ARM: shmobile: marzen: Port DU platform data to CDF
ARM: shmobile: lager: Port DU platform data to CDF
ARM: shmobile: lager-reference: Add display device nodes to device
tree
drm/rcar-du: Port to the Common Display Framework
arch/arm/boot/dts/r8a7790-lager-reference.dts | 92 ++++
arch/arm/boot/dts/r8a7790.dtsi | 33 ++
arch/arm/mach-shmobile/board-lager.c | 76 ++-
arch/arm/mach-shmobile/board-marzen.c | 77 ++-
arch/arm/mach-shmobile/clock-r8a7790.c | 5 +
drivers/gpu/drm/rcar-du/Kconfig | 3 +-
drivers/gpu/drm/rcar-du/Makefile | 7 +-
drivers/gpu/drm/rcar-du/rcar_du_connector.c | 164 ++++++
drivers/gpu/drm/rcar-du/rcar_du_connector.h | 36 ++
drivers/gpu/drm/rcar-du/rcar_du_crtc.h | 2 +-
drivers/gpu/drm/rcar-du/rcar_du_drv.c | 279 ++++++++--
drivers/gpu/drm/rcar-du/rcar_du_drv.h | 28 +-
drivers/gpu/drm/rcar-du/rcar_du_encoder.c | 87 ++-
drivers/gpu/drm/rcar-du/rcar_du_encoder.h | 22 +-
drivers/gpu/drm/rcar-du/rcar_du_kms.c | 116 +++-
drivers/gpu/drm/rcar-du/rcar_du_lvdscon.c | 131 -----
drivers/gpu/drm/rcar-du/rcar_du_lvdscon.h | 25 -
drivers/gpu/drm/rcar-du/rcar_du_vgacon.c | 96 ----
drivers/gpu/drm/rcar-du/rcar_du_vgacon.h | 23 -
drivers/video/Kconfig | 1 +
drivers/video/Makefile | 1 +
drivers/video/display/Kconfig | 62 +++
drivers/video/display/Makefile | 9 +
drivers/video/display/con-vga.c | 148 +++++
drivers/video/display/display-core.c | 759 ++++++++++++++++++++++++++
drivers/video/display/display-notifier.c | 542 ++++++++++++++++++
drivers/video/display/mipi-dbi-bus.c | 234 ++++++++
drivers/video/display/panel-dpi.c | 207 +++++++
drivers/video/display/panel-r61505.c | 567 +++++++++++++++++++
drivers/video/display/panel-r61517.c | 460 ++++++++++++++++
drivers/video/display/vga-dac.c | 152 ++++++
drivers/video/omap2/displays-new/Kconfig | 24 +-
drivers/video/omap2/displays-new/Makefile | 24 +-
include/linux/platform_data/rcar-du.h | 55 +-
include/video/display.h | 398 ++++++++++++++
include/video/mipi-dbi-bus.h | 125 +++++
include/video/panel-dpi.h | 24 +
include/video/panel-r61505.h | 27 +
include/video/panel-r61517.h | 28 +
39 files changed, 4615 insertions(+), 534 deletions(-)
create mode 100644 drivers/gpu/drm/rcar-du/rcar_du_connector.c
create mode 100644 drivers/gpu/drm/rcar-du/rcar_du_connector.h
delete mode 100644 drivers/gpu/drm/rcar-du/rcar_du_lvdscon.c
delete mode 100644 drivers/gpu/drm/rcar-du/rcar_du_lvdscon.h
delete mode 100644 drivers/gpu/drm/rcar-du/rcar_du_vgacon.c
delete mode 100644 drivers/gpu/drm/rcar-du/rcar_du_vgacon.h
create mode 100644 drivers/video/display/Kconfig
create mode 100644 drivers/video/display/Makefile
create mode 100644 drivers/video/display/con-vga.c
create mode 100644 drivers/video/display/display-core.c
create mode 100644 drivers/video/display/display-notifier.c
create mode 100644 drivers/video/display/mipi-dbi-bus.c
create mode 100644 drivers/video/display/panel-dpi.c
create mode 100644 drivers/video/display/panel-r61505.c
create mode 100644 drivers/video/display/panel-r61517.c
create mode 100644 drivers/video/display/vga-dac.c
create mode 100644 include/video/display.h
create mode 100644 include/video/mipi-dbi-bus.h
create mode 100644 include/video/panel-dpi.h
create mode 100644 include/video/panel-r61505.h
create mode 100644 include/video/panel-r61517.h
--
Regards,
Laurent Pinchart
* Re: [PATCH/RFC v3 00/19] Common Display Framework
From: Andrzej Hajda @ 2013-10-02 12:23 UTC (permalink / raw)
To: Tomi Valkeinen, Laurent Pinchart
Cc: linux-fbdev, dri-devel, Jesse Barnes, Benjamin Gaignard, Tom Gall,
Kyungmin Park, linux-media, Stephen Warren, Mark Zhang,
Alexandre Courbot, Ragesh Radhakrishnan, Thomas Petazzoni,
Sunil Joshi, Maxime Ripard, Vikas Sajjan, Marcus Lorentzon
Hi Tomi,
On 09/30/2013 03:48 PM, Tomi Valkeinen wrote:
> On 09/08/13 20:14, Laurent Pinchart wrote:
>> Hi everybody,
>>
>> Here's the third RFC of the Common Display Framework.
>
>
> Hi,
>
> I've been trying to adapt the latest CDF RFC for OMAP. I'm trying to gather
> some notes here about what I've discovered or how I see things. Some of these I
> have mentioned earlier, but I'm trying to collect them here nevertheless.
>
> I do have my branch with working DPI panel, TFP410 encoder, DVI-connector and
> DSI command mode panel drivers, and modifications to make omapdss work with
> CDF. However, it's such a big hack, that I'm not going to post it. I hope I
> will have time to work on it to get something publishable to have something
> more concrete to present. But for the time being I have to move to other tasks
> for a while, so I thought I'd better post some comments when I still remember
> something about this.
>
> Using Linux buses for DBI/DSI
> =============================
> I still don't see how it would work. I've covered this multiple times in
> previous posts so I'm not going into more details now.
>
> I implemented DSI (just command mode for now) as a video bus but with bunch of
> extra ops for sending the control messages.
Could you post the list of ops you had to create?
Some time ago I posted my implementation of a DSI bus:
http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure/69358/focus=69362
I needed three quite generic ops to make it work:
- set_power(on/off),
- set_stream(on/off),
- transfer(dsi_transaction_type, tx_buf, tx_len, rx_buf, rx_len)
I have recently replaced set_power with PM_RUNTIME callbacks,
but I had to add an .initialize op.
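In ops form, that is roughly (a sketch, not the exact code from the posted
series):

        struct mipi_dsi_bus_ops {
                int (*initialize)(struct mipi_dsi_device *dev);
                int (*set_stream)(struct mipi_dsi_device *dev, bool enable);
                ssize_t (*transfer)(struct mipi_dsi_device *dev,
                                    enum mipi_dsi_transaction type,
                                    const u8 *tx_buf, size_t tx_len,
                                    u8 *rx_buf, size_t rx_len);
        };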
Regarding the discussion about how and where to implement the control bus, I
have thought about different alternatives:
1. Implement the DSI-master as a parent device which creates a DSI-slave
platform device, in a similar way to MFD devices (ssbi.c seems to me a
good example).
2. Create a universal mipi-display-bus which covers DSI, DBI and
possibly other buses - they have a few things in common, for example
MIPI-DCS commands.
I am not really convinced by either solution; both have some advantages
and disadvantages.
>
> Call model
> ==========
>
> It may be that I just don't get how things are supposed to work with the RFC's
> ops, but I couldn't figure out how to use it in practice. I tried it for a few
> days, but got nowhere, and I then went with the proven model that's used in
> omapdss, where display entities handle calling the ops of the upstream
> entities.
>
> That's not to say the RFC's model doesn't work. I just didn't figure it out.
> And I guess it was more difficult to understand how to use it as the controller
> stuff is not implemented yet.
>
> It would be good to have a bit more complex cases in the RFC, like changing and
> verifying videomodes, fetching them via EDID, etc.
>
> Multiple inputs/outputs
> =======================
> I think changing the model from single-input & single output to multiple inputs
> and outputs increases the difficulty of the implementation considerably. That's
> not a complaint as such, just an observation. I do think multiple inputs &
> outputs is a good feature. Then again, all the use cases I have only have
> single input/output, so I've been wondering if there's some middle road, in
> which we somehow allow multiple inputs & outputs, but only implement the
> support for single input & output.
>
> I've cut the corners in my tests by just looking at a single enabled input or
> output from an entity, and ignoring the rest (which I don't have in my use
> cases).
>
> Internal connections
> ==========
>
> The model currently only represents connections between entities. With multiple
> inputs & outputs I think it's important to maintain also connections inside the
> entity. Say, we have an entity with two inputs and two outputs. If one output
> is enabled, which one of the inputs needs to be enabled and configured also?
> The current model doesn't give any solution to that.
>
> I haven't implemented this, as in my use cases I have just single inputs and
> outputs, so I can follow the pipeline trivially.
>
> Central entity
> =======
>
> If I understand the RFC correctly, there's a "central" entity that manages all
> other entities connected to it. This central entity would normally be the
> display controller. I don't like this model, as it makes it difficult or
> impossible to manage situations where an entity is connected to two display
> controllers (even if only one of the display controllers would be connected at
> a time). It also makes this one display entity fundamentally different from the
> others, which I don't like.
>
> I think all the display entities should be similar. They would all register
> themselves to the CDF framework, which in turn would be used by somebody. This
> somebody could be the display controller driver, which is more or less how I've
> implemented it.
>
> Media entity/pads
> =================
> Using media_entity and media_pad fits quite well for CDF, but... It is quite
> cumbersome to use. The constant switching between media_entity and
> display_entity needs quite a lot of code in total, as it has to be done almost
> everywhere.
>
> And somehow I'd really like to combine the entity and port into one struct so
> that it would be possible to just do:
>
> src->ops->foo(src, ...);
>
> instead of
>
> src->ops->foo(src, srcport, ...);
>
> One reason is that the latter is more verbose (not only the call, you also need
> to get srcport from somewhere), but also that as far as the caller is
> concerned, there's no reason to manage the entity and the port as separate
> things. You just want a particular video source/sink to do something, and
> whether that source/sink is port 5 of entity foo is irrelevant.
>
> The callee, of course, needs to check which port is being operated. However,
> if, say, 90% of the display entities have just one input and one output port,
> the port parameter can be ignored for those entities, simplifying the code.
>
> And while media_entity can be embedded into display_entity, media_pad and
> media_link cannot be embedded into anything. This is somewhat vague as I don't
> quite remember what my reason for needing the feature was, but I had some need
> for display_link or display_pad, to add some CDF related entries, which can't
> be done except by modifying the media_link or media_pad themselves.
>
> DT data & platform data
> =======================
> I think the V4L2 style port/endpoint description in DT data should work well. I
> don't see a need for specifying the remote-endpoint in the upstream entity, but
> then again, it doesn't hurt either.
>
> The description does get pretty verbose, though, but I guess that can't be
> avoided.
>
> Describing similar things in the platform data works fine too. The RFC,
> however, contained somewhat lacking platform data examples which had to be
> extended to work with, for example, multiple instances of the same display
> entity. Also, the RFC relied on the central entity to parse the platform data,
> and in my model each display entity has its own platform data.
>
> Final thoughts
> =======
>
> So many of the comments above are somewhat gut-feelings. I don't have concrete
> evidence that my ideas are better, as I haven't been able to finalize the code
> (well, and the RFC is missing important things like the controller).
>
> I think there are areas where my model and the RFC are similar. I think one
> step would be to identify those parts, and perhaps have those parts as separate
> pieces of code. Say, the DT and platform data parts might be such that we could
> have display-of.c and display-pdata.c, having support code which works for the
> RFC and my model.
>
> This would make it easier to maintain and improve both versions, to see how
> they evolve and what are the pros and cons with both models. But this is just a
> thought, I'm not sure how much such code there would actually be.
>
> Tomi
* Re: [PATCH/RFC v3 00/19] Common Display Framework
From: Tomi Valkeinen @ 2013-10-02 13:24 UTC (permalink / raw)
To: Andrzej Hajda
Cc: Laurent Pinchart, linux-fbdev, dri-devel, Jesse Barnes,
Benjamin Gaignard, Tom Gall, Kyungmin Park, linux-media,
Stephen Warren, Mark Zhang, Alexandre Courbot,
Ragesh Radhakrishnan, Thomas Petazzoni, Sunil Joshi,
Maxime Ripard, Vikas Sajjan, Marcus Lorentzon
Hi Andrzej,
On 02/10/13 15:23, Andrzej Hajda wrote:
>> Using Linux buses for DBI/DSI
>> =============================
>>
>> I still don't see how it would work. I've covered this multiple times in
>> previous posts so I'm not going into more details now.
>>
>> I implemented DSI (just command mode for now) as a video bus but with bunch of
>> extra ops for sending the control messages.
>
> Could you post the list of ops you had to create?
I'd rather not post the ops I have in my prototype, as it's still a
total hack. However, they are very much based on the current OMAP DSS's
ops, so I'll describe them below. I hope I find time to polish my CDF
hacks more, so that I can publish them.
> Some time ago I posted my implementation of a DSI bus:
> http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure/69358/focus=69362
A note about the DT data in your series, as I've been struggling to
figure out the DT data for OMAP: some of the DT properties look like
configuration, not hardware description. For example,
"samsung,bta-timeout" doesn't describe hardware.
> I needed three quite generic ops to make it work:
> - set_power(on/off),
> - set_stream(on/off),
> - transfer(dsi_transaction_type, tx_buf, tx_len, rx_buf, rx_len)
> I have recently replaced set_power with PM_RUNTIME callbacks,
> but I had to add an .initialize op.
We have a bit more on omap:
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/video/omapdss.h#n648
Some of those should be removed and some should be omap DSI's internal
matters, not part of the API. But it gives an idea of the ops we use.
Briefly, about the ops (condensed into a sketch after the list):
- (dis)connect, which might be similar to your initialize. connect is
meant to "connect" the pipeline, reserving the video ports used, etc.
- enable/disable, enable the DSI bus. If the DSI peripheral requires a
continuous DSI clock, it's also started at this point.
- set_config configures the DSI bus (like, command/video mode, etc.).
- configure_pins can be ignored, I think that function is not needed.
- enable_hs and enable_te, used to enable/disable HS mode and
tearing-elimination
- update, which does a single frame transfer
- bus_lock/unlock can be ignored
- enable_video_output starts the video stream, when using DSI video mode
- the request_vc, set_vc_id, release_vc can be ignored
- Bunch of transfer funcs. Perhaps a single func could be used, as you
do. We have sync write funcs, which do a BTA at the end of the write and
wait for the reply, and a nosync version, which just pushes the packet to the
TX buffers.
- bta_sync, which sends a BTA and waits for the peripheral to reply
- set_max_rx_packet_size, used to configure the max rx packet size.
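Condensed into a single structure, the shape of those ops is roughly the
following (a sketch, not the actual omapdss prototypes):

        struct omap_dsi_ops {
                int (*connect)(struct omap_dss_device *dssdev);
                void (*disconnect)(struct omap_dss_device *dssdev);
                int (*enable)(struct omap_dss_device *dssdev);
                void (*disable)(struct omap_dss_device *dssdev);
                int (*set_config)(struct omap_dss_device *dssdev,
                                  const struct omap_dss_dsi_config *cfg);
                void (*enable_hs)(struct omap_dss_device *dssdev, bool enable);
                void (*enable_te)(struct omap_dss_device *dssdev, bool enable);
                int (*update)(struct omap_dss_device *dssdev); /* one frame */
                int (*enable_video_output)(struct omap_dss_device *dssdev);
                /* sync write: ends with a BTA and waits for the reply */
                int (*dcs_write)(struct omap_dss_device *dssdev,
                                 const u8 *buf, size_t len);
                /* nosync write: just pushes the packet to the TX buffers */
                int (*dcs_write_nosync)(struct omap_dss_device *dssdev,
                                        const u8 *buf, size_t len);
                int (*dcs_read)(struct omap_dss_device *dssdev, u8 dcs_cmd,
                                u8 *buf, size_t len);
                int (*bta_sync)(struct omap_dss_device *dssdev);
                int (*set_max_rx_packet_size)(struct omap_dss_device *dssdev,
                                              u16 size);
        };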
> Regarding the discussion about how and where to implement the control bus,
> I have thought about different alternatives:
> 1. Implement the DSI-master as a parent device which creates a DSI-slave
> platform device, in a similar way to MFD devices (ssbi.c seems to me a
> good example).
> 2. Create a universal mipi-display-bus which covers DSI, DBI and
> possibly other buses - they have a few things in common, for example
> MIPI-DCS commands.
>
> I am not really convinced by either solution; both have some advantages
> and disadvantages.
I think a dedicated DSI bus and your alternatives all have the same
issues with splitting the DSI control into two. I've shared some of my
thoughts here:
http://article.gmane.org/gmane.comp.video.dri.devel/90651
http://article.gmane.org/gmane.comp.video.dri.devel/91269
http://article.gmane.org/gmane.comp.video.dri.devel/91272
I still think that it's best to consider DSI and DBI as a video bus (not
as a separate video bus and a control bus), and provide the packet
transfer methods as part of the video ops.
Tomi
* Re: [PATCH/RFC v3 00/19] Common Display Framework
From: Andrzej Hajda @ 2013-10-09 14:08 UTC (permalink / raw)
To: Tomi Valkeinen
Cc: Laurent Pinchart, linux-fbdev, dri-devel, Jesse Barnes,
Benjamin Gaignard, Tom Gall, Kyungmin Park, linux-media,
Stephen Warren, Mark Zhang, Alexandre Courbot,
Ragesh Radhakrishnan, Thomas Petazzoni, Sunil Joshi,
Maxime Ripard, Vikas Sajjan, Marcus Lorentzon
On 10/02/2013 03:24 PM, Tomi Valkeinen wrote:
> Hi Andrzej,
>
> On 02/10/13 15:23, Andrzej Hajda wrote:
>
>>> Using Linux buses for DBI/DSI
>>> =============================
>>> I still don't see how it would work. I've covered this multiple times in
>>> previous posts so I'm not going into more details now.
>>>
>>> I implemented DSI (just command mode for now) as a video bus but with bunch of
>>> extra ops for sending the control messages.
>> Could you post the list of ops you had to create?
> I'd rather not post the ops I have in my prototype, as it's still a
> total hack. However, they are very much based on the current OMAP DSS's
> ops, so I'll describe them below. I hope I find time to polish my CDF
> hacks more, so that I can publish them.
>
>> Some time ago I posted my implementation of a DSI bus:
>> http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure/69358/focus=69362
> A note about the DT data on your series, as I've been stuggling to
> figure out the DT data for OMAP: some of the DT properties look like
> configuration, not hardware description. For example,
> "samsung,bta-timeout" doesn't describe hardware.
As I have adapted an existing internal driver for the MIPI-DSI bus, I did
not take too much care over the DT. You are right, 'bta-timeout' is a
configuration parameter (however, its minimal value is determined by a
characteristic of the DSI-slave). On the other hand, currently there is no
good place for such configuration parameters AFAIK.
>> I needed three quite generic ops to make it work:
>> - set_power(on/off),
>> - set_stream(on/off),
>> - transfer(dsi_transaction_type, tx_buf, tx_len, rx_buf, rx_len)
>> I have recently replaced set_power with PM_RUNTIME callbacks,
>> but I had to add an .initialize op.
> We have a bit more on omap:
>
> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/video/omapdss.h#n648
>
> Some of those should be removed and some should be omap DSI's internal
> matters, not part of the API. But it gives an idea of the ops we use.
> Shortly about the ops:
>
> - (dis)connect, which might be similar to your initialize. connect is
> meant to "connect" the pipeline, reserving the video ports used, etc.
>
> - enable/disable, enable the DSI bus. If the DSI peripheral requires a
> continuous DSI clock, it's also started at this point.
>
> - set_config configures the DSI bus (like, command/video mode, etc.).
>
> - configure_pins can be ignored, I think that function is not needed.
>
> - enable_hs and enable_te, used to enable/disable HS mode and
> tearing-elimination
It seems there should be a way to synchronize the TE signal with the panel,
in case the signal is provided only to the dsi-master. Some callback, I
suppose? Or transfer synchronization should be done by the dsi-master.
>
> - update, which does a single frame transfer
>
> - bus_lock/unlock can be ignored
>
> - enable_video_output starts the video stream, when using DSI video mode
>
> - the request_vc, set_vc_id, release_vc can be ignored
>
> - Bunch of transfer funcs. Perhaps a single func could be used, as you
> do. We have sync write funcs, which do a BTA at the end of the write and
> wait for reply, and nosync version, which just pushes the packet to the
> TX buffers.
>
> - bta_sync, which sends a BTA and waits for the peripheral to reply
>
> - set_max_rx_packet_size, used to configure the max rx packet size.
Similar callbacks should be added to mipi-dsi-bus ops as well, to
make it complete/generic.
>
>> Regarding the discussion about how and where to implement the control bus,
>> I have thought about different alternatives:
>> 1. Implement the DSI-master as a parent device which creates a DSI-slave
>> platform device, in a similar way to MFD devices (ssbi.c seems to me a
>> good example).
>> 2. Create a universal mipi-display-bus which covers DSI, DBI and
>> possibly other buses - they have a few things in common, for example
>> MIPI-DCS commands.
>>
>> I am not really convinced by either solution; both have some advantages
>> and disadvantages.
> I think a dedicated DSI bus and your alternatives all have the same
> issues with splitting the DSI control into two. I've shared some of my
> thoughts here:
>
> http://article.gmane.org/gmane.comp.video.dri.devel/90651
> http://article.gmane.org/gmane.comp.video.dri.devel/91269
> http://article.gmane.org/gmane.comp.video.dri.devel/91272
>
> I still think that it's best to consider DSI and DBI as a video bus (not
> as a separate video bus and a control bus), and provide the packet
> transfer methods as part of the video ops.
I have read all the posts regarding this issue and currently I tend towards
a solution where CDF is used to model only video streams, with the control
bus implemented in a different framework.
The only concern I have is whether we should use a Linux bus for that.
Andrzej
> Tomi
>
>
* Re: [PATCH/RFC v3 00/19] Common Display Framework
From: Tomi Valkeinen @ 2013-10-11 6:37 UTC (permalink / raw)
To: Andrzej Hajda, Laurent Pinchart
Cc: linux-fbdev, dri-devel, Jesse Barnes, Benjamin Gaignard, Tom Gall,
Kyungmin Park, linux-media, Stephen Warren, Mark Zhang,
Alexandre Courbot, Ragesh Radhakrishnan, Thomas Petazzoni,
Sunil Joshi, Maxime Ripard, Vikas Sajjan, Marcus Lorentzon
On 09/10/13 17:08, Andrzej Hajda wrote:
> As I have adapted an existing internal driver for the MIPI-DSI bus, I did
> not take too much care over the DT. You are right, 'bta-timeout' is a
> configuration parameter (however, its minimal value is determined by a
> characteristic of the DSI-slave). On the other hand, currently there is no
> good place for such configuration parameters AFAIK.
The minimum bta-timeout should be deducible from the DSI bus speed,
shouldn't it? Thus there's no need to define it anywhere.
>> - enable_hs and enable_te, used to enable/disable HS mode and
>> tearing-elimination
>
> It seems there should be a way to synchronize the TE signal with the
> panel, in case the signal is provided only to the dsi-master. Some
> callback, I suppose? Or transfer synchronization should be done by the
> dsi-master.
Hmm, can you explain a bit what you mean?
Do you mean that the panel driver should get a callback when DSI TE
trigger happens?
On OMAP, when using DSI TE trigger, the dsi-master does it all. So the
panel driver just calls update() on the dsi-master, and then the
dsi-master will wait for TE, and then start the transfer. There's also a
callback to the panel driver when the transfer has completed.
>> - set_max_rx_packet_size, used to configure the max rx packet size.
> Similar callbacks should be added to mipi-dsi-bus ops as well, to
> make it complete/generic.
Do you mean the same calls should exist both in the mipi-dbi-bus ops and
on the video ops? If they are called with different values, which one
"wins"?
>> http://article.gmane.org/gmane.comp.video.dri.devel/90651
>> http://article.gmane.org/gmane.comp.video.dri.devel/91269
>> http://article.gmane.org/gmane.comp.video.dri.devel/91272
>>
>> I still think that it's best to consider DSI and DBI as a video bus (not
>> as a separate video bus and a control bus), and provide the packet
>> transfer methods as part of the video ops.
> I have read all posts regarding this issue and currently I tend
> to solution where CDF is used to model only video streams,
> with control bus implemented in different framework.
> The only concerns I have if we should use Linux bus for that.
Ok. I have many other concerns, as I've expressed in the mails =). I
still don't see how it could work. So I'd very much like to see a more
detailed explanation of how the separate control & video bus approach would
deal with different scenarios.
Let's consider a DSI-to-HDMI encoder chip. Version A of the chip is
controlled via DSI, version B is controlled via i2c. As the output of
the chip goes to HDMI connector, the DSI bus speed needs to be set
according to the resolution of the HDMI monitor.
So, with version A, the encoder driver would have some kind of pointers
to ctrl_ops and video_ops (or, pointers to dsi_bus instance and
video_bus instance), right? The ctrl_ops would need to have ops like
set_bus_speed, enable_hs, etc, to configure the DSI bus.
When the encoder driver is started, it'd probably set some safe bus
speed, configure the encoder a bit, read the EDID, enable HS,
re-configure the bus speed to match the monitor's video mode, configure
the encoder, and at last enable the video stream.
Version B would have i2c_client and video_ops. When the driver starts,
it'd probably do the same things as above, except the control messages
would go through i2c. That means that setting the bus speed, enabling
HS, etc, would happen through video_ops, as the i2c side has no
knowledge of the DSI side, right? Would there be identical ops on both
DSI ctrl and video ops?
That sounds very bad. What am I missing here? How would it work?
And, if we want to separate the video and control, I see no reason to
explicitly require the video side to be present. I.e. we could as well
have a DSI peripheral that has only the control bus used. How would that
reflect to, say, the DT presentation? Say, if we have a version A of the
encoder, we could have DT data like this (just a rough example):
soc-dsi {
	encoder {
		input: endpoint {
			remote-endpoint = <&soc-dsi-ep>;
			/* configuration for the DSI lanes */
			dsi-lanes = <0 1 2 3 4 5>;
		};
	};
};
So the encoder would be placed inside the SoC's DSI node, similar to how
an i2c device would be placed inside SoC's i2c node. DSI configuration
would be inside the video endpoint data.
Version B would be almost the same:
&i2c0 {
	encoder {
		input: endpoint {
			remote-endpoint = <&soc-dsi-ep>;
			/* configuration for the DSI lanes */
			dsi-lanes = <0 1 2 3 4 5>;
		};
	};
};
Now, how would the video-bus-less device be defined? It'd be inside the
soc-dsi node, that's clear. Where would the DSI lane configuration be?
Not inside 'endpoint' node, as that's for video and wouldn't exist in
this case. Would we have the same lane configuration in two places, once
for video and once for control?
I agree that having DSI/DBI control and video separated would be
elegant. But I'd like to hear what is the technical benefit of that? At
least to me it's clearly more complex to separate them than to keep them
together (to the extent that I don't yet see how it is even possible),
so there must be a good reason for the separation. I don't understand
that reason. What is it?
Tomi
* Re: [PATCH/RFC v3 00/19] Common Display Framework
From: Andrzej Hajda @ 2013-10-11 11:19 UTC (permalink / raw)
To: Tomi Valkeinen, Laurent Pinchart
Cc: linux-fbdev, dri-devel, Jesse Barnes, Benjamin Gaignard, Tom Gall,
Kyungmin Park, linux-media, Stephen Warren, Mark Zhang,
Alexandre Courbot, Ragesh Radhakrishnan, Thomas Petazzoni,
Sunil Joshi, Maxime Ripard, Vikas Sajjan, Marcus Lorentzon
On 10/11/2013 08:37 AM, Tomi Valkeinen wrote:
> On 09/10/13 17:08, Andrzej Hajda wrote:
>
>> As I have adapted an existing internal driver for the MIPI-DSI bus, I did
>> not take too much care over the DT. You are right, 'bta-timeout' is a
>> configuration parameter (however, its minimal value is determined by a
>> characteristic of the DSI-slave). On the other hand, currently there is no
>> good place for such configuration parameters AFAIK.
> The minimum bta-timeout should be deducible from the DSI bus speed,
> shouldn't it? Thus there's no need to define it anywhere.
Hmm, the specification says "This specified period shall be longer than
the maximum possible turnaround delay for the unit to which the
turnaround request was sent".
>
>>> - enable_hs and enable_te, used to enable/disable HS mode and
>>> tearing-elimination
>> It seems there should be a way to synchronize the TE signal with the
>> panel, in case the signal is provided only to the dsi-master. Some
>> callback, I suppose? Or transfer synchronization should be done by the
>> dsi-master.
> Hmm, can you explain a bit what you mean?
>
> Do you mean that the panel driver should get a callback when DSI TE
> trigger happens?
>
> On OMAP, when using DSI TE trigger, the dsi-master does it all. So the
> panel driver just calls update() on the dsi-master, and then the
> dsi-master will wait for TE, and then start the transfer. There's also a
> callback to the panel driver when the transfer has completed.
Yes, I thought about a callback, but the approach with the DSI-master
taking care of synchronization in fact fits exynos-dsi better, and I
suspect omap also.
>
>>> - set_max_rx_packet_size, used to configure the max rx packet size.
>> Similar callbacks should be added to mipi-dsi-bus ops as well, to
>> make it complete/generic.
> Do you mean the same calls should exist both in the mipi-dbi-bus ops and
> on the video ops? If they are called with different values, which one
> "wins"?
No, I meant that if mipi-dbi-bus wants to be complete it should have
similar ops.
I did not think about a scenario with two overlapping APIs.
>
>>> http://article.gmane.org/gmane.comp.video.dri.devel/90651
>>> http://article.gmane.org/gmane.comp.video.dri.devel/91269
>>> http://article.gmane.org/gmane.comp.video.dri.devel/91272
>>>
>>> I still think that it's best to consider DSI and DBI as a video bus (not
>>> as a separate video bus and a control bus), and provide the packet
>>> transfer methods as part of the video ops.
>> I have read all the posts regarding this issue and currently I tend
>> towards a solution where CDF is used to model only video streams, with
>> the control bus implemented in a different framework.
>> The only concern I have is whether we should use a Linux bus for that.
> Ok. I have many other concerns, as I've expressed in the mails =). I
> still don't see how it could work. So I'd very much like to see a more
> detailed explanation how the separate control & video bus approach would
> deal with different scenarios.
>
> Let's consider a DSI-to-HDMI encoder chip. Version A of the chip is
> controlled via DSI, version B is controlled via i2c. As the output of
> the chip goes to HDMI connector, the DSI bus speed needs to be set
> according to the resolution of the HDMI monitor.
>
> So, with version A, the encoder driver would have some kind of pointers
> to ctrl_ops and video_ops (or, pointers to dsi_bus instance and
> video_bus instance), right? The ctrl_ops would need to have ops like
> set_bus_speed, enable_hs, etc, to configure the DSI bus.
>
> When the encoder driver is started, it'd probably set some safe bus
> speed, configure the encoder a bit, read the EDID, enable HS,
> re-configure the bus speed to match the monitor's video mode, configure
> the encoder, and at last enable the video stream.
>
> Version B would have i2c_client and video_ops. When the driver starts,
> it'd probably do the same things as above, except the control messages
> would go through i2c. That means that setting the bus speed, enabling
> HS, etc, would happen through video_ops, as the i2c side has no
> knowledge of the DSI side, right? Would there be identical ops on both
> DSI ctrl and video ops?
>
> That sounds very bad. What am I missing here? How would it work?
If I understand correctly, you think about a CDF topology like below:
DispContr(SoC) ---> DSI-master(SoC) ---> encoder(DSI or I2C)
But I think with a mipi-dsi-bus the topology could look like:
DispContr(SoC) ---> encoder(DSI or I2C)
The DSI-master will not have its own entity; in the graph it could be
represented by the link (--->), as it really does not process the video,
only transports it.
In the case of version A I think everything is clear.
In the case of version B it does not seem so nice at first sight, but it
still seems quite straightforward to me - a special phandle link in the
encoder's node pointing to the DSI-master; the driver will find the device
at runtime and use its ops as needed (additional ops/helpers required).
This is also the way to support devices which can be controlled by DSI and
I2C at the same time. Anyway, I suspect such a scenario will be quite rare.
>
> And, if we want to separate the video and control, I see no reason to
> explicitly require the video side to be present. I.e. we could as well
> have a DSI peripheral that has only the control bus used. How would that
> reflect to, say, the DT presentation? Say, if we have a version A of the
> encoder, we could have DT data like this (just a rough example):
>
> soc-dsi {
> encoder {
> input: endpoint {
> remote-endpoint = <&soc-dsi-ep>;
Here I would replace &soc-dsi-ep with a phandle to the display
controller/crtc/....
> /* configuration for the DSI lanes */
> dsi-lanes = <0 1 2 3 4 5>;
Wow, quite advanced DSI.
> };
> };
> };
>
> So the encoder would be places inside the SoC's DSI node, similar to how
> an i2c device would be placed inside SoC's i2c node. DSI configuration
> would be inside the video endpoint data.
>
> Version B would be almost the same:
>
> &i2c0 {
> encoder {
> input: endpoint {
> remote-endpoint = <&soc-dsi-ep>;
&soc-dsi-ep => &disp-ctrl-ep
> /* configuration for the DSI lanes */
> dsi-lanes = <0 1 2 3 4 5>;
> };
> };
> };
>
> Now, how would the video-bus-less device be defined?
> It'd be inside the
> soc-dsi node, that's clear. Where would the DSI lane configuration be?
> Not inside 'endpoint' node, as that's for video and wouldn't exist in
> this case. Would we have the same lane configuration in two places, once
> for video and once for control?
I think it is a control setting, so it should be put outside the endpoint
node. Probably it could be placed in the encoder node.
>
> I agree that having DSI/DBI control and video separated would be
> elegant. But I'd like to hear what is the technical benefit of that? At
> least to me it's clearly more complex to separate them than to keep them
> together (to the extent that I don't yet see how it is even possible),
> so there must be a good reason for the separation. I don't understand
> that reason. What is it?
Roughly speaking it is a question of where the more convenient place to put
a bunch of ops is; technically both solutions can be implemented somehow.
Pros of a mipi bus:
- no fake entity in CDF with fake ops; I have to use similar entities in
MIPI-CSI camera pipelines and it complicates life without any benefit (at
least from the user side),
- CDF models only video buses, the control bus is the domain of Linux buses,
- less platform_bus abuse,
- better device tree topology (at least for common cases),
- quite simple in the case of typical devices.
Regards
Andrzej
>
> Tomi
>
>
* Re: [PATCH/RFC v3 00/19] Common Display Framework
From: Tomi Valkeinen @ 2013-10-11 12:30 UTC (permalink / raw)
To: Andrzej Hajda, Laurent Pinchart
Cc: linux-fbdev, dri-devel, Jesse Barnes, Benjamin Gaignard, Tom Gall,
Kyungmin Park, linux-media, Stephen Warren, Mark Zhang,
Alexandre Courbot, Ragesh Radhakrishnan, Thomas Petazzoni,
Sunil Joshi, Maxime Ripard, Vikas Sajjan, Marcus Lorentzon
On 11/10/13 14:19, Andrzej Hajda wrote:
> On 10/11/2013 08:37 AM, Tomi Valkeinen wrote:
>> The minimum bta-timeout should be deducible from the DSI bus speed,
>> shouldn't it? Thus there's no need to define it anywhere.
> Hmm, the specification says "This specified period shall be longer than
> the maximum possible turnaround delay for the unit to which the
> turnaround request was sent".
Ah, you're right. We can't know how long the peripheral will take to
respond. I was thinking of something that only depends on the bus speed and
the timings for that.
> If I understand correctly, you think about a CDF topology like below:
>
> DispContr(SoC) ---> DSI-master(SoC) ---> encoder(DSI or I2C)
>
> But I think with a mipi-dsi-bus the topology could look like:
>
> DispContr(SoC) ---> encoder(DSI or I2C)
>
> The DSI-master will not have its own entity; in the graph it could be
> represented by the link (--->), as it really does not process the video,
> only transports it.
At least in OMAP, the SoC's DSI-master receives parallel RGB data from
DISPC, and encodes it to DSI. Isn't that processing? It's basically a
DPI-to-DSI encoder. And it's not a simple pass-through, the DSI video
timings could be considerably different than the DPI timings.
> In the case of version A I think everything is clear.
> In the case of version B it does not seem so nice at first sight, but it
> still seems quite straightforward to me - a special phandle link in the
> encoder's node pointing to the DSI-master; the driver will find the device
> at runtime and use its ops as needed (additional ops/helpers required).
> This is also the way to support devices which can be controlled by DSI and
> I2C at the same time. Anyway, I suspect such a scenario will be quite rare.
Okay, so if I gather it right, you say there would be something like
'dsi_adapter' (like i2c_adapter), which represents the dsi-master. And a
driver could get a pointer to this, regardless of whether the Linux device
is a DSI device.
At least one issue with this approach is the endpoint problem (see below).
>> And, if we want to separate the video and control, I see no reason to
>> explicitly require the video side to be present. I.e. we could as well
>> have a DSI peripheral that has only the control bus used. How would that
>> reflect to, say, the DT presentation? Say, if we have a version A of the
>> encoder, we could have DT data like this (just a rough example):
>>
>> soc-dsi {
>> encoder {
>> input: endpoint {
>> remote-endpoint = <&soc-dsi-ep>;
> Here I would replace &soc-dsi-ep by phandle to display controller/crtc/....
>
>> /* configuration for the DSI lanes */
>> dsi-lanes = <0 1 2 3 4 5>;
> Wow, quite advanced DSI.
Wha? That just means there is one clock lane and two datalanes, nothing
more =). We can select the polarity of a lane, so we describe both the
positive and negative lines there. So it says clk- is connected to pin
0, clk+ connected to pin 1, etc.
>> };
>> };
>> };
>>
>> So the encoder would be placed inside the SoC's DSI node, similar to how
>> an i2c device would be placed inside the SoC's i2c node. DSI configuration
>> would be inside the video endpoint data.
>>
>> Version B would be almost the same:
>>
>> &i2c0 {
>> encoder {
>> input: endpoint {
>> remote-endpoint = <&soc-dsi-ep>;
> &soc-dsi-ep => &disp-ctrl-ep
>> /* configuration for the DSI lanes */
>> dsi-lanes = <0 1 2 3 4 5>;
>> };
>> };
>> };
>>
>> Now, how would the video-bus-less device be defined?
>> It'd be inside the
>> soc-dsi node, that's clear. Where would the DSI lane configuration be?
>> Not inside 'endpoint' node, as that's for video and wouldn't exist in
>> this case. Would we have the same lane configuration in two places, once
>> for video and once for control?
> I think it is a control setting, so it should be put outside the endpoint
> node. Probably it could be placed in the encoder node.
Well, one point of the endpoints is also to allow "switching" of video
devices.
For example, I could have a board with a SoC's DSI output, connected to
two DSI panels. There would be some kind of mux in between, so that I
can select which of the panels is actually connected to the SoC.
Here the first panel could use 2 datalanes, the second one 4. Thus, the
DSI master would have two endpoints, one using 2 and the other using 4
datalanes.
If we decide that kind of support is not needed, well, is there even
need for the V4L2 endpoints in the DT data at all?
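Roughly, the DSI master driver would then need per-endpoint data like
this (a hypothetical sketch, none of these names are from the patch
set):

#include <linux/types.h>

/* hypothetical per-endpoint configuration in the DSI-master driver */
struct dsi_ep_config {
	unsigned int num_lanes;	/* 2 for the first panel, 4 for the second */
	u8 lane_map[6];		/* pin for each +/- line, as in dsi-lanes */
};

struct dsi_master_data {
	struct dsi_ep_config ep[2];	/* one entry per endpoint */
	int active_ep;			/* selected via the mux */
};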
>> I agree that having DSI/DBI control and video separated would be
>> elegant. But I'd like to hear what is the technical benefit of that? At
>> least to me it's clearly more complex to separate them than to keep them
>> together (to the extent that I don't yet see how it is even possible),
>> so there must be a good reason for the separation. I don't understand
>> that reason. What is it?
>> Roughly speaking, it is a question of where the more convenient place
>> to put the bunch of ops is; technically, both solutions can be
>> implemented somehow.
Well, it's also about dividing a single physical bus into two separate
interfaces to it. It sounds to me that it would be much more complex
with locking. With a single API, we can just say "the caller handles
locking". With two separate interfaces, there must be locking at the
lower level.
> Pros of mipi bus:
> - no fake entity with fake ops in CDF; I have to use similar entities
> in MIPI-CSI camera pipelines and it complicates life without any
> benefit (at least from the user's side),
You mean the DSI-master? I don't see how it's "fake", it's a video
processing unit that has to be configured. Even if we forget the control
side, and just think about a plain video stream with DSI video mode,
there are things to configure in it.
What kind of issues do you have on the CSI side, then?
> - CDF models only video buses, control bus is a domain of Linux buses,
Yes, but in this case the buses are the same. It makes me a bit nervous
to have two separate ways (video and control) to use the same bus, in a
case like video where timing is critical.
So yes, we can consider video and control buses as "virtual" buses, and
the actual transport is the DSI bus. Maybe it can be done. It just makes
me a bit nervous =).
> - less platform_bus abusing,
Well, platform.txt says
"This pseudo-bus
is used to connect devices on busses with minimal infrastructure,
like those used to integrate peripherals on many system-on-chip
processors, or some "legacy" PC interconnects; as opposed to large
formally specified ones like PCI or USB."
I don't think DSI and DBI as a platform bus are that far from the
description. They are "simple", non-probing, point-to-point (in
practice) buses. There's not much "bus" to speak of, just a
point-to-point link.
> - better device tree topology (at least for common cases),
Even if we use platform devices for DSI peripherals, we can have them
described under the DSI master node.
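As a sketch of that, the DSI master's probe could simply populate its DT
children; of_platform_populate() is the existing kernel helper for this,
the driver around it is hypothetical:

#include <linux/of_platform.h>
#include <linux/platform_device.h>

static int dsi_master_probe(struct platform_device *pdev)
{
	/* create a platform_device for each child node (e.g. a panel)
	 * described under the DSI master's DT node */
	return of_platform_populate(pdev->dev.of_node, NULL, NULL,
				    &pdev->dev);
}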
> - quite simple in case of typical devices.
Still more complex than single API for both video and control =).
Tomi
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 901 bytes --]
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH/RFC v3 00/19] Common Display Framework
2013-10-11 12:30 ` Tomi Valkeinen
@ 2013-10-11 14:16 ` Andrzej Hajda
2013-10-11 14:45 ` Tomi Valkeinen
0 siblings, 1 reply; 14+ messages in thread
From: Andrzej Hajda @ 2013-10-11 14:16 UTC (permalink / raw)
To: Tomi Valkeinen, Laurent Pinchart
Cc: linux-fbdev, dri-devel, Jesse Barnes, Benjamin Gaignard, Tom Gall,
Kyungmin Park, linux-media, Stephen Warren, Mark Zhang,
Alexandre Courbot, Ragesh Radhakrishnan, Thomas Petazzoni,
Sunil Joshi, Maxime Ripard, Vikas Sajjan, Marcus Lorentzon
On 10/11/2013 02:30 PM, Tomi Valkeinen wrote:
> On 11/10/13 14:19, Andrzej Hajda wrote:
>> On 10/11/2013 08:37 AM, Tomi Valkeinen wrote:
>>> The minimum bta-timeout should be deducible from the DSI bus speed,
>>> shouldn't it? Thus there's no need to define it anywhere.
>> Hmm, specification says "This specified period shall be longer than
>> the maximum possible turnaround delay for the unit to which the
>> turnaround request was sent".
> Ah, you're right. We can't know how long the peripheral will take
> responding. I was thinking of something that only depends on the
> bus-speed and the timings for that.
>
>> If I understand correctly you think about a CDF topology like below:
>>
>> DispContr(SoC) ---> DSI-master(SoC) ---> encoder(DSI or I2C)
>>
>> But I think with mipi-dsi-bus the topology could look like:
>>
>> DispContr(SoC) ---> encoder(DSI or I2C)
>>
>> DSI-master will not have its own entity; in the graph it could be
>> represented by the link (--->), as it really does not process the
>> video, only transports it.
> At least in OMAP, the SoC's DSI-master receives parallel RGB data from
> DISPC, and encodes it to DSI. Isn't that processing? It's basically a
> DPI-to-DSI encoder. And it's not a simple pass-through, the DSI video
> timings could be considerably different than the DPI timings.
Picture size, content and format are the same on the input and on the
output of DSI. The same bits which enter DSI appear on the output.
Internally the bit order can be different, but practically you are
configuring the DSI master and slave with the same format.
If you create a DSI entity you will always have to set the same format
and size on the DSI input, the DSI output and the encoder input.
If you skip creating a DSI entity you lose nothing, and you do not need
to take care of it.
>
>> In the case of version A I think everything is clear.
>> In the case of version B it does not seem so nice at first sight, but
>> still seems quite straightforward to me - a special phandle link in the
>> encoder's node pointing to the DSI-master; the driver will find the
>> device at runtime and use the ops as needed (additional ops/helpers
>> required).
>> This is also the way to support devices which can be controlled by DSI
>> and I2C at the same time. Anyway, I suspect such a scenario will be
>> quite rare.
> Okay, so if I gather it right, you say there would be something like
> 'dsi_adapter' (like i2c_adapter), which represents the dsi-master. And a
> driver could get a pointer to this, regardless of whether the Linux
> device is a DSI device.
>
> At least one issue with this approach is the endpoint problem (see below).
>
>>> And, if we want to separate the video and control, I see no reason to
>>> explicitly require the video side to be present. I.e. we could as well
>>> have a DSI peripheral that has only the control bus used. How would that
>>> be reflected in, say, the DT representation? Say, if we have a version A
>>> of the encoder, we could have DT data like this (just a rough example):
>>>
>>> soc-dsi {
>>> encoder {
>>> input: endpoint {
>>> remote-endpoint = <&soc-dsi-ep>;
>> Here I would replace &soc-dsi-ep by phandle to display controller/crtc/....
>>
>>> /* configuration for the DSI lanes */
>>> dsi-lanes = <0 1 2 3 4 5>;
>> Wow, quite advanced DSI.
> Wha? That just means there is one clock lane and two datalanes, nothing
> more =). We can select the polarity of a lane, so we describe both the
> positive and negative lines there. So it says clk- is connected to pin
> 0, clk+ connected to pin 1, etc.
OK, in the V4L binding world it means a DSI with six lanes :)
>
>>> };
>>> };
>>> };
>>>
>>> So the encoder would be placed inside the SoC's DSI node, similar to how
>>> an i2c device would be placed inside the SoC's i2c node.
>>> would be inside the video endpoint data.
>>>
>>> Version B would be almost the same:
>>>
>>> &i2c0 {
>>> encoder {
>>> input: endpoint {
>>> remote-endpoint = <&soc-dsi-ep>;
>> &soc-dsi-ep => &disp-ctrl-ep
>>> /* configuration for the DSI lanes */
>>> dsi-lanes = <0 1 2 3 4 5>;
>>> };
>>> };
>>> };
>>>
>>> Now, how would the video-bus-less device be defined?
>>> It'd be inside the
>>> soc-dsi node, that's clear. Where would the DSI lane configuration be?
>>> Not inside 'endpoint' node, as that's for video and wouldn't exist in
>>> this case. Would we have the same lane configuration in two places, once
>>> for video and once for control?
>> I think it is a control setting, so it should be put outside the endpoint
>> node. Probably it could be placed in the encoder node.
> Well, one point of the endpoints is also to allow "switching" of video
> devices.
>
> For example, I could have a board with a SoC's DSI output, connected to
> two DSI panels. There would be some kind of mux in between, so that I
> can select which of the panels is actually connected to the SoC.
>
> Here the first panel could use 2 datalanes, the second one 4. Thus, the
> DSI master would have two endpoints, one using 2 and the other using 4
> datalanes.
>
> If we decide that kind of support is not needed, well, is there even
> need for the V4L2 endpoints in the DT data at all?
Hmm, both panels connected to one endpoint of dispc?
The problem I see is which driver should handle panel switching,
but this is a question about hardware design as well. If this is
realized by dispc, I have already described the solution. If this is
realized by another device, I do not see a problem with creating a
corresponding CDF entity, or maybe it can be handled by a "Pipeline
Controller"???
>
>>> I agree that having DSI/DBI control and video separated would be
>>> elegant. But I'd like to hear what is the technical benefit of that? At
>>> least to me it's clearly more complex to separate them than to keep them
>>> together (to the extent that I don't yet see how it is even possible),
>>> so there must be a good reason for the separation. I don't understand
>>> that reason. What is it?
>> Roughly speaking, it is a question of where the more convenient place
>> to put the bunch of ops is; technically, both solutions can be
>> implemented somehow.
> Well, it's also about dividing a single physical bus into two separate
> interfaces to it. It sounds to me that it would be much more complex
> with locking. With a single API, we can just say "the caller handles
> locking". With two separate interfaces, there must be locking at the
> lower level.
We say then: callee handles locking :)
>
>> Pros of mipi bus:
>> - no fake entity with fake ops in CDF; I have to use similar entities
>> in MIPI-CSI camera pipelines and it complicates life without any
>> benefit (at least from the user's side),
> You mean the DSI-master? I don't see how it's "fake", it's a video
> processing unit that has to be configured. Even if we forget the control
> side, and just think about a plain video stream with DSI video mode,
> there are things to configure in it.
>
> What kind of issues do you have on the CSI side, then?
No real issues, just needless calls to configure the CSI entity pads
with the same format and picture sizes as in the camera.
>
>> - CDF models only video buses, control bus is a domain of Linux buses,
> Yes, but in this case the buses are the same. It makes me a bit nervous
> to have two separate ways (video and control) to use the same bus, in a
> case like video where timing is critical.
>
> So yes, we can consider video and control buses as "virtual" buses, and
> the actual transport is the DSI bus. Maybe it can be done. It just makes
> me a bit nervous =).
>
>> - less platform_bus abusing,
> Well, platform.txt says
>
> "This pseudo-bus
> is used to connect devices on busses with minimal infrastructure,
> like those used to integrate peripherals on many system-on-chip
> processors, or some "legacy" PC interconnects; as opposed to large
> formally specified ones like PCI or USB."
>
> I don't think DSI and DBI as a platform bus are that far from the
> description. They are "simple", non-probing, point-to-point (in
> practice) buses. There's not much "bus" to speak of, just a
> point-to-point link.
Next section:
Platform devices
~~~~~~~~~~~~~~~~
Platform devices are devices that typically appear as autonomous
entities in the system. This includes legacy port-based devices and
host bridges to peripheral buses, and most controllers integrated
into system-on-chip platforms. What they usually have in common
is direct addressing from a CPU bus. Rarely, a platform_device will
be connected through a segment of some other kind of bus; but its
registers will still be directly addressable.
>> - better device tree topology (at least for common cases),
> Even if we use platform devices for DSI peripherals, we can have them
> described under the DSI master node.
Sorry, I rather meant the Linux device tree topology, not DT.
>
>> - quite simple in case of typical devices.
> Still more complex than single API for both video and control =).
I agree.
Andrzej
> Tomi
>
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH/RFC v3 00/19] Common Display Framework
2013-10-11 14:16 ` Andrzej Hajda
@ 2013-10-11 14:45 ` Tomi Valkeinen
2013-10-17 7:48 ` Andrzej Hajda
0 siblings, 1 reply; 14+ messages in thread
From: Tomi Valkeinen @ 2013-10-11 14:45 UTC (permalink / raw)
To: Andrzej Hajda, Laurent Pinchart
Cc: linux-fbdev, dri-devel, Jesse Barnes, Benjamin Gaignard, Tom Gall,
Kyungmin Park, linux-media, Stephen Warren, Mark Zhang,
Alexandre Courbot, Ragesh Radhakrishnan, Thomas Petazzoni,
Sunil Joshi, Maxime Ripard, Vikas Sajjan, Marcus Lorentzon
[-- Attachment #1: Type: text/plain, Size: 6916 bytes --]
On 11/10/13 17:16, Andrzej Hajda wrote:
> Picture size, content and format are the same on the input and on the
> output of DSI. The same bits which enter DSI appear on the output.
> Internally the bit order can be different, but practically you are
> configuring the DSI master and slave with the same format.
>
> If you create a DSI entity you will always have to set the same format
> and size on the DSI input, the DSI output and the encoder input.
> If you skip creating a DSI entity you lose nothing, and you do not need
> to take care of it.
Well, this is really a different question from the bus problem. But
nothing says the DSI master cannot change the format or even size. For
sure it can change the video timings. The DSI master could even take two
parallel inputs, and combine them into one DSI output. You can't know
what all the possible pieces of hardware do =).
If you have a bigger IP block that internally contains the DISPC and the
DSI, then, yes, you can combine them into one display entity. I don't
think that's correct, though. And if the DISPC and DSI are independent
blocks, then especially I think there must be an entity for the DSI
block, which will enable the powers, clocks, etc, when needed.
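As a sketch of why the entity is useful even without any format
conversion (the dsi_encoder type here is hypothetical; the clk and
regulator calls are the normal kernel APIs), its enable op would do
something like:

#include <linux/clk.h>
#include <linux/regulator/consumer.h>

struct dsi_encoder {
	struct clk *fck;	/* DSI functional clock */
	struct regulator *vdd;	/* DSI PHY supply */
};

static int dsi_encoder_enable(struct dsi_encoder *dsi)
{
	int ret;

	ret = regulator_enable(dsi->vdd);
	if (ret)
		return ret;

	ret = clk_prepare_enable(dsi->fck);
	if (ret)
		regulator_disable(dsi->vdd);

	return ret;
}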
>> Well, one point of the endpoints is also to allow "switching" of video
>> devices.
>>
>> For example, I could have a board with a SoC's DSI output, connected to
>> two DSI panels. There would be some kind of mux in between, so that I
>> can select which of the panels is actually connected to the SoC.
>>
>> Here the first panel could use 2 datalanes, the second one 4. Thus, the
>> DSI master would have two endpoints, one using 2 and the other using 4
>> datalanes.
>>
>> If we decide that kind of support is not needed, well, is there even
>> need for the V4L2 endpoints in the DT data at all?
> Hmm, both panels connected to one endpoint of dispc?
> The problem I see is which driver should handle panel switching,
> but this is a question about hardware design as well. If this is
> realized by dispc, I have already described the solution. If this is
> realized by another device, I do not see a problem with creating a
> corresponding CDF entity, or maybe it can be handled by a "Pipeline
> Controller"???
Well, the switching could be automatic: when the panel power is enabled,
the DSI mux is switched for that panel. It's not relevant.
We still have two different endpoint configurations for the same
DSI-master port. If that configuration is in the DSI-master's port node,
not inside an endpoint data, then that can't be supported.
>>>> I agree that having DSI/DBI control and video separated would be
>>>> elegant. But I'd like to hear what is the technical benefit of that? At
>>>> least to me it's clearly more complex to separate them than to keep them
>>>> together (to the extent that I don't yet see how it is even possible),
>>>> so there must be a good reason for the separation. I don't understand
>>>> that reason. What is it?
>>> Roughly speaking, it is a question of where the more convenient place
>>> to put the bunch of ops is; technically, both solutions can be
>>> implemented somehow.
>> Well, it's also about dividing a single physical bus into two separate
>> interfaces to it. It sounds to me that it would be much more complex
>> with locking. With a single API, we can just say "the caller handles
>> locking". With two separate interfaces, there must be locking at the
>> lower level.
> We say then: callee handles locking :)
Sure, but my point was that the caller handling the locking is much
simpler than the callee handling locking. And the latter causes
atomicity issues, as the other API could be invoked in between two calls
for the first API.
But note that I'm not saying we should not implement the bus model just
because it's more complex. We should go for the bus model if it's
better. I just want to bring up these complexities, which I feel are
quite a bit more difficult than with the simpler model.
>>> Pros of mipi bus:
>>> - no fake entity with fake ops in CDF; I have to use similar entities
>>> in MIPI-CSI camera pipelines and it complicates life without any
>>> benefit (at least from the user's side),
>> You mean the DSI-master? I don't see how it's "fake", it's a video
>> processing unit that has to be configured. Even if we forget the control
>> side, and just think about a plain video stream with DSI video mode,
>> there are things to configure in it.
>>
>> What kind of issues do you have on the CSI side, then?
> No real issues, just needless calls to configure the CSI entity pads
> with the same format and picture sizes as in the camera.
Well, the output of a component A is surely the same as the input of
component B, if B receives the data from A. So those calls do sound
useless. I don't make that kind of calls in my model.
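In other words, the format can simply propagate along the link; a sketch
with hypothetical types (nothing here is from the patch set):

#include <linux/types.h>

struct cdf_format {
	u32 pixelcode;
	u32 width, height;
};

struct cdf_entity {
	struct cdf_format in, out;
	struct cdf_entity *sink;	/* next entity in the pipeline */
};

/* setting A's output format implicitly sets B's input format, so no
 * extra configuration call is needed */
static void cdf_set_output_format(struct cdf_entity *e,
				  const struct cdf_format *fmt)
{
	e->out = *fmt;
	if (e->sink)
		e->sink->in = *fmt;
}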
>>> - CDF models only video buses, control bus is a domain of Linux buses,
>> Yes, but in this case the buses are the same. It makes me a bit nervous
>> to have two separate ways (video and control) to use the same bus, in a
>> case like video where timing is critical.
>>
>> So yes, we can consider video and control buses as "virtual" buses, and
>> the actual transport is the DSI bus. Maybe it can be done. It just makes
>> me a bit nervous =).
>>
>>> - less platform_bus abusing,
>> Well, platform.txt says
>>
>> "This pseudo-bus
>> is used to connect devices on busses with minimal infrastructure,
>> like those used to integrate peripherals on many system-on-chip
>> processors, or some "legacy" PC interconnects; as opposed to large
>> formally specified ones like PCI or USB."
>>
>> I don't think DSI and DBI as a platform bus are that far from the
>> description. They are "simple", non-probing, point-to-point (in
>> practice) buses. There's not much "bus" to speak of, just a
>> point-to-point link.
> Next section:
>
> Platform devices
> ~~~~~~~~~~~~~~~~
> Platform devices are devices that typically appear as autonomous
> entities in the system. This includes legacy port-based devices and
> host bridges to peripheral buses, and most controllers integrated
> into system-on-chip platforms. What they usually have in common
> is direct addressing from a CPU bus. Rarely, a platform_device will
> be connected through a segment of some other kind of bus; but its
> registers will still be directly addressable.
Yep, "typically" and "rarely" =). I agree, it's not clear. I think there
are things with DBI/DSI that clearly point to a platform device, but
also the other way.
>>> - better device tree topology (at least for common cases),
>> Even if we use platform devices for DSI peripherals, we can have them
>> described under the DSI master node.
> Sorry, I rather meant the Linux device tree topology, not DT.
We can have the DSI peripheral platform devices as children of the
DSI-master device.
Tomi
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 901 bytes --]
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH/RFC v3 00/19] Common Display Framework
2013-10-11 14:45 ` Tomi Valkeinen
@ 2013-10-17 7:48 ` Andrzej Hajda
2013-10-17 8:18 ` Tomi Valkeinen
0 siblings, 1 reply; 14+ messages in thread
From: Andrzej Hajda @ 2013-10-17 7:48 UTC (permalink / raw)
To: Tomi Valkeinen, Laurent Pinchart
Cc: linux-fbdev, dri-devel, Jesse Barnes, Benjamin Gaignard, Tom Gall,
Kyungmin Park, linux-media, Stephen Warren, Mark Zhang,
Alexandre Courbot, Ragesh Radhakrishnan, Thomas Petazzoni,
Sunil Joshi, Maxime Ripard, Vikas Sajjan, Marcus Lorentzon
Hi Tomi,
Sorry for the delayed response.
On 10/11/2013 04:45 PM, Tomi Valkeinen wrote:
> On 11/10/13 17:16, Andrzej Hajda wrote:
>
>> Picture size, content and format are the same on the input and on the
>> output of DSI. The same bits which enter DSI appear on the output.
>> Internally the bit order can be different, but practically you are
>> configuring the DSI master and slave with the same format.
>>
>> If you create a DSI entity you will always have to set the same format
>> and size on the DSI input, the DSI output and the encoder input.
>> If you skip creating a DSI entity you lose nothing, and you do not need
>> to take care of it.
> Well, this is really a different question from the bus problem. But
> nothing says the DSI master cannot change the format or even size. For
> sure it can change the video timings. The DSI master could even take two
> parallel inputs, and combine them into one DSI output. You can't know
> what all the possible pieces of hardware do =)
> If you have a bigger IP block that internally contains the DISPC and the
> DSI, then, yes, you can combine them into one display entity. I don't
> think that's correct, though. And if the DISPC and DSI are independent
> blocks, then especially I think there must be an entity for the DSI
> block, which will enable the powers, clocks, etc, when needed.
The main function of DSI is to transport pixels from one IP to another
IP, and this function IMO should not be modeled by a display entity.
"Power, clocks, etc" will be handled via the control bus according to
the panel's demands.
If the 'DSI chip' has additional functions for video processing, they
can be modeled by a CDF entity if it makes sense.
>>> Well, one point of the endpoints is also to allow "switching" of video
>>> devices.
>>>
>>> For example, I could have a board with a SoC's DSI output, connected to
>>> two DSI panels. There would be some kind of mux in between, so that I
>>> can select which of the panels is actually connected to the SoC.
>>>
>>> Here the first panel could use 2 datalanes, the second one 4. Thus, the
>>> DSI master would have two endpoints, one using 2 and the other using 4
>>> datalanes.
>>>
>>> If we decide that kind of support is not needed, well, is there even
>>> need for the V4L2 endpoints in the DT data at all?
>> Hmm, both panels connected to one endpoint of dispc?
>> The problem I see is which driver should handle panel switching,
>> but this is a question about hardware design as well. If this is
>> realized by dispc, I have already described the solution. If this is
>> realized by another device, I do not see a problem with creating a
>> corresponding CDF entity, or maybe it can be handled by a "Pipeline
>> Controller"???
> Well, the switching could be automatic: when the panel power is enabled,
> the DSI mux is switched for that panel. It's not relevant.
>
> We still have two different endpoint configurations for the same
> DSI-master port. If that configuration is in the DSI-master's port node,
> not inside an endpoint data, then that can't be supported.
I am not sure if I understand it correctly. But it seems quite simple:
when the panel starts/resumes, it requests the DSI master (via the
control bus) to apply its configuration settings.
Of course there are some settings which are not panel dependent, and
those should reside in the DSI node.
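As a sketch of what I mean (all names here are hypothetical), the
panel's enable path could look like:

#include <linux/device.h>

struct dsi_link_params {
	unsigned int num_lanes;
	unsigned long hs_clock;		/* Hz */
};

struct dsi_master_ops {
	int (*configure_link)(struct device *master,
			      const struct dsi_link_params *params);
};

/* the panel pushes its own link demands to the DSI master when it
 * starts/resumes */
static int panel_enable(struct device *master,
			const struct dsi_master_ops *ops)
{
	const struct dsi_link_params params = {
		.num_lanes = 2,
		.hs_clock = 500000000,	/* example value */
	};

	return ops->configure_link(master, &params);
}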
>>>>> I agree that having DSI/DBI control and video separated would be
>>>>> elegant. But I'd like to hear what is the technical benefit of that? At
>>>>> least to me it's clearly more complex to separate them than to keep them
>>>>> together (to the extent that I don't yet see how it is even possible),
>>>>> so there must be a good reason for the separation. I don't understand
>>>>> that reason. What is it?
>>>> Roughly speaking, it is a question of where the more convenient place
>>>> to put the bunch of ops is; technically, both solutions can be
>>>> implemented somehow.
>>> Well, it's also about dividing a single physical bus into two separate
>>> interfaces to it. It sounds to me that it would be much more complex
>>> with locking. With a single API, we can just say "the caller handles
>>> locking". With two separate interfaces, there must be locking at the
>>> lower level.
>> We say then: callee handles locking :)
> Sure, but my point was that the caller handling the locking is much
> simpler than the callee handling locking. And the latter causes
> atomicity issues, as the other API could be invoked in between two calls
> for the first API.
>
>
Could you describe such a scenario?
> But note that I'm not saying we should not implement the bus model just
> because it's more complex. We should go for the bus model if it's
> better. I just want to bring up these complexities, which I feel are
> quite a bit more difficult than with the simpler model.
>
>>>> Pros of mipi bus:
>>>> - no fake entity with fake ops in CDF; I have to use similar entities
>>>> in MIPI-CSI camera pipelines and it complicates life without any
>>>> benefit (at least from the user's side),
>>> You mean the DSI-master? I don't see how it's "fake", it's a video
>>> processing unit that has to be configured. Even if we forget the control
>>> side, and just think about a plain video stream with DSI video mode,
>>> there are things to configure in it.
>>>
>>> What kind of issues do you have on the CSI side, then?
>> No real issues, just needless calls to configure the CSI entity pads
>> with the same format and picture sizes as in the camera.
> Well, the output of a component A is surely the same as the input of
> component B, if B receives the data from A. So those calls do sound
> useless. I don't make that kind of calls in my model.
>
>>>> - CDF models only video buses, control bus is a domain of Linux buses,
>>> Yes, but in this case the buses are the same. It makes me a bit nervous
>>> to have two separate ways (video and control) to use the same bus, in a
>>> case like video where timing is critical.
>>>
>>> So yes, we can consider video and control buses as "virtual" buses, and
>>> the actual transport is the DSI bus. Maybe it can be done. It just makes
>>> me a bit nervous =).
>>>
>>>> - less platform_bus abusing,
>>> Well, platform.txt says
>>>
>>> "This pseudo-bus
>>> is used to connect devices on busses with minimal infrastructure,
>>> like those used to integrate peripherals on many system-on-chip
>>> processors, or some "legacy" PC interconnects; as opposed to large
>>> formally specified ones like PCI or USB."
>>>
>>> I don't think DSI and DBI as a platform bus are that far from the
>>> description. They are "simple", non-probing, point-to-point (in
>>> practice) buses. There's not much "bus" to speak of, just a
>>> point-to-point link.
>> Next section:
>>
>> Platform devices
>> ~~~~~~~~~~~~~~~~
>> Platform devices are devices that typically appear as autonomous
>> entities in the system. This includes legacy port-based devices and
>> host bridges to peripheral buses, and most controllers integrated
>> into system-on-chip platforms. What they usually have in common
>> is direct addressing from a CPU bus. Rarely, a platform_device will
>> be connected through a segment of some other kind of bus; but its
>> registers will still be directly addressable.
> Yep, "typically" and "rarely" =). I agree, it's not clear. I think there
> are things with DBI/DSI that clearly point to a platform device, but
> also the other way.
Just to be sure, we are talking here about DSI-slaves, i.e. for example
about panels, where direct access from the CPU bus is usually not
possible.
Andrzej
>>>> - better device tree topology (at least for common cases),
>>> Even if we use platform devices for DSI peripherals, we can have them
>>> described under the DSI master node.
>> Sorry, I rather meant the Linux device tree topology, not DT.
> We can have the DSI peripheral platform devices as children of the
> DSI-master device.
>
> Tomi
>
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH/RFC v3 00/19] Common Display Framework
2013-10-17 7:48 ` Andrzej Hajda
@ 2013-10-17 8:18 ` Tomi Valkeinen
2013-10-17 12:26 ` Andrzej Hajda
0 siblings, 1 reply; 14+ messages in thread
From: Tomi Valkeinen @ 2013-10-17 8:18 UTC (permalink / raw)
To: Andrzej Hajda, Laurent Pinchart
Cc: linux-fbdev, dri-devel, Jesse Barnes, Benjamin Gaignard, Tom Gall,
Kyungmin Park, linux-media, Stephen Warren, Mark Zhang,
Alexandre Courbot, Ragesh Radhakrishnan, Thomas Petazzoni,
Sunil Joshi, Maxime Ripard, Vikas Sajjan, Marcus Lorentzon
[-- Attachment #1: Type: text/plain, Size: 4411 bytes --]
On 17/10/13 10:48, Andrzej Hajda wrote:
> The main function of DSI is to transport pixels from one IP to another
> IP, and this function IMO should not be modeled by a display entity.
> "Power, clocks, etc" will be handled via the control bus according to
> the panel's demands.
> If the 'DSI chip' has additional functions for video processing, they
> can be modeled by a CDF entity if it makes sense.
Now I don't follow. What do you mean by "display entity" and by "CDF
entity"? Are they the same?
Let me try to clarify my point:
On OMAP SoC we have a DSI encoder, which takes input from the display
controller in parallel RGB format, and outputs DSI.
Then there are external encoders that take MIPI DPI as input, and output
DSI.
The only difference with the above two components is that the first one
is embedded into the SoC. I see no reason to represent them in different
ways (i.e. as you suggested, not representing the SoC's DSI at all).
Also, if you use DSI burst mode, you will have to have different video
timings in the DSI encoder's input and output. And depending on the
buffering of the DSI encoder, you could have different timings in any case.
Furthermore, both components could have extra processing. I know the
external encoders sometimes do have features like scaling.
>> We still have two different endpoint configurations for the same
>> DSI-master port. If that configuration is in the DSI-master's port node,
>> not inside an endpoint data, then that can't be supported.
> I am not sure if I understand it correctly. But it seems quite simple:
> when the panel starts/resumes, it requests the DSI master (via the
> control bus) to apply its configuration settings.
> Of course there are some settings which are not panel dependent, and
> those should reside in the DSI node.
Exactly. And when the two panels require different non-panel-dependent
settings, how do you represent them in the DT data?
>>> We say then: callee handles locking :)
>> Sure, but my point was that the caller handling the locking is much
>> simpler than the callee handling locking. And the latter causes
>> atomicity issues, as the other API could be invoked in between two calls
>> for the first API.
>>
>>
> Could you describe such a scenario?
If we have two independent APIs, ctrl and video, that affect the same
underlying hardware, the DSI bus, we could have a scenario like this:
thread 1:
ctrl->op_foo();
ctrl->op_bar();
thread 2:
video->op_baz();
Even if all those ops do locking properly internally, the fact that
op_baz() can be called in between op_foo() and op_bar() may cause problems.
To avoid that issue with two APIs we'd need something like:
thread 1:
ctrl->lock();
ctrl->op_foo();
ctrl->op_bar();
ctrl->unlock();
thread 2:
video->lock();
video->op_baz();
video->unlock();
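In practice the two lock() calls above would have to end up taking one
shared lock in the DSI master, something like this (the types are
hypothetical, the mutex API is the normal kernel one):

#include <linux/mutex.h>

struct dsi_master_priv {
	struct mutex bus_lock;	/* serializes ctrl and video ops */
};

static void dsi_ctrl_lock(struct dsi_master_priv *dsi)
{
	mutex_lock(&dsi->bus_lock);
}

static void dsi_ctrl_unlock(struct dsi_master_priv *dsi)
{
	mutex_unlock(&dsi->bus_lock);
}

/* the video API's lock()/unlock() would take the same bus_lock, so a
 * ctrl sequence cannot be interleaved with a video op */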
>>> Platform devices
>>> ~~~~~~~~~~~~~~~~
>>> Platform devices are devices that typically appear as autonomous
>>> entities in the system. This includes legacy port-based devices and
>>> host bridges to peripheral buses, and most controllers integrated
>>> into system-on-chip platforms. What they usually have in common
>>> is direct addressing from a CPU bus. Rarely, a platform_device will
>>> be connected through a segment of some other kind of bus; but its
>>> registers will still be directly addressable.
>> Yep, "typically" and "rarely" =). I agree, it's not clear. I think there
>> are things with DBI/DSI that clearly point to a platform device, but
>> also the other way.
> Just to be sure, we are talking here about DSI-slaves, i.e. for example
> about panels, where direct access from the CPU bus is usually not
> possible.
Yes. My point is that with DBI/DSI there's not much bus there (where a
normal bus would be PCI/USB/i2c, etc.); it's just a point-to-point link
without probing or a clearly specified setup sequence.
If DSI/DBI was used only for control, a Linux bus would probably make
sense. But DSI/DBI is mainly a video transport channel, with the
control part being "secondary".
And when considering that the video and control data are sent over the
same channel (i.e. there's no separate, independent ctrl channel), and
the strict timing restrictions with video, my gut feeling is just that
all the extra complexity brought with separating the control to a
separate bus is not worth it.
Tomi
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 901 bytes --]
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH/RFC v3 00/19] Common Display Framework
2013-10-17 8:18 ` Tomi Valkeinen
@ 2013-10-17 12:26 ` Andrzej Hajda
2013-10-17 12:55 ` Tomi Valkeinen
0 siblings, 1 reply; 14+ messages in thread
From: Andrzej Hajda @ 2013-10-17 12:26 UTC (permalink / raw)
To: Tomi Valkeinen, Laurent Pinchart
Cc: linux-fbdev, dri-devel, Jesse Barnes, Benjamin Gaignard, Tom Gall,
Kyungmin Park, linux-media, Stephen Warren, Mark Zhang,
Alexandre Courbot, Ragesh Radhakrishnan, Thomas Petazzoni,
Sunil Joshi, Maxime Ripard, Vikas Sajjan, Marcus Lorentzon
On 10/17/2013 10:18 AM, Tomi Valkeinen wrote:
> On 17/10/13 10:48, Andrzej Hajda wrote:
>
>> The main function of DSI is to transport pixels from one IP to another
>> IP, and this function IMO should not be modeled by a display entity.
>> "Power, clocks, etc" will be handled via the control bus according to
>> the panel's demands.
>> If the 'DSI chip' has additional functions for video processing, they
>> can be modeled by a CDF entity if it makes sense.
> Now I don't follow. What do you mean by "display entity" and by "CDF
> entity"? Are they the same?
Yes, they are the same, sorry for the confusion.
>
> Let me try to clarify my point:
>
> On OMAP SoC we have a DSI encoder, which takes input from the display
> controller in parallel RGB format, and outputs DSI.
>
> Then there are external encoders that take MIPI DPI as input, and output
> DSI.
>
> The only difference with the above two components is that the first one
> is embedded into the SoC. I see no reason to represent them in different
> ways (i.e. as you suggested, not representing the SoC's DSI at all).
>
> Also, if you use DSI burst mode, you will have to have different video
> timings in the DSI encoder's input and output. And depending on the
> buffering of the DSI encoder, you could have different timings in any case.
I am not sure what exactly the encoder performs; if this is only image
transport from dispc to the panel, the CDF pipeline in both cases should
look like:
dispc ----> panel
The only difference is that the panels will be connected via different
Linux bus adapters, but that will be irrelevant to CDF itself. In this
case I would say this is a DSI-master rather than an encoder, or at
least that the only function of the encoder is DSI.
If the display_timings on the input and output differ, I suppose it
should be modeled as a display_entity, as this is additional
functionality (not covered by the DSI standard AFAIK).
CDF in such a case:
dispc ---> encoder ---> panel
In this case I would call it an encoder with a DSI master.
>
> Furthermore, both components could have extra processing. I know the
> external encoders sometimes do have features like scaling.
The same as above, ISP with embedded DSI.
>
>>> We still have two different endpoint configurations for the same
>>> DSI-master port. If that configuration is in the DSI-master's port node,
>>> not inside an endpoint data, then that can't be supported.
>> I am not sure if I understand it correctly. But it seems quite simple:
>> when the panel starts/resumes, it requests the DSI master (via the
>> control bus) to apply its configuration settings.
>> Of course there are some settings which are not panel dependent, and
>> those should reside in the DSI node.
> Exactly. And when the two panels require different non-panel-dependent
> settings, how do you represent them in the DT data?
A non-panel-dependent setting cannot depend on the panel, by definition :)
>
>>>> We say then: callee handles locking :)
>>> Sure, but my point was that the caller handling the locking is much
>>> simpler than the callee handling locking. And the latter causes
>>> atomicity issues, as the other API could be invoked in between two calls
>>> for the first API.
>>>
>>>
>> Could you describe such a scenario?
> If we have two independent APIs, ctrl and video, that affect the same
> underlying hardware, the DSI bus, we could have a scenario like this:
>
> thread 1:
>
> ctrl->op_foo();
> ctrl->op_bar();
>
> thread 2:
>
> video->op_baz();
>
> Even if all those ops do locking properly internally, the fact that
> op_baz() can be called in between op_foo() and op_bar() may cause problems.
>
> To avoid that issue with two APIs we'd need something like:
>
> thread 1:
>
> ctrl->lock();
> ctrl->op_foo();
> ctrl->op_bar();
> ctrl->unlock();
>
> thread 2:
>
> video->lock();
> video->op_baz();
> video->unlock();
I should mention I was asking about a real hw/driver configuration.
I do not know what you mean by video->op_baz()?
The DSI-master is not modeled in CDF, and only CDF provides video
operations.
I can guess one scenario, where two panels are connected to a single
DSI-master. In such a case both can call DSI ops, but I do not know how
you want to prevent that in the case of your CDF-T implementation.
>
>>>> Platform devices
>>>> ~~~~~~~~~~~~~~~~
>>>> Platform devices are devices that typically appear as autonomous
>>>> entities in the system. This includes legacy port-based devices and
>>>> host bridges to peripheral buses, and most controllers integrated
>>>> into system-on-chip platforms. What they usually have in common
>>>> is direct addressing from a CPU bus. Rarely, a platform_device will
>>>> be connected through a segment of some other kind of bus; but its
>>>> registers will still be directly addressable.
>>> Yep, "typically" and "rarely" =). I agree, it's not clear. I think there
>>> are things with DBI/DSI that clearly point to a platform device, but
>>> also the other way.
>> Just to be sure, we are talking here about DSI-slaves, i.e. for example
>> about panels, where direct access from the CPU bus is usually not
>> possible.
> Yes. My point is that with DBI/DSI there's not much bus there (where a
> normal bus would be PCI/USB/i2c, etc.); it's just a point-to-point link
> without probing or a clearly specified setup sequence.
This is why I considered replacing the DSI bus with the DSI-master as
the parent device and the panel as a slave platform_device, like in MFD
devices.
>
> If DSI/DBI was used only for control, a Linux bus would probably make
> sense. But DSI/DBI is mainly a video transport channel, with the
> control part being "secondary".
>
> And when considering that the video and control data are sent over the
> same channel (i.e. there's no separate, independent ctrl channel), and
> the strict timing restrictions with video, my gut feeling is just that
> all the extra complexity brought with separating the control to a
> separate bus is not worth it.
There is additional complexity due to the bus implementation
requirements (I would rather call it boilerplate code), but at the core
it is still a matter of ops.
With a Linux bus those ops are available only to the DSI-slave, which is
also a good thing, I guess.
Andrzej
>
> Tomi
>
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH/RFC v3 00/19] Common Display Framework
2013-10-17 12:26 ` Andrzej Hajda
@ 2013-10-17 12:55 ` Tomi Valkeinen
2013-10-18 11:55 ` Andrzej Hajda
0 siblings, 1 reply; 14+ messages in thread
From: Tomi Valkeinen @ 2013-10-17 12:55 UTC (permalink / raw)
To: Andrzej Hajda, Laurent Pinchart
Cc: linux-fbdev, dri-devel, Jesse Barnes, Benjamin Gaignard, Tom Gall,
Kyungmin Park, linux-media, Stephen Warren, Mark Zhang,
Alexandre Courbot, Ragesh Radhakrishnan, Thomas Petazzoni,
Sunil Joshi, Maxime Ripard, Vikas Sajjan, Marcus Lorentzon
[-- Attachment #1: Type: text/plain, Size: 4219 bytes --]
On 17/10/13 15:26, Andrzej Hajda wrote:
> I am not sure what exactly the encoder performs; if this is only image
> transport from dispc to the panel, the CDF pipeline in both cases
> should look like:
> dispc ----> panel
> The only difference is that the panels will be connected via different
> Linux bus adapters, but that will be irrelevant to CDF itself. In this
> case I would say this is a DSI-master rather than an encoder, or at
> least that the only function of the encoder is DSI.
Yes, as I said, it's up to the driver writer how he wants to use CDF. If
he doesn't see the point of representing the SoC's DSI encoder as a
separate CDF entity, nobody forces him to do that.
On OMAP, we have single DISPC with multiple parallel outputs, and a
bunch of encoder IPs (MIPI DPI, DSI, DBI, etc). Each encoder IP can be
connected to some of the DISPC's output. In this case, even if the DSI
encoder does nothing special, I see it much better to represent the DSI
encoder as a CDF entity so that the links between DISPC, DSI, and the
DSI peripherals are all there.
> If the display_timings on the input and output differ, I suppose it
> should be modeled as a display_entity, as this is additional
> functionality (not covered by the DSI standard AFAIK).
Well, DSI standard is about the DSI output. Not about the encoder's
input, or the internal operation of the encoder.
>>> Of course there are some settings which are not panel dependent and those
>>> should reside in DSI node.
>> Exactly. And when the two panels require different non-panel-dependent
>> settings, how do you represent them in the DT data?
>
> A non-panel-dependent setting cannot depend on the panel, by definition :)
With "non-panel-dependent" setting I meant something that is a property
of the DSI master device, but still needs to be configured differently
for each panel.
Say, pin configuration. When using panel A, the first pin of the DSI
block could be clock+. With panel B, the first pin could be clock-. This
configuration is about DSI master, but it is different for each panel.
If we have separate endpoint in the DSI master for each panel, this data
can be there. If we don't have the endpoint, as is the case with
separate control bus, where is that data?
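With per-panel endpoints, the DSI master driver can just parse that data
for each endpoint, roughly like this (a sketch using the dsi-lanes
property from the earlier example; error handling and the actual PHY
programming are left out):

#include <linux/kernel.h>
#include <linux/of.h>

static int dsi_parse_endpoint(const struct device_node *ep)
{
	u32 lanes[6];
	int ret;

	/* one entry per +/- line, as in the DT snippets above */
	ret = of_property_read_u32_array(ep, "dsi-lanes", lanes,
					 ARRAY_SIZE(lanes));
	if (ret)
		return ret;

	/* ... program the lane mux / polarity from lanes[] ... */
	return 0;
}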
>>> Could you describe such a scenario?
>> If we have two independent APIs, ctrl and video, that affect the same
>> underlying hardware, the DSI bus, we could have a scenario like this:
>>
>> thread 1:
>>
>> ctrl->op_foo();
>> ctrl->op_bar();
>>
>> thread 2:
>>
>> video->op_baz();
>>
>> Even if all those ops do locking properly internally, the fact that
>> op_baz() can be called in between op_foo() and op_bar() may cause problems.
>>
>> To avoid that issue with two APIs we'd need something like:
>>
>> thread 1:
>>
>> ctrl->lock();
>> ctrl->op_foo();
>> ctrl->op_bar();
>> ctrl->unlock();
>>
>> thread 2:
>>
>> video->lock();
>> video->op_baz();
>> video->unlock();
> I should mention I was asking about a real hw/driver configuration.
> I do not know what you mean by video->op_baz()?
> The DSI-master is not modeled in CDF, and only CDF provides video
> operations.
It was just an example of the additional complexity regarding locking
when using two APIs.
The point is that if the panel driver has two pointers (i.e. API), one
for the control bus, one for the video bus, and ops on both buses affect
the same hardware, the locking is not easy.
If, on the other hand, the panel driver only has one API to use, it's
simple to require the caller to handle any locking.
> I can guess one scenario, where two panels are connected to a single
> DSI-master. In such a case both can call DSI ops, but I do not know
> how you want to prevent that in the case of your CDF-T implementation.
No, that was not the case I was describing. This was about a single panel.
If we have two independent APIs, we need to define how locking is
managed for those APIs. Even if in practice both APIs are used by the
same driver, and the driver can manage the locking, that's not really a
valid requirement. It'd be almost the same as requiring that the gpio
API cannot be called at the same time as the i2c API.
Tomi
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 901 bytes --]
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH/RFC v3 00/19] Common Display Framework
2013-10-17 12:55 ` Tomi Valkeinen
@ 2013-10-18 11:55 ` Andrzej Hajda
0 siblings, 0 replies; 14+ messages in thread
From: Andrzej Hajda @ 2013-10-18 11:55 UTC (permalink / raw)
To: Tomi Valkeinen, Laurent Pinchart
Cc: linux-fbdev, dri-devel, Jesse Barnes, Benjamin Gaignard, Tom Gall,
Kyungmin Park, linux-media, Stephen Warren, Mark Zhang,
Alexandre Courbot, Ragesh Radhakrishnan, Thomas Petazzoni,
Sunil Joshi, Maxime Ripard, Vikas Sajjan, Marcus Lorentzon
On 10/17/2013 02:55 PM, Tomi Valkeinen wrote:
> On 17/10/13 15:26, Andrzej Hajda wrote:
>
>> I am not sure what exactly the encoder performs; if this is only image
>> transport from dispc to the panel, the CDF pipeline in both cases
>> should look like:
>> dispc ----> panel
>> The only difference is that the panels will be connected via different
>> Linux bus adapters, but that will be irrelevant to CDF itself. In this
>> case I would say this is a DSI-master rather than an encoder, or at
>> least that the only function of the encoder is DSI.
> Yes, as I said, it's up to the driver writer how he wants to use CDF. If
> he doesn't see the point of representing the SoC's DSI encoder as a
> separate CDF entity, nobody forces him to do that.
Having it as an entity would cause the 'problem' of two APIs as you
described below :)
One API via the control bus, another one via CDF.
>
> On OMAP, we have single DISPC with multiple parallel outputs, and a
> bunch of encoder IPs (MIPI DPI, DSI, DBI, etc). Each encoder IP can be
> connected to some of the DISPC's output. In this case, even if the DSI
> encoder does nothing special, I see it much better to represent the DSI
> encoder as a CDF entity so that the links between DISPC, DSI, and the
> DSI peripherals are all there.
>
>> If the display_timings on the input and output differ, I suppose it
>> should be modeled as a display_entity, as this is additional
>> functionality (not covered by the DSI standard AFAIK).
> Well, DSI standard is about the DSI output. Not about the encoder's
> input, or the internal operation of the encoder.
>
>>>> Of course there are some settings which are not panel dependent and those
>>>> should reside in DSI node.
>>> Exactly. And when the two panels require different non-panel-dependent
>>> settings, how do you represent them in the DT data?
>> A non-panel-dependent setting cannot depend on the panel, by definition :)
> With "non-panel-dependent" setting I meant something that is a property
> of the DSI master device, but still needs to be configured differently
> for each panel.
>
> Say, pin configuration. When using panel A, the first pin of the DSI
> block could be clock+. With panel B, the first pin could be clock-. This
> configuration is about DSI master, but it is different for each panel.
>
> If we have separate endpoint in the DSI master for each panel, this data
> can be there. If we don't have the endpoint, as is the case with
> separate control bus, where is that data?
I am open to propositions. To me it seems somewhat similar to the clock
mapping in DT (clock-names are mapped to provider clocks), so I think it
could be put in the panel node and parsed by the DSI-master.
>
>>>> Could you describe such a scenario?
>>> If we have two independent APIs, ctrl and video, that affect the same
>>> underlying hardware, the DSI bus, we could have a scenario like this:
>>>
>>> thread 1:
>>>
>>> ctrl->op_foo();
>>> ctrl->op_bar();
>>>
>>> thread 2:
>>>
>>> video->op_baz();
>>>
>>> Even if all those ops do locking properly internally, the fact that
>>> op_baz() can be called in between op_foo() and op_bar() may cause problems.
>>>
>>> To avoid that issue with two APIs we'd need something like:
>>>
>>> thread 1:
>>>
>>> ctrl->lock();
>>> ctrl->op_foo();
>>> ctrl->op_bar();
>>> ctrl->unlock();
>>>
>>> thread 2:
>>>
>>> video->lock();
>>> video->op_baz();
>>> video->unlock();
>> I should mention I was asking about a real hw/driver configuration.
>> I do not know what you mean by video->op_baz()?
>> The DSI-master is not modeled in CDF, and only CDF provides video
>> operations.
> It was just an example of the additional complexity regarding locking
> when using two APIs.
>
> The point is that if the panel driver has two pointers (i.e. API), one
> for the control bus, one for the video bus, and ops on both buses affect
> the same hardware, the locking is not easy.
>
> If, on the other hand, the panel driver only has one API to use, it's
> simple to require the caller to handle any locking.
I guess you are describing the scenario with the DSI-master having its
own entity. In such a case its video ops are accessible at least to all
pipeline neighbours and to the pipeline controller, so I do not see how
the client-side locking would work anyway.
Additionally, multiple panels connected to one DSI also make it harder.
Thus I do not see that the 'client lock' approach would work anyway,
even using the video-source approach.
Andrzej
^ permalink raw reply [flat|nested] 14+ messages in thread
end of thread
Thread overview: 14+ messages
[not found] <1376068510-30363-1-git-send-email-laurent.pinchart+renesas@ideasonboard.com>
[not found] ` <52498146.4050600@ti.com>
2013-10-02 12:23 ` [PATCH/RFC v3 00/19] Common Display Framework Andrzej Hajda
2013-10-02 13:24 ` Tomi Valkeinen
2013-10-09 14:08 ` Andrzej Hajda
2013-10-11 6:37 ` Tomi Valkeinen
2013-10-11 11:19 ` Andrzej Hajda
2013-10-11 12:30 ` Tomi Valkeinen
2013-10-11 14:16 ` Andrzej Hajda
2013-10-11 14:45 ` Tomi Valkeinen
2013-10-17 7:48 ` Andrzej Hajda
2013-10-17 8:18 ` Tomi Valkeinen
2013-10-17 12:26 ` Andrzej Hajda
2013-10-17 12:55 ` Tomi Valkeinen
2013-10-18 11:55 ` Andrzej Hajda
2013-08-09 23:02 Laurent Pinchart