* Binding together tegradrm & nvhost
@ 2012-08-20 13:01 Terje Bergström
0 siblings, 1 reply; 22+ messages in thread
From: Terje Bergström @ 2012-08-20 13:01 UTC (permalink / raw)
To: Thierry Reding,
linux-tegra-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, Mark Zhang,
Stephen Warren
Hi,
I've been trying to figure out the best way to bind together tegradrm
and nvhost. I assume that nvhost and tegradrm will live as separate
drivers, with tegradrm taking care of display controller, and nvhost
taking care of host1x and other client devices.
I've identified a few bumps that we need to agree on. For each, I've
included the problem and my proposal:
1) Device & driver registration
tegradrm registers as a platform_driver and exports ioctls. Here we
already have to agree on which device the platform_driver maps to.
Currently it maps to host1x, but we'll need to move control of host1x to
the nvhost driver. We'll need to pass drm_platform_init() some
platform_device - I propose that we create a virtual device for this.
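As a minimal sketch of what the virtual-device proposal could look like
on the nvhost side (the "tegradrm" device name is a placeholder and the
surrounding probe code is omitted; drm_platform_init() and
platform_device_register_simple() are existing kernel interfaces of this
era, so this is an assumption about wiring, not agreed code):

```
/* Hypothetical sketch: register a virtual platform device from the
 * nvhost/host1x probe path and hand it to the DRM core. The device
 * name "tegradrm" is illustrative only. Not buildable standalone. */
#include <linux/platform_device.h>
#include <drm/drmP.h>

extern struct drm_driver tegra_drm_driver;

static struct platform_device *tegra_drm_dev;

static int host1x_register_drm(void)
{
	tegra_drm_dev = platform_device_register_simple("tegradrm", -1,
							NULL, 0);
	if (IS_ERR(tegra_drm_dev))
		return PTR_ERR(tegra_drm_dev);

	/* bind the DRM driver against the virtual device */
	return drm_platform_init(&tegra_drm_driver, tegra_drm_dev);
}
```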
2) Device tree parsing
At bootup, we need to parse only the host1x node and create a device for
that. The host1x probe will need to dig into the host1x node to create
the children. This is something that we'll need to implement first in
the internal kernel. tegra-dc would get probed only after this sequence.
If this is ok, I'll take care of this part, and of adjustments to
tegradrm when this becomes topical.
We include the register addresses in the device tree. Information that
is still needed includes clocks, clock gating behavior, power domain
ids, the mapping of client devices to channels, and the mapping of sync
points per channel.
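For illustration, a hypothetical device tree fragment along these lines
(the register addresses follow the Tegra20 memory map, but the node
names and the commented-out channel/sync point properties are
placeholders, not an agreed binding):

```
/* Hypothetical host1x node: only host1x is probed directly; its probe
 * walks the children and creates their devices. */
host1x {
	compatible = "nvidia,tegra20-host1x", "simple-bus";
	reg = <0x50000000 0x00024000>;
	interrupts = <0 65 0x04>;

	dc@54200000 {
		compatible = "nvidia,tegra20-dc";
		reg = <0x54200000 0x00040000>;
		interrupts = <0 73 0x04>;
		/* still-missing data mentioned above, names invented: */
		/* nvidia,channel = <...>; nvidia,syncpts = <...>; */
	};

	dc@54240000 {
		compatible = "nvidia,tegra20-dc";
		reg = <0x54240000 0x00040000>;
		interrupts = <0 74 0x04>;
	};
};
```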
3) The handling of ioctls from user space
The ioctls represent the needed synchronization and channel
functionality. I'll write the necessary glue. There would be two
categories of ioctls:
3a) Simple operations such as synchronization:
Wait, signal, read, etc. are exported from nvhost as public APIs, and
tegradrm simply calls them. No big hurdle there. I already have
proof-of-concept code for this.
3b) Channel operations:
tegradrm needs to have a concept of a logical channel. Channel open
creates a logical channel (/context) by calling nvhost. nvhost needs to
know which hw the channel is going to use in order to control power and
to map it to a physical channel, so that comes as a parameter in the
ioctl.
Each channel operation needs to pass the channel id, and tegradrm passes
the calls on to nvhost. The most important operation is submit, which
sends a command buffer to nvhost's queue.
4) Buffer management
We already know that this is a missing part. Hopefully we can get this
filled soon.
Terje
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: Binding together tegradrm & nvhost
@ 2012-08-20 13:18 ` Thierry Reding
2012-08-21 4:57 ` Mark Zhang
2012-08-21 21:53 ` Stephen Warren
2 siblings, 1 reply; 22+ messages in thread
From: Thierry Reding @ 2012-08-20 13:18 UTC (permalink / raw)
To: Terje Bergström
Cc: linux-tegra-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, Mark Zhang,
Stephen Warren
On Mon, Aug 20, 2012 at 04:01:07PM +0300, Terje Bergström wrote:
> Hi,
>
> I've been trying to figure out the best way to bind together tegradrm
> and nvhost. I assume that nvhost and tegradrm will live as separate
> drivers, with tegradrm taking care of display controller, and nvhost
> taking care of host1x and other client devices.
>
> I've identified a few bumps that we need to agree on. I've included here the
> problem and my proposal:
>
> 1) Device & driver registration
> tegradrm registers as platform_driver, and exports ioctl's. Here we
> already have to agree on which device the platform_driver maps to.
> Currently it maps to host1x, but we'll need to move control of host1x to
> nvhost driver. We'll need to pass drm_platform_init() some
> platform_device - I propose that we create a virtual device for this.
>
> 2) Device tree parsing
> At bootup, we need to parse only host1x node and create a device for
> that. host1x probe will need to dig into host1x to create the children.
> This is something that we'll need to implement first in the internal
> kernel. tegra-dc would get probed only after this sequence. If this is
> ok, I'll take care of this part, and adjustments to tegradrm when this
> becomes topical.
I have a new patch series that takes care of these two steps. Mark sent
some patches for Tegra30 and HDMI on top of the older series that I need
to merge with what I have. Maybe I'll decide to send the series out
without those patches merged, depending on how much time I get or how
much effort it requires. I had hoped the next series would have working
HDMI support, which is why I waited.
Basically what I have is a very rudimentary driver for host1x, which
waits for some of the subdevices to be registered and then creates a
dummy device against which the Tegra DRM driver can bind. It's not quite
what you proposed above, but very similar.
For now I've also put the host1x driver in the same directory as the
Tegra DRM because there are no other users. We may want to change that
at some point.
> We include in device tree the register addresses. Some information that
> would be needed is still clocks, clock gating behavior, power domain
> ids, mapping of client devices to channels, and mapping of sync points
> per channel
>
> 3) The handling of ioctl's from user space
> The ioctl's represent the needed synchronization and channel
> functionality. I'll write the necessary glue. There would be two
> categories of ioctl's:
>
> 3a) Simple operations such as synchronization:
>
> Wait, signal, read, etc. are exported from nvhost as public APIs, and
> tegradrm simply calls them. No big hurdle there. I already have concept
> code to do this.
>
> 3b) Channel operations:
>
> tegradrm needs to have a concept of logical channel. Channel open
> creates a logical channel (/context) by calling nvhost. nvhost needs to
> know which hw is going to be used by the channel to be able to control
> power, and to map to physical channel, so that comes as a parameter in
> ioctl.
>
> Each channel operation needs to pass the channel id, and tegradrm passes
> the calls to nvhost. Most important operation is submit, which sends a
> command buffer to nvhost's queue.
Some thought will probably have to go into these. The easiest would
probably be to start with a driver that actually needs to do
synchronization or other channel operations; that would make the
requirements on the exact ioctls clearer.
> 4) Buffer management
> We already know that this is a missing part. Hopefully we can get this
> filled soon.
This should be cheap if we use GEM along with DMA-BUF. However, without
other drivers that the buffers can be shared with, this won't do us any
good. So maybe something like a video capture driver for Tegra should
be added first so we can actually test buffer sharing.
>
> Terje
>
* Re: Binding together tegradrm & nvhost
@ 2012-08-20 13:33 ` Terje Bergström
2012-08-21 3:50 ` Dennis Gilmore
2012-08-21 5:39 ` Mark Zhang
2 siblings, 0 replies; 22+ messages in thread
From: Terje Bergström @ 2012-08-20 13:33 UTC (permalink / raw)
To: Thierry Reding
Cc: linux-tegra-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, Mark Zhang,
Stephen Warren
On 20.08.2012 16:18, Thierry Reding wrote:
> I have a new patch series that takes care of these two steps. Mark sent
> some patches for Tegra30 and HDMI on top of the older series that I need
> to merge with what I have. Maybe I'll decide to send the series out
> without the patches merged, depending on how much time I'll get or how
> much effort it requires. I had hoped the next series would have working
> HDMI support, which is why I waited.
Ok. I have done my testing on top of Mark's changes, but I don't have
your latest code. It looks like you have already solved some of the
quirks I found while trying out tegradrm.
I'm working mostly on Tegra30, so it helps me if we have Tegra30 support.
Let's hope the freedesktop.org work area gets created quickly so that we
can keep our code bases in sync.
> Basically what I have is a very rudimentary driver for host1x, which
> waits for some of the subdevices to be registered and then creates a
> dummy device against which the Tegra DRM driver can bind. It's not quite
> what you proposed above, but very similar.
This sounds pretty good. I'll look into the subdevices implementation -
I might need to move their handling to nvhost, but until that happens
this scheme works fine.
> For now I've also put the host1x driver in the same directory as the
> Tegra DRM because there are no other users. We may want to change that
> at some point.
Ok. I can take care of exporting the host1x driver at the same time as
I get more functionality into it.
> Some thought will probably have to go into these. The easiest would
> probably be to have a driver that needs to do synchronization or other
> channel operations. It may make the requirements on the exact ioctls
> clearer.
Got it. Once channel support is in place, I'll also write a simple test
program that performs some simple accelerated operation with the 2D
unit. That test program can then serve as a template for other user
space code.
>> 4) Buffer management
> This should be cheap if we use GEM along with DMA-BUF. However without
> other drivers that the buffers can be shared with this won't do us any
> good. So maybe something like a video capturing driver for Tegra should
> be added first so we can actually test buffer sharing.
DMA-BUF could still help with sharing buffers between user space and the
kernel, and between user space processes, so not all of the benefits
require another driver.
Terje
* Re: Binding together tegradrm & nvhost
2012-08-20 13:33 ` Terje Bergström
@ 2012-08-21 3:50 ` Dennis Gilmore
2012-08-21 5:39 ` Mark Zhang
2 siblings, 0 replies; 22+ messages in thread
From: Dennis Gilmore @ 2012-08-21 3:50 UTC (permalink / raw)
To: Thierry Reding
Cc: Terje Bergström,
linux-tegra-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, Mark Zhang,
Stephen Warren
On Mon, 20 Aug 2012 15:18:01 +0200,
Thierry Reding <thierry.reding@avionic-design.de> wrote:
> On Mon, Aug 20, 2012 at 04:01:07PM +0300, Terje Bergström wrote:
> > Hi,
> >
> > I've been trying to figure out the best way to bind together
> > tegradrm and nvhost. I assume that nvhost and tegradrm will live as
> > separate drivers, with tegradrm taking care of display controller,
> > and nvhost taking care of host1x and other client devices.
> >
> > I've identified a few bumps that we need to agree on. I've included
> > here the problem and my proposal:
> >
> > 1) Device & driver registration
> > tegradrm registers as platform_driver, and exports ioctl's. Here we
> > already have to agree on which device the platform_driver maps to.
> > Currently it maps to host1x, but we'll need to move control of
> > host1x to nvhost driver. We'll need to pass drm_platform_init() some
> > platform_device - I propose that we create a virtual device for
> > this.
> >
> > 2) Device tree parsing
> > At bootup, we need to parse only host1x node and create a device for
> > that. host1x probe will need to dig into host1x to create the
> > children. This is something that we'll need to implement first in
> > the internal kernel. tegra-dc would get probed only after this
> > sequence. If this is ok, I'll take care of this part, and
> > adjustments to tegradrm when this becomes topical.
>
> I have a new patch series that takes care of these two steps. Mark
> sent some patches for Tegra30 and HDMI on top of the older series
> that I need to merge with what I have. Maybe I'll decide to send the
> series out without the patches merged, depending on how much time
> I'll get or how much effort it requires. I had hoped the next series
> would have working HDMI support, which is why I waited.
>
> Basically what I have is a very rudimentary driver for host1x, which
> waits for some of the subdevices to be registered and then creates a
> dummy device against which the Tegra DRM driver can bind. It's not
> quite what you proposed above, but very similar.
>
> For now I've also put the host1x driver in the same directory as the
> Tegra DRM because there are no other users. We may want to change that
> at some point.
>
> > We include in device tree the register addresses. Some information
> > that would be needed is still clocks, clock gating behavior, power
> > domain ids, mapping of client devices to channels, and mapping of
> > sync points per channel
> >
> > 3) The handling of ioctl's from user space
> > The ioctl's represent the needed synchronization and channel
> > functionality. I'll write the necessary glue. There would be two
> > categories of ioctl's:
> >
> > 3a) Simple operations such as synchronization:
> >
> > Wait, signal, read, etc. are exported from nvhost as public APIs,
> > and tegradrm simply calls them. No big hurdle there. I already have
> > concept code to do this.
> >
> > 3b) Channel operations:
> >
> > tegradrm needs to have a concept of logical channel. Channel open
> > creates a logical channel (/context) by calling nvhost. nvhost
> > needs to know which hw is going to be used by the channel to be
> > able to control power, and to map to physical channel, so that
> > comes as a parameter in ioctl.
> >
> > Each channel operation needs to pass the channel id, and tegradrm
> > passes the calls to nvhost. Most important operation is submit,
> > which sends a command buffer to nvhost's queue.
>
> Some thought will probably have to go into these. The easiest would
> probably be to have a driver that needs to do synchronization or other
> channel operations. It may make the requirements on the exact ioctls
> clearer.
>
> > 4) Buffer management
> > We already know that this is a missing part. Hopefully we can get
> > this filled soon.
>
> This should be cheap if we use GEM along with DMA-BUF. However without
> other drivers that the buffers can be shared with this won't do us any
> good. So maybe something like a video capturing driver for Tegra
> should be added first so we can actually test buffer sharing.
Can we please push to get tegradrm merged into staging in Linus's tree?
For Fedora to ship a working console and allow Tegra systems to run X
etc., we need it merged upstream. There are a lot of Fedora folks with
TrimSlices and other Tegra-based devices, and it would be good to see
them fully supported for Fedora 18. We are very short on time: we are
currently running 3.6-based kernels, and we follow Linus's tree very
closely, building all kernels from it with as little patching as
possible.
Dennis
* Re: Binding together tegradrm & nvhost
2012-08-20 13:18 ` Thierry Reding
@ 2012-08-21 4:57 ` Mark Zhang
2012-08-21 5:40 ` Terje Bergström
2012-08-21 21:53 ` Stephen Warren
2 siblings, 1 reply; 22+ messages in thread
From: Mark Zhang @ 2012-08-21 4:57 UTC (permalink / raw)
To: Terje Bergstrom
Cc: Thierry Reding,
linux-tegra-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
Stephen Warren
On Mon, 2012-08-20 at 21:01 +0800, Terje Bergstrom wrote:
> Hi,
>
> I've been trying to figure out the best way to bind together tegradrm
> and nvhost. I assume that nvhost and tegradrm will live as separate
> drivers, with tegradrm taking care of display controller, and nvhost
> taking care of host1x and other client devices.
>
> I've identified a few bumps that we need to agree on. I've included here the
> problem and my proposal:
>
> 1) Device & driver registration
> tegradrm registers as platform_driver, and exports ioctl's. Here we
> already have to agree on which device the platform_driver maps to.
> Currently it maps to host1x, but we'll need to move control of host1x to
> nvhost driver. We'll need to pass drm_platform_init() some
> platform_device - I propose that we create a virtual device for this.
>
This has been discussed several times. Indeed, we need a virtual device
for the drm driver. The problem is, where do we define it? It's not a
good idea to define it in the dt, we all agreed on that before. It's
also not good to define it in the code...
So, do you have any further proposal for this?
> 2) Device tree parsing
> At bootup, we need to parse only host1x node and create a device for
> that. host1x probe will need to dig into host1x to create the children.
> This is something that we'll need to implement first in the internal
> kernel. tegra-dc would get probed only after this sequence. If this is
> ok, I'll take care of this part, and adjustments to tegradrm when this
> becomes topical.
>
I know little about the host1x hardware. Does host1x have the
functionality to enumerate its children? If so, do we still need to
define these host1x child devices in the dt? Will the 2 dc devices be
enumerated and created during host1x's probe?
> We include in device tree the register addresses. Some information that
> would be needed is still clocks, clock gating behavior, power domain
> ids, mapping of client devices to channels, and mapping of sync points
> per channel
>
> 3) The handling of ioctl's from user space
> The ioctl's represent the needed synchronization and channel
> functionality. I'll write the necessary glue. There would be two
> categories of ioctl's:
>
> 3a) Simple operations such as synchronization:
>
> Wait, signal, read, etc. are exported from nvhost as public APIs, and
> tegradrm simply calls them. No big hurdle there. I already have concept
> code to do this.
>
Hm... I think in the last conference we agreed that the nvhost driver
will not have its own device file, so this kind of ioctl is going to be
routed to the tegra drm driver, which then passes these ioctls on to the
nvhost driver. Right?
> 3b) Channel operations:
>
> tegradrm needs to have a concept of logical channel. Channel open
> creates a logical channel (/context) by calling nvhost. nvhost needs to
> know which hw is going to be used by the channel to be able to control
> power, and to map to physical channel, so that comes as a parameter in
> ioctl.
>
> Each channel operation needs to pass the channel id, and tegradrm passes
> the calls to nvhost. Most important operation is submit, which sends a
> command buffer to nvhost's queue.
>
> 4) Buffer management
> We already know that this is a missing part. Hopefully we can get this
> filled soon.
>
I'm still not very clear about this part, so let me try to explain it.
Correct me if I'm wrong.
[Userspace]
Because dma-buf has no explicit userspace APIs, we consider GEM.
Userspace programs call GEM interfaces to create/close/flink/mmap the
buffers.
Besides, by using GEM PRIME's handle-to-fd ioctl, a userspace program is
able to convert a GEM handle to a dma-buf fd. This fd can be passed to a
kernel driver so that the driver gains the opportunity to access the
buffer.
[Kernel]
The DRM driver handles GEM buffer creation. Shmfs or CMA can be used as
backing storage. Right now CMA buffer allocation is wrapped by the dma
mapping APIs, and shmfs has its own individual APIs.
The DRM driver should export this buffer as a dma-buf after the GEM
buffer is created. Otherwise, drm prime can't get an fd from this gem
buffer handle later.
Currently I'm still confused by these problems:
1. A userspace program is able to get a dma-buf fd for a specific GEM
buffer. Is this a unique fd? I mean, can I pass this fd from one process
to another, so that other processes can access the same buffer? If the
answer is yes, does this mean we don't need GEM's "flink" functionality?
If the answer is no, GEM's "flink" makes sense.
2. How do we sync buffer operations between these different frameworks?
For example, GEM has its own buffer read/write/mmap interfaces, and
dma-buf has its own as well. So if a userspace program does something to
the buffer via the GEM APIs while a kernel driver is operating on the
same buffer via the dma-buf interfaces, what should we do? Because GEM
and dma-buf are different frameworks, where shall we set up a sync
mechanism?
> Terje
* Re: Binding together tegradrm & nvhost
2012-08-20 13:33 ` Terje Bergström
2012-08-21 3:50 ` Dennis Gilmore
@ 2012-08-21 5:39 ` Mark Zhang
2012-08-21 5:42 ` Thierry Reding
2 siblings, 1 reply; 22+ messages in thread
From: Mark Zhang @ 2012-08-21 5:39 UTC (permalink / raw)
To: Thierry Reding
Cc: Terje Bergstrom,
linux-tegra-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
Stephen Warren
On Mon, 2012-08-20 at 21:18 +0800, Thierry Reding wrote:
> On Mon, Aug 20, 2012 at 04:01:07PM +0300, Terje Bergström wrote:
> > Hi,
> >
> > I've been trying to figure out the best way to bind together tegradrm
> > and nvhost. I assume that nvhost and tegradrm will live as separate
> > drivers, with tegradrm taking care of display controller, and nvhost
> > taking care of host1x and other client devices.
> >
> > I've identified a few bumps that we need to agree on. I've included here the
> > problem and my proposal:
> >
> > 1) Device & driver registration
> > tegradrm registers as platform_driver, and exports ioctl's. Here we
> > already have to agree on which device the platform_driver maps to.
> > Currently it maps to host1x, but we'll need to move control of host1x to
> > nvhost driver. We'll need to pass drm_platform_init() some
> > platform_device - I propose that we create a virtual device for this.
> >
> > 2) Device tree parsing
> > At bootup, we need to parse only host1x node and create a device for
> > that. host1x probe will need to dig into host1x to create the children.
> > This is something that we'll need to implement first in the internal
> > kernel. tegra-dc would get probed only after this sequence. If this is
> > ok, I'll take care of this part, and adjustments to tegradrm when this
> > becomes topical.
>
> I have a new patch series that takes care of these two steps. Mark sent
> some patches for Tegra30 and HDMI on top of the older series that I need
> to merge with what I have. Maybe I'll decide to send the series out
> without the patches merged, depending on how much time I'll get or how
> much effort it requires. I had hoped the next series would have working
> HDMI support, which is why I waited.
OK. So I'm going to pause the patch writing right now and wait for your
code to be published on linux-tegra. Regardless of whether my patches
get merged into yours, I'll provide patches for the version which you
send to the mailing list in the near future.
>
> Basically what I have is a very rudimentary driver for host1x, which
> waits for some of the subdevices to be registered and then creates a
> dummy device against which the Tegra DRM driver can bind. It's not quite
> what you proposed above, but very similar.
>
> For now I've also put the host1x driver in the same directory as the
> Tegra DRM because there are no other users. We may want to change that
> at some point.
>
> > We include in device tree the register addresses. Some information that
> > would be needed is still clocks, clock gating behavior, power domain
> > ids, mapping of client devices to channels, and mapping of sync points
> > per channel
> >
> > 3) The handling of ioctl's from user space
> > The ioctl's represent the needed synchronization and channel
> > functionality. I'll write the necessary glue. There would be two
> > categories of ioctl's:
> >
> > 3a) Simple operations such as synchronization:
> >
> > Wait, signal, read, etc. are exported from nvhost as public APIs, and
> > tegradrm simply calls them. No big hurdle there. I already have concept
> > code to do this.
> >
> > 3b) Channel operations:
> >
> > tegradrm needs to have a concept of logical channel. Channel open
> > creates a logical channel (/context) by calling nvhost. nvhost needs to
> > know which hw is going to be used by the channel to be able to control
> > power, and to map to physical channel, so that comes as a parameter in
> > ioctl.
> >
> > Each channel operation needs to pass the channel id, and tegradrm passes
> > the calls to nvhost. Most important operation is submit, which sends a
> > command buffer to nvhost's queue.
>
> Some thought will probably have to go into these. The easiest would
> probably be to have a driver that needs to do synchronization or other
> channel operations. It may make the requirements on the exact ioctls
> clearer.
>
> > 4) Buffer management
> > We already know that this is a missing part. Hopefully we can get this
> > filled soon.
>
> This should be cheap if we use GEM along with DMA-BUF. However without
> other drivers that the buffers can be shared with this won't do us any
> good. So maybe something like a video capturing driver for Tegra should
> be added first so we can actually test buffer sharing.
>
> >
> > Terje
> >
>
* Re: Binding together tegradrm & nvhost
2012-08-21 4:57 ` Mark Zhang
@ 2012-08-21 5:40 ` Terje Bergström
0 siblings, 1 reply; 22+ messages in thread
From: Terje Bergström @ 2012-08-21 5:40 UTC (permalink / raw)
To: Mark Zhang
Cc: Thierry Reding,
linux-tegra-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
Stephen Warren
On 21.08.2012 07:57, Mark Zhang wrote:
> On Mon, 2012-08-20 at 21:01 +0800, Terje Bergstrom wrote:
>> I propose that we create a virtual device for this.
> This has been discussed several times. Indeed, we need a virtual device
> for the drm driver. The problem is, where do we define it? It's not a
> good idea to define it in the dt, we all agreed on that before. It's
> also not good to define it in the code...
> So, do you have any further proposal for this?
Let's see what Thierry has come up with once he gets the code up. It
seems he has solved this somehow.
> I know little about the host1x hardware. Does host1x have the
> functionality to enumerate its children? If so, do we still need to
> define these host1x child devices in the dt? Will the 2 dc devices be
> enumerated and created during host1x's probe?
No, host1x doesn't have any probing functionality. Everything must be
known by software beforehand.
The dc devices can be created in the host1x probe at the same time as
the rest. I checked the exynos driver and it seems to create the
subdevices/drivers in the drm load callback. I don't know if it matters
whether it's done in the load or the probe phase.
> Hm... I think in the last conference we agreed that the nvhost driver
> will not have its own device file, so this kind of ioctl is going to be
> routed to the tegra drm driver, which then passes these ioctls on to the
> nvhost driver. Right?
Yes, nvhost will export an in-kernel API so that tegradrm can call
nvhost to implement the functionality. tegradrm will handle all the
ioctl-related infra, and nvhost will handle the hardware interaction.
In our own kernel variant nvhost also has an ioctl API, but that won't
exist in the upstream version.
> I'm still not very clear about this part, so let me try to explain it.
> Correct me if I'm wrong.
> [Userspace]
> Because dma-buf has no explicit userspace APIs, we consider GEM.
> Userspace programs call GEM interfaces to create/close/flink/mmap the
> buffers.
> Besides, by using GEM PRIME's handle-to-fd ioctl, a userspace program is
> able to convert a GEM handle to a dma-buf fd. This fd can be passed to a
> kernel driver so that the driver gains the opportunity to access the
> buffer.
Yes, correct. We can (naively) consider GEM to be the API towards user
space, and dma-buf the kernel-side implementation. We can consider
whether we need to implement GEM flink() at all, though. Please see
below for why.
> [Kernel]
> The DRM driver handles GEM buffer creation. Shmfs or CMA can be used as
> backing storage. Right now CMA buffer allocation is wrapped by the dma
> mapping APIs, and shmfs has its own individual APIs.
> The DRM driver should export this buffer as a dma-buf after the GEM
> buffer is created. Otherwise, drm prime can't get an fd from this gem
> buffer handle later.
We can just allocate memory with the dma mapping API, use the IOMMU for
handling the mapping to hardware, and use dma-buf for mapping to user
and kernel space. I don't think we need shmfs.
> Currently I'm still confused by these problems:
> 1. A userspace program is able to get a dma-buf fd for a specific GEM
> buffer. Is this a unique fd? I mean, can I pass this fd from one process
> to another, so that other processes can access the same buffer? If the
> answer is yes, does this mean we don't need GEM's "flink" functionality?
> If the answer is no, GEM's "flink" makes sense.
A user space process can send the fd to another process via a unix
socket, and the other process can import the fd to gain access to the
same memory. This is more secure than flink, which (if I understand
correctly) allows anybody with knowledge of the name to access the
buffer.
> 2. How do we sync buffer operations between these different frameworks?
> For example, GEM has its own buffer read/write/mmap interfaces, and
> dma-buf has its own as well. So if a userspace program does something to
> the buffer via the GEM APIs while a kernel driver is operating on the
> same buffer via the dma-buf interfaces, what should we do? Because GEM
> and dma-buf are different frameworks, where shall we set up a sync
> mechanism?
User space must take care that it does not access the buffer once it has
given the buffer to hw. We can't enforce that, but we can provide an API
to help. The API relies on fences, which map to sync points in hardware.
When user space sends an operation to a host1x client, it will be given
a fence, which maps to a pair of sync point register number and value.
The operation will ask the host1x client to signal the fence via host1x
(=sync point increment). We will provide ioctls to user space so that it
can check whether a buffer is safe to reuse, and an operation to wait
for the fence.
For dc, I haven't checked what kinds of operations on buffers there will
be. We'll probably need dc to allocate a fence from nvhost (=sync point
increment max), and to increment the sync point when an event has
completed. This way we can pass the fence to user space and let user
space wait for it, so user space will know when a buffer that was passed
to dc is free to be reused.
In Linaro's mm-sig there is discussion on generalizing this
synchronization mechanism.
Terje
* Re: Binding together tegradrm & nvhost
2012-08-21 5:39 ` Mark Zhang
@ 2012-08-21 5:42 ` Thierry Reding
0 siblings, 1 reply; 22+ messages in thread
From: Thierry Reding @ 2012-08-21 5:42 UTC (permalink / raw)
To: Mark Zhang
Cc: Terje Bergstrom,
linux-tegra-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
Stephen Warren
On Tue, Aug 21, 2012 at 01:39:21PM +0800, Mark Zhang wrote:
> On Mon, 2012-08-20 at 21:18 +0800, Thierry Reding wrote:
> > I have a new patch series that takes care of these two steps. Mark sent
> > some patches for Tegra30 and HDMI on top of the older series that I need
> > to merge with what I have. Maybe I'll decide to send the series out
> > without the patches merged, depending on how much time I'll get or how
> > much effort it requires. I had hoped the next series would have working
> > HDMI support, which is why I waited.
>
> OK. So I'm going to pause the patch writing right now and wait for your
> code to be published on linux-tegra. Regardless of whether my patches
> are merged into yours, I'll provide patches for the version which you
> send to the mailing list in the near future.
Let me see if I can upload my patches to the repository on gitorious.
I'm not sure when exactly I find time to merge your patches, and I don't
want to hold you up.
Thierry
[-- Attachment #2: Type: application/pgp-signature, Size: 836 bytes --]
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: Binding together tegradrm & nvhost
[not found] ` <50331F32.4040903-DDmLM1+adcrQT0dZR+AlfA@public.gmane.org>
@ 2012-08-21 6:12 ` Mark Zhang
2012-08-21 6:35 ` Terje Bergström
0 siblings, 1 reply; 22+ messages in thread
From: Mark Zhang @ 2012-08-21 6:12 UTC (permalink / raw)
To: Terje Bergstrom
Cc: Thierry Reding,
linux-tegra-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
Stephen Warren
On Tue, 2012-08-21 at 13:40 +0800, Terje Bergstrom wrote:
> On 21.08.2012 07:57, Mark Zhang wrote:
> > On Mon, 2012-08-20 at 21:01 +0800, Terje Bergstrom wrote:
> >> I propose that we create a virtual device for this.
> > This has been discussed several times. Indeed, we need a virtual device
> > for the drm driver. The problem is, where do we define it? It's not a
> > good idea to define it in dt; we all agreed on that before. Also, it's
> > not good to define it in the code...
> > So, do you have any further proposal about this?
>
> Let's see what Thierry came up with once he gets the code up. It seems
> he solved this somehow.
>
> > I know little about the host1x hardware. I want to know: does host1x
> > have the functionality to enumerate its children? If so, do we still
> > need to define these host1x child devices in the dt? Will the 2 dc
> > devices be enumerated and created during host1x's probe?
>
> No, host1x doesn't have any probing functionality. Everything must be
> known by software beforehand.
>
> The dc devices can be created in host1x probe at the same time as the
> rest. I checked the exynos driver and it seems to create the
> subdevices/drivers in the drm load callback. I don't know whether it
> matters if it's done in the load or probe phase.
>
OK, thank you. In the current version, all devices are created by
"of_platform_populate" in the board init function. So if we still need
to define devices in dt, what's the benefit of moving this device
creation work into host1x's probe function? I don't see any difference,
although creating devices in host1x probe() sounds more reasonable...
> > Hm... I think in the last conference we agreed that the nvhost driver
> > will not have its own device file, so these ioctls are going to be
> > routed to the tegra drm driver, which then passes them to the nvhost
> > driver. Right?
>
> Yes, nvhost will export an in-kernel API so that tegradrm can call
> nvhost to implement the functionality. tegradrm will handle all the
> ioctl-related infra, and nvhost will handle the hardware interaction.
>
> In our own kernel variant, nvhost also has an ioctl API, but that won't
> exist in the upstream version.
>
Got it.
> > I'm still not very clear about this part. So let me try to explain it.
> > Correct me if I'm wrong.
> > [Userspace]
> > Because dma-buf has no explicit userspace APIs, we consider GEM.
> > Userspace programs call GEM interfaces to create/close/flink/mmap the
> > buffers.
> > Besides, by using GEM PRIME's handle-to-fd ioctl, a userspace program
> > is able to convert a GEM handle to a dma-buf fd. This fd can be passed
> > to a kernel driver so that the driver gains the opportunity to access
> > the buffer.
>
> Yes, correct. We can (naively) consider GEM to be the API towards user
> space, and dma-buf as the kernel-side implementation. We can consider
> whether we need to implement GEM flink(), though. Please see below why.
>
> > [Kernel]
> > The DRM driver handles GEM buffer creation. Shmfs or CMA can be used
> > as backing storage. Right now CMA buffer allocation is wrapped by the
> > DMA mapping APIs and shmfs has its own APIs.
> > The DRM driver should export this buffer as a dma-buf after the GEM
> > buffer is created. Otherwise, drm prime can't get an fd from this gem
> > buffer handle later.
>
> We can just allocate memory with the DMA mapping API and use the IOMMU
> for handling the mapping to hardware, and dma-buf for mapping to user
> and kernel space. I don't think we need shmfs.
>
Agree. Shmfs is not mandatory; the DMA mapping API + IOMMU works. Even
without an IOMMU, CMA works too.
> > Currently I'm still confused by these problems:
> > 1. A userspace program is able to get a dma-buf fd for a specific GEM
> > buffer. Is this a unique fd? I mean, can I pass this fd from one
> > process to another, so that other processes can access the same
> > buffer? If the answer is yes, does this mean we don't need GEM's
> > "flink" functionality? If the answer is no, GEM's "flink" makes sense.
>
> A user space process can send the fd to another process via a Unix
> socket, and the other process can import the fd to gain access to the
> same memory. This is more secure than flink, which (if I understand
> correctly) allows anybody with knowledge of the name to access the
> buffer.
>
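For reference, the fd passing described above is the standard
SCM_RIGHTS mechanism of Unix sockets. A minimal userspace sketch (with
an ordinary pipe fd standing in for a dma-buf fd) might look like:

```c
#include <assert.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

/* Send one file descriptor over a Unix socket as SCM_RIGHTS ancillary
 * data. For the case discussed above, fd would be the dma-buf fd. */
static int send_fd(int sock, int fd)
{
    char byte = 'F';
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    union { char buf[CMSG_SPACE(sizeof(int))]; struct cmsghdr align; } ctrl;
    struct msghdr msg = { 0 };

    memset(&ctrl, 0, sizeof(ctrl));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl.buf;
    msg.msg_controllen = sizeof(ctrl.buf);

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}

/* Receive a descriptor sent with send_fd(); the kernel installs a new
 * fd in the receiving process referring to the same underlying object. */
static int recv_fd(int sock)
{
    char byte;
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    union { char buf[CMSG_SPACE(sizeof(int))]; struct cmsghdr align; } ctrl;
    struct msghdr msg = { 0 };
    int fd = -1;

    memset(&ctrl, 0, sizeof(ctrl));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl.buf;
    msg.msg_controllen = sizeof(ctrl.buf);

    if (recvmsg(sock, &msg, 0) != 1)
        return -1;

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    if (cmsg && cmsg->cmsg_type == SCM_RIGHTS)
        memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
    return fd;
}
```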
> > 2. How to sync buffer operations between these different frameworks?
> > For example, GEM has its own buffer read/write/mmap interfaces, while
> > dma-buf does as well. So if a userspace program does something on the
> > buffer via GEM APIs, while a kernel driver is operating on the same
> > buffer via dma-buf interfaces, what should we do? Because GEM and
> > dma-buf are different frameworks, where shall we set up a sync
> > mechanism?
>
> User space must take care not to access a buffer while the buffer has
> been handed to hw. We can't enforce this, but we can provide an API to
> help. The API relies on fences, which map to sync points in hardware.
>
> When user space sends an operation to a host1x client, it will be given
> a fence, which maps to a pair of sync point register number and value.
> The operation will ask the host1x client to signal the fence via host1x
> (= a sync point increment). We will provide ioctls so that user space
> can check whether a buffer is safe to reuse, and an operation to wait
> for the fence.
>
> For dc, I haven't checked what kinds of operations on buffers there
> will be. We'll probably need dc to allocate a fence from nvhost
> (= sync point increment max), and increment the sync point when an
> event has completed. We can then pass the fence to user space and let
> user space wait for it, so user space will know when a buffer that was
> passed to dc is free to be reused.
>
OK. So we have fences to sync all operations on a specific buffer. This
also means we should add fence support to both the GEM and dma-buf
implementations, right?
> In Linaro's mm-sig there is discussion on generalizing this
> synchronization mechanism.
>
> Terje
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: Binding together tegradrm & nvhost
[not found] ` <20120821054256.GA5325-RM9K5IK7kjIQXX3q8xo1gnVAuStQJXxyR5q1nwbD4aMs9pC9oP6+/A@public.gmane.org>
@ 2012-08-21 6:16 ` Mark Zhang
2012-08-21 6:21 ` Thierry Reding
2012-08-21 14:57 ` Thierry Reding
0 siblings, 2 replies; 22+ messages in thread
From: Mark Zhang @ 2012-08-21 6:16 UTC (permalink / raw)
To: Thierry Reding
Cc: Terje Bergstrom,
linux-tegra-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
Stephen Warren
On Tue, 2012-08-21 at 13:42 +0800, Thierry Reding wrote:
> * PGP Signed by an unknown key
>
> On Tue, Aug 21, 2012 at 01:39:21PM +0800, Mark Zhang wrote:
> > On Mon, 2012-08-20 at 21:18 +0800, Thierry Reding wrote:
> > > I have a new patch series that takes care of these two steps. Mark sent
> > > some patches for Tegra30 and HDMI on top of the older series that I need
> > > to merge with what I have. Maybe I'll decide to send the series out
> > > without the patches merged, depending on how much time I'll get or how
> > > much effort it requires. I had hoped the next series would have working
> > > HDMI support, which is why I waited.
> >
> > OK. So I'm going to pause the patch writing right now and wait for your
> > code to be published on linux-tegra. Regardless of whether my patches
> > are merged into yours, I'll provide patches for the version which you
> > send to the mailing list in the near future.
>
> Let me see if I can upload my patches to the repository on gitorious.
> I'm not sure when exactly I find time to merge your patches, and I don't
> want to hold you up.
>
All right, thank you. Let us know when you finish uploading.
By the way, any updates from freedesktop?
> Thierry
>
> * Unknown Key
> * 0x7F3EB3A1
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: Binding together tegradrm & nvhost
2012-08-21 6:16 ` Mark Zhang
@ 2012-08-21 6:21 ` Thierry Reding
2012-08-21 14:57 ` Thierry Reding
1 sibling, 0 replies; 22+ messages in thread
From: Thierry Reding @ 2012-08-21 6:21 UTC (permalink / raw)
To: Mark Zhang
Cc: Terje Bergstrom,
linux-tegra-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
Stephen Warren
[-- Attachment #1: Type: text/plain, Size: 1376 bytes --]
On Tue, Aug 21, 2012 at 02:16:01PM +0800, Mark Zhang wrote:
> On Tue, 2012-08-21 at 13:42 +0800, Thierry Reding wrote:
> > * PGP Signed by an unknown key
> >
> > On Tue, Aug 21, 2012 at 01:39:21PM +0800, Mark Zhang wrote:
> > > On Mon, 2012-08-20 at 21:18 +0800, Thierry Reding wrote:
> > > > I have a new patch series that takes care of these two steps. Mark sent
> > > > some patches for Tegra30 and HDMI on top of the older series that I need
> > > > to merge with what I have. Maybe I'll decide to send the series out
> > > > without the patches merged, depending on how much time I'll get or how
> > > > much effort it requires. I had hoped the next series would have working
> > > > HDMI support, which is why I waited.
> > >
> > > OK. So I'm going to pause the patch writing right now and wait for your
> > > code to be published on linux-tegra. Regardless of whether my patches
> > > are merged into yours, I'll provide patches for the version which you
> > > send to the mailing list in the near future.
> >
> > Let me see if I can upload my patches to the repository on gitorious.
> > I'm not sure when exactly I find time to merge your patches, and I don't
> > want to hold you up.
> >
>
> All right, thank you. Let us know when you finish uploading.
Will do.
> By the way, any updates from freedesktop?
No, no updates yet.
Thierry
[-- Attachment #2: Type: application/pgp-signature, Size: 836 bytes --]
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: Binding together tegradrm & nvhost
2012-08-21 6:12 ` Mark Zhang
@ 2012-08-21 6:35 ` Terje Bergström
[not found] ` <50332C22.7090009-DDmLM1+adcrQT0dZR+AlfA@public.gmane.org>
0 siblings, 1 reply; 22+ messages in thread
From: Terje Bergström @ 2012-08-21 6:35 UTC (permalink / raw)
To: Mark Zhang
Cc: Thierry Reding,
linux-tegra-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
Stephen Warren
On 21.08.2012 09:12, Mark Zhang wrote:
> OK, thank you. In the current version, all devices are created by
> "of_platform_populate" in the board init function. So if we still need
> to define devices in dt, what's the benefit of moving this device
> creation work into host1x's probe function? I don't see any difference,
> although creating devices in host1x probe() sounds more reasonable...
Until I have managed to integrate nvhost into tegradrm, device creation
should be done as it is done now. With nvhost, we will need extra data
per device, so we'll need to create the devices in nvhost.
> OK. So we have fences to sync all operations on a specific buffer. This
> also means we should add fence support to both the GEM and dma-buf
> implementations, right?
We'll have fences for operations, not buffers. User space must figure
out that once an operation that reads from/writes to a buffer is
complete, the buffer is ready to be reused.
The discussion in mm-sig attaches fences to buffers, which would cause a
lot of synchronization logic to be added to the kernel. Each operation
can work on multiple buffers, some in read-only, some in read-write,
some in write-only mode, so we'd end up returning an array of fences in
a complicated structure.
It's simpler if the kernel just knows when an operation ends, and lets
user space take care of the complexity.
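In user space, this amounts to remembering, per buffer, the fence of the
last operation that touched it. A toy model of that bookkeeping (all
names hypothetical, not a real nvhost interface):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical user-space bookkeeping: each submitted operation gets
 * one fence, and user space records that fence against every buffer
 * the operation touches. */
#define MAX_BUFFERS 8

static uint32_t last_fence[MAX_BUFFERS]; /* last fence touching buffer */
static uint32_t syncpt;      /* current sync point value           */
static uint32_t syncpt_max;  /* highest fence handed out so far    */

/* Submit an operation touching the given buffers; it will be complete
 * when the sync point reaches the returned fence value. */
static uint32_t submit(const int *bufs, int nbufs)
{
    uint32_t fence = ++syncpt_max;
    for (int i = 0; i < nbufs; i++)
        last_fence[bufs[i]] = fence;
    return fence;
}

/* The host1x client increments the sync point when an operation ends. */
static void operation_completed(void)
{
    syncpt++;
}

/* A buffer is safe to reuse once the last operation that touched it
 * has completed (wraparound-safe comparison). */
static bool buffer_reusable(int buf)
{
    return (int32_t)(syncpt - last_fence[buf]) >= 0;
}
```

The kernel only tracks the per-operation fences; working out which
buffers each fence covers stays entirely in user space, as described.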
Terje
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: Binding together tegradrm & nvhost
[not found] ` <50332C22.7090009-DDmLM1+adcrQT0dZR+AlfA@public.gmane.org>
@ 2012-08-21 7:12 ` Mark Zhang
2012-08-21 21:57 ` Stephen Warren
1 sibling, 0 replies; 22+ messages in thread
From: Mark Zhang @ 2012-08-21 7:12 UTC (permalink / raw)
To: Terje Bergström
Cc: Thierry Reding,
linux-tegra-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
Stephen Warren
On Tue, 2012-08-21 at 14:35 +0800, Terje Bergström wrote:
> On 21.08.2012 09:12, Mark Zhang wrote:
> > OK, thank you. In the current version, all devices are created by
> > "of_platform_populate" in the board init function. So if we still
> > need to define devices in dt, what's the benefit of moving this
> > device creation work into host1x's probe function? I don't see any
> > difference, although creating devices in host1x probe() sounds more
> > reasonable...
>
> Until I have managed to integrate nvhost into tegradrm, device creation
> should be done as it is done now. With nvhost, we will need extra data
> per device, so we'll need to create the devices in nvhost.
>
OK. Thank you.
> > OK. So we have fences to sync all operations on a specific buffer.
> > This also means we should add fence support to both the GEM and
> > dma-buf implementations, right?
>
> We'll have fences for operations, not buffers. User space must figure
> out that once an operation that reads from/writes to a buffer is
> complete, the buffer is ready to be reused.
> The discussion in mm-sig attaches fences to buffers, which would cause
> a lot of synchronization logic to be added to the kernel. Each
> operation can work on multiple buffers, some in read-only, some in
> read-write, some in write-only mode, so we'd end up returning an array
> of fences in a complicated structure.
>
OK. I'll have a look at discussions in mm-sig.
> It's simpler if the kernel just knows when an operation ends, and lets
> user space take care of the complexity.
>
> Terje
> --
> To unsubscribe from this list: send the line "unsubscribe linux-tegra" in
> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: Binding together tegradrm & nvhost
2012-08-21 6:16 ` Mark Zhang
2012-08-21 6:21 ` Thierry Reding
@ 2012-08-21 14:57 ` Thierry Reding
[not found] ` <20120821145709.GA701-RM9K5IK7kjIQXX3q8xo1gnVAuStQJXxyR5q1nwbD4aMs9pC9oP6+/A@public.gmane.org>
1 sibling, 1 reply; 22+ messages in thread
From: Thierry Reding @ 2012-08-21 14:57 UTC (permalink / raw)
To: Mark Zhang
Cc: Terje Bergstrom,
linux-tegra-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
Stephen Warren
[-- Attachment #1: Type: text/plain, Size: 2136 bytes --]
On Tue, Aug 21, 2012 at 02:16:01PM +0800, Mark Zhang wrote:
> On Tue, 2012-08-21 at 13:42 +0800, Thierry Reding wrote:
> > * PGP Signed by an unknown key
> >
> > On Tue, Aug 21, 2012 at 01:39:21PM +0800, Mark Zhang wrote:
> > > On Mon, 2012-08-20 at 21:18 +0800, Thierry Reding wrote:
> > > > I have a new patch series that takes care of these two steps. Mark sent
> > > > some patches for Tegra30 and HDMI on top of the older series that I need
> > > > to merge with what I have. Maybe I'll decide to send the series out
> > > > without the patches merged, depending on how much time I'll get or how
> > > > much effort it requires. I had hoped the next series would have working
> > > > HDMI support, which is why I waited.
> > >
> > > OK. So I'm going to pause the patch writing right now and wait for your
> > > code to be published on linux-tegra. Regardless of whether my patches
> > > are merged into yours, I'll provide patches for the version which you
> > > send to the mailing list in the near future.
> >
> > Let me see if I can upload my patches to the repository on gitorious.
> > I'm not sure when exactly I find time to merge your patches, and I don't
> > want to hold you up.
> >
>
> All right, thank you. Let us know when you finish uploading.
> By the way, any updates from freedesktop?
Finally the upload is done. You can find my latest work in the next
branch of the following repository:
git://gitorious.org/linux-tegra-drm/linux-tegra-drm.git
It's down to two patches and based on next-20120820. Eventually I was
going to maybe split it up a bit to separate at least the hunks for
arch/ (Stephen will probably want this anyway) and maybe also some
functionality like HDMI support.
Mark: If you find the time it'd be great if you could rebase your
patches on top of that branch. I think for a first submission to
mainline some things probably need to be cleaned up and the history
should probably be clean, so we may want to squash some of the patches
that you sent into the existing ones. For now it's probably best if you
continue your work on top of this.
Thierry
[-- Attachment #2: Type: application/pgp-signature, Size: 836 bytes --]
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: Binding together tegradrm & nvhost
[not found] ` <50323513.3090606-DDmLM1+adcrQT0dZR+AlfA@public.gmane.org>
2012-08-20 13:18 ` Thierry Reding
2012-08-21 4:57 ` Mark Zhang
@ 2012-08-21 21:53 ` Stephen Warren
[not found] ` <50340343.1050206-3lzwWm7+Weoh9ZMKESR00Q@public.gmane.org>
2 siblings, 1 reply; 22+ messages in thread
From: Stephen Warren @ 2012-08-21 21:53 UTC (permalink / raw)
To: Terje Bergström
Cc: Thierry Reding,
linux-tegra-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, Mark Zhang,
Stephen Warren
On 08/20/2012 07:01 AM, Terje Bergström wrote:
> Hi,
>
> I've been trying to figure out the best way to bind together tegradrm
> and nvhost. I assume that nvhost and tegradrm will live as separate
> drivers, with tegradrm taking care of display controller, and nvhost
> taking care of host1x and other client devices.
>
> I've identified a few bumps that we need to agree on. I've included
> here the problem and my proposal:
>
> 1) Device & driver registration
> tegradrm registers as platform_driver, and exports ioctl's. Here we
> already have to agree on which device the platform_driver maps to.
> Currently it maps to host1x, but we'll need to move control of host1x to
> nvhost driver. We'll need to pass drm_platform_init() some
> platform_device - I propose that we create a virtual device for this.
I don't think there's any need for a virtual device. There's one device
in HW, and that can be represented by a single device object within the
kernel. There's nothing then stopping that device exposing multiple
APIs, i.e. providing host1x APIs to clients, and also instantiating the
tegra-drm driver directly on top of the host1x device.
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: Binding together tegradrm & nvhost
[not found] ` <50332C22.7090009-DDmLM1+adcrQT0dZR+AlfA@public.gmane.org>
2012-08-21 7:12 ` Mark Zhang
@ 2012-08-21 21:57 ` Stephen Warren
[not found] ` <50340445.6010908-3lzwWm7+Weoh9ZMKESR00Q@public.gmane.org>
1 sibling, 1 reply; 22+ messages in thread
From: Stephen Warren @ 2012-08-21 21:57 UTC (permalink / raw)
To: Terje Bergström
Cc: Mark Zhang, Thierry Reding,
linux-tegra-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
Stephen Warren
On 08/21/2012 12:35 AM, Terje Bergström wrote:
> On 21.08.2012 09:12, Mark Zhang wrote:
>> OK, thank you. In the current version, all devices are created by
>> "of_platform_populate" in the board init function. So if we still need
>> to define devices in dt, what's the benefit of moving this device
>> creation work into host1x's probe function? I don't see any difference,
>> although creating devices in host1x probe() sounds more reasonable...
>
> Until I have managed to integrate nvhost into tegradrm, device creation
> should be done as it is done now. With nvhost, we will need extra data
> per device, so we'll need to create the devices in nvhost.
I don't believe that has any impact on how the devices need to be created.
Both the following should be equally workable:
a)
* Each device gets instantiated as a platform device through simple
of_platform_populate.
* Each driver parses the device node for any information needed by a
host1x client. This parsing could be implemented via a helper function.
The driver can then register the device with host1x, passing in the
host1x-client-information parsed from DT.
b)
* host1x driver enumerates all the clients (sub-nodes) manually.
* As part of the enumeration, the host1x driver parses information from
the client nodes in order to create the device.
* Drivers for host1x devices get probed based on the devices created in
the previous step.
(a) sounds a heck of a lot simpler, because we don't end up creating a
new bus type etc., which in previous conversations you'd mentioned ended
up duplicating a lot of the logic already in the platform bus driver.
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: Binding together tegradrm & nvhost
[not found] ` <20120821145709.GA701-RM9K5IK7kjIQXX3q8xo1gnVAuStQJXxyR5q1nwbD4aMs9pC9oP6+/A@public.gmane.org>
@ 2012-08-22 2:29 ` Mark Zhang
2012-08-22 8:42 ` Terje Bergström
1 sibling, 0 replies; 22+ messages in thread
From: Mark Zhang @ 2012-08-22 2:29 UTC (permalink / raw)
To: Thierry Reding
Cc: Terje Bergstrom,
linux-tegra-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
Stephen Warren
On Tue, 2012-08-21 at 22:57 +0800, Thierry Reding wrote:
> * PGP Signed by an unknown key
>
> On Tue, Aug 21, 2012 at 02:16:01PM +0800, Mark Zhang wrote:
> > On Tue, 2012-08-21 at 13:42 +0800, Thierry Reding wrote:
> > > > Old Signed by an unknown key
> > >
> > > On Tue, Aug 21, 2012 at 01:39:21PM +0800, Mark Zhang wrote:
> > > > On Mon, 2012-08-20 at 21:18 +0800, Thierry Reding wrote:
> > > > > I have a new patch series that takes care of these two steps. Mark sent
> > > > > some patches for Tegra30 and HDMI on top of the older series that I need
> > > > > to merge with what I have. Maybe I'll decide to send the series out
> > > > > without the patches merged, depending on how much time I'll get or how
> > > > > much effort it requires. I had hoped the next series would have working
> > > > > HDMI support, which is why I waited.
> > > >
> > > > OK. So I'm going to pause the patch writing right now and wait for your
> > > > code to be published on linux-tegra. Regardless of whether my patches
> > > > are merged into yours, I'll provide patches for the version which you
> > > > send to the mailing list in the near future.
> > >
> > > Let me see if I can upload my patches to the repository on gitorious.
> > > I'm not sure when exactly I find time to merge your patches, and I don't
> > > want to hold you up.
> > >
> >
> > All right, thank you. Let us know when you finish uploading.
> > By the way, any updates from freedesktop?
>
> Finally the upload is done. You can find my latest work in the next
> branch of the following repository:
>
> git://gitorious.org/linux-tegra-drm/linux-tegra-drm.git
>
> It's down to two patches and based on next-20120820. Eventually I was
> going to maybe split it up a bit to separate at least the hunks for
> arch/ (Stephen will probably want this anyway) and maybe also some
> functionality like HDMI support.
>
> Mark: If you find the time it'd be great if you could rebase your
> patches on top of that branch. I think for a first submission to
> mainline some things probably need to be cleaned up and the history
> should probably be clean, so we may want to squash some of the patches
> that you sent into the existing ones. For now it's probably best if you
> continue your work on top of this.
>
Yes, I can rebase my work on top of yours.
Besides, let's use this repository until the repository at freedesktop
is created, and push our changes to it as soon as possible. That way we
can keep track of each other's progress and avoid duplicate work.
Mark
> Thierry
>
> * Unknown Key
> * 0x7F3EB3A1
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: Binding together tegradrm & nvhost
[not found] ` <50340445.6010908-3lzwWm7+Weoh9ZMKESR00Q@public.gmane.org>
@ 2012-08-22 5:54 ` Thierry Reding
0 siblings, 0 replies; 22+ messages in thread
From: Thierry Reding @ 2012-08-22 5:54 UTC (permalink / raw)
To: Stephen Warren
Cc: Terje Bergström, Mark Zhang,
linux-tegra-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
Stephen Warren
[-- Attachment #1: Type: text/plain, Size: 2644 bytes --]
On Tue, Aug 21, 2012 at 03:57:25PM -0600, Stephen Warren wrote:
> On 08/21/2012 12:35 AM, Terje Bergström wrote:
> > On 21.08.2012 09:12, Mark Zhang wrote:
> >> OK, thank you. In the current version, all devices are created by
> >> "of_platform_populate" in the board init function. So if we still
> >> need to define devices in dt, what's the benefit of moving this
> >> device creation work into host1x's probe function? I don't see any
> >> difference, although creating devices in host1x probe() sounds more
> >> reasonable...
> >
> > Until I have managed to integrate nvhost into tegradrm, device
> > creation should be done as it is done now. With nvhost, we will need
> > extra data per device, so we'll need to create the devices in nvhost.
>
> I don't believe that has any impact on how the devices need to be created.
>
> Both the following should be equally workable:
>
> a)
>
> * Each device gets instantiated as a platform device through simple
> of_platform_populate.
>
> * Each driver parses the device node for any information needed by a
> host1x client. This parsing could be implemented via a helper function.
>
> The driver can then register the device with host1x, passing in the
> host1x-client-information parsed from DT.
>
> b)
>
> * host1x driver enumerates all the clients (sub-nodes) manually.
>
> * As part of the enumeration, the host1x driver parses information from
> the client nodes in order to create the device.
>
> * Drivers for host1x devices get probed based on the devices created in
> the previous step.
>
> (a) sounds a heck of a lot simpler, because we don't end up creating a
> new bus type etc., which in previous conversations you'd mentioned
> ended up duplicating a lot of the logic already in the platform bus
> driver.
The former is what I've implemented in the latest series. host1x will
obviously be probed before any of the child devices because it is their
parent. Client drivers rely on the parent-child relationship to obtain a
reference to the host1x and register themselves with a call to the
host1x_register_client() function. Each struct host1x_client is required
to have some fields prefilled by the driver, like .ops or .dev. An extra
field for the channel can easily be added. I think we previously agreed
that sync points could be allocated on an as-needed basis, so that
assignment can happen in host1x_register_client().
I find this implementation very lightweight, and it is very close to how
other subsystems (input, fb, I2C, SPI, ...) work. And after all, host1x
isn't anything other than a small subsystem.
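A rough sketch of what such a client structure and its registration
might look like. Only host1x_register_client(), the .ops and .dev
fields, and the on-demand sync point assignment are taken from the mail;
every other name and detail is a guess, modelled here in plain C:

```c
#include <assert.h>
#include <stddef.h>

struct device; /* stand-in for the kernel's struct device */

/* Guessed contents; the mail only says an ops structure exists. */
struct host1x_client_ops {
    int (*init)(void);
};

struct host1x_client {
    struct device *dev;                  /* prefilled by the driver */
    const struct host1x_client_ops *ops; /* prefilled by the driver */
    unsigned int channel;                /* assigned at registration */
    unsigned int syncpt;                 /* allocated on demand      */
    struct host1x_client *next;
};

static struct host1x_client *clients;    /* registered client list */
static unsigned int next_channel, next_syncpt;

/* Registration assigns a channel and allocates a sync point as needed,
 * mirroring the "allocated on an as-needed basis" idea from the mail. */
static int host1x_register_client(struct host1x_client *client)
{
    if (!client->ops)
        return -1; /* required field not prefilled */
    client->channel = next_channel++;
    client->syncpt = next_syncpt++;
    client->next = clients;
    clients = client;
    return 0;
}
```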
Thierry
[-- Attachment #2: Type: application/pgp-signature, Size: 836 bytes --]
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: Binding together tegradrm & nvhost
[not found] ` <50340343.1050206-3lzwWm7+Weoh9ZMKESR00Q@public.gmane.org>
@ 2012-08-22 6:49 ` Thierry Reding
0 siblings, 0 replies; 22+ messages in thread
From: Thierry Reding @ 2012-08-22 6:49 UTC (permalink / raw)
To: Stephen Warren
Cc: Terje Bergström,
linux-tegra-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, Mark Zhang,
Stephen Warren
[-- Attachment #1: Type: text/plain, Size: 1807 bytes --]
On Tue, Aug 21, 2012 at 03:53:07PM -0600, Stephen Warren wrote:
> On 08/20/2012 07:01 AM, Terje Bergström wrote:
> > Hi,
> >
> > I've been trying to figure out the best way to bind together tegradrm
> > and nvhost. I assume that nvhost and tegradrm will live as separate
> > drivers, with tegradrm taking care of display controller, and nvhost
> > taking care of host1x and other client devices.
> >
> > I've identified a few bumps that we need to agree on. I've included
> > here the problem and my proposal:
> >
> > 1) Device & driver registration
> > tegradrm registers as platform_driver, and exports ioctl's. Here we
> > already have to agree on which device the platform_driver maps to.
> > Currently it maps to host1x, but we'll need to move control of host1x to
> > nvhost driver. We'll need to pass drm_platform_init() some
> > platform_device - I propose that we create a virtual device for this.
>
> I don't think there's any need for a virtual device. There's one device
> in HW, and that can be represented by a single device object within the
> kernel. There's nothing then stopping that device exposing multiple
> APIs, i.e. providing host1x APIs to clients, and also instantiating the
> tegra-drm driver directly on top of the host1x device.
The problem with the host1x platform device is that we already
associate host1x's private data with it. drm_platform_init() will
eventually override that with the struct drm_device. That's the
reason for the drm_soc patch that I've included. It basically
creates a child device of host1x that the DRM driver can bind to
in order to side-step the issue.
This isn't as hackish as it may sound because the DRM device is
essentially a virtual device and no platform device would really
be a good choice.
Thierry
[-- Attachment #2: Type: application/pgp-signature, Size: 836 bytes --]
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: Binding together tegradrm & nvhost
[not found] ` <20120821145709.GA701-RM9K5IK7kjIQXX3q8xo1gnVAuStQJXxyR5q1nwbD4aMs9pC9oP6+/A@public.gmane.org>
2012-08-22 2:29 ` Mark Zhang
@ 2012-08-22 8:42 ` Terje Bergström
[not found] ` <50349B58.4000809-DDmLM1+adcrQT0dZR+AlfA@public.gmane.org>
1 sibling, 1 reply; 22+ messages in thread
From: Terje Bergström @ 2012-08-22 8:42 UTC (permalink / raw)
To: Thierry Reding
Cc: Mark Zhang, linux-tegra-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
Stephen Warren
On 21.08.2012 17:57, Thierry Reding wrote:
> Finally the upload is done. You can find my latest work in the next
> branch of the following repository:
>
> git://gitorious.org/linux-tegra-drm/linux-tegra-drm.git
>
> It's down to two patches and based on next-20120820. Eventually I was
> going to maybe split it up a bit to separate at least the hunks for
> arch/ (Stephen will probably want this anyway) and maybe also some
> functionality like HDMI support.
Hi,
What's the purpose of the drm_clients list? I notice that the drivers
register into that list, but I can't see how the list is used. I feel
like I'm missing something.
This patch ties all of the drivers to host1x. As not all host1x clients
are going to be controlled via DRM, we need to eventually decouple
these. I'm expecting the host1x hardware to be controlled by the nvhost
driver, and tegradrm will just call nvhost when it needs host1x.
I know it's a bit difficult to understand in concrete terms what I am
after as long as I haven't uploaded any code. The memory management is
one obstacle that needs some code before nvhost is usable in the
upstream kernel, and the lack of device tree support is another.
If you wish me to make the required changes while I'm integrating
nvhost, just let me know.
Terje
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: Binding together tegradrm & nvhost
From: Thierry Reding @ 2012-08-22 10:33 UTC (permalink / raw)
To: Terje Bergström
Cc: Mark Zhang, linux-tegra-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
Stephen Warren
On Wed, Aug 22, 2012 at 11:42:00AM +0300, Terje Bergström wrote:
> On 21.08.2012 17:57, Thierry Reding wrote:
> > Finally the upload is done. You can find my latest work in the next
> > branch of the following repository:
> >
> > git://gitorious.org/linux-tegra-drm/linux-tegra-drm.git
> >
> > It's down to two patches and based on next-20120820. Eventually I may
> > split it up a bit to separate at least the hunks for
> > arch/ (Stephen will probably want this anyway) and maybe also some
> > functionality like HDMI support.
>
> Hi,
>
> What's the purpose of the drm_clients list? I notice that the drivers
> register into that list, but I can't see how the list is used. I feel
> like I'm missing something.
This is used to determine when the DRM driver can be safely initialized.
Basically the host1x driver scans the DT for display controllers and
outputs and adds them to this list if they are available. When the
drivers for those devices call host1x_register_client(), they'll be moved
to the drm_active list and once the drm_clients list becomes empty,
the DRM driver is registered using the call to drm_soc_init().
> This patch ties all of the drivers to host1x. As not all host1x clients
> are going to be controlled via DRM, we need to eventually decouple
> these. I'm expecting the host1x hardware to be controlled by the nvhost
> driver, and tegradrm will just call nvhost when it needs host1x.
I'm not sure how this can be solved any better than the above. All the
drivers are inherently tied to host1x anyway. That's why I suggested
putting the host1x driver along with the DRM driver in the last version
of these patches so that we can get the initial support written. If the
functionality is required by other drivers we have two options, either
the API is exported from the DRM driver or the host1x driver is moved to
a more central location where other drivers can use it.
> I know it's a bit difficult to understand in concrete terms what I am
> after as long as I haven't uploaded any code. The memory management
> is one obstacle that needs some code before nvhost is usable in the
> upstream kernel, and lack of device tree support is another.
That's precisely why I think it's better to get going with the DRM part
and keep the tight coupling for now. It can always be split off at a
later point in time should the need arise.
Thierry
* Re: Binding together tegradrm & nvhost
From: Terje Bergström @ 2012-08-22 11:42 UTC (permalink / raw)
To: Thierry Reding
Cc: Mark Zhang, linux-tegra-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
Stephen Warren
On 22.08.2012 13:33, Thierry Reding wrote:
> This is used to determine when the DRM driver can be safely initialized.
> Basically the host1x driver scans the DT for display controllers and
> outputs and adds them to this list if they are available. When the
> drivers for those devices call host1x_register_client(), they'll be moved
> to the drm_active list and once the drm_clients list becomes empty,
> the DRM driver is registered using the call to drm_soc_init().
Ok, thanks for clarifying.
> I'm not sure how this can be solved any better than the above. All the
> drivers are inherently tied to host1x anyway. That's why I suggested
> putting the host1x driver along with the DRM driver in the last version
> of these patches so that we can get the initial support written. If the
> functionality is required by other drivers we have two options, either
> the API is exported from the DRM driver or the host1x driver is moved to
> a more central location where other drivers can use it.
Ok, let's go with tight coupling for now. Once I have code ready, I'll
make the changes needed to decouple things.
Terje
end of thread, other threads:[~2012-08-22 11:42 UTC | newest]
Thread overview: 22+ messages
2012-08-20 13:01 Binding together tegradrm & nvhost Terje Bergström
2012-08-20 13:18 ` Thierry Reding
2012-08-20 13:33 ` Terje Bergström
2012-08-21 3:50 ` Dennis Gilmore
2012-08-21 5:39 ` Mark Zhang
2012-08-21 5:42 ` Thierry Reding
2012-08-21 6:16 ` Mark Zhang
2012-08-21 6:21 ` Thierry Reding
2012-08-21 14:57 ` Thierry Reding
2012-08-22 2:29 ` Mark Zhang
2012-08-22 8:42 ` Terje Bergström
2012-08-22 10:33 ` Thierry Reding
2012-08-22 11:42 ` Terje Bergström
2012-08-21 4:57 ` Mark Zhang
2012-08-21 5:40 ` Terje Bergström
2012-08-21 6:12 ` Mark Zhang
2012-08-21 6:35 ` Terje Bergström
2012-08-21 7:12 ` Mark Zhang
2012-08-21 21:57 ` Stephen Warren
2012-08-22 5:54 ` Thierry Reding
2012-08-21 21:53 ` Stephen Warren
2012-08-22 6:49 ` Thierry Reding