* xf86-video-tegra or xf86-video-modesetting?
@ 2012-11-24 21:09 Thierry Reding
[not found] ` <20121124210916.GB27042-RM9K5IK7kjIyiCvfTdI0JKcOhU4Rzj621B7CTYaBSLdn68oJJulU0Q@public.gmane.org>
0 siblings, 1 reply; 16+ messages in thread
From: Thierry Reding @ 2012-11-24 21:09 UTC (permalink / raw)
To: xorg-devel-go0+a7rfsptAfugRpC6u6w
Cc: Dave Airlie, linux-tegra-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: Type: text/plain, Size: 1082 bytes --]
Hi,
With tegra-drm going into Linux 3.8 and NVIDIA posting initial patches
for 2D acceleration on top of it, I've been looking at the various ways
in which this can best be leveraged.
The most obvious choice would be to start work on an xf86-video-tegra
driver that uses the code currently in the works to implement the EXA
callbacks that allow some of the rendering to be offloaded to the GPU.
The way I would go about this is to fork xf86-video-modesetting, do some
rebranding and add the various bits required to offload rendering.
However, that has all the usual drawbacks of a fork, so I thought maybe
it would be better to add some code to xf86-video-modesetting that
provides GPU-specific acceleration on top. Such code could be leveraged
by other drivers as well, and all of them could share a common base for
the functionality provided through the standard DRM IOCTLs.
That approach has some disadvantages of its own, like the potential
bloat if many GPUs do the same. It would also be a bit of a step back
to the old monolithic days of X.
So what do other people think?
Thierry
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: xf86-video-tegra or xf86-video-modesetting?
@ 2012-11-24 22:54 ` Lucas Stach
2012-11-25 13:37 ` Thierry Reding
2012-11-25 15:47 ` Terje Bergström
2012-11-25 11:45 ` Michal Suchanek
4 siblings, 2 replies; 16+ messages in thread
From: Lucas Stach @ 2012-11-24 22:54 UTC (permalink / raw)
To: Thierry Reding
Cc: xorg-devel-go0+a7rfsptAfugRpC6u6w, Dave Airlie,
linux-tegra-u79uwXL29TY76Z2rM5mHXA
On Saturday, 24.11.2012 at 22:09 +0100, Thierry Reding wrote:
> Hi,
>
> With tegra-drm going into Linux 3.8 and NVIDIA posting initial patches
> for 2D acceleration on top of it, I've been looking at the various ways
> how this can best be leveraged.
>
> The most obvious choice would be to start work on an xf86-video-tegra
> driver that uses the code currently in the works to implement the EXA
> callbacks that allow some of the rendering to be offloaded to the GPU.
> The way I would go about this is to fork xf86-video-modesetting, do some
> rebranding and add the various bits required to offload rendering.
>
As much as I dislike saying this, forking the modesetting driver to
bring in the Tegra-specific 2D accel might be the best way to go for
now, especially given the very limited resources available for
tegradrm development and NVIDIA's expressed desire to make as few
changes as possible to their downstream work.
> However, that has all the usual drawbacks of a fork so I thought maybe
> it would be better to write some code to xf86-video-modesetting to add
> GPU-specific acceleration on top. Such code could be leveraged by other
> drivers as well and all of them could share a common base for the
> functionality provided through the standard DRM IOCTLs.
>
We don't have any standard DRM IOCTLs for doing acceleration today. The
mere fact that we are stitching together command streams in userspace
for execution by the GPU renders a common interface unusable. We don't
even have a common interface to allocate GPU resources suitable for
acceleration: the dumb IOCTLs are only guaranteed to give you a buffer
the display engine can scan out from; nothing in there lets you set up
fancier things like tiling, which might be needed to operate on the
buffer with other engines.
> That approach has some disadvantages of its own, like the potential
> bloat if many GPUs do the same. It would also be a bit of a step back
> to the old monolithic days of X.
>
For some thoughts on how a unified accelerated driver for various
hardware devices could be done, I would like to point at my presentation
at this year's XDC.
However, doing this right might prove to be a major task, so as I
already said, it might be more worthwhile to just put the Tegra-specific
bits into a fork of the modesetting driver.
Regards,
Lucas
* Re: xf86-video-tegra or xf86-video-modesetting?
2012-11-24 22:54 ` Lucas Stach
@ 2012-11-25 11:45 ` Michal Suchanek
2012-11-26 2:51 ` Alex Deucher
4 siblings, 1 reply; 16+ messages in thread
From: Michal Suchanek @ 2012-11-25 11:45 UTC (permalink / raw)
To: Thierry Reding
Cc: xorg-devel-go0+a7rfsptAfugRpC6u6w,
linux-tegra-u79uwXL29TY76Z2rM5mHXA, Dave Airlie
Hello,
On 24 November 2012 22:09, Thierry Reding
<thierry.reding-RM9K5IK7kjKj5M59NBduVrNAH6kLmebB@public.gmane.org> wrote:
> However, that has all the usual drawbacks of a fork so I thought maybe
> it would be better to write some code to xf86-video-modesetting to add
> GPU-specific acceleration on top. Such code could be leveraged by other
> drivers as well and all of them could share a common base for the
> functionality provided through the standard DRM IOCTLs.
>
> That approach has some disadvantages of its own, like the potential
> bloat if many GPUs do the same. It would also be a bit of a step back
> to the old monolithic days of X.
>
You can always make the tegra part a submodule if you think putting it
into the driver directly is too monolithic.
Thanks
Michal
* Re: xf86-video-tegra or xf86-video-modesetting?
2012-11-24 22:54 ` Lucas Stach
@ 2012-11-25 13:37 ` Thierry Reding
2012-11-25 15:47 ` Terje Bergström
1 sibling, 1 reply; 16+ messages in thread
From: Thierry Reding @ 2012-11-25 13:37 UTC (permalink / raw)
To: Lucas Stach
Cc: xorg-devel-go0+a7rfsptAfugRpC6u6w, Dave Airlie,
linux-tegra-u79uwXL29TY76Z2rM5mHXA
On Sat, Nov 24, 2012 at 11:54:32PM +0100, Lucas Stach wrote:
> On Saturday, 24.11.2012 at 22:09 +0100, Thierry Reding wrote:
> > Hi,
> >
> > With tegra-drm going into Linux 3.8 and NVIDIA posting initial patches
> > for 2D acceleration on top of it, I've been looking at the various ways
> > how this can best be leveraged.
> >
> > The most obvious choice would be to start work on an xf86-video-tegra
> > driver that uses the code currently in the works to implement the EXA
> > callbacks that allow some of the rendering to be offloaded to the GPU.
> > The way I would go about this is to fork xf86-video-modesetting, do some
> > rebranding and add the various bits required to offload rendering.
> >
> As much as I dislike to say this, but forking the modesetting driver to
> bring in the Tegra specific 2D accel might be the best way to go for
> now. Especially looking at the very limited resources available to
> tegradrm development and NVIDIAs expressed desire to do as few changes
> as possible to their downstream work.
So true. But I'm not sure it's a good excuse not to do things the
right way, even if it ends up being more work. If the general situation
can be improved, then I think it's worth the effort.
> > However, that has all the usual drawbacks of a fork so I thought maybe
> > it would be better to write some code to xf86-video-modesetting to add
> > GPU-specific acceleration on top. Such code could be leveraged by other
> > drivers as well and all of them could share a common base for the
> > functionality provided through the standard DRM IOCTLs.
> >
> We don't have any standard DRM IOCTLs for doing acceleration today. The
> single fact that we are stitching together command streams in userspace
> for execution by the GPU renders a common interface unusable. We don't
> even have a common interface to allocate GPU resources suitable for
> acceleration: the dumb IOCTLs are only guaranteed to give you a buffer
> the display engine can scan out from, nothing in there let's you set up
> more fancy things like tiling etc, which might be needed to operate on
> the buffer with other engines in some way.
By the common base that could be shared, I meant all the modesetting
code and framebuffer setup that xf86-video-modesetting already does.
I've been wanting to add support for planes as well, which comes with
another set of standard IOCTLs in DRM.
Rewriting all of that in different drivers doesn't seem very desirable
to me and sounds like a lot of wasted effort. And that's not counting
the maintenance burden of keeping up with the latest changes in the
generic modesetting driver.
> > That approach has some disadvantages of its own, like the potential
> > bloat if many GPUs do the same. It would also be a bit of a step back
> > to the old monolithic days of X.
> >
> For some thoughts about how a unified accelerated driver for various
> hardware devices could be done I would like to point at my presentation
> at this years XDC.
> However doing this right might prove as a major task, so as I already
> said it might be more worthwhile to just stuff the Tegra specific bits
> into a fork of the modesetting driver.
One major advantage of putting this into the modesetting driver is
that, as more hardware support is added, common patterns might emerge
and make it easier to refactor things into generic code.
Thierry
* Re: xf86-video-tegra or xf86-video-modesetting?
@ 2012-11-25 13:40 ` Thierry Reding
0 siblings, 0 replies; 16+ messages in thread
From: Thierry Reding @ 2012-11-25 13:40 UTC (permalink / raw)
To: Michal Suchanek
Cc: xorg-devel-go0+a7rfsptAfugRpC6u6w,
linux-tegra-u79uwXL29TY76Z2rM5mHXA, Dave Airlie
On Sun, Nov 25, 2012 at 12:45:07PM +0100, Michal Suchanek wrote:
> Hello,
>
> On 24 November 2012 22:09, Thierry Reding
> <thierry.reding-RM9K5IK7kjKj5M59NBduVrNAH6kLmebB@public.gmane.org> wrote:
>
> > However, that has all the usual drawbacks of a fork so I thought maybe
> > it would be better to write some code to xf86-video-modesetting to add
> > GPU-specific acceleration on top. Such code could be leveraged by other
> > drivers as well and all of them could share a common base for the
> > functionality provided through the standard DRM IOCTLs.
> >
> > That approach has some disadvantages of its own, like the potential
> > bloat if many GPUs do the same. It would also be a bit of a step back
> > to the old monolithic days of X.
> >
>
> You can always make the tegra part a submodule if you think putting it
> into the driver directly is too monolithic.
Good point. I suppose it could also be made selectable at compile time,
similar to how things are done in libdrm. I'm not particularly worried
about the monolithic aspect, but I wanted to bring it up for discussion
and see what other people think.
Thierry
* Re: xf86-video-tegra or xf86-video-modesetting?
2012-11-24 22:54 ` Lucas Stach
2012-11-25 13:37 ` Thierry Reding
@ 2012-11-25 15:47 ` Terje Bergström
1 sibling, 0 replies; 16+ messages in thread
From: Terje Bergström @ 2012-11-25 15:47 UTC (permalink / raw)
To: Lucas Stach
Cc: Thierry Reding,
xorg-devel-go0+a7rfsptAfugRpC6u6w@public.gmane.org, Dave Airlie,
linux-tegra-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
On 25.11.2012 00:54, Lucas Stach wrote:
> As much as I dislike to say this, but forking the modesetting driver to
> bring in the Tegra specific 2D accel might be the best way to go for
> now. Especially looking at the very limited resources available to
> tegradrm development and NVIDIAs expressed desire to do as few changes
> as possible to their downstream work.
Which changes to the downstream work would these be? What changes would
help in getting the acceleration support into the modesetting driver?
We've made quite a lot of changes in downstream nvhost to help
upstreaming, and I'll keep downstream nvhost as close to upstream as
possible when/if nvhost wiggles its way into upstream. But you might be
talking about something else.
> We don't have any standard DRM IOCTLs for doing acceleration today. The
> single fact that we are stitching together command streams in userspace
> for execution by the GPU renders a common interface unusable. We don't
Command streams for Tegra 2D are easiest to stitch together in user
space. We have discussed the possibility of implementing some simple
operations in the kernel, too, e.g. using the 2D engine to clear or
copy a memory region. But in the end there are not many reasons to
implement that in the kernel rather than in user space.
Do we even have to attempt standardizing the IOCTLs? The standardization
could happen at the libdrm level. We've already implemented some common
2D operations for Tegra 2D, but we haven't yet settled the question of
where the code should live; in our internal tree it's currently inside
libdrm. None of the other GPUs supported by libdrm implement
acceleration inside libdrm, so if we follow that convention, we'll just
provide a libtegradrm with the 2D operations. That would require a
Tegra-specific modesetting driver.
Path of least resistance?
Terje
* Re: xf86-video-tegra or xf86-video-modesetting?
2012-11-24 22:54 ` Lucas Stach
2012-11-25 11:45 ` Michal Suchanek
@ 2012-11-26 2:51 ` Alex Deucher
2012-11-26 5:56 ` Mark Zhang
2012-11-26 17:45 ` Aaron Plattner
4 siblings, 1 reply; 16+ messages in thread
From: Alex Deucher @ 2012-11-26 2:51 UTC (permalink / raw)
To: Thierry Reding
Cc: xorg-devel-go0+a7rfsptAfugRpC6u6w,
linux-tegra-u79uwXL29TY76Z2rM5mHXA, Dave Airlie
On Sat, Nov 24, 2012 at 4:09 PM, Thierry Reding
<thierry.reding-RM9K5IK7kjKj5M59NBduVrNAH6kLmebB@public.gmane.org> wrote:
> going into Linux 3.8 and NVIDIA posting initial patches
> for 2D acceleration on top of it, I've been looking at the various ways
> how this can best be leveraged.
>
> The most obvious choice would be to start work on an xf86-video-tegra
> driver that uses the code currently in the works to implement the EXA
> callbacks that allow some of the rendering to be offloaded to the GPU.
> The way I would go about this is to fork xf86-video-modesetting, do some
> rebranding and add the various bits required to offload rendering.
>
> However, that has all the usual drawbacks of a fork so I thought maybe
> it would be better to write some code to xf86-video-modesetting to add
> GPU-specific acceleration on top. Such code could be leveraged by other
> drivers as well and all of them could share a common base for the
> functionality provided through the standard DRM IOCTLs.
>
> That approach has some disadvantages of its own, like the potential
> bloat if many GPUs do the same. It would also be a bit of a step back
> to the old monolithic days of X.
Just fork and fill in your own GPU-specific bits. Most accel stuff
ends up being very GPU-specific.
Alex
* Re: xf86-video-tegra or xf86-video-modesetting?
2012-11-26 2:51 ` Alex Deucher
@ 2012-11-26 5:56 ` Mark Zhang
2012-11-26 17:45 ` Aaron Plattner
4 siblings, 0 replies; 16+ messages in thread
From: Mark Zhang @ 2012-11-26 5:56 UTC (permalink / raw)
To: Thierry Reding
Cc: xorg-devel-go0+a7rfsptAfugRpC6u6w@public.gmane.org, Dave Airlie,
linux-tegra-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
Technically, it makes sense to focus xf86-video-modesetting on the
hardware-independent parts and frameworks, and to have GPU vendors
upstream their hardware-dependent code. But that requires a lot of
cooperation, and progress would be slow. That's not what the GPU
vendors want, so I assume they won't put much effort into it.
So I think it's better to fork xf86-video-modesetting right now,
unless someday most of the GPU vendors are willing to work together and
contribute to the modesetting driver.
Mark
On 11/25/2012 05:09 AM, Thierry Reding wrote:
> Hi,
>
> With tegra-drm going into Linux 3.8 and NVIDIA posting initial patches
> for 2D acceleration on top of it, I've been looking at the various ways
> how this can best be leveraged.
>
> The most obvious choice would be to start work on an xf86-video-tegra
> driver that uses the code currently in the works to implement the EXA
> callbacks that allow some of the rendering to be offloaded to the GPU.
> The way I would go about this is to fork xf86-video-modesetting, do some
> rebranding and add the various bits required to offload rendering.
>
> However, that has all the usual drawbacks of a fork so I thought maybe
> it would be better to write some code to xf86-video-modesetting to add
> GPU-specific acceleration on top. Such code could be leveraged by other
> drivers as well and all of them could share a common base for the
> functionality provided through the standard DRM IOCTLs.
>
> That approach has some disadvantages of its own, like the potential
> bloat if many GPUs do the same. It would also be a bit of a step back
> to the old monolithic days of X.
>
> So what do other people think?
>
> Thierry
>
* Re: xf86-video-tegra or xf86-video-modesetting?
@ 2012-11-26 7:32 ` Thierry Reding
0 siblings, 1 reply; 16+ messages in thread
From: Thierry Reding @ 2012-11-26 7:32 UTC (permalink / raw)
To: Alex Deucher
Cc: xorg-devel-go0+a7rfsptAfugRpC6u6w,
linux-tegra-u79uwXL29TY76Z2rM5mHXA, Dave Airlie
On Sun, Nov 25, 2012 at 09:51:46PM -0500, Alex Deucher wrote:
> On Sat, Nov 24, 2012 at 4:09 PM, Thierry Reding
> <thierry.reding-RM9K5IK7kjKj5M59NBduVrNAH6kLmebB@public.gmane.org> wrote:
> > going into Linux 3.8 and NVIDIA posting initial patches
> > for 2D acceleration on top of it, I've been looking at the various ways
> > how this can best be leveraged.
> >
> > The most obvious choice would be to start work on an xf86-video-tegra
> > driver that uses the code currently in the works to implement the EXA
> > callbacks that allow some of the rendering to be offloaded to the GPU.
> > The way I would go about this is to fork xf86-video-modesetting, do some
> > rebranding and add the various bits required to offload rendering.
> >
> > However, that has all the usual drawbacks of a fork so I thought maybe
> > it would be better to write some code to xf86-video-modesetting to add
> > GPU-specific acceleration on top. Such code could be leveraged by other
> > drivers as well and all of them could share a common base for the
> > functionality provided through the standard DRM IOCTLs.
> >
> > That approach has some disadvantages of its own, like the potential
> > bloat if many GPUs do the same. It would also be a bit of a step back
> > to the old monolithic days of X.
>
> Just fork and fill in your own GPU specific bits. Most accel stuff
> ends up being very GPU specific.
That doesn't exclude the alternative that I described. Maybe I didn't
express what I had in mind very clearly. What I propose is to add some
code to the modesetting driver that would allow GPU-specific code to be
called if matching hardware is detected (perhaps as stupidly as looking
at the DRM driver name/version). Such code could perhaps be called from
the DDX's .ScreenInit and call the GPU-specific function to register an
EXA driver.
That would allow a large body of code (modesetting, VT switching, ...)
to be shared among a number of drivers instead of duplicating the code
for each one and having to keep merging updates from the modesetting
driver as it evolves. So the GPU-specific acceleration would just sit on
top of the existing code and only be activated on specific hardware.
What I'm *not* proposing is to create an abstraction layer for
acceleration.
Thierry
* Re: xf86-video-tegra or xf86-video-modesetting?
@ 2012-11-26 7:45 ` Dave Airlie
0 siblings, 1 reply; 16+ messages in thread
From: Dave Airlie @ 2012-11-26 7:45 UTC (permalink / raw)
To: Thierry Reding
Cc: Alex Deucher, xorg-devel-go0+a7rfsptAfugRpC6u6w,
linux-tegra-u79uwXL29TY76Z2rM5mHXA
On Mon, Nov 26, 2012 at 5:32 PM, Thierry Reding
<thierry.reding-RM9K5IK7kjKj5M59NBduVrNAH6kLmebB@public.gmane.org> wrote:
> On Sun, Nov 25, 2012 at 09:51:46PM -0500, Alex Deucher wrote:
>> On Sat, Nov 24, 2012 at 4:09 PM, Thierry Reding
>> <thierry.reding-RM9K5IK7kjKj5M59NBduVrNAH6kLmebB@public.gmane.org> wrote:
>> > going into Linux 3.8 and NVIDIA posting initial patches
>> > for 2D acceleration on top of it, I've been looking at the various ways
>> > how this can best be leveraged.
>> >
>> > The most obvious choice would be to start work on an xf86-video-tegra
>> > driver that uses the code currently in the works to implement the EXA
>> > callbacks that allow some of the rendering to be offloaded to the GPU.
>> > The way I would go about this is to fork xf86-video-modesetting, do some
>> > rebranding and add the various bits required to offload rendering.
>> >
>> > However, that has all the usual drawbacks of a fork so I thought maybe
>> > it would be better to write some code to xf86-video-modesetting to add
>> > GPU-specific acceleration on top. Such code could be leveraged by other
>> > drivers as well and all of them could share a common base for the
>> > functionality provided through the standard DRM IOCTLs.
>> >
>> > That approach has some disadvantages of its own, like the potential
>> > bloat if many GPUs do the same. It would also be a bit of a step back
>> > to the old monolithic days of X.
>>
>> Just fork and fill in your own GPU specific bits. Most accel stuff
>> ends up being very GPU specific.
>
> That doesn't exclude the alternative that I described. Maybe I didn't
> express what I had in mind very clearly. What I propose is to add some
> code to the modesetting driver that would allow GPU-specific code to be
> called if matching hardware is detected (perhaps as stupidly as looking
> at the DRM driver name/version). Such code could perhaps be called from
> the DDX' .ScreenInit and call the GPU-specific function to register an
> EXA driver.
>
> That would allow a large body of code (modesetting, VT switching, ...)
> to be shared among a number of drivers instead of duplicating the code
> for each one and having to keep merging updates from the modesetting
> driver as it evolves. So the GPU-specific acceleration would just sit on
> top of the existing code and only be activated on specific hardware.
> What I'm *not* proposing is to create an abstraction layer for
> acceleration.
>
vmware kinda did something like that initially with modesetting; it
was a bit messier. It would be nice, though, to just plug in stuff like
glamor, but you still need to deal with pixmap allocation on a per-GPU
basis.
I'd rather make the modesetting code into a library, either separate
or in the X server, but I've also investigated that and found it was
too much effort for me at the time.
Dave.
* Re: xf86-video-tegra or xf86-video-modesetting?
@ 2012-11-26 8:13 ` Thierry Reding
0 siblings, 0 replies; 16+ messages in thread
From: Thierry Reding @ 2012-11-26 8:13 UTC (permalink / raw)
To: Dave Airlie
Cc: Alex Deucher, xorg-devel-go0+a7rfsptAfugRpC6u6w,
linux-tegra-u79uwXL29TY76Z2rM5mHXA
On Mon, Nov 26, 2012 at 05:45:38PM +1000, Dave Airlie wrote:
> On Mon, Nov 26, 2012 at 5:32 PM, Thierry Reding
> <thierry.reding-RM9K5IK7kjKj5M59NBduVrNAH6kLmebB@public.gmane.org> wrote:
> > On Sun, Nov 25, 2012 at 09:51:46PM -0500, Alex Deucher wrote:
> >> On Sat, Nov 24, 2012 at 4:09 PM, Thierry Reding
> >> <thierry.reding-RM9K5IK7kjKj5M59NBduVrNAH6kLmebB@public.gmane.org> wrote:
> >> > going into Linux 3.8 and NVIDIA posting initial patches
> >> > for 2D acceleration on top of it, I've been looking at the various ways
> >> > how this can best be leveraged.
> >> >
> >> > The most obvious choice would be to start work on an xf86-video-tegra
> >> > driver that uses the code currently in the works to implement the EXA
> >> > callbacks that allow some of the rendering to be offloaded to the GPU.
> >> > The way I would go about this is to fork xf86-video-modesetting, do some
> >> > rebranding and add the various bits required to offload rendering.
> >> >
> >> > However, that has all the usual drawbacks of a fork so I thought maybe
> >> > it would be better to write some code to xf86-video-modesetting to add
> >> > GPU-specific acceleration on top. Such code could be leveraged by other
> >> > drivers as well and all of them could share a common base for the
> >> > functionality provided through the standard DRM IOCTLs.
> >> >
> >> > That approach has some disadvantages of its own, like the potential
> >> > bloat if many GPUs do the same. It would also be a bit of a step back
> >> > to the old monolithic days of X.
> >>
> >> Just fork and fill in your own GPU specific bits. Most accel stuff
> >> ends up being very GPU specific.
> >
> > That doesn't exclude the alternative that I described. Maybe I didn't
> > express what I had in mind very clearly. What I propose is to add some
> > code to the modesetting driver that would allow GPU-specific code to be
> > called if matching hardware is detected (perhaps as stupidly as looking
> > at the DRM driver name/version). Such code could perhaps be called from
> > the DDX' .ScreenInit and call the GPU-specific function to register an
> > EXA driver.
> >
> > That would allow a large body of code (modesetting, VT switching, ...)
> > to be shared among a number of drivers instead of duplicating the code
> > for each one and having to keep merging updates from the modesetting
> > driver as it evolves. So the GPU-specific acceleration would just sit on
> > top of the existing code and only be activated on specific hardware.
> > What I'm *not* proposing is to create an abstraction layer for
> > acceleration.
> >
>
> vmware kinda did something like that initially with modesetting, it
> was a bit messier, it would be nice though to just plug in stuff like
> glamor and things, but you still need to deal with pixmap allocation
> on a per-gpu basis.
I'm still very new to this game and I probably have a lot of catching
up to do. However, I would expect it to be possible to override pixmap
allocation with GPU-specific implementations. I've been looking at some
implementations of DDX drivers, and I seem to remember that pixmap
management was done in the EXA driver, in which case it would be part
of the GPU-specific code anyway.
> I'd rather make the modesetting code into a library, either separate
> or in the X server, but i've also investigated that and found it was
> too much effort for me at the time also.
That idea occurred to me as well. Given my lack of experience I'm not
sure I'd be very well suited for the job if you already judged it to be
too much effort...
Thierry
* Re: xf86-video-tegra or xf86-video-modesetting?
@ 2012-11-26 14:01 ` Alex Deucher
0 siblings, 1 reply; 16+ messages in thread
From: Alex Deucher @ 2012-11-26 14:01 UTC (permalink / raw)
To: Thierry Reding
Cc: Lucas Stach, linux-tegra-u79uwXL29TY76Z2rM5mHXA, Dave Airlie,
xorg-devel-go0+a7rfsptAfugRpC6u6w
On Sun, Nov 25, 2012 at 8:37 AM, Thierry Reding
<thierry.reding-RM9K5IK7kjKj5M59NBduVrNAH6kLmebB@public.gmane.org> wrote:
> On Sat, Nov 24, 2012 at 11:54:32PM +0100, Lucas Stach wrote:
>> On Saturday, 24.11.2012 at 22:09 +0100, Thierry Reding wrote:
>> > Hi,
>> >
>> > With tegra-drm going into Linux 3.8 and NVIDIA posting initial patches
>> > for 2D acceleration on top of it, I've been looking at the various ways
>> > how this can best be leveraged.
>> >
>> > The most obvious choice would be to start work on an xf86-video-tegra
>> > driver that uses the code currently in the works to implement the EXA
>> > callbacks that allow some of the rendering to be offloaded to the GPU.
>> > The way I would go about this is to fork xf86-video-modesetting, do some
>> > rebranding and add the various bits required to offload rendering.
>> >
>> As much as I dislike to say this, but forking the modesetting driver to
>> bring in the Tegra specific 2D accel might be the best way to go for
>> now. Especially looking at the very limited resources available to
>> tegradrm development and NVIDIAs expressed desire to do as few changes
>> as possible to their downstream work.
>
> So true. But I'm not sure if it's a good excuse to not do things the
> right way, even if it ends up being more work. If the general situation
> can be improved then I think it's worth the effort.
>
>> > However, that has all the usual drawbacks of a fork so I thought maybe
>> > it would be better to write some code to xf86-video-modesetting to add
>> > GPU-specific acceleration on top. Such code could be leveraged by other
>> > drivers as well and all of them could share a common base for the
>> > functionality provided through the standard DRM IOCTLs.
>> >
>> We don't have any standard DRM IOCTLs for doing acceleration today. The
>> single fact that we are stitching together command streams in userspace
>> for execution by the GPU renders a common interface unusable. We don't
>> even have a common interface to allocate GPU resources suitable for
>> acceleration: the dumb IOCTLs are only guaranteed to give you a buffer
>> the display engine can scan out from, nothing in there let's you set up
>> more fancy things like tiling etc, which might be needed to operate on
>> the buffer with other engines in some way.
>
> With the common base that could be shared I meant all the modesetting
> code and framebuffer setup that xf86-video-modesetting already does.
> I've been wanting to add support for planes as well, which comes with
> another set of standard IOCTLs in DRM.
>
> Rewriting all of that in different drivers doesn't seem very desirable
> to me and sounds like a lot of wasted effort. And that's not counting the
> maintenance burden to keep up with the latest changes in the generic
> modesetting driver.
>
You don't really end up rewriting it, most people just copy the
modesetting driver, change the name, and start adding acceleration; in
which case, the work is already done. Also, the generic code doesn't
change much. Based on other ddxes, you rarely have to change the
modesetting and framebuffer code. Most of the work ends up being the
device specific acceleration and memory management code.
Also, depending on what hardware is available, I'm not sure
traditional 2D engines will gain much over shadowfb other than hw
accelerated buffer swaps for GL. In my opinion something like glamor
is the best bet for mapping legacy X APIs on to modern GL hw.
Alex
* Re: xf86-video-tegra or xf86-video-modesetting?
[not found] ` <20121124210916.GB27042-RM9K5IK7kjIyiCvfTdI0JKcOhU4Rzj621B7CTYaBSLdn68oJJulU0Q@public.gmane.org>
` (3 preceding siblings ...)
2012-11-26 5:56 ` Mark Zhang
@ 2012-11-26 17:45 ` Aaron Plattner
[not found] ` <50B3AACE.3050908-DDmLM1+adcrQT0dZR+AlfA@public.gmane.org>
4 siblings, 1 reply; 16+ messages in thread
From: Aaron Plattner @ 2012-11-26 17:45 UTC (permalink / raw)
To: Thierry Reding
Cc: xorg-devel-go0+a7rfsptAfugRpC6u6w@public.gmane.org,
linux-tegra-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, Dave Airlie
On 11/24/2012 01:09 PM, Thierry Reding wrote:
> Hi,
>
> With tegra-drm going into Linux 3.8 and NVIDIA posting initial patches
> for 2D acceleration on top of it, I've been looking at the various ways
> how this can best be leveraged.
>
> The most obvious choice would be to start work on an xf86-video-tegra
> driver that uses the code currently in the works to implement the EXA
> callbacks that allow some of the rendering to be offloaded to the GPU.
> The way I would go about this is to fork xf86-video-modesetting, do some
> rebranding and add the various bits required to offload rendering.
From a purely logistical standpoint, if you do choose to create a fork, calling
it xf86-video-tegra might be a problem since there's already an existing
tegra_drv.so.
You could probably graft tegradrm support onto xf86-video-nv pretty easily, if
you want to reuse an existing driver package.
-- Aaron
* Re: xf86-video-tegra or xf86-video-modesetting?
[not found] ` <50B3AACE.3050908-DDmLM1+adcrQT0dZR+AlfA@public.gmane.org>
@ 2012-11-26 21:27 ` Thierry Reding
0 siblings, 0 replies; 16+ messages in thread
From: Thierry Reding @ 2012-11-26 21:27 UTC (permalink / raw)
To: Aaron Plattner
Cc: xorg-devel-go0+a7rfsptAfugRpC6u6w@public.gmane.org,
linux-tegra-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, Dave Airlie
On Mon, Nov 26, 2012 at 09:45:50AM -0800, Aaron Plattner wrote:
> On 11/24/2012 01:09 PM, Thierry Reding wrote:
> >Hi,
> >
> >With tegra-drm going into Linux 3.8 and NVIDIA posting initial patches
> >for 2D acceleration on top of it, I've been looking at the various ways
> >how this can best be leveraged.
> >
> >The most obvious choice would be to start work on an xf86-video-tegra
> >driver that uses the code currently in the works to implement the EXA
> >callbacks that allow some of the rendering to be offloaded to the GPU.
> >The way I would go about this is to fork xf86-video-modesetting, do some
> >rebranding and add the various bits required to offload rendering.
>
> From a purely logistical standpoint, if you do choose to create a
> fork, calling it xf86-video-tegra might be a problem since there's
> already an existing tegra_drv.so.
Right, you already use that name. Anyone have any great ideas for a new
name?
> You could probably graft tegradrm support onto xf86-video-nv pretty
> easily, if you want to reuse an existing driver package.
I think I'd rather go with a fork of the modesetting driver since it
already provides everything that we need and only the acceleration bits
need to be added on top.
Thierry
* Re: xf86-video-tegra or xf86-video-modesetting?
[not found] ` <CADnq5_PwR1P6HDZSD-UeoaKUdQzFhK3cdQ4jhEWtR9Lgb-P2hQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2012-11-26 23:14 ` Stephen Warren
[not found] ` <50B3F7C7.6040602-3lzwWm7+Weoh9ZMKESR00Q@public.gmane.org>
0 siblings, 1 reply; 16+ messages in thread
From: Stephen Warren @ 2012-11-26 23:14 UTC (permalink / raw)
To: Alex Deucher
Cc: Thierry Reding, Lucas Stach, linux-tegra-u79uwXL29TY76Z2rM5mHXA,
Dave Airlie, xorg-devel-go0+a7rfsptAfugRpC6u6w
On 11/26/2012 07:01 AM, Alex Deucher wrote:
> On Sun, Nov 25, 2012 at 8:37 AM, Thierry Reding
> <thierry.reding-RM9K5IK7kjKj5M59NBduVrNAH6kLmebB@public.gmane.org> wrote:
...
>> With the common base that could be shared I meant all the modesetting
>> code and framebuffer setup that xf86-video-modesetting already does.
>> I've been wanting to add support for planes as well, which comes with
>> another set of standard IOCTLs in DRM.
>>
>> Rewriting all of that in different drivers doesn't seem very desirable
>> to me and sounds like a lot of wasted effort. And that's not counting the
>> maintenance burden to keep up with the latest changes in the generic
>> modesetting driver.
>>
>
> You don't really end up rewriting it, most people just copy the
> modesetting driver, change the name, and start adding acceleration; in
> which case, the work is already done. Also, the generic code doesn't
> change much. Based on other ddxes, you rarely have to change the
> modesetting and framebuffer code. Most of the work ends up being the
> device specific acceleration and memory management code.
> Also, depending on what hardware is available, I'm not sure
> traditional 2D engines will gain much over shadowfb other than hw
> accelerated buffer swaps for GL. In my opinion something like glamor
> is the best bet for mapping legacy X APIs on to modern GL hw.
Rather than have every driver cut/paste the modesetting code, can't the
modesetting core of the DDX be pulled out into a utility library or
similar, so that it can just be compiled/linked into all the DDXs
without actually duplicating the code? That way there's no code
duplication, but each DDX can still be flexible about all the
HW-specific code without making a monolithic DDX.
* Re: xf86-video-tegra or xf86-video-modesetting?
[not found] ` <50B3F7C7.6040602-3lzwWm7+Weoh9ZMKESR00Q@public.gmane.org>
@ 2012-11-27 0:51 ` Alex Deucher
0 siblings, 0 replies; 16+ messages in thread
From: Alex Deucher @ 2012-11-27 0:51 UTC (permalink / raw)
To: Stephen Warren
Cc: Thierry Reding, Lucas Stach, linux-tegra-u79uwXL29TY76Z2rM5mHXA,
Dave Airlie, xorg-devel-go0+a7rfsptAfugRpC6u6w
On Mon, Nov 26, 2012 at 6:14 PM, Stephen Warren <swarren-3lzwWm7+Weoh9ZMKESR00Q@public.gmane.org> wrote:
> On 11/26/2012 07:01 AM, Alex Deucher wrote:
>> On Sun, Nov 25, 2012 at 8:37 AM, Thierry Reding
>> <thierry.reding-RM9K5IK7kjKj5M59NBduVrNAH6kLmebB@public.gmane.org> wrote:
> ...
>>> With the common base that could be shared I meant all the modesetting
>>> code and framebuffer setup that xf86-video-modesetting already does.
>>> I've been wanting to add support for planes as well, which comes with
>>> another set of standard IOCTLs in DRM.
>>>
>>> Rewriting all of that in different drivers doesn't seem very desirable
>>> to me and sounds like a lot of wasted effort. And that's not counting the
>>> maintenance burden to keep up with the latest changes in the generic
>>> modesetting driver.
>>>
>>
>> You don't really end up rewriting it, most people just copy the
>> modesetting driver, change the name, and start adding acceleration; in
>> which case, the work is already done. Also, the generic code doesn't
>> change much. Based on other ddxes, you rarely have to change the
>> modesetting and framebuffer code. Most of the work ends up being the
>> device specific acceleration and memory management code.
>> Also, depending on what hardware is available, I'm not sure
>> traditional 2D engines will gain much over shadowfb other than hw
>> accelerated buffer swaps for GL. In my opinion something like glamor
>> is the best bet for mapping legacy X APIs on to modern GL hw.
>
> Rather than have every driver cut/paste the modesetting code, can't the
> modesetting core of the DDX be pulled out into a utility library or
> similar, so that it can just be compiled/linked into all the DDXs
> without actually duplicating the code? That way there's no code
> duplication, but each DDX can still be flexible about all the
> HW-specific code without making a monolithic DDX.
>
If someone has the time to look into it further, it could probably be
done. I'm not sure there would be much to share in the end, however.
Alex
Thread overview: 16+ messages
2012-11-24 21:09 xf86-video-tegra or xf86-video-modesetting? Thierry Reding
[not found] ` <20121124210916.GB27042-RM9K5IK7kjIyiCvfTdI0JKcOhU4Rzj621B7CTYaBSLdn68oJJulU0Q@public.gmane.org>
2012-11-24 22:54 ` Lucas Stach
2012-11-25 13:37 ` Thierry Reding
[not found] ` <20121125133759.GA30264-RM9K5IK7kjIyiCvfTdI0JKcOhU4Rzj621B7CTYaBSLdn68oJJulU0Q@public.gmane.org>
2012-11-26 14:01 ` Alex Deucher
[not found] ` <CADnq5_PwR1P6HDZSD-UeoaKUdQzFhK3cdQ4jhEWtR9Lgb-P2hQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2012-11-26 23:14 ` Stephen Warren
[not found] ` <50B3F7C7.6040602-3lzwWm7+Weoh9ZMKESR00Q@public.gmane.org>
2012-11-27 0:51 ` Alex Deucher
2012-11-25 15:47 ` Terje Bergström
2012-11-25 11:45 ` Michal Suchanek
[not found] ` <CAOMqctTQGzhu3gU5hdJWKOCU0Dyk1vxCjE918PMa7aR+o1pTiQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2012-11-25 13:40 ` Thierry Reding
2012-11-26 2:51 ` Alex Deucher
[not found] ` <CADnq5_P1D7mwL6iYYbJSEBt8Ub5ejQmsMbupMNUU74d5==+gTw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2012-11-26 7:32 ` Thierry Reding
[not found] ` <20121126073234.GA17600-RM9K5IK7kjIyiCvfTdI0JKcOhU4Rzj621B7CTYaBSLdn68oJJulU0Q@public.gmane.org>
2012-11-26 7:45 ` Dave Airlie
[not found] ` <CAPM=9tzfjtUVC5PrRLA3Y659ausf1j=uXp-zfMZFUXz-ir67FA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2012-11-26 8:13 ` Thierry Reding
2012-11-26 5:56 ` Mark Zhang
2012-11-26 17:45 ` Aaron Plattner
[not found] ` <50B3AACE.3050908-DDmLM1+adcrQT0dZR+AlfA@public.gmane.org>
2012-11-26 21:27 ` Thierry Reding