* [RFC] Virtual CRTCs (proposal + experimental code)
From: Ilija Hadzic @ 2011-11-03 15:59 UTC
To: dri-devel, linux-fbdev
Hi everyone,
I would like to bring to the attention of the dri-devel and
linux-fbdev communities a set of hopefully useful and interesting
patches that I (and a few other colleagues) have been working on
during the past few months. Here, I will provide a short abstract so
that you can decide whether this is of interest to you. At the end, I
will provide pointers to the code and documentation.
The code is based on Dave Airlie's tree, drm-next branch, and it
allows a GPU driver to have an arbitrary number of CRTCs
(configurable by the user) instead of only those CRTCs that represent
real hardware.
The new CRTCs, which we call virtual CRTCs, can be attached to a
foreign device, which we call a CTD device (short for Compression,
Transmission, and Display), and pixels can be streamed out of the GPU
to that device.
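
To give a flavor of what "attaching" means, a CTD driver implements a
small set of callbacks and registers them with the virtual CRTC
layer. The fragment below is only a simplified sketch of that
contract; the names are made up for this e-mail and are not the
actual interface (see the code on github for the real definitions):

#include <drm/drmP.h>

/*
 * Hypothetical sketch of a CTD driver's side of the contract;
 * names are illustrative only.
 */
struct ctd_ops {
	/* push one frame worth of pixels from the virtual CRTC */
	int (*push_frame)(void *ctd_priv, void *pixels,
			  unsigned int width, unsigned int height,
			  unsigned int pitch);
	/* enumerate the modes the downstream display can handle */
	int (*get_modes)(void *ctd_priv, struct drm_display_mode *modes,
			 int max_modes);
	/* report whether a display is actually connected downstream */
	bool (*detect)(void *ctd_priv);
};

/* a CTD driver announces itself to the virtual CRTC manager */
int vcrtcm_register_ctd(const struct ctd_ops *ops, void *ctd_priv);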
In one example, we use an AMD/ATI Radeon GPU to do 3D rendering
(accelerated, of course) and we use our code to add additional
monitor heads using DisplayLink devices. In other words, we achieve
accelerated 3D rendering on a DisplayLink monitor. In another
example, we funnel rendered pixels to userland by emulating a
Video4Linux device (and then userland can do whatever it wants with
them). While doing all this, the GPU has no idea what is going on;
the entire DRI stack "thinks" that it is just dealing with a GPU that
has a few "extra" connectors and CRTCs. So everything ports over
without the need to modify anything in userland.
In general, any device that can do something useful with rendered
pixels can act as a CTD device, allowing a GPU to be an acceleration
device for a less capable display device or (the opposite) a
frame-buffer-based display device to be an expansion card for a GPU.
Of course, for each display device, a driver would have to be made
compatible with our new infrastructure (which is what we have done
with the DisplayLink driver; we also wrote one "synthetic" driver
that fakes out a V4L2 device as a CTD device).
The newly introduced kernel module, which we call VCRTCM (short for
Virtual CRTC Manager), handles the "traffic" between GPUs (actually
their CRTCs) and CTDs. The code makes use of DMA wherever possible
and also deals with the specifics of CRTCs, like modes, vblanks, page
flips, hardware cursor, etc. (just for kicks, we played OpenArena
and watched the Unigine Heaven demo on a DisplayLink monitor).
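
As an illustration of the kind of CRTC bookkeeping involved: a
virtual CRTC has no hardware to raise vblank interrupts, so they have
to be emulated. A simplified sketch of one way to do it (not a
verbatim excerpt of our code; the vcrtc_state structure here is made
up for this e-mail) is a high-resolution timer that feeds DRM's
normal vblank path:

#include <linux/hrtimer.h>
#include <drm/drmP.h>

static enum hrtimer_restart vcrtc_vblank_timer(struct hrtimer *timer)
{
	/* vcrtc_state is a hypothetical per-CRTC bookkeeping struct */
	struct vcrtc_state *vcrtc =
		container_of(timer, struct vcrtc_state, vblank_timer);

	/* let DRM run its normal vblank handling for this CRTC */
	drm_handle_vblank(vcrtc->ddev, vcrtc->crtc_index);

	/* re-arm the timer for the emulated refresh rate, e.g. 60 Hz */
	hrtimer_forward_now(timer, ns_to_ktime(1000000000UL / 60));
	return HRTIMER_RESTART;
}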
At this time, we would like to solicit feedback, comments, and
possibly contributions. The code is on github (pointers below)
and is based on the current state of the drm-next branch from Dave's
tree. The code is very experimental, but complete and stable enough
that you can do something useful with it. We will be adding more
CTD drivers and updates to the current ones in the near future and
will continue to maintain the code on github.
If the community finds this useful, we would be glad to work with
the maintainers on merging this upstream. So we would especially like
to hear what you would like to see changed to make this code
acceptable for mainline development.
My Github page is at https://github.com/ihadzic. To access the kernel
code, type:
$ git clone git://github.com/ihadzic/linux-vcrtcm.git
$ cd linux-vcrtcm
$ git branch drm-next-vcrtcm origin/drm-next-vcrtcm
$ git checkout drm-next-vcrtcm
You will get everything that's currently on Dave's drm-next plus our
patches on top. We preserved the development history without
squashing patches (unless we had to due to merge/rebase conflicts),
so you can see (and laugh at) all our goofs and the fixes for them.
To access the documentation, type:
$ git clone git://github.com/ihadzic/vcrtcm-doc.git
Then read the HOWTO.txt file. The first few sections provide a
general overview, and the sections that follow provide instructions
on how to use our stuff.
Again, all comments, positive or negative, are very welcome.
-- Ilija
* Re: [RFC] Virtual CRTCs (proposal + experimental code)
From: Daniel Vetter @ 2011-11-03 17:21 UTC
To: Ilija Hadzic; +Cc: linux-fbdev, dri-devel

Hi,

Quick question: how does this compare to Dave Airlie's PRIME buffer
sharing work for drm, and to the more generic dma_buf buffer sharing
work pushed by Linaro? You seem to aim at a solution for similar
problems (judging by your description) using a rather different
approach.

Cheers, Daniel

On Thu, Nov 03, 2011 at 11:59:21AM -0400, Ilija Hadzic wrote:
> The code is based on Dave Airlie's tree, drm-next branch, and it
> allows a GPU driver to have an arbitrary number of CRTCs
> (configurable by the user) instead of only those CRTCs that
> represent real hardware.
> [...]
-- 
Daniel Vetter
Mail: daniel@ffwll.ch
Mobile: +41 (0)79 365 57 48
* Re: [RFC] Virtual CRTCs (proposal + experimental code)
From: David Airlie @ 2011-11-03 17:27 UTC
To: Ilija Hadzic; +Cc: linux-fbdev, dri-devel

> The code is based on Dave Airlie's tree, drm-next branch, and it
> allows a GPU driver to have an arbitrary number of CRTCs
> (configurable by the user) instead of only those CRTCs that
> represent real hardware.

Well, the current plan I had for this was to do it in userspace. I
don't think the kernel has any business doing it; I think for the
simple USB case it's fine, but it will fall over when you get to the
non-trivial cases where some sort of acceleration is required to move
pixels around. But in saying that, it's good you've done something,
and I'll try and spend some time reviewing it.

The current plan from my POV is to add hotplug support to the X
server and just hotplug USB devices up there, and not mess up the
kernel with lots of extra state that the drivers really don't need to
know about.

I'm also not sure how you deal with tiling etc., and you can also
start hitting rendering limits: a GPU can render to 4kx4k, but you
can keep plugging in more USB devices. Again, I'm hoping to solve
this in userspace as well.

But I'll take some time, if I can find it, to look over it.

Dave.
* Re: [RFC] Virtual CRTCs (proposal + experimental code)
From: Alan Cox @ 2011-11-03 17:53 UTC
To: David Airlie; +Cc: linux-fbdev, dri-devel

> Well, the current plan I had for this was to do it in userspace. I
> don't think the kernel has any business doing it; I think for the
> simple USB case it's fine, but it will fall over when you get to
> the non-trivial cases where some sort of acceleration is required
> to move pixels around.

There are some clear advantages in the kernel doing bits of this, I
think. The kernel understands device-to-device DMA, and has a better
idea than userspace about things like buffer alignment internals. It
also means this ultimately can work without X running, which is a
plus for some applications (I want a DisplayLink gadget for my phone,
but that's another story 8)).

> I'm also not sure how you deal with tiling etc., and you can also
> start hitting rendering limits: a GPU can render to 4kx4k, but you
> can keep plugging in more USB devices.

Tiling has to be handled by the recipient (at least when the fb is
shared). The nastier end of it, which I don't see covered in the
documentation, is the handling of fencing between cards, e.g. if you
wanted the display output of one card fed into a second card that
does effects processing (think TV-type stuff).

Alan
* Re: [RFC] Virtual CRTCs (proposal + experimental code)
From: Ilija Hadzic @ 2011-11-03 18:00 UTC
To: David Airlie; +Cc: linux-fbdev, dri-devel

On Thu, 3 Nov 2011, David Airlie wrote:

> Well, the current plan I had for this was to do it in userspace. I
> don't think the kernel has any business doing it [...]

The reason I opted for doing this in the kernel is that I wanted to
confine all the changes to a relatively small set of modules. At
first this was a pragmatic approach: I live outside the mainstream
development tree, and I didn't want to turn my life into an eternal
merging/conflict-resolution activity. However, a more fundamental
reason is that I didn't want to be tied to X. I deal with some
userland applications (of which, unfortunately, I can't provide much
detail ... yet) that live directly on top of libdrm. So I set myself
a goal of "full application transparency": whatever is thrown at me,
I wanted to be able to handle without touching any piece of the
application or the libraries that the application relies on. I think
I have achieved this goal, and really everything I tried just worked
out of the box (with the exception of two bug fixes to the ATI DDX
and Xorg, which are bugs with or without my work).

-- Ilija
* Re: [RFC] Virtual CRTCs (proposal + experimental code)
From: Dave Airlie @ 2011-11-07 12:58 UTC
To: Ilija Hadzic; +Cc: linux-fbdev, dri-devel

> In general, any device that can do something useful with rendered
> pixels can act as a CTD device, allowing a GPU to be an
> acceleration device for a less capable display device or (the
> opposite) a frame-buffer-based display device to be an expansion
> card for a GPU. [...]

So the way I'd expect this to work is that we have full-blown drm
drivers for all the devices and then some sort of linkage layer
between them, so one driver can export crtcs from another, instead of
having special-case ctd drivers.

Now I can see the reason for the v4l one, but I have, for example, a
udl kms driver, and I'd like to have it work alongside this stuff, so
userspace could bind the crtcs from it to another driver. I'm not
sure how much work this would be, or if it's just a matter of adding
a CTD interface to the udl kms device.

Dave
* Re: [RFC] Virtual CRTCs (proposal + experimental code)
From: Ilija Hadzic @ 2011-11-07 13:52 UTC
To: Dave Airlie; +Cc: linux-fbdev, dri-devel

On Mon, 7 Nov 2011, Dave Airlie wrote:

> So the way I'd expect this to work is that we have full-blown drm
> drivers for all the devices and then some sort of linkage layer
> between them, so one driver can export crtcs from another, instead
> of having special-case ctd drivers.

I agree, and that is actually the long-term plan on our side. CTD
functionality should be an integral part of existing drivers, not a
new driver, unless there is new functionality that makes sense as
CTD-only (like v4l2ctd). In the world that I imagine, the linkage
layer is VCRTCM. Unaccelerated framebuffer devices (UDL for example,
but in general anything that "lives" in the fbdev world) can choose
(based on some policy from userland) whether to act as a CTD driver
and register with VCRTCM (when they want acceleration "assistance"
from a GPU in the system) or to load as normal fbdev devices (when
they want to run on their own).

> Now I can see the reason for the v4l one, but I have, for example,
> a udl kms driver, and I'd like to have it work alongside this
> stuff, so userspace could bind the crtcs from it to another driver.

The only reason we wrote a new udlctd driver was because it was
quicker that way (we ripped some code from the udlfb driver and added
our CTD functionality). The plan was always to merge udlctd and udlfb
at some point, but first I'd like to see how receptive the community
is to the concept. If the concept makes sense, then we'll throw in
enough programming to consolidate the drivers. Nobody wants three
competing drivers for the same device (udlfb, your udl kms driver,
and our udlctd).

Speaking of the udl driver, is your udl-v2 branch competing with
udlfb? Externally, they seem to do similar or the same thing, but one
is based on DRM (and I guess the required DDX is
xf86-video-modesetting), while the other is based on fbdev and the
required DDX is xf86-video-fbdev. While from my perspective both
could be consolidated with CTD functionality, it makes me wonder in
which direction community development is moving: is everything from
the fbdev world moving under DRM as a set of unaccelerated KMS
drivers, or is fbdev staying separate for good? Depending on the
trend, one or the other udl driver (udl-v2 or udlfb) will make more
sense to merge with udlctd.

-- Ilija
* Re: [RFC] Virtual CRTCs (proposal + experimental code)
From: Dave Airlie @ 2011-11-23 11:48 UTC
To: Ilija Hadzic; +Cc: linux-fbdev, dri-devel

> The code is based on Dave Airlie's tree, drm-next branch, and it
> allows a GPU driver to have an arbitrary number of CRTCs
> (configurable by the user) instead of only those CRTCs that
> represent real hardware.

So another question I have is how you would intend this to work from
a user POV -- how it would integrate with a desktop environment, X or
wayland, with little or no configuration.

I still foresee problems with tiling. We generally don't encourage
accel code to live in the kernel, and you'll really want a
tiled->untiled blit for this thing. Also, for Intel GPUs where you
have UMA, would you read from the UMA?

It also doesn't solve the optimus GPU problem in any useful fashion,
since it can't deal with all the use cases, so we'd still have to
write an alternate solution that can deal with them, and we'd just
end up with two answers.

Dave.
* Re: [RFC] Virtual CRTCs (proposal + experimental code)
From: Ilija Hadzic @ 2011-11-24 5:59 UTC
To: Dave Airlie; +Cc: linux-fbdev, dri-devel

On Wed, 23 Nov 2011, Dave Airlie wrote:

> So another question I have is how you would intend this to work
> from a user POV -- how it would integrate with a desktop
> environment, X or wayland, with little or no configuration.

The first thing to understand is that when a virtual CRTC is created,
it looks to the user like the GPU has an additional DisplayPort
connector. At present I "abuse" DisplayPort, but I have seen that you
pushed a patch from VMware that adds a Virtual connector type, so
eventually I'll switch to that naming. The number of virtual CRTCs is
determined when the driver loads, and that is a static configuration
parameter. This does not restrict the user, because unused virtual
CRTCs are just like disconnected connectors on the GPU. In the
extreme case, a user could max out the number of virtual CRTCs (i.e.,
32 minus the number of physical CRTCs), but in general the system
needs to be booted with the maximum number of anticipated CRTCs.
Run-time addition and removal of CRTCs is not supported at this time;
that would be much harder to implement and would affect the whole DRM
module everywhere.

So now we have a system that has booted up, and DRM sees all of its
real connectors as well as the virtual ones (as DisplayPorts at
present). If no CTD device is attached to a virtual CRTC, its virtual
connector is disconnected as far as DRM is concerned. Now userspace
must call the "attach/fps" ioctl to associate CTDs with CRTCs. I'll
explain shortly how to automate that and eliminate the burden from
the user, but for now, please assume that "attach/fps" gets called
from userland somehow.

When the attach happens, that is a hotplug event (VCRTCM generates
it) to DRM, just as if someone plugged in a monitor. Then when Xorg
starts, it will use the DisplayPort that represents a virtual CRTC
just like any other connector. How it will use it will depend on what
xorg.conf says, but the key point is that this connector is no
different from any other connector that the GPU provides and is thus
used as an "equal citizen". No special configuration is necessary
once it is attached to a CTD.

If a CTD is detached and a new CTD attached, that is just like
yanking out a monitor cable and plugging in a new one. DRM will get
all the hotplug events, and the windowing system will do the same
thing it would normally do with any other port. If RANDR is called to
resize the desktop, it will also work, and X will have no idea that
one of the connectors is on a virtual CRTC. I also have another
feature where, when a CTD is attached, it can ask the device it
drives for the connection status and propagate that all the way back
to DRM (this is useful for CTD devices that drive real monitors, like
DisplayLink).

So now let's go back to the attach/fps ioctl. To attach a CTD device,
this ioctl must happen as the result of some policy. That can be done
by having the CTD device generate UDEV events when it loads, for
which one can write policies that determine which CTD device attaches
to which virtual CRTC. Ultimately that becomes a user configuration,
but it's no different from what one does today with UDEV policies to
customize the system.
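
To make that concrete, here is a sketch of what such a policy could
look like. The subsystem name, the helper program, and its argument
are all made up for illustration; the actual rule will depend on how
the CTD driver exposes its devices to udev:

# /etc/udev/rules.d/99-vcrtcm.rules (hypothetical example)
# when a CTD device appears, run a helper that issues the attach/fps
# ioctl to bind the new device to a free virtual CRTC
ACTION=="add", SUBSYSTEM=="vcrtcm", RUN+="/usr/local/bin/vcrtcm-attach %k"

The helper itself is then a trivial userland program that opens the
DRM node and issues the attach/fps ioctl.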
Having explained this, let's take the hotplug example that you put up
on your web page and redo it with virtual CRTCs. Here is how it would
work: boot up the system and tell the GPU to create a few virtual
CRTCs. Bring up Xorg with no DisplayLink dongles plugged in. Now plug
in a DisplayLink device. The CTD driver loads as a result of the
hotplug (right now UDLCTD is a separate driver, but as we discussed
before, this is a temporary state, and at some point its CTD function
should be merged either with UDLFB or with your UDL-V2), and the CTD
function in the driver generates a UDEV event. The policy directs
UDEV to run the program that issues the ioctl to perform the
attach/fps. The attach/fps of UDLCTD is now a hotplug event, and DRM
"thinks" that a new connector has changed its status from
disconnected to connected. That causes it to query the modes for the
new connector and, because it's a virtual CRTC, the query lands in
the virtual CRTC helpers in the GPU driver. The virtual CRTC helpers
route it to VCRTCM, which further routes it to the CTD (UDLCTD in
this case). The CTD returns the modes and DRM gets them ... the rest
you know better than me what happens ;-)

So this is your hotplug demo, but with the difference that the new
desktop can use direct rendering. Also, everything that would work
for a normal connector works here without any additional tricks.
RANDR also works seamlessly without having to do anything special. If
you move away from Xorg to some other system (Wayland?), it still
works, as long as the new system knows how to deal with connectors
that connect and disconnect.

Everything I described above is ready to go except the UDEV event
from UDLCTD and the UDEV rules to automate this. Both are
straightforward and won't take long to do. So very shortly, I'll be
able to show this version of the hotplug demo. From what you wrote in
your blog, it sounds like this is exactly what you are looking for. I
recognize that it disrupts your current views/plans on how this
should be done, but I do want to work with you to find a suitable
middle ground that covers most of the possibilities.

In case you are looking at my code to follow the above-described
scenarios, please make sure you pull the latest stuff from my github
repository. I have been pushing new material since my original
announcement.

> I still foresee problems with tiling. We generally don't encourage
> accel code to live in the kernel, and you'll really want a
> tiled->untiled blit for this thing.

Accel code should not go into the kernel (that I fully agree with),
and there is nothing here that would behoove us to do so. Restricting
my comments to the Radeon GPU (which is the only one I know well
enough), the shaders for the blit copy live in the kernel
irrespective of the VCRTCM work. I rely on them to move the frame
buffer out of VRAM to the CTD device, but I don't add any new accel
features.

Now for detiling: I think it should be the responsibility of the
receiving CTD device, not the GPU pushing the data (Alan mentioned
that during the initial round of comments, and although I didn't
reply to it, that has been my view as well). Even if you wanted to
use the GPU for detiling (and I'll explain shortly why you should
not), it would not require any new accel code in the kernel. It would
merely require one bit flip in the setup of the blit copy that
already lives in the kernel.

However, detiling in the GPU is a bad idea, and here is why. I tried
it, just as an experiment, on Radeon GPUs and watched with a PCI
Express analyzer what happens on the bus (yeah, I have some "heavy
weapons" in my lab). Normally a tile is a contiguous array of memory
locations in VRAM.
If the blit-copy function is told to assume a tiled source and a
linear destination (detiling), it will read a contiguous set of
addresses in VRAM, but then scatter 8 rows of 8 pixels each across a
non-contiguous set of addresses at the destination. If the
destination is the PCI Express bus, this results in 8 32-byte write
transactions instead of 2 128-byte transactions per tile (the same
256 bytes either way, but with four times as many transactions, the
per-transaction overhead dominates). That will choke the throughput
of the bus right there. BTW, this is the crux of the blit-copy
performance improvement that you got from me back in October: since
blit copy deals with copying a linear array, playing with the
tiled/non-tiled bits only affects the order in which addresses are
accessed, so the trick was to get rid of short PCIe transactions and
also to shape the linear-to-rectangle mapping so that the address
pattern is friendlier to the host.

> Also, for Intel GPUs where you have UMA, would you read from the
> UMA?

Yes, the read would be from UMA. I have not yet looked at Intel GPUs
in detail, so I don't have an answer for you on what problems would
pop up and how to solve them, but I'll be glad to revisit the Intel
discussion once I do some homework. An initial thought is that frame
buffers on Intel are, at the end of the day, pages in system memory,
so anyone/anything can get to them if they are correctly mapped.

> It also doesn't solve the optimus GPU problem in any useful
> fashion, since it can't deal with all the use cases, so we'd still
> have to write an alternate solution that can deal with them, and
> we'd just end up with two answers.

Can you elaborate on the specific use cases that concern you? I have
had this case in mind, and I think I can make it work. First I would
have to add CTD functionality to the Intel driver, which should be
straightforward. Once I get there, I'll be ready to experiment and
we'll be in a better position to discuss the specifics (i.e., when we
have something working to compare with what you did in the PRIME
experiment), but it would be good to know your specific concerns
early.

thanks,

Ilija
* Re: [RFC] Virtual CRTCs (proposal + experimental code)
From: Dave Airlie @ 2011-11-24 8:52 UTC
To: Ilija Hadzic; +Cc: linux-fbdev, dri-devel

On Thu, Nov 24, 2011 at 5:59 AM, Ilija Hadzic
<ihadzic@research.bell-labs.com> wrote:

> If a CTD is detached and a new CTD attached, that is just like
> yanking out a monitor cable and plugging in a new one. [...]

Okay, so that's pretty much how I expected it to work. I don't think
Virtual makes sense for a DisplayLink-attached device, though; again,
if you were using a real driver you would just re-use whatever output
type it uses, though I'm not sure how well that works.

Do you propagate the full EDID information and all the modes, or just
the supported modes? We use this in userspace to put monitor names in
GNOME display settings etc.

What does the xrandr output look like for a radeon GPU with 4 vcrtcs?
Do you see 4 disconnected connectors? That, again, isn't a pretty
user experience.

> So this is your hotplug demo, but with the difference that the new
> desktop can use direct rendering. [...]

My main problem with this is, as I'll explain below, that it only
covers some of the use cases, and I don't want a 50% solution at this
point. By doing something like this you are making it harder to get
proper support into something like wayland, since they can then
ignore some of the problems; and since this doesn't solve all the
other problems, getting to a finished solution is actually less
likely to happen.

> Now for detiling: I think it should be the responsibility of the
> receiving CTD device, not the GPU pushing the data.

That is pretty much a fundamental problem. There is no way you can
enumerate all the detiling necessary in the CTD device, and there is
no way I'd want to merge that code into the kernel. r600 has 16
tiling modes (we might only see 2 of these on scanout), r300->r500
have a different set, r100->r200 have yet another set, nouveau has a
major number of modes, and intel has a full set plus crazy
memory-configuration-dependent swizzling, then gma500, etc. This just
won't be workable or scalable.

> Even if you wanted to use the GPU for detiling (and I'll explain
> shortly why you should not), it would not require any new accel
> code in the kernel.

That is fine for radeon, not so much for intel, nouveau, etc.

> However, detiling in the GPU is a bad idea [...] it will result in
> 8 32-byte write transactions instead of 2 128-byte transactions per
> tile. That will choke the throughput of the bus right there.

The thing is, this is how optimus works: the nvidia gpus have an
engine that you can program to move data from the nvidia tiled VRAM
format to the intel main-memory tiled format, and make it efficient.
radeons also have some engines that AMD so far haven't told us about,
but someone with no NDA with AMD could easily start REing that sort
of thing.

> Yes, the read would be from UMA. I have not yet looked at Intel
> GPUs in detail [...]

Probably a good idea to do some more research on intel/nvidia GPUs.
With intel you can't read back from UMA, since it'll be uncached
memory and hence unusable, so you'll need to use the GPU to detile
and move the data to some sort of cached linear area you can read
back from.

> Can you elaborate on the specific use cases that concern you?

Switchable/Optimus mode has two modes of operation:

a) the nvidia GPU is the rendering engine and the intel GPU is just
used as a scanout buffer for the LVDS panel. This mode is used when
an external digital display is plugged in, or in some plugged-in
configurations.

b) the intel GPU is the primary rendering engine, and the nvidia gpu
is used as an offload engine. This mode is used when on battery or
power saving, with no external displays plugged in. You can
completely turn the nvidia GPU on/off.

Moving between a and b has to be completely dynamic; userspace apps
need to deal with the whole world changing beneath them.

There is also switchable graphics mode, where a MUX is used to switch
the outputs between the two GPUs.

So the main problem with taking all this code on board is that it
sort of solves (a), and (b) needs another bunch of work. I'd rather
not solve 50% of the issue and have future userspace apps just think
they can ignore the problem. As much as I dislike the whole dual-gpu
setups, the fact is they exist and we can't change that, so writing
userspace to ignore the problem because it's too hard isn't going to
work. So if I merge this VCRTC stuff I give a lot of people an excuse
for not bothering to fix the harder problems that hotplug and dynamic
GPUs put in front of you.

Dave.
* Re: [RFC] Virtual CRTCs (proposal + experimental code)
From: Daniel Vetter @ 2011-11-24 10:52 UTC
To: Dave Airlie; +Cc: linux-fbdev, dri-devel

On Thu, Nov 24, 2011 at 08:52:45AM +0000, Dave Airlie wrote:
> So the main problem with taking all this code on board is that it
> sort of solves (a), and (b) needs another bunch of work. I'd rather
> not solve 50% of the issue and have future userspace apps just
> think they can ignore the problem. [...]

My 2 Rappen on this: I agree completely with your point that we
should aim for a full solution. GPU memory management across
different devices is hard, but solvable. Furthermore, I fear that a
50% solution that hides the memory management and shuffling issues
from userspace will end up being a leaky abstraction (e.g., how and
when is stuff transferred to the usb dp port, the kernel might pin
scanout buffers behind userspace's back and screw up the vram
accounting in userspace, random hotplugging of outputs ...).

Also, v4l/embedded folks have similar issues (and the same tendency
to just go with a "simple" solution fitting their use case), and with
Intel dead set on entering the SoC market I'll have the joy of
messing around with this stuff pretty soon, too. So I think we do
have enough people interested in this and should be able to cobble
together something that does The Right Thing.

Cheers, Daniel
-- 
Daniel Vetter
Mail: daniel@ffwll.ch
Mobile: +41 (0)79 365 57 48
* Re: [RFC] Virtual CRTCs (proposal + experimental code)
From: Ilija Hadzic @ 2011-11-25 5:11 UTC
To: Daniel Vetter; +Cc: linux-fbdev, dri-devel

> So I think we do have enough people interested in this and should
> be able to cobble together something that does The Right Thing.

We indeed have a non-trivial set of people interested in the same set
of problems, and each of us has a partial and maybe competing
solution. I want to make it clear that my proposal, maybe disruptive
and different from the plan of record, should not be viewed as
destructive or distracting. I am just offering to the community what
I think is useful. If this discussion sparks some joint effort that
brings us to a solution that everyone is happy with, even if not one
line of my code is found useful, I am perfectly fine with that (and
I'll join the effort).

So at this point I think I should put out my back-of-the-napkin
desiderata. That will hopefully shed some light on where I am coming
from with the VCRTCM proposal.

I want to be able to pull pixels out of the GPU and redirect them to
an arbitrary device that can do something useful with them. This
should not be limited to shooting photons into human eyeballs. I want
to be able to run my applications without having to run X. I'd like
the solution to be transparent to the application; that is, if I can
write an application that renders something to a full screen, I want
to redirect that "screen" wherever I want without having to rewrite,
recompile, or relink the application. Actually, I want to do that
redirection at runtime.

I'd like to support all of the above in a way that also helps solve
the more imminent shortcomings of the Linux graphics system (Optimus,
DisplayLink, etc. ... cf. previous e-mails in this thread). I'd like
it to work with multiple render nodes on the same GPU (something like
Dave's multiseat work, in which both the GPU and its display
resources are virtual). The logical consequence of this is that the
render node and the display node should at some point become
logically separate (different driver modules), even if they are
physically on the same GPU. They are really two different subsystems
that just happen to reside on the same circuit board, so it makes
sense to separate them.

I don't think I am saying anything unique, and what I said probably
overlaps in good part with what others also want from the graphics
subsystem. I can see the role of VCRTCM in all of the above, but I am
open-minded. If we end up with a solution that has nothing to do with
VCRTCM, I have no emotional ties to my code (or the code of the
colleagues who worked with me so far).

-- Ilija
* Re: [RFC] Virtual CRTCs (proposal + experimental code)
From: Alan Cox @ 2011-11-24 12:58 UTC
To: Dave Airlie; +Cc: linux-fbdev, dri-devel

> The thing is, this is how optimus works: the nvidia gpus have an
> engine that you can program to move data from the nvidia tiled VRAM
> format to the intel main-memory tiled format [...]

This is even more of a special case than DisplayLink ;-)

> Probably a good idea to do some more research on intel/nvidia GPUs.
> With intel you can't read back from UMA, since it'll be uncached
> memory and hence unusable, so you'll need to use the GPU to detile
> and move the data to some sort of cached linear area you can read
> back from.

It's main memory, so there are various ways to read it or pull it
into cached space.

> So if I merge this VCRTC stuff I give a lot of people an excuse for
> not bothering to fix the harder problems that hotplug and dynamic
> GPUs put in front of you.

I think both cases are slightly missing the mark; both are specialist
corner cases, and once you add things like cameras to the mix, that
will become even more painfully obvious.

The underlying need, I think, is a way to negotiate a shared buffer
format or pipeline between two devices. You also need, in some cases,
to think about shared fencing, and that is the bit that is really
scary.

Figuring out the transform from A to B ("let's both use this buffer
format") or "I can render then convert" is one thing. Dealing with
two GPUs firing into the same buffer while scanning it out, I just
pray doesn't ever need shared fences.

Alan
* Re: [RFC] Virtual CRTCs (proposal + experimental code)
From: Dave Airlie @ 2011-11-24 13:48 UTC
To: Alan Cox; +Cc: linux-fbdev, dri-devel

On Thu, Nov 24, 2011 at 12:58 PM, Alan Cox <alan@lxorguk.ukuu.org.uk> wrote:

> It's main memory, so there are various ways to read it or pull it
> into cached space.

We have no way to detile on the CPU for lots of intel corner cases. I
don't hold out hope for that being a proper solution, though in
theory for hibernate it's a requirement to figure out. You can expose
stuff via the GTT using fences to detile, but you can't then get
cached access to it. So no, not really various ways; none of them are
useful or faster than getting the GPU to blit somewhere linear and
flipping the destination mapping.

> The underlying need, I think, is a way to negotiate a shared buffer
> format or pipeline between two devices. You also need, in some
> cases, to think about shared fencing, and that is the bit that is
> really scary.

But we have a project looking into all that, called dmabuf, and we
also have the PRIME work, which we hope to build on top of dmabuf.
The thing is, there are lots of building blocks we need to put in
place, and we've mostly identified what they are; it's just typing
now.

Dave.
* Re: [RFC] Virtual CRTCs (proposal + experimental code)
From: Ilija Hadzic @ 2011-11-25 4:08 UTC
To: Dave Airlie; +Cc: linux-fbdev, dri-devel

On Thu, 24 Nov 2011, Dave Airlie wrote:

> Okay, so that's pretty much how I expected it to work. I don't
> think Virtual makes sense for a DisplayLink-attached device,
> though; again, if you were using a real driver you would just
> re-use whatever output type it uses, though I'm not sure how well
> that works.

That is a consequence of the fact that virtual CRTCs are created at
startup time, when the attached CTD is not known, while CTDs are
attached at runtime. So when I register the virtual CRTC and the
associated connector, I have to use something for the connector type.
Admitting that my logic is biased by my design, to me the "Virtual"
connector type indicates that, from the GPU's perspective, it's a
connector that does not physically exist and is yet to be attached to
some real display device. At that point the properties of the
attached display become known to the system.

> Do you propagate the full EDID information and all the modes, or
> just the supported modes? We use this in userspace to put monitor
> names in GNOME display settings etc.

Right now we propagate the entire list of modes that the attached CTD
device has queried from the connected display (monitor). Propagating
the full EDID is really easy to add. That's if the CTD drives some
real display. If the CTD is just a "make-believe" display whose
purpose is to be the conduit to some other pixel-processing component
(e.g., V4L2CTD), then at some point in the chain we have to make up
the set of modes that the logical display accepts, and in that case
the EDID does not exist by definition.

> What does the xrandr output look like for a radeon GPU with 4
> vcrtcs? Do you see 4 disconnected connectors? That, again, isn't a
> pretty user experience.

Yes, it shows 4 disconnected monitors. To me that is a logical
consequence of the design, in which virtual CRTCs and their
associated virtual connectors are always there. By now, it's clear to
me that you are not too thrilled about it, but please allow me to
turn the question back to you: in your solution with the udl-v2
driver and a dedicated DDX for it, can you do the big desktop that
spans the GPU's local and "foreign" displays and have acceleration on
both? If not, what would it take to get there, and how complex would
the end result be?

I'll get to the Optimus/PRIME use case later, but if we for the
moment focus on the use case in which a dumb framebuffer device
extends the number of displays of a rendering-capable GPU, I think
VCRTCM offers quite a complete and universal solution, and it is
completely transparent with regard to the application, window
manager, and display server. Radeon + DisplayLink is the specific
example, but in general it's any GPU + any fbdev. It's not just one
use case; it's a whole class of use cases that all follow the same
principle, and for them the virtual CRTC alone suffices.
> My main problem with this is, as I'll explain below, that it only
> covers some of the use cases, and I don't want a 50% solution at
> this point. [...]

I presume that by "50% solution" you are referring to the
Optimus/PRIME use case. That case actually consists of two related
but different problems. The first is "render on node X and display on
node Y"; the second is "dynamically and hitlessly switch rendering
between nodes X and Y". I have never claimed that virtual CRTCs solve
the second problem (I could switch by restarting Xorg, but I know
that this is not the solution you are looking for). I fully
understand why you want both problems solved at the same time.
However, I don't understand why solving one first would inhibit
solving the other.

On the other hand, the Radeon + DisplayLink tandem use case (or, in
general, the GPU + fbdev tandem) consists only of the "render on X,
display on Y" problem. Here, you will probably say that one can
switch between hardware and software rendering there too, so it also
has both problems. That is true, but unlike the Optimus/PRIME use
case, using fbdev as a display extension to a GPU is useful on its
own. My point is that there is value in solving one problem first and
then following with the other.

I think the crux of the matter is that you are not convinced that the
VCRTCM solution for problem #1 will make solving problem #2 easier,
and maybe you are afraid that it will make it harder. If that's a
fair statement, and if building an existence proof for problem #2
that still uses VCRTCM will help bring our positions closer, I am
perfectly willing to do so .... I guess I've just signed up for some
hacking ;-)

Note that for hitless GPU switching, I fully agree that the support
must be in userspace (you have to swap out paths in Mesa and the DDX
before even getting to the kernel), but like I said, that is a
separate problem from redirecting the display to another node.

> r600 has 16 tiling modes (we might only see 2 of these on scanout)

But a VCRTC emulates a CRTC, so the only modes relevant are those
that we see on the scanout. Do we really anticipate using all 16 for
CRTC buffers?

> The thing is, this is how optimus works: the nvidia gpus have an
> engine that you can program to move data from the nvidia tiled VRAM
> format to the intel main-memory tiled format, and make it
> efficient. [...]

If we could have every GPU efficiently push out pixels in some
"common denominator" format, that would be ideal, but at this time
the reality is far from it. Whether the obstacles are technical or
legal doesn't matter. I fully understand your concern about the
number of tiling/detiling combinations getting out of control, but I
am not sure the problem is as bad as you picture it if the CRTC
buffer uses only a subset of the available tiling modes.

> Switchable/Optimus mode has two modes of operation:
>
> a) the nvidia GPU is the rendering engine and the intel GPU is just
> used as a scanout buffer for the LVDS panel. This mode is used when
> an external digital display is plugged in, or in some plugged-in
> configurations.
> b) the intel GPU is the primary rendering engine, and the nvidia
> gpu is used as an offload engine. This mode is used when on battery
> or power saving, with no external displays plugged in. You can
> completely turn the nvidia GPU on/off.
>
> Moving between a and b has to be completely dynamic; userspace apps
> need to deal with the whole world changing beneath them.

So case a) is the "render on X, display on Y" problem, and case b)
(when the NVidia GPU is turned off, offload aside) is just
traditional rendering on one (Intel) GPU. The real sticky point is
the dynamic switching and the offload.

> There is also switchable graphics mode, where a MUX is used to
> switch the outputs between the two GPUs.

One question for my education: I understand that the MUX is
essentially a switch outside the two GPUs that selects whether the
output takes NVidia's "connector" or Intel's "connector", right? When
a MUX is involved, you don't have the "render on X, display on Y"
problem at all, only the "dynamic switching" problem. Is my
understanding correct? There are also MUX-less laptops, where you
only have cases a)/b) as you described above, right?

> So the main problem with taking all this code on board is that it
> sort of solves (a), and (b) needs another bunch of work. [...]

Point taken. I still think that we are actually dealing with two
separate problems, but you have your reasons for wanting them solved
together. I also have a few more use cases that are solved with
VCRTCM alone and don't need dynamic switching, so these, and also the
"3D accel + DisplayLink" one, will suffer by having to wait for the
full solution that covers all use cases, but that's your call and I
don't question it. I hope that you are not categorically dismissing
the option that the solution can be implemented on top of VCRTCM, and
that if I come back with more code that addresses your concern, you
will be receptive to another round of review.

I do appreciate you taking the time to look at this. I know that you
are overbusy with day-to-day patches and merging. I understand that
the burden of building an existence proof falls on me, and I am
perfectly fine with that. BTW, if some poor soul reading this buys
into my arguments and wants to join me in some hacking, I'd
definitely welcome the collaboration ;-).

-- Ilija