* [Qemu-devel] console multi-head some more design input
From: Dave Airlie @ 2013-11-19 6:24 UTC
To: Gerd Hoffmann, qemu-devel@nongnu.org

So I've started banging my head against using QemuConsole as the
container for a single output, and have been left with the usual 10
ways to design things. Since I don't want to spend ages implementing
one way just to be told it's unacceptable, it would be good to get
some more up-front design input.

Current code is in
http://cgit.freedesktop.org/~airlied/qemu/log/?h=virtio-gpu-multiconsole

So I felt I had a choice here for sharing a single output surface
amongst outputs:

a) have multiple QemuConsoles reference multiple DisplaySurfaces
   which reference a single pixman image, or
b) have multiple QemuConsoles reference a single DisplaySurface
   which references a single pixman image.

In either case we need to store the width/height of the console and
the x/y offset into the output surface somewhere, as the output
dimensions will not correspond to the surface dimensions, or the
surface dimensions won't correspond to the pixman image dimensions.

So I picked (b) in my current codebase. Once I untangled a few
lifetime issues (replace_surface frees the DisplaySurface == bad, and
this is bad in general), I stored the x/y/w/h in the QemuConsole
(reusing the text console values for now).

Another issue I hit is that the console layer could do with some sort
of subclassing of objects, or the ability to store ui-layer info in
the console objects. E.g. I've added a ui_priv to the DisplaySurface
instead of having sdl2.c end up with an SDL_Texture array and having
to dig around to find it.

At the moment this is rendering a two-head console for me, with
cursors, with the virtio-vga kernel driver and the Xorg modesetting
driver persuaded to work. But I'd really like more feedback on the
direction this is going, as I get the feeling, Gerd, that you have
some specific ideas on how this should all work.

Dave.
* Re: [Qemu-devel] console multi-head some more design input
From: Gerd Hoffmann @ 2013-11-19 8:11 UTC
To: Dave Airlie; Cc: qemu-devel@nongnu.org

Hi,

> So I felt I had a choice here for sharing a single output surface
> amongst outputs:
>
> a) have multiple QemuConsoles reference multiple DisplaySurfaces
>    which reference a single pixman image,

This one.

> In either case we need to store the width/height of the console and
> the x/y offset into the output surface somewhere, as the output
> dimensions will not correspond to the surface dimensions, or the
> surface dimensions won't correspond to the pixman image dimensions.

Not needed (well, internal to virtio-gpu probably).

Just use qemu_create_displaysurface_from(). That creates a new
DisplaySurface (and pixman image) using existing backing storage.
Typically the backing storage is the guest's video memory. In your
case it probably is the dma-buf the GPU rendered into (at least in
the 3D case; not sure you are there already). You can also use normal
guest RAM as backing storage (for 2D maybe?). The framebuffer must be
linear in guest physical memory (so it is linear in host virtual
memory too) for this to work.

Dimensions not matching is no problem: you'll pass the scanline
length of the backing storage as the linesize parameter and adjust
the data pointer according to the x+y offset.

> So I picked (b) in my current codebase. Once I untangled a few
> lifetime issues (replace_surface frees the DisplaySurface == bad,
> and this is bad in general),

Why is this bad?

Some background on the design (/me feels like it would be a good idea
to brew up a docs/ui.txt from our email discussions ...):

There are basically two ways to manage the surfaces.

Case one: you don't have the data in a format the qemu ui code can
handle, such as vga text mode or those funky planar 16-color modes.
You'll go create a DisplaySurface for it (using
qemu_create_displaysurface, allocating backing storage too). You keep
it until the guest switches the video mode. You render into it and
you notify qemu about updates using dpy_gfx_update().

Case two: you have the data in a format the qemu ui can handle
directly. You'll go create a DisplaySurface using
qemu_create_displaysurface_from, backing the DisplaySurface with your
existing backing storage. If the viewport changes (panning, page
flipping), just create a new DisplaySurface and replace the old one.
As the backing storage doesn't change, this is cheap, so it is
perfectly fine to do it frequently if needed.

> Another issue I hit is that the console layer could do with some
> sort of subclassing of objects, or the ability to store ui-layer
> info in the console objects.

You can do that by embedding the DisplayChangeListener into your ui
state struct. Spice does it this way:

State struct for one display channel:

    struct SimpleSpiceDisplay {
        [ ... ]
        DisplayChangeListener dcl;
        [ ... ]
    };

When registering, it binds the spice display channel / display
listener to a console (i.e. it doesn't follow active_console):

    ssd->dcl.ops = &display_listener_ops;
    ssd->dcl.con = con;
    register_displaychangelistener(&ssd->dcl);

Then in the callbacks, go find the spice display channel state:

    SimpleSpiceDisplay *ssd = container_of(dcl, SimpleSpiceDisplay, dcl);

As there is a fixed relationship between QemuConsole and
DisplayChangeListener, you can store all the data you need to manage
the QemuConsole there.

cheers,
  Gerd
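A minimal sketch of the "case two" flow above, for one head scanning
out of a larger linear framebuffer. This is illustrative, not code
from the thread: it assumes 32 bpp, the
qemu_create_displaysurface_from() signature of this era (bpp plus a
byteswap flag), and made-up names (head_set_scanout, fb_base,
fb_stride):

    #include "ui/console.h"

    /* Wrap the head's scanout rectangle in a DisplaySurface: same
     * backing storage, no copy, so redoing this on panning or page
     * flipping is cheap. fb_stride is the scanline length in bytes. */
    static void head_set_scanout(QemuConsole *con,
                                 uint8_t *fb_base, int fb_stride,
                                 int x, int y, int w, int h)
    {
        uint8_t *data = fb_base + y * fb_stride + x * 4; /* 32 bpp */
        DisplaySurface *surface =
            qemu_create_displaysurface_from(w, h, 32, fb_stride,
                                            data, false);
        dpy_gfx_replace_surface(con, surface);
    }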
* Re: [Qemu-devel] console multi-head some more design input
From: Dave Airlie @ 2013-11-20 2:59 UTC
To: Gerd Hoffmann; Cc: qemu-devel@nongnu.org

On Tue, Nov 19, 2013 at 6:11 PM, Gerd Hoffmann <kraxel@redhat.com> wrote:
>> In either case we need to store the width/height of the console and
>> the x/y offset into the output surface somewhere, as the output
>> dimensions will not correspond to the surface dimensions, or the
>> surface dimensions won't correspond to the pixman image dimensions.
>
> Not needed (well, internal to virtio-gpu probably).

I think you are only considering output here; for input we definitely
need some idea of the screen layout, and this needs to be stored
somewhere.

E.g. when SDL2 gets an input event in the right-hand window, it needs
to translate that into an input event on the whole output surface.

Have a look at the virtio-gpu branch in my repo (don't look at the
history, it's ugly, just the final state); you'll see code in sdl2.c
to do input translation from window coordinates to the overall screen
space. So we need at least the x,y offset in the ui code, and I think
we need to communicate that via the console.

Otherwise I think I've done things the way you've said, and it seems
to be working for me on a dual-head setup.

(Oh, and yes, this is all sw rendering only. To do 3D rendering we
need to put in a thread to do the GL stuff, but it interacts with the
console layer quite a bit, since SDL and the virtio-gpu need to be in
the same thread so that things like resize can work.)

Dave.
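The window-to-screen-space translation Dave describes might look
roughly like the following. This is an illustrative sketch, not the
code from his branch: the sdl2_head struct, its fields, and the use
of the legacy kbd_mouse_event() call (which takes absolute positions
scaled to 0..0x7fff) are all assumptions:

    #include "ui/console.h"

    struct sdl2_head {
        int x, y;     /* this head's offset in the overall screen */
        int buttons;  /* current button state */
    };

    /* Translate a window-local absolute mouse position into the
     * combined guest screen space, then into qemu's 0..0x7fff range
     * over the whole (screen_w x screen_h) layout. */
    static void head_mouse_abs(struct sdl2_head *head,
                               int win_x, int win_y,
                               int screen_w, int screen_h)
    {
        int abs_x = head->x + win_x;   /* shift into combined layout */
        int abs_y = head->y + win_y;

        kbd_mouse_event(abs_x * 0x7fff / (screen_w - 1),
                        abs_y * 0x7fff / (screen_h - 1),
                        0, head->buttons);
    }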
* Re: [Qemu-devel] console multi-head some more design input
From: John Baboval @ 2013-11-20 4:06 UTC
To: Dave Airlie; Cc: Gerd Hoffmann, qemu-devel@nongnu.org

On Nov 19, 2013, at 9:59 PM, Dave Airlie <airlied@gmail.com> wrote:
> Have a look at the virtio-gpu branch in my repo (don't look at the
> history, it's ugly, just the final state); you'll see code in sdl2.c
> to do input translation from window coordinates to the overall screen
> space. So we need at least the x,y offset in the ui code, and I think
> we need to communicate that via the console.

One of the patches I will be submitting as part of this includes
bi-directional calls to set the orientation: a HwOp and a
DisplayChangeListenerOp. This allows you to move the display
orientation around in the guest (if your driver and backend support
it), or to move the orientation around by dragging windows. Either
way you have the data you need to get absolute coordinates right,
even if you are scaling the guest display in your windows. Whether
the orientation offsets end up stored in the QemuConsole or not
becomes an implementation detail if you get notifications.

> (Oh, and yes, this is all sw rendering only. To do 3D rendering we
> need to put in a thread to do the GL stuff, but it interacts with the
> console layer quite a bit, since SDL and the virtio-gpu need to be in
> the same thread so that things like resize can work.)

I also have a patch to add dpy_lock and dpy_unlock hooks to the
DisplayChangeListener so that the UI can be in another thread. In
fact, on XenClient we run with the bulk of the UI in another process
so that multiple VMs can share the same windows and GL textures;
otherwise dom0 doesn't have enough memory for lots of guests with
multiple big monitors connected.

I wasn't planning on submitting the lock patch since I figured nobody
would want our UI that uses it, but if there is interest I can.
Eventually I would like to write a GEM/KMS UI for full zero-copy
display, and that would need locking hooks anyway.

We used to run with a GLX UI that was a thread per display inside
qemu. If you'd like I can send you that patch, but I don't have the
bandwidth to modernize it; I believe it is qemu 1.0 vintage. (It's on
the Citrix website in some obscure location already.)
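John's lock hooks are not upstream; purely as a hypothetical sketch
of the shape such an interface could take (names inferred from his
description, not from any real patch):

    /* Hypothetical only: dpy_lock/dpy_unlock hooks so a UI thread
     * can hold the shared DisplaySurface stable while it renders. */
    typedef struct DisplayChangeListener DisplayChangeListener;

    typedef struct DisplayLockOps {
        void (*dpy_lock)(DisplayChangeListener *dcl);
        void (*dpy_unlock)(DisplayChangeListener *dcl);
    } DisplayLockOps;

    /* Device-side usage would bracket any surface access:
     *
     *     ops->dpy_lock(dcl);
     *     ... write into / replace the DisplaySurface ...
     *     ops->dpy_unlock(dcl);
     */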
* Re: [Qemu-devel] console multi-head some more design input
From: Dave Airlie @ 2013-11-20 5:17 UTC
To: John Baboval; Cc: Gerd Hoffmann, qemu-devel@nongnu.org

> One of the patches I will be submitting as part of this includes
> bi-directional calls to set the orientation: a HwOp and a
> DisplayChangeListenerOp. This allows you to move the display
> orientation around in the guest (if your driver and backend support
> it), or to move the orientation around by dragging windows. Either
> way you have the data you need to get absolute coordinates right,
> even if you are scaling the guest display in your windows. Whether
> the orientation offsets end up stored in the QemuConsole or not
> becomes an implementation detail if you get notifications.

Okay, I just hacked up something similar with the bidirectional ops,
and ran into the fact that DisplayChangeListeners are stored per
DisplayState. So when my GPU driver tries to call back for console
number 1, the dcls for both consoles get called, which doesn't seem
so optimal.

Dave.
* Re: [Qemu-devel] console multi-head some more design input
From: Dave Airlie @ 2013-11-20 5:18 UTC
To: John Baboval; Cc: Gerd Hoffmann, qemu-devel@nongnu.org

On Wed, Nov 20, 2013 at 3:17 PM, Dave Airlie <airlied@gmail.com> wrote:
> Okay, I just hacked up something similar with the bidirectional ops,
> and ran into the fact that DisplayChangeListeners are stored per
> DisplayState. So when my GPU driver tries to call back for console
> number 1, the dcls for both consoles get called, which doesn't seem
> so optimal.

Actually, ignore that, I didn't cut-n-paste properly :-)

Dave.
* Re: [Qemu-devel] console multi-head some more design input
From: Gerd Hoffmann @ 2013-11-20 8:12 UTC
To: Dave Airlie; Cc: qemu-devel@nongnu.org

Hi,

> I think you are only considering output here; for input we definitely
> need some idea of the screen layout, and this needs to be stored
> somewhere.

Oh yea, input. That needs quite some work for multihead / multiseat.

I think we should *not* try to hack that into the ui. We should
extend the input layer instead.

The functions used to notify qemu about mouse + keyboard events
should get an additional parameter to indicate the source of the
event. I think we can use a QemuConsole here.

Then teach the input layer about seats, where a seat is a group of
input devices (kbd, mouse, tablet) and a group of QemuConsoles, with
x+y for each QemuConsole. The input layer will do the event routing:
translate coordinates, send to the correct device.

I think initially we can just handle all existing QemuConsoles and
input devices implicitly as "seat 0". Stick x+y into QemuConsole for
now, and have the input layer get it from there. At some point in the
future we might want to move this to a QemuSeat when we actually go
multiseat.

Bottom line: please do the coordinate math in input.c, not sdl2.c, so
we don't run into roadblocks in the future.

cheers,
  Gerd
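For illustration, the grouping Gerd sketches might look something
like this; QemuSeat and its fields are hypothetical (he names the
type, but nothing like it exists in the tree):

    #include "ui/console.h"

    #define MAX_SEAT_HEADS 4

    /* Hypothetical QemuSeat: a group of displays plus the input
     * devices that belong to them. The input layer would look up the
     * seat for an incoming (QemuConsole, x, y) event, apply that
     * console's x+y offset in the seat's layout, and route the
     * result to the seat's pointer device. */
    typedef struct QemuSeat {
        QemuConsole *consoles[MAX_SEAT_HEADS]; /* displays, each with x+y */
        int num_consoles;
        QEMUPutMouseEntry *pointer;            /* tablet/mouse handler */
        /* keyboard handler, etc. */
    } QemuSeat;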
* Re: [Qemu-devel] console multi-head some more design input
From: John Baboval @ 2013-11-20 14:32 UTC
To: qemu-devel

On 11/20/2013 03:12 AM, Gerd Hoffmann wrote:
> Oh yea, input. That needs quite some work for multihead / multiseat.
>
> I think we should *not* try to hack that into the ui. We should
> extend the input layer instead.

This would be a contrast to how a real system works. IMO, the UI is
the appropriate place for this sort of thing. A basic UI is going to
be sending relative events anyway.

I think a "seat" should be a UI construct as well.
* Re: [Qemu-devel] console multi-head some more design input
From: Gerd Hoffmann @ 2013-11-20 15:14 UTC
To: John Baboval; Cc: qemu-devel

On Mi, 2013-11-20 at 09:32 -0500, John Baboval wrote:
>> I think we should *not* try to hack that into the ui. We should
>> extend the input layer instead.
>
> This would be a contrast to how a real system works.

No. We have to solve a problem here which doesn't exist on real
hardware in the first place.

> IMO, the UI is the appropriate place for this sort of thing. A basic
> UI is going to be sending relative events anyway.
>
> I think a "seat" should be a UI construct as well.

A seat on real hardware is a group of input (kbd, mouse, tablet, ...)
and output (display, speakers, ...) devices.

In qemu the displays are represented by QemuConsoles. So to model
real hardware we should put the QemuConsoles and the input devices
for a seat into a group.

The ui displays some QemuConsole. If we tag input events with the
QemuConsole, the input layer can figure out the correct input device
which should receive the event, according to the seat grouping.

With absolute pointer events the whole thing becomes a bit more
tricky, as we have to map input from multiple displays (QemuConsoles)
to a single absolute pointing device (usb tablet). This is what Dave
wants the screen layout for. I still think the input layer is the
place to do this transformation.

While thinking about this: a completely different approach to tackle
this would be to implement touchscreen emulation. So we don't have a
single usb-tablet, but multiple (one per display) touch input
devices. Then we can simply route absolute input events from this
display as-is to that touch device and be done with it. No need to
deal with coordinate transformations in qemu; the guest will deal
with it.

cheers,
  Gerd
* Re: [Qemu-devel] console multi-head some more design input
From: John Baboval @ 2013-11-20 15:49 UTC
To: Gerd Hoffmann; Cc: qemu-devel

On 11/20/2013 10:14 AM, Gerd Hoffmann wrote:
> With absolute pointer events the whole thing becomes a bit more
> tricky, as we have to map input from multiple displays (QemuConsoles)
> to a single absolute pointing device (usb tablet). This is what Dave
> wants the screen layout for. I still think the input layer is the
> place to do this transformation.

We solve this problem in our UI now. It's not enough to know the
offsets: you also need to know all the resolutions - the display
window's, the guest's, and the device coordinate system of the
virtual pointing device (we use a PV event ring instead of a USB
tablet).

If your UI can scale the guest output, that means you need to also
store the UI's window geometry in the QemuConsole to get the math
right.

Incidentally, XenClient will eventually be moving back to relative
coordinates from mice. We will handle seamless transitions by having
the guest feed the pointer coordinates back down through an emulated
hardware cursor channel. The reason for this is that operating
systems like Windows 8 implement various types of "pointer friction"
that don't work when you send absolute coordinates. We are still
working out the latency kinks.
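As a rough illustration of the math John describes, with three
coordinate systems in play (ui window, guest screen, pointing
device), a per-head x translation might look like this; all names and
the device-range parameter are illustrative assumptions:

    /* Map a window-local x to virtual-pointer device units for one
     * head. Three resolutions are involved: the ui window's (win_w),
     * the head's guest resolution (head_w) plus its offset in the
     * combined desktop (head_x, guest_w total), and the device range
     * (dev_max, e.g. 0x7fff). */
    static int win_x_to_device(int win_x, int win_w,
                               int head_x, int head_w,
                               int guest_w, int dev_max)
    {
        int guest_x = head_x + win_x * head_w / win_w; /* undo ui scaling */
        return guest_x * dev_max / (guest_w - 1);      /* to device units */
    }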
* Re: [Qemu-devel] console multi-head some more design input
From: Gerd Hoffmann @ 2013-11-22 8:36 UTC
To: John Baboval; Cc: qemu-devel

Hi,

> We solve this problem in our UI now. It's not enough to know the
> offsets: you also need to know all the resolutions - the display
> window's, the guest's, and the device coordinate system of the
> virtual pointing device (we use a PV event ring instead of a USB
> tablet).
>
> If your UI can scale the guest output, that means you need to also
> store the UI's window geometry in the QemuConsole to get the math
> right.

The qemu input layer uses a 0 -> 0x7fff range for absolute
coordinates. So your ui code has to transform the pointer position to
that, considering window scaling if needed, then pass it to the input
code. The input code passes it on to the input device, which again
will scale it to the dimensions of the usb-tablet / pv device /
whatever.

For multihead support we need the additional step of transforming
(display, phys-x, phys-y) to (virtual-x, virtual-y) using the screen
layout. IMO the input layer should do that. Quite some changes are
needed to handle that, but the input layer needs some love anyway.

cheers,
  Gerd
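That extra multihead step could look roughly like the sketch below.
HeadLayout stands in for wherever the layout data ends up (the
QemuConsole, per the earlier suggestion); everything here is
illustrative, not an existing qemu API:

    #define ABS_MAX 0x7fff   /* qemu's absolute coordinate range */

    /* Hypothetical per-head layout, stored in the QemuConsole for now. */
    typedef struct HeadLayout {
        int x, y;            /* offset in the combined virtual screen */
        int width, height;   /* this head's resolution */
    } HeadLayout;

    /* Transform (display, phys-x, phys-y) -> (virtual-x, virtual-y):
     * input arrives scaled 0..ABS_MAX relative to one head and
     * leaves scaled 0..ABS_MAX relative to the whole virtual screen
     * (virt_w x virt_h). */
    static void input_xlate_abs(const HeadLayout *head,
                                int virt_w, int virt_h,
                                int in_x, int in_y,
                                int *out_x, int *out_y)
    {
        int px = in_x * (head->width  - 1) / ABS_MAX; /* to head pixels */
        int py = in_y * (head->height - 1) / ABS_MAX;

        *out_x = (head->x + px) * ABS_MAX / (virt_w - 1);
        *out_y = (head->y + py) * ABS_MAX / (virt_h - 1);
    }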
* Re: [Qemu-devel] console multi-head some more design input
From: Dave Airlie @ 2013-11-21 0:45 UTC
To: Gerd Hoffmann; Cc: John Baboval, qemu-devel@nongnu.org

On Thu, Nov 21, 2013 at 1:14 AM, Gerd Hoffmann <kraxel@redhat.com> wrote:
> While thinking about this: a completely different approach to tackle
> this would be to implement touchscreen emulation. So we don't have a
> single usb-tablet, but multiple (one per display) touch input
> devices. Then we can simply route absolute input events from this
> display as-is to that touch device and be done with it. No need to
> deal with coordinate transformations in qemu; the guest will deal
> with it.

This is a nice dream, except you'll find the guest won't deal with it
very well, and you'll have all kinds of guest scenarios to handle to
link up touchscreen A with monitor A, etc.

Dave.
* Re: [Qemu-devel] console multi-head some more design input
From: Gerd Hoffmann @ 2013-11-22 8:41 UTC
To: Dave Airlie; Cc: John Baboval, qemu-devel@nongnu.org

Hi,

>> While thinking about this: a completely different approach to tackle
>> this would be to implement touchscreen emulation. [...]
>
> This is a nice dream, except you'll find the guest won't deal with it
> very well, and you'll have all kinds of guest scenarios to handle to
> link up touchscreen A with monitor A, etc.

Ok, scratch the idea then.

I don't have personal experience with this; no touch-capable displays
here.

cheers,
  Gerd
* Re: [Qemu-devel] console multi-head some more design input
From: Dave Airlie @ 2013-11-27 4:29 UTC
To: Gerd Hoffmann; Cc: John Baboval, qemu-devel@nongnu.org

On Fri, Nov 22, 2013 at 6:41 PM, Gerd Hoffmann <kraxel@redhat.com> wrote:
>> This is a nice dream, except you'll find the guest won't deal with it
>> very well, and you'll have all kinds of guest scenarios to handle to
>> link up touchscreen A with monitor A, etc.
>
> Ok, scratch the idea then.
>
> I don't have personal experience with this; no touch-capable displays
> here.

Hmm, I think we get to unscratch this idea.

After looking into this a bit more, I think we probably do need
something outside the gpu to handle this.

The problem is that there are two scenarios for GPU multi-head:

a) one resource, two outputs: the second output has an offset for its
   scanout into the resource;
b) two resources, two outputs: both outputs scan out from 0,0 in
   their respective resources.

So the GPU doesn't have this information in all cases on what the
input device configuration should be. Nor do we have any way in the
guests to specify this relationship at the driver level.

So I think we probably do need to treat multi-head windows as
separate input devices, and/or have an agent in the guest do the
right thing by configuring multiple input devices to map to multiple
outputs.

I suppose spice must do something like this already; maybe they can
tell me more.

Dave.
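Written out as data, the two scanout layouts Dave describes look like
this for two 1024x768 heads; the struct and numbers are made up
purely for illustration, not a virtio-gpu structure:

    struct scanout_cfg { int resource_id; int x, y, w, h; };

    /* (a) one resource, second output offset into it */
    static const struct scanout_cfg layout_a[2] = {
        { .resource_id = 1, .x = 0,    .y = 0, .w = 1024, .h = 768 },
        { .resource_id = 1, .x = 1024, .y = 0, .w = 1024, .h = 768 },
    };

    /* (b) two resources, both outputs scanning out from 0,0 */
    static const struct scanout_cfg layout_b[2] = {
        { .resource_id = 1, .x = 0, .y = 0, .w = 1024, .h = 768 },
        { .resource_id = 2, .x = 0, .y = 0, .w = 1024, .h = 768 },
    };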
* Re: [Qemu-devel] console multi-head some more design input
From: Gerd Hoffmann @ 2013-11-27 7:11 UTC
To: Dave Airlie; Cc: John Baboval, qemu-devel@nongnu.org

Hi,

> So I think we probably do need to treat multi-head windows as
> separate input devices, and/or have an agent in the guest do the
> right thing by configuring multiple input devices to map to multiple
> outputs.
>
> I suppose spice must do something like this already; maybe they can
> tell me more.

Spice does it with a guest agent, which basically passes in the
display channel id as an extra parameter (for the
multihead-with-multiple-qxl-cards case). I'm not fully sure how this
works with a single qxl card and a scanout-rectangle-per-head; I
suspect the pointer location is then just relative to the common
surface backing all heads.

cheers,
  Gerd
* Re: [Qemu-devel] console multi-head some more design input
From: John Baboval @ 2013-11-27 14:23 UTC
To: qemu-devel

On 11/26/2013 11:29 PM, Dave Airlie wrote:
> The problem is that there are two scenarios for GPU multi-head:
>
> a) one resource, two outputs: the second output has an offset for its
>    scanout into the resource;
> b) two resources, two outputs: both outputs scan out from 0,0 in
>    their respective resources.
>
> So the GPU doesn't have this information in all cases on what the
> input device configuration should be. Nor do we have any way in the
> guests to specify this relationship at the driver level.

There is a third, likely option:

c) one resource, two outputs: the second output's scanout offset
   isn't the same as the logical offset from the perspective of the
   input device.

There are two possible solutions to this, both of which I have added
interfaces for. (I really should hurry up and send out patches...)
The individual logical display offsets can be provided by the UI and
pushed into the guest, or the guest driver can push the offsets down.
So if the UI provides absolute coordinates normalized to the
individual displays, and the offsets are stored in the QemuConsole,
then the input layer can do the math as Gerd suggested. It just
requires one additional coordinate-set transformation.

In XenClient we use both interfaces, so the user can set the display
offsets from wherever they care to. The in-guest side is relatively
easy to implement in Linux guests that run X. It's more complicated
in Windows, since the display driver isn't actually told about the
offsets, so there needs to be an additional user-level service
running to inform it of changes.

> So I think we probably do need to treat multi-head windows as
> separate input devices, and/or have an agent in the guest do the
> right thing by configuring multiple input devices to map to multiple
> outputs.

This is essentially correct, except that only the UI needs to treat
it as separate input devices. The rest of the stack should be OK as
one input device.