* [Qemu-devel] Multi-head support RFC
From: John Baboval @ 2013-11-05 20:42 UTC
To: qemu-devel@nongnu.org
Hello,
I am currently the device model maintainer for XenClient Enterprise. As
you may or may not know, we maintain a patch queue on top of QEMU
(currently 1.3) that adds functionality needed to support XCE features.
One of the major things we add is robust multi-head support. This
includes DDC emulation for EDID data, variable VRAM size, monitor
hot-plug support, simulated VSYNC, and guest controlled display
orientation. This includes both the necessary interfaces between the hw
and ui, and a new emulated adapter (with drivers) that exercises the
interfaces.
Between QEMU 1.3 and QEMU 1.6, a lot of changes were made to the
QemuConsole and DisplayState structures that will require significant
changes to our patches. I'd like to adapt to these changes in such a way
that they might make some of our work acceptable upstream. As such, I'd
like to describe how the patches currently work, describe how I'd
propose they work in the new version, and solicit feedback as to if the
plans would be acceptable.
I've stuck our current patch queue on github for convenience. If I refer
to a patch in the below description, you can check it out there:
https://github.com/jbaboval/xce-qemu.pq
This is what we currently do:
In QEMU 1.3, there was a DisplayState list. We used one DisplayState per
monitor. The DisplayChangeListener has a new hw_add_display vector, so
that when the UI requests a second monitor the new display gets attached
to the emulated hardware. (patch: add_display_ptr)
Each DisplayState was given an ID, so that emulated hardware could keep
track of which EDID and other metadata went with which DisplayState.
(patch: display-id-allocation)
A new function, gui_start_updates, starts refresh of the new display.
This seems to be equivalent to the new gui_setup_refresh() that exists
in newer versions. (patch: gui_start_updates)
A new vector, hw_store_edid, was added to DisplayState so that UIs could
tell emulated hardware what the EDID for a given display should be.
(patch: edid-vector)
A new vector, hw_notify, was added to DisplayState so the UIs could tell
the emulated hardware about interesting events, such as monitor hotplugs
(new windows), orientation changes, availability of hardware cursor
functionality, etc... (patch: display-hw-notify)
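Roughly, the shape of these additions (a sketch reconstructed from the
patch descriptions above - the signatures and enum values here are
illustrative, not copied from the queue):

    typedef enum {
        DISPLAY_NOTIFY_HOTPLUG,        /* hypothetical event codes */
        DISPLAY_NOTIFY_UNPLUG,
        DISPLAY_NOTIFY_ORIENTATION,
    } DisplayNotifyEvent;

    struct DisplayState {
        /* ... existing QEMU 1.3 fields (surface, listeners, next, ...) */
        int id;                        /* patch: display-id-allocation */

        /* UI -> hw hooks added by the patch queue: */
        struct DisplayState *(*hw_add_display)(void *hw_opaque);
                                       /* patch: add_display_ptr (lives on
                                        * the DisplayChangeListener in the
                                        * queue, shown here for brevity) */
        void (*hw_store_edid)(void *hw_opaque, int id,
                              const uint8_t *edid, size_t len);
                                       /* patch: edid-vector */
        void (*hw_notify)(void *hw_opaque, DisplayNotifyEvent event);
                                       /* patch: display-hw-notify */
        void *hw_opaque;               /* device state for the hooks */
    };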
VRAM size was made configurable, so that more could be allocated to
handle multiple high-resolution displays. (patch: variable-vram-size)
The flow is:
- The user requests a hotplug (user added a "monitor" with a menu, UI
reacted to a udev hotplug or XRandR event, or whatever)
- The UI asks the hw to allocate a new display. If the hw doesn't
support it (vector is NULL), the system continues with one display.
- The hw returns a new DisplayState to the UI with all the hw_ vectors
filled in
- The UI registers its DisplayChangeListener with the new DisplayState
- The UI provides an EDID for the new display with hw_store_edid()
- The UI notifies the guest that an event has occurred with hw_notify()
- The UI starts a gui timer for the new DisplayState with
gui_start_updates()
- The guest driver does its normal thing, sets the mode on the new
display, and starts rendering
- The timer handler calls gfx_hw_update for the DisplayState
- The hw calls dpy_update for changes on the new DisplayState
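In code, the UI side of that flow is roughly this condensed sketch
(my_dcl is a placeholder for the UI's DisplayChangeListener, and error
handling is omitted):

    static void ui_monitor_hotplugged(DisplayState *primary,
                                      const uint8_t *edid, size_t edid_len)
    {
        DisplayState *ds;

        if (!primary->hw_add_display) {
            return;                /* hw can't do multi-head: stay single */
        }
        ds = primary->hw_add_display(primary->hw_opaque);
        register_displaychangelistener(ds, &my_dcl); /* 1.3-era signature */
        ds->hw_store_edid(ds->hw_opaque, ds->id, edid, edid_len);
        ds->hw_notify(ds->hw_opaque, DISPLAY_NOTIFY_HOTPLUG);
        gui_start_updates(ds);     /* start the refresh timer */
    }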
.....
In the latest code, DisplayState isn't a list anymore, and many
operations that apply to a DisplayState apply to a QemuConsole instead.
Also, the DisplaySurface is a property of the QemuConsole now instead of
part of the DisplayState. I'm going to have to make some fairly
fundamental changes to our current architecture to adapt to the latest
upstream changes, and I would appreciate feedback on the options.
I don't think it makes sense to have a QemuConsole per display.
I can use a model similar to what qxl does, and put the framebuffer for
each display inside a single DisplaySurface allocated to be a bounding
rectangle around all framebuffers. This has the advantage of looking
like something that already exists in the tree, but has several
disadvantages. It was convenient - but not necessary - to have a
DisplaySurface per display, since that kept track of the display mode,
allowed different depths per monitor (not very useful, but possible),
etc. If the displays are different resolutions, it leaves a dead space
of wasted memory inside the region. If there's a single DisplaySurface,
some other structure/list/array will need to be added to track the
individual resolutions, offsets, and strides of the sub-regions in order
to be able to display them in separate windows or - more likely - as
full screen on real displays. It also makes hot-plugging more complex,
since adding/removing a display will alter the configuration of the
existing displays.
I could turn the DisplayState and DisplaySurface in QemuConsole into
lists, and run much like my existing patches.
I could move the DisplaySurface back into DisplayState and have one
list... (major downside: I'd have to fix every hw and ui, also it must
have been moved out of there for a reason)
Moving my new vectors into the nicely cleaned up
DisplayChangeListenerOps and GraphicHwOps structures seems like a
no-brainer.
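For example, the 1.6/1.7 GraphicHwOps could grow something like this
(the additions are only a sketch of intent; signatures illustrative):

    typedef struct GraphicHwOps {
        void (*invalidate)(void *opaque);
        void (*gfx_update)(void *opaque);
        void (*text_update)(void *opaque, console_ch_t *chardata);
        /* proposed additions: */
        void (*store_edid)(void *opaque, const uint8_t *edid, size_t len);
        void (*notify)(void *opaque, int event);
    } GraphicHwOps;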
......
I'm flexible on how to implement this stuff if it means it gets done in
a way that could be pushed. Ideally I'd like to be able to re-use as
much of the code I already have as possible. But really I can re-do all
of it as long as the end result is acceptable upstream and meets my list
of feature requirements: hot-plugging displays, the ability to pass
EDIDs, and variable VRAM size.
Are these features something that people would want to see in the tree?
If so, I'd appreciate input so I can be working towards something
acceptable to as many people as possible.
* Re: [Qemu-devel] Multi-head support RFC
From: Dave Airlie @ 2013-11-06 1:46 UTC
To: John Baboval; +Cc: qemu-devel@nongnu.org
On Wed, Nov 6, 2013 at 6:42 AM, John Baboval <john.baboval@citrix.com> wrote:
> Hello,
>
> I am currently the device model maintainer for XenClient Enterprise. As you
> may or may not know, we maintain a patch queue on top of QEMU (currently
> 1.3) that adds functionality needed to support XCE features.
>
> One of the major things we add is robust multi-head support. This includes
> DDC emulation for EDID data, variable VRAM size, monitor hot-plug support,
> simulated VSYNC, and guest controlled display orientation. This includes
> both the necessary interfaces between the hw and ui, and a new emulated
> adapter (with drivers) that exercises the interfaces.
I don't think we'd want to lump all these things together though.
I've started looking at doing multi-head support for a new virtio-gpu,
and I've gotten basic multi-head working with SDL2.0 with cursor
support.
It currently just adds multiple DisplaySurfaces to the QemuConsole.
Gerd said he thought I should be using multiple QemuConsoles, but I
really didn't think this was a good idea, and I still think multiple
surfaces make more sense wrt how best to interact with this.
Why do you need to emulate DDC btw? Is this just to fool the guest
vesa code etc?
Dave.
* Re: [Qemu-devel] Multi-head support RFC
From: Gerd Hoffmann @ 2013-11-06 10:55 UTC
To: John Baboval; +Cc: qemu-devel@nongnu.org
Hi,
> In QEMU 1.3, there was a DisplayState list. We used one DisplayState per
> monitor. The DisplayChangeListener has a new hw_add_display vector, so
> that when the UI requests a second monitor the new display gets attached
> to the emulated hardware. (patch: add_display_ptr)
I don't think we actually want to add/remove stuff here. On real hardware
your gfx card has a fixed set of display connectors, and I think we are
best off mimicking that.
Support for propagating connect/disconnect events and enabling/disabling
displays needs to be added properly. Currently qxl/spice can handle
this, but it uses a private side channel.
> A new vector, hw_store_edid, was added to DisplayState so that UIs could
> tell emulated hardware what the EDID for a given display should be.
> (patch: edid-vector)
Note that multiple UIs can be active at the same time.
What happens with the EDIDs then?
> VRAM size was made configurable, so that more could be allocated to
> handle multiple high-resolution displays. (patch: variable-vram-size)
upstream stdvga has this meanwhile.
> I don't think it makes sense to have a QemuConsole per display.
Why not? That is exactly my plan. Just have the virtual graphic card
call graphic_console_init() multiple times, once for each display
connector it has.
Do you see fundamental issues with that approach?
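Something like this, roughly (the device state, ops table, and connector
count are made-up placeholders):

    typedef struct MyGfxState {            /* hypothetical device state */
        PCIDevice pci;
        QemuConsole *con[4];               /* one per display connector */
    } MyGfxState;

    static const GraphicHwOps my_gfx_ops = {
        /* .invalidate, .gfx_update, ... */
    };

    static void my_gfx_init_consoles(MyGfxState *s)
    {
        int i;

        for (i = 0; i < ARRAY_SIZE(s->con); i++) {
            /* one QemuConsole per connector, all sharing the same hw ops */
            s->con[i] = graphic_console_init(DEVICE(s), &my_gfx_ops, s);
        }
    }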
> I can use a model similar to what qxl does, and put the framebuffer for
> each display inside a single DisplaySurface allocated to be a bounding
> rectangle around all framebuffers. This has the advantage of looking
> like something that already exists in the tree, but has several
> disadvantages.
Indeed. I don't recommend that. It is that way for several historical
reasons (one being that the code predates the qemu console cleanup in
the 1.5 devel cycle).
> Are these features something that people would want to see in the tree?
Sure. One of the reasons for the console cleanup was to allow proper
multihead support.
cheers,
Gerd
* Re: [Qemu-devel] Multi-head support RFC
From: Gerd Hoffmann @ 2013-11-06 10:57 UTC
To: Dave Airlie; +Cc: John Baboval, qemu-devel@nongnu.org
Hi,
> It currently just adds multiple DisplaySurfaces to the QemuConsole.
> Gerd said he thought I should be using multiple QemuConsoles, but I
> really didn't think this was a good idea,
Why?
cheers,
Gerd
* Re: [Qemu-devel] Multi-head support RFC
From: John Baboval @ 2013-11-06 15:39 UTC
To: qemu-devel
On 11/06/2013 05:55 AM, Gerd Hoffmann wrote:
> Hi,
>
>> In QEMU 1.3, there was a DisplayState list. We used one DisplayState per
>> monitor. The DisplayChangeListener has a new hw_add_display vector, so
>> that when the UI requests a second monitor the new display gets attached
>> to the emulated hardware. (patch: add_display_ptr)
> I don't think we actually want to add/remove stuff here. On real hardware
> your gfx card has a fixed set of display connectors, and I think we are
> best off mimicking that.
I think that's a property of the emulated hardware. Monitors get
connected and disconnected, and that's what the UI cares about.
> Support for propagating connect/disconnect events and enabling/disabling
> displays needs to be added properly. Currently qxl/spice can handle
> this, but it uses a private side channel.
>
>> A new vector, hw_store_edid, was added to DisplayState so that UIs could
>> tell emulated hardware what the EDID for a given display should be.
>> (patch: edid-vector)
> Note that multiple UIs can be active at the same time.
> What happens with the EDIDs then?
This is why it seemed to me that we shouldn't have multiple
QemuConsoles. There should be one per UI type. In my current patches,
each DisplayState has a new DisplayType enum, so I can keep track of
which active UI the DisplayState goes with.
As far as the EDID is concerned, there can only be one EDID for a
display+hw pair, or the guest won't know what to do. In my use-case, I
simply pass real EDIDs through, and create a full-screen window for each
real monitor. If you wanted to have two UIs displaying the same
DisplaySurface, the EDID would have to come from one of them, and the
other would have to clip, or scale.
>> VRAM size was made configurable, so that more could be allocated to
>> handle multiple high-resolution displays. (patch: variable-vram-size)
> upstream stdvga has this meanwhile.
>
>> I don't think it makes sense to have a QemuConsole per display.
> Why not? That is exactly my plan. Just have the virtual graphic card
> call graphic_console_init() multiple times, once for each display
> connector it has.
>
> Do you see fundamental issues with that approach?
Currently only one QemuConsole is active at a time, so that would have
to change.... It could certainly be done this way, but it seemed like
more churn. Perhaps we should step back and define what we want each of
these objects to be.
There are things in QemuConsole that we don't really need another copy
of, and stuff we don't want to duplicate just to have multiple displays
- like the CharDriverState.
>> I can use a model similar to what qxl does, and put the framebuffer for
>> each display inside a single DisplaySurface allocated to be a bounding
>> rectangle around all framebuffers. This has the advantage of looking
>> like something that already exists in the tree, but has several
>> disadvantages.
> Indeed. I don't recommend that. It is that way for several historical
> reasons (one being that the code predates the qemu console cleanup in
> the 1.5 devel cycle).
>
>> Are these features something that people would want to see in the tree?
> Sure. One of the reasons for the console cleanup was to allow proper
> multihead support.
>
> cheers,
> Gerd
* Re: [Qemu-devel] Multi-head support RFC
From: John Baboval @ 2013-11-06 15:48 UTC
To: qemu-devel
On 11/05/2013 08:46 PM, Dave Airlie wrote:
> On Wed, Nov 6, 2013 at 6:42 AM, John Baboval <john.baboval@citrix.com> wrote:
>> Hello,
>>
>> I am currently the device model maintainer for XenClient Enterprise. As you
>> may or may not know, we maintain a patch queue on top of QEMU (currently
>> 1.3) that adds functionality needed to support XCE features.
>>
>> One of the major things we add is robust multi-head support. This includes
>> DDC emulation for EDID data, variable VRAM size, monitor hot-plug support,
>> simulated VSYNC, and guest controlled display orientation. This includes
>> both the necessary interfaces between the hw and ui, and a new emulated
>> adapter (with drivers) that exercises the interfaces.
> I don't think we'd want to lump all these things together though.
I agree. In my current patch set they are all separated out into
individual bits.
> I've started looking at doing multi-head support for a new virtio-gpu,
> and I've gotten basic multi-head working with SDL2.0 with cursor
> support.
>
> It currently just adds multiple DisplaySurfaces to the QemuConsole.
> Gerd said he thought I should be using multiple QemuConsoles, but I
> really didn't think this was a good idea, and I still think multiple
> surfaces make more sense wrt how best to interact with this.
>
> Why do you need to emulate DDC btw? Is this just to fool the guest
> vesa code etc?
Basically, yes. It's convenient if you don't want to install a driver in
the guest, but you want it to boot to a reasonable resolution. There are
two parts to the change. The big part is in the vga BIOS, to add an
INT10(0x4F) handler that reads the EDID from an IO port. The QEMU side
is simply the IO port handler, and a place in VGACommonState to squirrel
away the EDID.
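That QEMU-side handler is small enough to sketch (the port layout and
the edid/edid_index fields are illustrative here, not the actual XCE
code):

    /* index/data pair: the BIOS writes an offset, then reads EDID bytes */
    static uint64_t edid_io_read(void *opaque, hwaddr addr, unsigned size)
    {
        VGACommonState *s = opaque;

        if (s->edid_index < sizeof(s->edid)) {
            return s->edid[s->edid_index++];
        }
        return 0xff;
    }

    static void edid_io_write(void *opaque, hwaddr addr, uint64_t val,
                              unsigned size)
    {
        VGACommonState *s = opaque;

        s->edid_index = val;           /* select where reading starts */
    }

    static const MemoryRegionOps edid_io_ops = {
        .read = edid_io_read,
        .write = edid_io_write,
        .endianness = DEVICE_LITTLE_ENDIAN,
    };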
For multi-head, we install a driver anyway (the X.org driver is GPL; the
Windows one we can only distribute as a binary for now, because it
contains closed-source code from Microsoft), and then it doesn't even
use the DDC. There's a shared memory channel for passing the EDID.
>
> Dave.
* Re: [Qemu-devel] Multi-head support RFC
From: Dave Airlie @ 2013-11-06 23:44 UTC
To: Gerd Hoffmann; +Cc: John Baboval, qemu-devel@nongnu.org
On Wed, Nov 6, 2013 at 8:57 PM, Gerd Hoffmann <kraxel@redhat.com> wrote:
> Hi,
>
>> It currently just adds multiple DisplaySurfaces to the QemuConsole.
>> Gerd said he thought I should be using multiple QemuConsoles, but I
>> really didn't think this was a good idea,
>
> Why?
It's a fair question; I haven't tried the other way yet, and I fully
intend to investigate further.
I think my main reason was that current consoles, at least when
interacting with the gtk frontend, are done one console per tab. If we
have multiple outputs I would want them to be visible at the same time.
I understand we could fix this, but we'd need some sort of console
grouping to say this group of consoles represents this set of outputs.
Further to that, I'm not sure how we'd want to represent multiple
graphics cards. I assume we'd want to be able to see them all at once,
but still have some sort of binding between separate outputs on one
card.
Dave.
* Re: [Qemu-devel] Multi-head support RFC
From: Gerd Hoffmann @ 2013-11-07 13:00 UTC
To: Dave Airlie; +Cc: John Baboval, qemu-devel@nongnu.org
On Thu, 2013-11-07 at 09:44 +1000, Dave Airlie wrote:
> On Wed, Nov 6, 2013 at 8:57 PM, Gerd Hoffmann <kraxel@redhat.com> wrote:
> > Hi,
> >
> >> It currently just adds multiple DisplaySurfaces to the QemuConsole.
> >> Gerd said he thought I should be using multiple QemuConsoles, but I
> >> really didn't think this was a good idea,
> >
> > Why?
>
> It's a fair question; I haven't tried the other way yet, and I fully
> intend to investigate further.
>
> I think my main reason was that current consoles, at least when
> interacting with the gtk frontend, are done one console per tab. If we
> have multiple outputs I would want them to be visible at the same
> time.
Sure, that needs to be addressed. Multihead is new to qemu; the ui core
can deal with it now, but there are still many rough edges: not all
user interfaces can deal with it, and it hasn't seen all that much
testing yet.
Current state (master / 1.7 release candidates):
* Spice has basic support. It opens a display channel for each
(graphical) QemuConsole. Sugar on top is still missing: for
example, you can't disable outputs, so the windows are always
there, even if the guest doesn't enable & use the outputs.
* SDL/vnc: One window. Ctrl-Alt-<nr> switches consoles. Usually
this switches between vga (graphical console 0) and text consoles
(if present, monitor / serial / parallel at 1,2,3). With multihead
you'll get more graphical consoles and you can switch to them too.
Which is obviously not what you want, as you can't see all your
heads at the same time ...
Your SDL2 bits address the SDL side. vnc isn't easy to tackle;
unlike spice it has no concept of multiple displays in the protocol,
but given that you can simply use spice instead I see little reason
to invest much energy into extending vnc.
* gtk: One window with (by default hidden) tabs. The first tab is
graphical console 0; the other tabs have vte widgets for the terminals.
Other graphical consoles are ignored. Opening additional windows
for them isn't hard; I even had a quick&dirty patch for that a while
back. I think the most tricky part here is to design a sensible user
interface.
> I understand we could fix this, but we'd need some sort of console
> grouping to say this group of consoles represents this set of
> outputs.
qemu already links QemuConsoles to devices; you can look up the
QemuConsole for a given device (qemu_console_lookup_by_device). This
needs to be extended a bit to handle multiple QemuConsoles per device.
We also need to add some more metadata to the QemuConsoles, and
interfaces for the UIs to query it.
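E.g. something like this (a hypothetical extension, not in the tree
today):

    /* today: one console per device */
    QemuConsole *qemu_console_lookup_by_device(DeviceState *dev);

    /* extended: pick one of several heads on the same device */
    QemuConsole *qemu_console_lookup_by_device_head(DeviceState *dev,
                                                    uint32_t head);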
> Further to that, I'm not sure how we'd want to represent multiple
> graphics cards. I assume we'd want to be able to see them all at once,
> but still have some sort of binding between separate outputs on one
> card.
UIs can decide which QemuConsoles they want to display, so we can have
options to filter -- say -- by device. It could look like this on the
command line:
    qemu -device VGA,id=video0 \
         -device virtio-gpu,id=video1 \
         -vnc :0,device=video0 \
         -spice port=12345,device=video1
Today none of the UIs can have multiple instances, i.e. you can't have
one vnc server for video0 and a second vnc server for video1. We also
don't have input routing today (i.e. send input from different UIs to
different virtual keyboards / mice), which we will need for multiseat
support in qemu.
A lot of room for improvements. I see multiseat as a long-term goal; for
now this is just something to keep in mind when coding up new stuff. We
surely want to tackle multihead first.
cheers,
Gerd
* Re: [Qemu-devel] Multi-head support RFC
From: Gerd Hoffmann @ 2013-11-07 13:46 UTC
To: John Baboval; +Cc: qemu-devel
Hi,
> As far as the EDID is concerned, there can only be one EDID for a
> display+hw pair, or the guest won't know what to do. In my use-case, I
> simply pass real EDIDs through, and create a full-screen window for each
> real monitor.
Ok, makes sense.
> If you wanted to have two UIs displaying the same
> DisplaySurface, the EDID would have to come from one of them, and the
> other would have to clip, or scale.
Yes.
> > Why not? That is exactly my plan. Just have the virtual graphic card
> > call graphic_console_init() multiple times, once for each display
> > connector it has.
> >
> > Do you see fundamental issues with that approach?
> Currently only one QemuConsole is active at a time, so that would have
> to change....
That isn't mandatory any more. It is still the default behavior of a
DisplayChangeListener to follow the active_console, for compatibility
reasons. SDL and VNC still behave that way.
You can explicitly bind a DisplayChangeListener to a QemuConsole though,
by setting DisplayChangeListener->con before calling
register_displaychangelistener().
gtk binds to QemuConsole #0.
spice creates a display channel per (graphical) console. Each display
channel has a DisplayChangeListener instance, and each
DisplayChangeListener is linked to a different QemuConsole.
For your UI you probably want to follow the spice model: have a
DisplayChangeListener for each physical monitor of the host, with a
fixed QemuConsole bound to each DisplayChangeListener.
DisplayChangeListeners can come and go at runtime just fine, so you
should be able to create/destroy them on monitor plug/unplug events on
the host.
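In outline (MyMonitor and my_dcl_ops are placeholders for your UI's
structures):

    typedef struct MyMonitor {
        DisplayChangeListener dcl;
        /* host window handle, etc. */
    } MyMonitor;

    static void host_monitor_plugged(QemuConsole *con)
    {
        MyMonitor *mon = g_new0(MyMonitor, 1);

        mon->dcl.ops = &my_dcl_ops;    /* your dpy_* callbacks */
        mon->dcl.con = con;            /* pin to this console, don't
                                        * follow active_console */
        register_displaychangelistener(&mon->dcl);
    }

    static void host_monitor_unplugged(MyMonitor *mon)
    {
        unregister_displaychangelistener(&mon->dcl);
        g_free(mon);
    }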
cheers,
Gerd
* Re: [Qemu-devel] Multi-head support RFC
From: John Baboval @ 2013-11-07 15:54 UTC
To: Gerd Hoffmann; +Cc: qemu-devel
On 11/7/2013 8:46 AM, Gerd Hoffmann wrote:
> Hi,
>
>> As far as the EDID is concerned, there can only be one EDID for a
>> display+hw pair, or the guest won't know what to do. In my use-case, I
>> simply pass real EDIDs through, and create a full-screen window for each
>> real monitor.
> Ok, makes sense.
>
>> If you wanted to have two UIs displaying the same
>> DisplaySurface, the EDID would have to come from one of them, and the
>> other would have to clip, or scale.
> Yes.
>
>>> Why not? That is exactly my plan. Just have the virtual graphic card
>>> call graphic_console_init() multiple times, once for each display
>>> connector it has.
>>>
>>> Do you see fundamental issues with that approach?
>> Currently only one QemuConsole is active at a time, so that would have
>> to change....
> That isn't mandatory any more. It is still the default behavior of a
> DisplayChangeListener to follow the active_console, for compatibility
> reasons. SDL and VNC still behave that way.
>
> You can explicitly bind a DisplayChangeListener to a QemuConsole though,
> by setting DisplayChangeListener->con before calling
> register_displaychangelistener().
>
> gtk binds to QemuConsole #0.
>
> spice creates a display channel per (graphical) console. Each display
> channel has a DisplayChangeListener instance, and each
> DisplayChangeListener is linked to a different QemuConsole.
>
> For your UI you probably want to follow the spice model: have a
> DisplayChangeListener for each physical monitor of the host, with a
> fixed QemuConsole bound to each DisplayChangeListener.
> DisplayChangeListeners can come and go at runtime just fine, so you
> should be able to create/destroy them on monitor plug/unplug events on
> the host.
I think the best thing for me to do at this point is to just start
implementing with multiple QemuConsoles, then, and post again when I
have a better idea of what it ends up looking like. I'll keep you posted.
>
> cheers,
> Gerd
* Re: [Qemu-devel] Multi-head support RFC
From: John Baboval @ 2013-11-08 15:20 UTC
To: qemu-devel
On 11/06/2013 06:44 PM, Dave Airlie wrote:
> On Wed, Nov 6, 2013 at 8:57 PM, Gerd Hoffmann <kraxel@redhat.com> wrote:
>> Hi,
>>
>>> It currently just adds multiple DisplaySurfaces to the QemuConsole.
>>> Gerd said he thought I should be using multiple QemuConsoles, but I
>>> really didn't think this was a good idea,
>> Why?
> It's a fair question; I haven't tried the other way yet, and I fully
> intend to investigate further.
>
> I think my main reason was that current consoles, at least when
> interacting with the gtk frontend, are done one console per tab. If we
> have multiple outputs I would want them to be visible at the same time.
> I understand we could fix this, but we'd need some sort of console
> grouping to say this group of consoles represents this set of outputs.
>
> Further to that, I'm not sure how we'd want to represent multiple
> graphics cards. I assume we'd want to be able to see them all at once,
> but still have some sort of binding between separate outputs on one
> card.
Multiple graphics cards are actually a reasonable argument for the
multiple-QemuConsole route: each card would have its own GraphicHwOps,
so it would simply be a matter of having the UI give the user a
mechanism for choosing which card to attach a new display to.
>
> Dave.
* Re: [Qemu-devel] Multi-head support RFC
From: John Baboval @ 2013-11-15 20:14 UTC
To: Gerd Hoffmann; +Cc: qemu-devel
I've dug into this further, and basically gotten what I had before
working against the latest code. So now the ugly bits are making
themselves known.
In my existing patch set, dpy_refresh is called for each display;
however, since hw_update takes an opaque pointer to the hw info
structure, it has to be called per-hw, not per-display. So the ui's
dpy_refresh implementation calls hw_update only for the first display,
and the hw updates all displays each time it is called. Not only is this
ugly, but it requires a mechanism to loop through all the consoles in
the hw code, which can't be done without breaking all the nice
protections that have been set up between the ui and hw.
The lack of a per-display hook in the HwOps calls is cumbersome in other
places too. For example, in the hook I'm adding for store_edid, the EDID
needs to be for a specific console, so I have to pass an index.
I think it would be better if the HwOps calls all took a QemuConsole
instead of the opaque structure. The hw implementations can dig their
opaque structure out from there. (Of course I'd have to go update all
the emulated cards to handle the new model... but that's the price of
progress, I suppose.) I'd love your opinion on the matter before I
spend the time making the changes, though...
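Roughly this (the helper is hypothetical - it doesn't exist today, and
MyGfxState stands in for whatever the card keeps):

    typedef struct GraphicHwOps {
        void (*invalidate)(QemuConsole *con);
        void (*gfx_update)(QemuConsole *con);
        void (*text_update)(QemuConsole *con, console_ch_t *chardata);
        /* per-display hooks then become natural: */
        void (*store_edid)(QemuConsole *con,
                           const uint8_t *edid, size_t len);
    } GraphicHwOps;

    static void my_gfx_update(QemuConsole *con)
    {
        MyGfxState *s = qemu_console_hw_opaque(con);  /* hypothetical */

        /* update only the surface that belongs to this console */
    }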
On 11/7/2013 10:54 AM, John Baboval wrote:
> On 11/7/2013 8:46 AM, Gerd Hoffmann wrote:
>> Hi,
>>
>>> As far as the EDID is concerned, there can only be one EDID for a
>>> display+hw pair, or the guest won't know what to do. In my use-case, I
>>> simply pass real EDIDs through, and create a full-screen window for
>>> each
>>> real monitor.
>> Ok, makes sense.
>>
>>> If you wanted to have two UIs displaying the same
>>> DisplaySurface, the EDID would have to come from one of them, and the
>>> other would have to clip, or scale.
>> Yes.
>>
>>>> Why not? That is exactly my plan. Just have the virtual graphic card
>>>> call graphic_console_init() multiple times, once for each display
>>>> connector it has.
>>>>
>>>> Do you see fundamental issues with that approach?
>>> Currently only one QemuConsole is active at a time, so that would have
>>> to change....
>> That isn't mandatory any more. It is still the default behavior of a
>> DisplayChangeListener to follow the active_console, for compatibility
>> reasons. SDL and VNC still behave that way.
>>
>> You can explicitly bind a DisplayChangeListener to a QemuConsole though,
>> by setting DisplayChangeListener->con before calling
>> register_displaychangelistener().
>>
>> gtk binds to QemuConsole #0.
>>
>> spice creates a display channel per (graphical) console. Each display
>> channel has a DisplayChangeListener instance, and each
>> DisplayChangeListener is linked to a different QemuConsole.
>>
>> For your UI you probably want to follow the spice model: have a
>> DisplayChangeListener for each physical monitor of the host, with a
>> fixed QemuConsole bound to each DisplayChangeListener.
>> DisplayChangeListeners can come and go at runtime just fine, so you
>> should be able to create/destroy them on monitor plug/unplug events on
>> the host.
>
> I think the best thing for me to do at this point is to just start
> implementing with multiple QemuConsoles, then, and post again when I
> have a better idea of what it ends up looking like. I'll keep you posted.
>
>>
>> cheers,
>> Gerd
* Re: [Qemu-devel] Multi-head support RFC
From: Gerd Hoffmann @ 2013-11-19 6:57 UTC
To: John Baboval; +Cc: qemu-devel
Hi,
> I think it would be better if the HwOps calls all took a QemuConsole
> instead of the opaque structure. The hw implementations can dig their
> opaque structure out from there.
QemuConsole is private to ui/console.c though (and I prefer to keep it
that way). So, when going this route, we need either a helper function
to query con->hw, or we keep the opaque and pass the QemuConsole as an
additional parameter.
Alternative approach:
    struct my_gfx_card_state {
        PCIDevice pci;
        [ ... ]
        struct my_head_state {
            QemuConsole *con;
            [ ... ]
        } heads[MAX_HEADS];
    };
Then instead of
    graphic_console_init(..., &state);
call
    graphic_console_init(..., &state->heads[i]);
so you can figure out the head in the callbacks.
cheers,
Gerd
* Re: [Qemu-devel] Multi-head support RFC
From: John Baboval @ 2013-11-19 14:17 UTC
To: Gerd Hoffmann; +Cc: qemu-devel
On 11/19/2013 1:57 AM, Gerd Hoffmann wrote:
> Hi,
>
>> I think it would be better if the HwOps calls all took a QemuConsole
>> instead of the opaque structure. The hw implementations can dig their
>> opaque structure out from there.
> QemuConsole is private to ui/console.c though (and I prefer to keep it
> that way). So, when going this route, we need either a helper function
> to query con->hw, or we keep the opaque and pass the QemuConsole as an
> additional parameter.
>
>
> Alternative approach:
>
>     struct my_gfx_card_state {
>         PCIDevice pci;
>         [ ... ]
>         struct my_head_state {
>             QemuConsole *con;
>             [ ... ]
>         } heads[MAX_HEADS];
>     };
>
> Then instead of
>
>     graphic_console_init(..., &state);
>
> call
>
>     graphic_console_init(..., &state->heads[i]);
>
> so you can figure out the head in the callbacks.
What I had prototyped (I got impatient) was to add a helper,
qemu_console_hw_opaque(), to console.c so the HwOps could query the
opaque pointer. I don't like it because it requires an additional
function call at the top of frequently called operations... However,
I'm not a huge fan of the head state array either, since it will
require either pointer duplication, pointer acrobatics, or the same
helper function to get the card state.
A third option would be to use the console index as a parameter to each
HwOp. graphic_console_init() and QemuConsole could remain unchanged, as
could all the graphic_hw_*() helpers. If a HwOp needs the QemuConsole
for any given operation, it could then use
qemu_console_lookup_by_index(). Many operations would only need the
index, though.
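I.e., sketched (the card state and sizes here are made up):

    typedef struct MyGfxState {
        struct {
            uint8_t edid[256];
        } heads[4];
    } MyGfxState;

    static void my_store_edid(void *opaque, int index,
                              const uint8_t *edid, size_t len)
    {
        MyGfxState *s = opaque;

        len = MIN(len, sizeof(s->heads[index].edid));
        memcpy(s->heads[index].edid, edid, len);

        /* only when the console itself is really needed:
         * QemuConsole *con = qemu_console_lookup_by_index(index);
         */
    }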
> cheers,
> Gerd
* Re: [Qemu-devel] Multi-head support RFC
From: Gerd Hoffmann @ 2013-11-20 7:39 UTC
To: John Baboval; +Cc: qemu-devel
> >     graphic_console_init(..., &state->heads[i]);
> >
> > so you can figure out the head in the callbacks.
>
> What I had prototyped (I got impatient) was to add a helper,
> qemu_console_hw_opaque(), to console.c so the HwOps could query the
> opaque pointer. I don't like it because it requires an additional
> function call at the top of frequently called operations... However,
> I'm not a huge fan of the head state array either, since it will
> require either pointer duplication, pointer acrobatics, or the same
> helper function to get the card state.
Yes, the easiest way to get the device state struct from the head state
struct is a pointer in each head state (this is what you mean by
pointer duplication, right?).
I still think it is the best way. You get a direct pointer to your head
state. Device state is only one pointer dereference away.
Anything else requires extra effort to look up the head state from
whatever information (QemuConsole, index, ...) we pass to the hwops
function.
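Spelled out, continuing the earlier sketch:

    #define MAX_HEADS 4

    struct my_gfx_card_state;

    struct my_head_state {
        struct my_gfx_card_state *card;    /* back-pointer, set at init */
        QemuConsole *con;
    };

    struct my_gfx_card_state {
        PCIDevice pci;
        struct my_head_state heads[MAX_HEADS];
    };

    static void my_gfx_update(void *opaque)
    {
        struct my_head_state *head = opaque;        /* per-head opaque */
        struct my_gfx_card_state *s = head->card;   /* one dereference */

        /* render just this head from the card state in s ... */
    }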
cheers,
Gerd