From: Gerd Hoffmann
Date: Wed, 06 Nov 2013 11:55:51 +0100
Subject: Re: [Qemu-devel] Multi-head support RFC
To: John Baboval
Cc: "qemu-devel@nongnu.org"
Message-ID: <1383735351.1739.57.camel@nilsson.home.kraxel.org>
In-Reply-To: <52795824.3090105@citrix.com>

  Hi,

> In QEMU 1.3, there was a DisplayState list. We used one DisplayState
> per monitor. The DisplayChangeListener has a new hw_add_display
> vector, so that when the UI requests a second monitor the new display
> gets attached to the emulated hardware. (patch: add_display_ptr)

I don't think we actually want to add/remove stuff here.  On real
hardware your gfx card has a fixed set of display connectors, and I
think we are best off mimicking that.

Support for propagating connect/disconnect events and enabling or
disabling displays needs to be added properly.  Currently qxl/spice can
handle this, but it uses a private side channel.

> A new vector, hw_store_edid, was added to DisplayState so that UIs
> could tell emulated hardware what the EDID for a given display should
> be. (patch: edid-vector)

Note that multiple UIs can be active at the same time.  What happens
with the EDIDs then?

> VRAM size was made configurable, so that more could be allocated to
> handle multiple high-resolution displays. (patch: variable-vram-size)

Upstream stdvga has this meanwhile.

> I don't think it makes sense to have a QemuConsole per display.

Why not?  That is exactly my plan.  Just have the virtual graphics card
call graphic_console_init() multiple times, once for each display
connector it has (see the sketch at the end of this mail).  Do you see
fundamental issues with that approach?

> I can use a model similar to what qxl does, and put the framebuffer
> for each display inside a single DisplaySurface allocated to be a
> bounding rectangle around all framebuffers. This has the advantage of
> looking like something that already exists in the tree, but has
> several disadvantages.

Indeed, I don't recommend that.  qxl is that way for several historical
reasons (one being that the code predates the qemu console cleanup in
the 1.5 devel cycle).

> Are these features something that people would want to see in the
> tree?

Sure.  One of the reasons for the console cleanup was to allow proper
multihead support.

cheers,
  Gerd
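
----

To make the graphic_console_init() suggestion above concrete, here is a
minimal sketch of a virtual graphics card registering one QemuConsole
per display connector.  It assumes the post-1.5 console API, where
graphic_console_init() takes a DeviceState, a GraphicHwOps and an
opaque pointer; the device name my_gfx, MY_GFX_MAX_OUTPUTS and all
my_gfx_* helpers are hypothetical and only for illustration.

/*
 * Hypothetical sketch: "my_gfx", MY_GFX_MAX_OUTPUTS and the state
 * structs below are invented for illustration.  Assumes the post-1.5
 * console API (graphic_console_init() taking a DeviceState, a
 * GraphicHwOps and an opaque pointer).
 */
#include "hw/qdev.h"
#include "ui/console.h"

#define MY_GFX_MAX_OUTPUTS 4

typedef struct MyGfxState MyGfxState;

typedef struct MyGfxOutput {
    MyGfxState *parent;
    QemuConsole *con;              /* one QemuConsole per connector */
    /* per-connector framebuffer / mode state would live here */
} MyGfxOutput;

struct MyGfxState {
    DeviceState *dev;
    MyGfxOutput outputs[MY_GFX_MAX_OUTPUTS];
};

static void my_gfx_invalidate(void *opaque)
{
    MyGfxOutput *out = opaque;
    /* mark this connector's framebuffer as fully dirty */
    (void)out;
}

static void my_gfx_update(void *opaque)
{
    MyGfxOutput *out = opaque;
    /* scan this connector's framebuffer and push dirty regions, e.g.
     * dpy_gfx_update(out->con, x, y, w, h); */
    (void)out;
}

static const GraphicHwOps my_gfx_ops = {
    .invalidate = my_gfx_invalidate,
    .gfx_update = my_gfx_update,
};

/*
 * Called from the device's init/realize path: register one QemuConsole
 * per display connector.  Passing the per-output struct as the opaque
 * pointer lets each callback know which connector it is refreshing.
 */
static void my_gfx_init_outputs(MyGfxState *s)
{
    int i;

    for (i = 0; i < MY_GFX_MAX_OUTPUTS; i++) {
        s->outputs[i].parent = s;
        s->outputs[i].con = graphic_console_init(s->dev, &my_gfx_ops,
                                                 &s->outputs[i]);
    }
}

Each UI frontend then sees one console per connector; using the
per-output struct as the opaque pointer keeps the update callbacks
per-connector rather than per-card, which is the point of having a
QemuConsole per display.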