Date: Tue, 11 Sep 2012 09:55:27 -0400 (EDT)
From: Alon Levy
Message-ID: <1122221044.33124789.1347371727001.JavaMail.root@redhat.com>
In-Reply-To: <504F3BA2.20306@redhat.com>
Subject: Re: [Qemu-devel] [PATCH 3/3] hw/qxl: support client monitor configuration via device
To: Hans de Goede
Cc: Gerd Hoffmann, qemu-devel@nongnu.org

> Hi,
>
> On 09/11/2012 03:05 PM, Alon Levy wrote:
> >>> ok, I'm missing something here (and trying to catch up via Vol 3A
> >>> is taking too long). I thought the order is:
> >>> (1) qemu raises interrupt
> >>> (2) qemu calls kvm ioctl
> >>> (3) guest interrupt handler
> >>> (4) guest clears interrupt by writing ~0 to qxl ram_header->int_mask.
> >>> (5) qemu detects this next time it raises interrupt.
> >>>
> >>> so where does qemu/hw/qxl.c get a chance to see this masking
> >>> *immediately* after it raises the interrupt, i.e. before (2) above?
> >>> Otherwise there is a timeout here, you need to add a callback, it
> >>> gets complicated, and then the unconditional two-way sending looks
> >>> much better. (I'm already on the same page with you on not needing
> >>> guest capabilities at this point, even though for the future it did
> >>> look like a good thing to have.)
> >>
> >> There are two registers:
> >>
> >> (1) the interrupt enable register (aka ram->int_mask)
> >> (2) the interrupt status register (aka ram->int_pending)
> >>
> >> qemu sets the irq bit in the status register each time the irq
> >> condition is met. qemu actually raises an irq in case the guest has
> >> the irq bit set in the enable register. The guest acks the irq by
> >> clearing the irq bit in the status register (then issues
> >> QXL_IO_UPDATE_IRQ to notify qemu that it touched the interrupt
> >> registers, which we need because our registers are in memory, not
> >> mmio space).
> >>
> >> So qxl can simply look at the enable register bit to figure out
> >> whether the guest is interested in specific interrupts or not.
> >
> > Hans and I discussed offline the current windows driver
> > implementation. In short, it sets ram->int_mask to ~0, thereby
> > claiming to support all 32 interrupts (including those we haven't
> > thought of yet...).
>
> Right, thinking more about this, this means that the "don't send it to
> the agent when QXL_INTERRUPT_CLIENT_MONITORS_CONFIG is set in mask"
> trick won't work for windows with an older driver.
>
> I suggest, rather than doing the whole capabilities dance, we simply
> detect the (older) windows driver (mask == ~0) and then treat that as
> QXL_INTERRUPT_CLIENT_MONITORS_CONFIG not being set in mask. A bit of a
> hack, but still much simpler than adding a full capabilities interface.
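Just to make sure we mean the same thing, here is a rough sketch of that
check. Only QXL_INTERRUPT_CLIENT_MONITORS_CONFIG is the bit added by the
spice-protocol patch in this series; the helper name and everything else
is made up for illustration, not the actual hw/qxl.c change:

/* Sketch only.  "int_mask" is the guest-written interrupt enable
 * register from the qxl ram header. */
static bool qxl_guest_wants_client_monitors_config(uint32_t int_mask)
{
    if (int_mask == ~0u) {
        /* Older windows drivers write ~0 here, "claiming" every
         * interrupt, including ones they predate -> don't trust it,
         * treat client monitors config as unsupported. */
        return false;
    }
    /* Newer drivers are expected to set the specific bit. */
    return (int_mask & QXL_INTERRUPT_CLIENT_MONITORS_CONFIG) != 0;
}
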
> If windows ever wants to actually support CLIENT_MONITORS_CONFIG
> through the driver rather than through the agent, the driver will need
> updating anyway and we can then drop the ~0, replacing it with the
> proper mask.

That sounds good, so I'll make sure the kms driver doesn't have this bug.
Updated patches for spice-protocol, spice & qemu coming up.

> Regards,
>
> Hans
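
For anyone reading along, a self-contained sketch of the
int_pending/int_mask scheme Gerd describes above. It is simplified: the
real hw/qxl.c accesses the ram header in guest memory, so it also deals
with endianness and atomic updates; apart from QXL_IO_UPDATE_IRQ, the
names below are made up for illustration.

#include <stdbool.h>
#include <stdint.h>

/* The two registers live in the qxl ram header in guest memory. */
struct qxl_irq_regs {
    uint32_t int_pending;   /* status: which events have fired          */
    uint32_t int_mask;      /* enable: which events the guest asked for */
};

/* Recompute the pci irq line level from the two registers.  qemu calls
 * this after setting a status bit, and again when the guest issues
 * QXL_IO_UPDATE_IRQ after clearing bits (the ack), since the registers
 * are plain memory rather than mmio and qemu cannot trap the write. */
static void qxl_update_irq_level(struct qxl_irq_regs *r,
                                 void (*set_pci_irq)(bool level))
{
    set_pci_irq((r->int_pending & r->int_mask) != 0);
}

/* Device side: an event happened -> set its status bit; the irq line
 * only goes up if the guest also set the matching enable bit. */
static void qxl_send_event(struct qxl_irq_regs *r, uint32_t event_bit,
                           void (*set_pci_irq)(bool level))
{
    r->int_pending |= event_bit;
    qxl_update_irq_level(r, set_pci_irq);
}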