Date: Tue, 11 Sep 2012 07:29:32 -0400 (EDT)
From: Alon Levy
Message-ID: <1638054245.33035745.1347362972455.JavaMail.root@redhat.com>
In-Reply-To: <504F1B06.4020508@redhat.com>
Subject: Re: [Qemu-devel] [PATCH 3/3] hw/qxl: support client monitor configuration via device
To: Gerd Hoffmann
Cc: qemu-devel@nongnu.org

> Hi,
>
> >> I don't think an explicit handshake via
> >> QXL_IO_CLIENT_MONITORS_CONFIG_DONE is a good idea.
> >
> > Why? I don't see the below as being better - it just moves the
> > checking to the guest, and racily.
>
> It is more robust.

I suggested a way for it to be just as robust - I take "robust" to mean
"no lost messages except for intermediate ones", which both solutions
suffer from anyway.

> We don't have to keep state in qxl for the handshake

You mean it's preferable to have state on the device
(QXLRom::client_monitors_config_updating) rather than private qxl
device state?

> one less opportunity for a buggy guest driver to screw up things.
The logic you outline can be screwed up as well; the logic for writing
this IO is basically:

    qxl_display_copy_rom_client_monitors_config(qdev);
    qxl_crtc_set_from_monitors_config(qdev);
    outb(0, qdev->io_base + QXL_IO_CLIENT_MONITORS_CONFIG_DONE);

> It is also closer to how real hardware handles this.

I don't know how it is in real hardware.

> > Yes, there is a race, but we detect and handle that case, so it is
> > no problem.
>
> >> How about this update protocol:
> >>
> >> qemu:
> >>  (1) set QXLRom::client_monitors_config_updating
> >>  (2) fill QXLRom::client_monitors_config
> >>  (3) raise QXL_INTERRUPT_CLIENT_MONITORS_CONFIG
> >>  (4) clear QXLRom::client_monitors_config_updating
> >>
> >> guest:
> >>  (1) clear QXL_INTERRUPT_CLIENT_MONITORS_CONFIG bit in irq status
> >>  (2) wait until QXLRom::client_monitors_config_updating is clear
> >>  (3) parse QXLRom::client_monitors_config
> >>  (4) check QXL_INTERRUPT_CLIENT_MONITORS_CONFIG bit in irq status
> >>      (a) when set, goto (1).
> >>      (b) when clear we are done.
> >
> > Hmm, you are making the guest more complicated instead of the host
> > side; don't know if that's better.
> >
> > Point (2) is a busy wait, no?
>
> The guest doesn't have to spin, it can yield(). But yes, better don't
> do that in an IRQ handler.
>
> > Also, guest will have to handle half old / half new configurations:
>
> It should not.
>
> > qemu(1)
> > qemu(2) start
> > guest(1)
> > guest(2)
> > guest(3) reads half old half new
>
> No, guest will wait at (2).
>
> But I just see it can happen the other way around though: guest
> starts reading and qemu starts updating in parallel. We need one
> additional step (3a): the guest needs to check that
> QXLRom::client_monitors_config_updating is clear after parsing the
> data.
>
> When qemu updated the data while the guest was reading it, one of the
> two conditions will be true: either (3a) if qemu is still busy
> updating, or (4) if qemu finished updating.
Making things more complicated in the host, qemu, means making the
kernel driver in the guest simpler, so even though you have a good
solution for the race you discovered, I don't see why the alternative
is worse (answered point by point above).

> >> We might want to notify spice-server when the guest flips the
> >> QXL_INTERRUPT_CLIENT_MONITORS_CONFIG bit in the irq mask, so we
> >> can route the event accordingly.
> >
> > That would mean even more complex logic, to delay spice-server from
> > sending the monitors command to the guest while more data could be
> > incoming (which can be multiple chunks comprising one message, so
> > we must wait for it to complete before pushing the
> > VDAgentMonitorsConfig message).
>
> I don't understand. Why is the client capability bit fundamentally
> different from the irq mask? I'd expect a guest driver which supports
> it would set the irq mask bit and the capability bit at the same
> time, no?

You are right, so we can do away with the guest capabilities field, but
keep the guest capabilities API in spice, because spice depends on it
for the routing decision.

> >> Or we just route it unconditionally both ways and let the guest
> >> sort it out (i.e. make vdagent ignore monitors config when the qxl
> >> kms driver is active).
> >
> > Routing it only one way is simpler in spice code. In other words, I
> > had a buggy version doing that and decided that I should just do it
> > one way or the other to not bang my head on it. But it's also
> > simpler to handle - what order are the events going to happen in
> > the guest? Also, it's not only spice-vdagent plus our driver: with,
> > for instance, gnome-settings-daemon listening to the uevent from
> > the kernel, it will do modesetting by itself, racing with
> > spice-vdagent.
> I was thinking about spice-vdagent detecting that the qxl kms driver
> is active, then just ignoring all VDAgentMonitorsConfig messages
> unconditionally if this happens to be the case, so there will be no
> race. But if you think it is too complicated, no problem.

This sounds like a good idea; I hadn't thought of it. But that leaves
me fixing the vdagent code now :) Also, I guess I still think doing
just one message looks simpler.

> Was just an idea. Notifying spice-server which way to route is fine
> with me too.

OK, if it's all the same to you I'll stick with spice-server routing
the message. Overall, if you find this tedious, I will switch to your
suggestion since it isn't such a big deal.

> cheers,
>   Gerd