From: Yonit Halperin
Date: Sun, 04 Sep 2011 08:38:11 +0300
Message-ID: <4E630EC3.7020303@redhat.com>
In-Reply-To: <20110901193623.GN10989@redhat.com>
References: <1314794254-11624-1-git-send-email-yhalperi@redhat.com>
 <20110901193623.GN10989@redhat.com>
Subject: Re: [Qemu-devel] [PATCH v2 1/2] qxl: send interrupt after migration
 in case ram->int_pending != 0, RHBZ #732949
To: "Michael S. Tsirkin"
Cc: alevy@redhat.com, qemu-devel@nongnu.org, spice-devel@freedesktop.org,
 kraxel@redhat.com

On 09/01/2011 10:36 PM, Michael S. Tsirkin wrote:
> On Wed, Aug 31, 2011 at 03:37:33PM +0300, Yonit Halperin wrote:
>> if qxl_send_events was called from spice server context, and then
>> migration had completed before a call to pipe_read, the target
>> guest qxl driver didn't get the interrupt.
>
> This is a general issue with interrupt migration, and PCI core has code
> to handle this, migrating interrupts. So rather than work around this
> in qxl I'd like us to first understand whether there really exists such
> a problem, since if yes it would affect other devices.
>
> Could you help with that please?
>
I think this issue is spice-specific: the problem is that when a
spice_server thread issues an interrupt request, the request is passed
to the qemu thread through a pipe. The pipe's contents are not saved
during migration, so any pending interrupt requests are purged when
migration completes (see the sketch after the patch below).

>> In addition,
>> qxl_send_events ignored further interrupts of the same kind, since
>> ram->int_pending was set.
>
> Maybe this is the only issue?
> A way to check would be to call
>     uint32_t pending = le32_to_cpu(d->ram->int_pending);
>     uint32_t mask = le32_to_cpu(d->ram->int_mask);
>     int level = !!(pending & mask);
>     qxl_ring_set_dirty(d);
>
> instead of qxl_set_irq, and see if that is enough.
>
I was talking about the check in qxl_send_events.

> Note: I don't object to reusing qxl_set_irq in
> production, just let us make sure we don't hide bugs.
>
>> As a result, the guest driver was stuck
>> or very slow (when the waiting for the interrupt was with timeout).
>
> You need to sign off :)
>
>> ---
>>  hw/qxl.c |    9 +++++++--
>>  1 files changed, 7 insertions(+), 2 deletions(-)
>>
>> diff --git a/hw/qxl.c b/hw/qxl.c
>> index b34bccf..c7edc60 100644
>> --- a/hw/qxl.c
>> +++ b/hw/qxl.c
>> @@ -1362,7 +1362,6 @@ static void pipe_read(void *opaque)
>>      qxl_set_irq(d);
>>  }
>>
>> -/* called from spice server thread context only */
>>  static void qxl_send_events(PCIQXLDevice *d, uint32_t events)
>>  {
>>      uint32_t old_pending;
>> @@ -1463,7 +1462,13 @@ static void qxl_vm_change_state_handler(void *opaque, int running, int reason)
>>      PCIQXLDevice *qxl = opaque;
>>      qemu_spice_vm_change_state_handler(&qxl->ssd, running, reason);
>>
>> -    if (!running && qxl->mode == QXL_MODE_NATIVE) {
>> +    if (running) {
>> +        /*
>> +         * if qxl_send_events was called from spice server context before
>> +         * migration ended, qxl_set_irq for these events might not have been called
>> +         */
>> +        qxl_set_irq(qxl);
>> +    } else if (qxl->mode == QXL_MODE_NATIVE) {
>>          /* dirty all vram (which holds surfaces) and devram (primary surface)
>>           * to make sure they are saved */
>>          /* FIXME #1: should go out during "live" stage */
>> --
>> 1.7.4.4
>>
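
A minimal standalone sketch of the pipe path discussed above -- not
QEMU source; send_events, pipe_read_handler and set_irq_level are
placeholder names loosely modeled on qxl's qxl_send_events, pipe_read
and qxl_set_irq. A worker thread ("spice server") records events in
int_pending and kicks the main thread through a pipe; the main thread
drains the pipe and derives the IRQ level from pending & mask. The
kick byte lives only in the pipe and is lost across migration, while
int_pending migrates with guest RAM, so recomputing the level
unconditionally on resume (as the patch does) re-delivers the
interrupt. Build with cc -pthread.

#include <fcntl.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

static int pipe_fd[2];
static _Atomic uint32_t int_pending;   /* stands in for ram->int_pending */
static uint32_t int_mask = UINT32_MAX; /* stands in for ram->int_mask */

/* recompute the IRQ line from pending & mask (stands in for qxl_set_irq) */
static void set_irq_level(void)
{
    uint32_t pending = atomic_load(&int_pending);
    int level = !!(pending & int_mask);
    printf("IRQ level: %d\n", level);
}

/* worker thread: record the event, kick the main thread through the pipe;
 * the suppression check mirrors the "further interrupts of the same kind
 * are ignored while int_pending is set" behavior discussed above */
static void *send_events(void *arg)
{
    uint32_t events = *(uint32_t *)arg;
    uint32_t old_pending = atomic_fetch_or(&int_pending, events);

    if ((old_pending & events) != events) {
        char byte = 0;
        if (write(pipe_fd[1], &byte, 1) != 1) {
            perror("write");
        }
    }
    return NULL;
}

/* main thread handler: drain the pipe, then update the IRQ line */
static void pipe_read_handler(void)
{
    char buf[32];
    while (read(pipe_fd[0], buf, sizeof(buf)) > 0) {
        /* just drain; the authoritative state is int_pending */
    }
    set_irq_level();
}

int main(void)
{
    uint32_t events = 1;
    pthread_t worker;

    if (pipe(pipe_fd) < 0) {
        perror("pipe");
        return 1;
    }
    fcntl(pipe_fd[0], F_SETFL, O_NONBLOCK);

    pthread_create(&worker, NULL, send_events, &events);
    pthread_join(worker, NULL);

    /*
     * Normal path: the kick byte is still in the pipe when this runs.
     * Across migration the byte would be gone, but int_pending travels
     * with guest RAM, so calling set_irq_level() unconditionally on
     * resume re-delivers the interrupt.
     */
    pipe_read_handler();
    return 0;
}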