From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: Yonit Halperin <yhalperi@redhat.com>, Gerd Hoffmann <kraxel@redhat.com>
Subject: [Qemu-devel] [PATCH 3/6] qxl: send interrupt after migration in case ram->int_pending != 0, RHBZ #732949
Date: Wed,  7 Sep 2011 09:38:32 +0200
Message-ID: <1315381115-6171-4-git-send-email-kraxel@redhat.com>
In-Reply-To: <1315381115-6171-1-git-send-email-kraxel@redhat.com>

From: Yonit Halperin <yhalperi@redhat.com>

If qxl_send_events was called from spice server context, and migration
then completed before the corresponding call to pipe_read, the target
guest's qxl driver never received the interrupt. In addition,
qxl_send_events ignored further interrupts of the same kind, since
ram->int_pending was still set. As a result, the guest driver was
stuck, or very slow when the wait for the interrupt used a timeout.

Signed-off-by: Yonit Halperin <yhalperi@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 hw/qxl.c |   10 ++++++++--
 1 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/hw/qxl.c b/hw/qxl.c
index 1fe0b53..7bb2560 100644
--- a/hw/qxl.c
+++ b/hw/qxl.c
@@ -1362,7 +1362,6 @@ static void pipe_read(void *opaque)
     qxl_set_irq(d);
 }
 
-/* called from spice server thread context only */
 static void qxl_send_events(PCIQXLDevice *d, uint32_t events)
 {
     uint32_t old_pending;
@@ -1459,7 +1458,14 @@ static void qxl_vm_change_state_handler(void *opaque, int running, int reason)
     PCIQXLDevice *qxl = opaque;
     qemu_spice_vm_change_state_handler(&qxl->ssd, running, reason);
 
-    if (!running && qxl->mode == QXL_MODE_NATIVE) {
+    if (running) {
+        /*
+         * if qxl_send_events was called from spice server context before
+         * migration ended, qxl_set_irq for these events might not have been
+         * called
+         */
+        qxl_set_irq(qxl);
+    } else if (qxl->mode == QXL_MODE_NATIVE) {
         /* dirty all vram (which holds surfaces) and devram (primary surface)
          * to make sure they are saved */
         /* FIXME #1: should go out during "live" stage */
-- 
1.7.1
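
Why the unconditional qxl_set_irq() call on resume is safe: the
function recomputes the interrupt line level from the guest-visible
int_pending/int_mask pair, so an extra call either delivers the missed
interrupt or leaves the line deasserted. A hedged sketch follows;
the field names are illustrative assumptions, not the exact source.

    static void qxl_set_irq(PCIQXLDevice *d)
    {
        uint32_t pending = le32_to_cpu(d->ram->int_pending);
        uint32_t mask    = le32_to_cpu(d->ram->int_mask);
        int level = !!(pending & mask);

        /* a no-op when nothing is pending and the line is already low */
        qemu_set_irq(d->pci.irq[0], level);
    }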
