From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <5254165A.2050304@redhat.com>
Date: Tue, 08 Oct 2013 16:27:38 +0200
From: Hans de Goede
Subject: [Qemu-devel] Current qemu-master hangs when used with qxl + linux guest
To: spice-devel, "qemu-devel@nongnu.org"

Hi All,

I'm having this weird problem with qemu master and guests using spice/qxl. As soon as the guest starts Xorg, I get the following message from qemu:

main-loop: WARNING: I/O thread spun for 1000 iterations

From then on the guest hangs and qemu consumes 100% cpu. The qemu console still works, and I can quit qemu that way. Hitting ctrl+c and doing a "thread apply all bt" on qemu in gdb shows one cpu thread waiting for the iothread lock, and all other threads waiting in poll.

This happens both with non-kms guests (tried RHEL-6.5 and older Fedoras) and with kms guests (tried a fully up2date F-19).

Since I've not seen any similar reports, I assume it is something with my setup, so I've tried changing various things:
-removing the spice agent channel
-changing the number of virtual cpus (tried 1 and 2 virtual cpus)
-upgrading spice-server to the latest git master

But all to no avail. This is with qemu master built from source on a fully up2date F-20 system, using the F-20 seabios files.
If someone has any clever ideas I'll happily try debugging this further.

Regards,

Hans