Subject: Re: [Qemu-devel] kvm crashes with spice while loading qxl
Date: Tue, 26 Apr 2011 11:06:13 +0200
From: Jan Kiszka
In-Reply-To: <4DB687F0.20605@redhat.com>
To: Gerd Hoffmann
Cc: xming, qemu-devel, kvm@vger.kernel.org

On 2011-04-26 10:53, Gerd Hoffmann wrote:
> Hi,
>
> [ ... back online now ... ]
>
>>> /var/tmp/portage/app-emulation/qemu-kvm-0.14.0/work/qemu-kvm-0.14.0/qemu-kvm.c:1724:
>>> kvm_mutex_unlock: Assertion `!cpu_single_env' failed.
>
>> That's a spice bug. In fact, there are a lot of
>> qemu_mutex_lock/unlock_iothread calls in that subsystem. I bet at least
>> a few of them can cause even more subtle problems.
>>
>> Two general issues with dropping the global mutex like this:
>> - The caller of mutex_unlock is responsible for maintaining
>>   cpu_single_env across the unlocked phase (that's related to the
>>   abort above).
>
> This is true for qemu-kvm only, right?

Nope, this applies to both implementations.

> qemu-kvm specific patches which add the cpu_single_env tracking (not
> polished yet) are here:
>
> http://cgit.freedesktop.org/spice/qemu/log/?h=spice.kvm.v28

I cannot spot it that quickly: in which way are they specific to
qemu-kvm? If they are, try to focus on upstream first. The qemu-kvm
differences are virtually deprecated, and I hope we can remove them
really soon now (my patches are all ready).

>> - Dropping the lock in the middle of a callback is risky. That may
>>   enable re-entrance into code sections that weren't designed for it.
>
> Hmm, indeed.
>
>> Spice requires a careful review regarding such issues. Or it should
>> pioneer with introducing its own lock so that we can handle at least
>> related I/O activities over the VCPUs without holding the global mutex
>> (but I bet it's not the simplest candidate for such a new scheme).
>
> spice/qxl used to have its own locking scheme. That didn't work out
> though. The spice server is threaded and calls back into qxl from spice
> thread context, and some of these callbacks need access to qemu data
> structures (display surface) and thus lock protection which covers more
> than just the spice subsystem.
>
> I'll look hard again whether I can find a way out of this (preferably
> drop the need for the global lock somehow). For now I'm pretty busy
> with the email backlog though ...

Yeah, I can imagine...

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux
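
[Editor's illustration] To make the first point above concrete, here is a
minimal, hypothetical sketch of the save/restore pattern being discussed:
code that drops the global I/O-thread mutex from VCPU context must clear
cpu_single_env before unlocking and restore it after re-locking, otherwise
the `!cpu_single_env' assertion in qemu-kvm's kvm_mutex_unlock() fires.
Only cpu_single_env, qemu_mutex_lock_iothread() and
qemu_mutex_unlock_iothread() are real identifiers from the QEMU tree of
that era; the example_* names are made up for illustration, and this is not
the actual spice.kvm.v28 patch.

/* Sketch only -- assumes the qemu-kvm 0.14-era tree, where
 * kvm_mutex_unlock() asserts !cpu_single_env. */

static CPUState *example_saved_env;   /* hypothetical per-device storage */

static void example_release_iothread(void)
{
    /* Remember the VCPU (if any) we were called from, then clear it so
     * the assertion in kvm_mutex_unlock() cannot fire. */
    example_saved_env = cpu_single_env;
    cpu_single_env = NULL;
    qemu_mutex_unlock_iothread();
}

static void example_acquire_iothread(void)
{
    qemu_mutex_lock_iothread();
    /* Restore the VCPU context for the remainder of the callback. */
    cpu_single_env = example_saved_env;
    example_saved_env = NULL;
}

As noted in the mail, the caller's responsibility for cpu_single_env exists
in upstream qemu as well; only the assertion that catches violations is
qemu-kvm specific.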