From: David Engraf
Date: Wed, 25 Nov 2015 10:08:50 +0100
Subject: Re: [Qemu-devel] [PATCH] qemu_mutex_iothread_locked not correctly synchronized
To: Paolo Bonzini, qemu-devel@nongnu.org
Cc: Fam Zheng
Message-ID: <56557AA2.9080100@sysgo.com>
In-Reply-To: <5654883F.9050509@redhat.com>
References: <565485A0.5090902@sysgo.com> <5654883F.9050509@redhat.com>

Hi Paolo,

On 24.11.2015 16:54, Paolo Bonzini wrote:
> On 24/11/2015 16:43, David Engraf wrote:
>> Commit afbe70535ff1a8a7a32910cc15ebecc0ba92e7da introduced the function
>> qemu_mutex_iothread_locked to avoid recursive locking of the iothread
>> lock. The iothread_locked variable uses the __thread attribute, which
>> results in a per-thread storage location, whereas qemu_global_mutex is
>> not thread-specific. This leads to race conditions because the variable
>> is not in sync between threads.
>
> Frankly, this makes no sense. You're modifying the
> qemu_mutex_iothread_locked function to return whether _some_ thread has
> taken the mutex, but the function tells you whether _the calling_ thread
> has taken the mutex (that's the point about recursive locking). This
> breaks the callers of qemu_mutex_iothread_locked completely.
> The variable need not be in sync between the different threads; each
> thread only needs to know about itself. The current code works because:
>
> 1) the iothread mutex is not locked before qemu_mutex_lock_iothread
>
> 2) the iothread mutex is not locked after qemu_mutex_unlock_iothread
>
> 3) qemu_cond_wait doesn't matter because the thread does not run during
>    a qemu_cond_wait.

But this is my issue on Windows:

qemu_tcg_cpu_thread_fn
 -> qemu_tcg_wait_io_event
  -> qemu_cond_wait acquires the mutex

qemu_tcg_cpu_thread_fn
 -> tcg_exec_all -> tcg_cpu_exec -> cpu_exec
  -> cpu_exec ends up calling prepare_mmio_access

prepare_mmio_access uses qemu_mutex_iothread_locked to check whether the
lock is already held by this thread, but it returns 0 because
qemu_cond_wait does not set iothread_locked even though the mutex is
locked. Thus the function calls qemu_mutex_lock_iothread, which tries to
acquire the mutex again. On Windows this results in an assertion:

Assertion failed: mutex->owner == 0, file util/qemu-thread-win32.c, line 60

I'm using a self-compiled cross gcc 5.2.0 MinGW compiler.

David