From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:56581)
	by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1Z5cyY-0005Cv-HO for qemu-devel@nongnu.org;
	Thu, 18 Jun 2015 12:47:39 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1Z5cyS-0002UI-AP
	for qemu-devel@nongnu.org; Thu, 18 Jun 2015 12:47:38 -0400
Received: from mx1.redhat.com ([209.132.183.28]:51354)
	by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1Z5cyS-0002U5-60 for qemu-devel@nongnu.org;
	Thu, 18 Jun 2015 12:47:32 -0400
From: Paolo Bonzini
Date: Thu, 18 Jun 2015 18:47:19 +0200
Message-Id: <1434646046-27150-3-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1434646046-27150-1-git-send-email-pbonzini@redhat.com>
References: <1434646046-27150-1-git-send-email-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 2/9] main-loop: introduce qemu_mutex_iothread_locked
To: qemu-devel@nongnu.org
Cc: jan.kiszka@siemens.com

This function will be used to avoid recursive locking of the iothread
lock whenever address_space_rw/ld*/st* are called with the BQL held,
which is almost always the case.

Tracking whether the iothread is owned is very cheap (just use a TLS
variable) but requires some care because now the lock must always be
taken with qemu_mutex_lock_iothread().  Previously this wasn't the
case.

Outside TCG mode this is not a problem.  In TCG mode, we need to be
careful and avoid the "prod out of compiled code" step if already
in a VCPU thread.  This is easily done with a check on current_cpu,
i.e. qemu_in_vcpu_thread().

Hopefully, multithreaded TCG will get rid of the whole logic to kick
VCPUs whenever an I/O event occurs!
Signed-off-by: Paolo Bonzini
---
 cpus.c                   |  9 +++++++++
 include/qemu/main-loop.h | 10 ++++++++++
 stubs/iothread-lock.c    |  5 +++++
 3 files changed, 24 insertions(+)

diff --git a/cpus.c b/cpus.c
index 2e807f9..9531d03 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1116,6 +1116,13 @@ bool qemu_in_vcpu_thread(void)
     return current_cpu && qemu_cpu_is_self(current_cpu);
 }
 
+static __thread bool iothread_locked = false;
+
+bool qemu_mutex_iothread_locked(void)
+{
+    return iothread_locked;
+}
+
 void qemu_mutex_lock_iothread(void)
 {
     atomic_inc(&iothread_requesting_mutex);
@@ -1133,10 +1140,12 @@ void qemu_mutex_lock_iothread(void)
         atomic_dec(&iothread_requesting_mutex);
         qemu_cond_broadcast(&qemu_io_proceeded_cond);
     }
+    iothread_locked = true;
 }
 
 void qemu_mutex_unlock_iothread(void)
 {
+    iothread_locked = false;
     qemu_mutex_unlock(&qemu_global_mutex);
 }
 
diff --git a/include/qemu/main-loop.h b/include/qemu/main-loop.h
index 62c68c0..6b74eb9 100644
--- a/include/qemu/main-loop.h
+++ b/include/qemu/main-loop.h
@@ -270,6 +270,16 @@ int qemu_add_child_watch(pid_t pid);
 #endif
 
 /**
+ * qemu_mutex_iothread_locked: Return lock status of the main loop mutex.
+ *
+ * The main loop mutex is the coarsest lock in QEMU, and as such it
+ * must always be taken outside other locks.  This function helps
+ * functions take different paths depending on whether the current
+ * thread is running within the main loop mutex.
+ */
+bool qemu_mutex_iothread_locked(void);
+
+/**
  * qemu_mutex_lock_iothread: Lock the main loop mutex.
  *
  * This function locks the main loop mutex.  The mutex is taken by
diff --git a/stubs/iothread-lock.c b/stubs/iothread-lock.c
index 5d8aca1..dda6f6b 100644
--- a/stubs/iothread-lock.c
+++ b/stubs/iothread-lock.c
@@ -1,6 +1,11 @@
 #include "qemu-common.h"
 #include "qemu/main-loop.h"
 
+bool qemu_mutex_iothread_locked(void)
+{
+    return true;
+}
+
 void qemu_mutex_lock_iothread(void)
 {
 }
-- 
1.8.3.1