From mboxrd@z Thu Jan 1 00:00:00 1970
From: Paolo Bonzini
Message-ID: <55CC97BF.9030408@redhat.com>
Date: Thu, 13 Aug 2015 15:12:31 +0200
In-Reply-To: <55CC8ADF.6040002@greensocs.com>
References: <1439397664-70734-1-git-send-email-pbonzini@redhat.com>
 <1439397664-70734-4-git-send-email-pbonzini@redhat.com>
 <55CC8ADF.6040002@greensocs.com>
Subject: Re: [Qemu-devel] [PATCH 03/10] replace spinlock by QemuMutex.
To: Frederic Konrad, qemu-devel@nongnu.org
Cc: mttcg@greensocs.com

On 13/08/2015 14:17, Frederic Konrad wrote:
>> diff --git a/linux-user/main.c b/linux-user/main.c
>> index fdee981..fd06ce9 100644
>> --- a/linux-user/main.c
>> +++ b/linux-user/main.c
>> @@ -107,7 +107,7 @@ static int pending_cpus;
>>  /* Make sure everything is in a consistent state for calling fork(). */
>>  void fork_start(void)
>>  {
>> -    pthread_mutex_lock(&tcg_ctx.tb_ctx.tb_lock);
>> +    qemu_mutex_lock(&tcg_ctx.tb_ctx.tb_lock);
>>      pthread_mutex_lock(&exclusive_lock);
>>      mmap_fork_start();
>>  }
>> @@ -129,11 +129,11 @@ void fork_end(int child)
>>          pthread_mutex_init(&cpu_list_mutex, NULL);
>>          pthread_cond_init(&exclusive_cond, NULL);
>>          pthread_cond_init(&exclusive_resume, NULL);
>> -        pthread_mutex_init(&tcg_ctx.tb_ctx.tb_lock, NULL);
>> +        qemu_mutex_init(&tcg_ctx.tb_ctx.tb_lock);
>>          gdbserver_fork(thread_cpu);
>>      } else {
>>          pthread_mutex_unlock(&exclusive_lock);
>> -        pthread_mutex_unlock(&tcg_ctx.tb_ctx.tb_lock);
>
> We might want to use tb_lock/unlock in user code as well instead of
> calling qemu_mutex_* directly?

You cannot do that because of the recursive locking assertions; the
child is not using qemu_mutex_unlock, it's using qemu_mutex_init.  So I
would have to add some kind of tb_lock_reset_after_fork() function,
which is a bit ugly.

>> @@ -676,6 +709,7 @@ static inline void code_gen_alloc(size_t tb_size)
>>                                CODE_GEN_AVG_BLOCK_SIZE;
>>      tcg_ctx.tb_ctx.tbs =
>>          g_malloc(tcg_ctx.code_gen_max_blocks * sizeof(TranslationBlock));
>> +    qemu_mutex_init(&tcg_ctx.tb_ctx.tb_lock);
>
> Maybe we can initialize the mutex only for CONFIG_USER_ONLY?

It's okay, it doesn't consume system resources.

Paolo
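
For reference, a rough sketch of what such a tb_lock_reset_after_fork()
helper could look like.  This is purely hypothetical and not part of the
posted series; it assumes the helper would live next to the other tb_lock
code (where tcg_ctx and qemu/thread.h are already available) and that the
recursive-locking assertions track ownership in a per-thread flag, here
given the assumed name have_tb_lock.  The child re-initializes the mutex
rather than unlocking it, mirroring what fork_end() already does:

    /* Hypothetical helper, not in the posted patch: reset tb_lock in the
     * child after fork().  The child cannot call tb_unlock() because the
     * lock state inherited from the parent is undefined, so it throws the
     * mutex away and starts over, exactly as fork_end() does today.
     */
    static __thread bool have_tb_lock;   /* assumed ownership flag */

    void tb_lock_reset_after_fork(void)
    {
        /* Re-create the mutex; any state from before fork() is discarded. */
        qemu_mutex_init(&tcg_ctx.tb_ctx.tb_lock);
        /* Keep the recursive-locking assertions consistent: we hold nothing. */
        have_tb_lock = false;
    }

The child branch of fork_end() in linux-user/main.c could then call
tb_lock_reset_after_fork() instead of open-coding qemu_mutex_init() on
tb_lock, at the cost of one more function in the tb_lock API.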