From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:45835) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1WVklM-0008I9-As for qemu-devel@nongnu.org; Thu, 03 Apr 2014 12:45:13 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1WVklL-0003Ya-Fj for qemu-devel@nongnu.org; Thu, 03 Apr 2014 12:45:12 -0400
Received: from mnementh.archaic.org.uk ([2001:8b0:1d0::1]:47468) by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1WVklL-0003Ww-9o for qemu-devel@nongnu.org; Thu, 03 Apr 2014 12:45:11 -0400
From: Peter Maydell
Date: Thu, 3 Apr 2014 17:45:08 +0100
Message-Id: <1396543508-12280-3-git-send-email-peter.maydell@linaro.org>
In-Reply-To: <1396543508-12280-1-git-send-email-peter.maydell@linaro.org>
References: <1396543508-12280-1-git-send-email-peter.maydell@linaro.org>
Subject: [Qemu-devel] [PATCH for-2.0? 2/2] cpu-exec: Unlock tb_lock if we longjmp out of code generation
To: qemu-devel@nongnu.org
Cc: Riku Voipio, Richard Henderson, "Andrei E. Warkentin", patches@linaro.org

If the guest attempts to execute from unreadable memory, this will cause
us to longjmp back to the main loop from inside the target frontend
decoder. For linux-user mode, this means we will still hold the
tb_ctx.tb_lock, and will deadlock when we try to start executing code
again. Unlock the lock in the return-from-longjmp code path to avoid this.

Signed-off-by: Peter Maydell
---
 cpu-exec.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/cpu-exec.c b/cpu-exec.c
index 0914d3c..02168d9 100644
--- a/cpu-exec.c
+++ b/cpu-exec.c
@@ -227,6 +227,8 @@ int cpu_exec(CPUArchState *env)
     TranslationBlock *tb;
     uint8_t *tc_ptr;
     uintptr_t next_tb;
+    /* This must be volatile so it is not trashed by longjmp() */
+    volatile bool have_tb_lock = false;
 
     if (cpu->halted) {
         if (!cpu_has_work(cpu)) {
@@ -600,6 +602,7 @@ int cpu_exec(CPUArchState *env)
                     cpu_loop_exit(cpu);
                 }
                 spin_lock(&tcg_ctx.tb_ctx.tb_lock);
+                have_tb_lock = true;
                 tb = tb_find_fast(env);
                 /* Note: we do it here to avoid a gcc bug on Mac OS X when
                    doing it in tb_find_slow */
@@ -621,6 +624,7 @@ int cpu_exec(CPUArchState *env)
                     tb_add_jump((TranslationBlock *)(next_tb & ~TB_EXIT_MASK),
                                 next_tb & TB_EXIT_MASK, tb);
                 }
+                have_tb_lock = false;
                 spin_unlock(&tcg_ctx.tb_ctx.tb_lock);
 
                 /* cpu_interrupt might be called while translating the
@@ -692,6 +696,9 @@ int cpu_exec(CPUArchState *env)
 #ifdef TARGET_I386
             x86_cpu = X86_CPU(cpu);
 #endif
+            if (have_tb_lock) {
+                spin_unlock(&tcg_ctx.tb_ctx.tb_lock);
+            }
         }
     } /* for(;;) */
 
--
1.9.0
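
For readers unfamiliar with why the new have_tb_lock flag must be volatile, here is a minimal,
self-contained sketch of the pattern the patch relies on: a volatile flag records lock ownership
so that the return-from-longjmp path can release the lock that was still held when the decoder
bailed out. This is illustrative only and not part of the patch; toy_lock()/toy_unlock() and
decode_guest_insn() are hypothetical stand-ins for tb_lock and the target frontend decoder, and
plain setjmp()/longjmp() stand in for the sigsetjmp()/cpu_loop_exit() pair used by cpu_exec().

/* toy_deadlock_fix.c -- sketch of the volatile-flag-across-longjmp pattern */
#include <setjmp.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

static jmp_buf exec_env;            /* stand-in for cpu->jmp_env */
static bool toy_lock_held;          /* stand-in for tcg_ctx.tb_ctx.tb_lock */

static void toy_lock(void)   { toy_lock_held = true; }
static void toy_unlock(void) { toy_lock_held = false; }

/* Stand-in for the decoder: bails out via longjmp, as cpu_loop_exit() does
 * when the guest tries to execute from unreadable memory. */
static void decode_guest_insn(bool fault)
{
    if (fault) {
        longjmp(exec_env, 1);
    }
}

int main(void)
{
    /* Must be volatile: it is written after setjmp() and read after the
     * longjmp(), so a non-volatile local could hold an indeterminate
     * value on the return-from-longjmp path (C99 7.13.2.1). */
    volatile bool have_lock = false;

    if (setjmp(exec_env) == 0) {
        toy_lock();
        have_lock = true;
        decode_guest_insn(true);    /* simulate the faulting translation */
        have_lock = false;
        toy_unlock();
    } else {
        /* Return-from-longjmp path: drop the lock we still hold, mirroring
         * the hunk added at the end of cpu_exec() above. */
        if (have_lock) {
            toy_unlock();
        }
        printf("longjmp taken, lock released, deadlock avoided\n");
    }
    printf("lock held at exit: %s\n", toy_lock_held ? "yes" : "no");
    return EXIT_SUCCESS;
}

If the volatile qualifier is dropped, GCC built with optimisation will typically warn under
-Wclobbered that have_lock may be clobbered by longjmp; the handler can then see a stale
"false" and leave the lock held, which is the linux-user deadlock the commit message describes.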