From: Alex Bennée
Date: Wed, 9 Nov 2016 14:57:36 +0000
Message-Id: <20161109145748.27282-8-alex.bennee@linaro.org>
In-Reply-To: <20161109145748.27282-1-alex.bennee@linaro.org>
References: <20161109145748.27282-1-alex.bennee@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Subject: [Qemu-devel] [PATCH v6 07/19] tcg: enable tb_lock() for SoftMMU
To: pbonzini@redhat.com
Cc: qemu-devel@nongnu.org, mttcg@listserver.greensocs.com,
 fred.konrad@greensocs.com, a.rigo@virtualopensystems.com, cota@braap.org,
 bobby.prani@gmail.com, nikunj@linux.vnet.ibm.com, mark.burton@greensocs.com,
 jan.kiszka@siemens.com, serge.fdrv@gmail.com, rth@twiddle.net,
 peter.maydell@linaro.org, claudio.fontana@huawei.com, Alex Bennée,
 Peter Crosthwaite

tb_lock() has long been used in linux-user mode to protect code
generation. By enabling it now we prepare for MTTCG and ensure all code
generation is serialised by this lock. The other major structure that
needs protecting is the l1_map and its PageDesc structures. For the
SoftMMU case we also use tb_lock() to protect these structures instead
of the linux-user mmap_lock(), which, as the name suggests, serialises
updates to them as a result of guest mmap operations.
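For context, the expected usage pattern (not part of this patch) is that
any path which generates code brackets the translation step with
tb_lock()/tb_unlock(). A minimal sketch, assuming the surrounding QEMU
internals (CPUState, TranslationBlock, tb_gen_code()); the wrapper name
below is hypothetical and the tb_gen_code() signature is abbreviated:

/* Illustrative sketch only -- not part of this patch.  Assumes the
 * translate-all.c/cpu-exec.c context; generate_tb_locked() is a
 * hypothetical helper used purely to show the locking discipline. */
static TranslationBlock *generate_tb_locked(CPUState *cpu, target_ulong pc,
                                            target_ulong cs_base,
                                            uint32_t flags)
{
    TranslationBlock *tb;

    tb_lock();                  /* serialise code generation and l1_map updates */
    tb = tb_gen_code(cpu, pc, cs_base, flags, 0 /* cflags */);
    tb_unlock();                /* drop the lock before executing the TB */

    return tb;
}

With the lock enabled for SoftMMU, assert_memory_lock() and
assert_tb_lock() below can both check have_tb_lock in debug builds.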
Signed-off-by: Alex Bennée
Reviewed-by: Richard Henderson

---
v4
  - split from main tcg: enable thread-per-vCPU patch
---
 translate-all.c | 18 +++++-------------
 1 file changed, 5 insertions(+), 13 deletions(-)

diff --git a/translate-all.c b/translate-all.c
index 2c8baf5..cf828aa 100644
--- a/translate-all.c
+++ b/translate-all.c
@@ -82,7 +82,11 @@
 #endif
 
 #ifdef CONFIG_SOFTMMU
-#define assert_memory_lock() do { /* nothing */ } while (0)
+#define assert_memory_lock() do {   \
+    if (DEBUG_MEM_LOCKS) {          \
+        g_assert(have_tb_lock);     \
+    }                               \
+    } while (0)
 #else
 #define assert_memory_lock() do {   \
     if (DEBUG_MEM_LOCKS) {          \
@@ -146,9 +150,7 @@ TCGContext tcg_ctx;
 bool parallel_cpus;
 
 /* translation block context */
-#ifdef CONFIG_USER_ONLY
 __thread int have_tb_lock;
-#endif
 
 static void page_table_config_init(void)
 {
@@ -172,30 +174,24 @@ static void page_table_config_init(void)
 
 void tb_lock(void)
 {
-#ifdef CONFIG_USER_ONLY
     assert(!have_tb_lock);
     qemu_mutex_lock(&tcg_ctx.tb_ctx.tb_lock);
     have_tb_lock++;
-#endif
 }
 
 void tb_unlock(void)
 {
-#ifdef CONFIG_USER_ONLY
     assert(have_tb_lock);
     have_tb_lock--;
     qemu_mutex_unlock(&tcg_ctx.tb_ctx.tb_lock);
-#endif
 }
 
 void tb_lock_reset(void)
 {
-#ifdef CONFIG_USER_ONLY
     if (have_tb_lock) {
         qemu_mutex_unlock(&tcg_ctx.tb_ctx.tb_lock);
         have_tb_lock = 0;
     }
-#endif
 }
 
 #ifdef DEBUG_LOCKING
@@ -204,15 +200,11 @@ void tb_lock_reset(void)
 #define DEBUG_TB_LOCKS 0
 #endif
 
-#ifdef CONFIG_SOFTMMU
-#define assert_tb_lock() do { /* nothing */ } while (0)
-#else
 #define assert_tb_lock() do {       \
     if (DEBUG_TB_LOCKS) {           \
         g_assert(have_tb_lock);     \
     }                               \
     } while (0)
-#endif
 
 static TranslationBlock *tb_find_pc(uintptr_t tc_ptr);
 
-- 
2.10.1